July 9th, 2013

Most modern switches include the ability to use SSH as a remote communications protocol. Here we will enable that functionality and disable Telnet on the Dell PowerConnect 5548 switches.

Enable SSH

In order to enable SSH we need to generate the keys which will be used by SSH to encrypt the traffic.

console>enable
console# configure terminal
(config)# crypto key generate rsa
(config)# crypto key generate dsa
(config)# ip ssh server
(config)# exit
console# write memory
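
To confirm the SSH server is now enabled, many PowerConnect firmware revisions provide a show command along these lines; treat the exact syntax as an assumption and verify it against your firmware's CLI reference.

console# show ip ssh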

Disable Telnet

Depending on your configuration you might not need this step, but if Telnet is enabled you should disable it.

console>enable
console# configure terminal
(config)# no ip telnet server
(config)# exit
console# write memory

July 8th, 2013

Most modern switches include the ability to use SSH as a remote communications protocol. Here we will enable that functionality and disable Telnet on the Dell PowerConnect 6248 switches.

Enable SSH

In order to enable SSH we need to generate the keys which will be used by SSH to encrypt the traffic.

console>enable
console# configure
(config)# crypto key generate rsa
(config)# crypto key generate dsa
(config)# ip ssh server
(config)# exit
console# copy running-config startup-config
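
As a quick sanity check, the 62xx CLI should also expose a show command for the SSH server; the exact form below is an assumption, so confirm it against your firmware's documentation.

console# show ip ssh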

Disable Telnet

Depending on your configuration you might not need this step, but if Telnet is enabled you should disable it.

console>enable
console# configure
(config)# ip telnet server disable
(config)# exit
console# copy running-config startup-config

June 17th, 2013

I have been doing a bit of cleanup around my home file server, and have noticed that while I have a really beautiful collection of family photographs, there is no logical organization to them. I have many duplicate names, which would collide if I ever tried to merge multiple directories, and even worse I have many duplicate copies of pictures from various uploads from various devices.

Today we are going to focus on one aspect of that problem: renaming our JPG images based on when each photo was taken. This requires that the camera which took the photo wrote this metadata to the image in EXIF format, or that you have manually set the correct dates. It also requires that your camera had the correct time set when the picture was taken. That isn't much of an issue in the smartphone era, but it was with some of my older photos.

I am doing these actions on my Fedora 18 box. Given the correct tooling you should be able to accomplish them on any Linux distribution. I used two different tools: jhead and exiftool. I started with jhead as it seemed simpler, however it didn't work on any of the photos taken with one of my old Android phones, so I then switched to exiftool. Both tools worked fine with iPhone photos.

Install Jhead and Exiftool

# yum install jhead perl-Image-ExifTool
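
Before renaming anything in bulk, it can be worth spot checking that a photo actually carries an EXIF date stamp. Running jhead against a single file with no options prints its header information; the file name below is just an example.

$ jhead IMG_0023.JPG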

Execute JHead to Rename Images

We are going to use a naming convention of YYYY-MM-DD_HH.MM.SS.jpg.  I included a snippet of the output as well.

$ jhead -autorot -nf%Y-%m-%d_%H.%M.%S *.jpg
...
IMG_0023.JPG --> 2012-08-27_18.14.10.jpg
File 'IMG_0024.JPG' contains no exif date stamp. Using file date
IMG_0024.JPG --> 2013-05-27_20.28.44.jpg
IMG_1351.JPG --> 2011-03-19_11.12.16.jpg
Modified: IMG_1352.JPG
IMG_1352.JPG --> 2011-03-25_19.39.30.jpg
...

Some of my Android images threw the error below. For those I switched to exiftool.

Corrupt JPEG data: 233 extraneous bytes before marker 0xd9

Execute Exiftool to Rename Images

Here is the exiftool equivalent of our jhead command above.

$ exiftool -r '-FileName<CreateDate' -d %Y-%m-%d_%H.%M.%S.%%e *.jpg
 630 image files updated
 63 image files unchanged
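
If you want to preview the renames without touching any files, exiftool's TestName tag performs a dry run of the same operation. This assumes a reasonably recent exiftool; swapping FileName for TestName is the only change from the command above.

$ exiftool -r '-TestName<CreateDate' -d %Y-%m-%d_%H.%M.%S.%%e *.jpg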

June 3rd, 2013

Quite often I find it necessary to create a recursive snapshot over a large number of ZFS file systems. In this case, I find that I frequently have little long-term need for them, and need a quick and easy way of disposing of the snapshots without spending large amounts of time manually deleting each one.

Creating Some Snapshots

Here we will create some test snapshots. Make sure you do this on a test system, in a throwaway area, so that if something is mistyped it doesn't cause any problems.

# zfs snapshot -r tank/zones@testsnap

You can of course generate a date-based name for your snapshots too.

# zfs snapshot -r tank/zones@`date +%m%d%Y.%H%M`
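
If you prefer snapshot names that sort chronologically, a year-first date format works just as well; this is a small adjustment, not something from the original command.

# zfs snapshot -r tank/zones@`date +%Y%m%d.%H%M`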

Bulk Deleting Snapshots

Now we can create a simple for loop to identify the snapshots and then perform actions against them.

# for snap in `zfs list -H -t snapshot -r tank/zones | grep "@testsnap" | cut -f 1`; do echo -n "Destroying $snap..."; zfs destroy "$snap"; echo "  Done."; done

In this case the zfs list finds every snapshot named testsnap below tank/zones. Once we have identified all of these, the for loop takes over and executes a zfs destroy against each snapshot.
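
Because these snapshots all share the same name, zfs destroy can also remove them in a single recursive command. The -n and -v flags perform a verbose dry run; they are available on OpenZFS and recent Solaris releases, so treat their presence as an assumption for your platform.

# zfs destroy -rnv tank/zones@testsnap
# zfs destroy -r tank/zones@testsnap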


Today we are going to go through the process of creating a clustered file system on a pair of Oracle Linux 6.3 nodes. This exercise is not very resource intensive. I am using two VMs, each with 1GB of RAM, a single CPU, and a shared virtual disk file in addition to the OS drive.

The Basic Concepts

Why is a clustered file system important? If you need a shared volume between two hosts, you can provision the disk to both machines and everything may appear to work; however, if writes ever happen to the same areas of the disk at the same time you will end up with data corruption. The key is that you need a way to track locks from multiple nodes. This is called a Distributed Lock Manager, or DLM. To get this DLM functionality working, OCFS2 creates a cluster; valid cluster nodes can then mount the disk and interact with it as a normal disk. As part of OCFS2 two file systems are mounted, /sys/kernel/config and /dlm: the former is used for the cluster configuration, and the latter is for the distributed lock manager.

Requirements

OCFS2 has been in the mainline Linux kernel for years, so it is widely available; if you compile your own kernels you will need to include OCFS2 support. Other than that, all you need are the userland tools to configure and interact with it.
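
On a distribution kernel you can usually confirm that OCFS2 support is present by checking the config file shipped in /boot; the path below is typical for RHEL-family systems such as Oracle Linux, but treat it as an assumption for other distributions.

# grep -i ocfs2 /boot/config-$(uname -r)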

Install OCFS2 Tools

# yum install ocfs2-tools

Load and Online the O2CB Service

# service o2cb load
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading stack plugin "o2cb": OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
# service o2cb online
Setting cluster stack "o2cb": OK
Checking O2CB cluster configuration : Failed

Notice that when we bring o2cb online, it fails at checking the O2CB cluster configuration. This is expected; there is no cluster configuration to check at this point.

Create the OCFS2 Cluster Configuration

Now we need to create /etc/ocfs2/cluster.conf. This can be done with o2cb_ctl or by hand, though it is considerably easier with o2cb_ctl.

# o2cb_ctl -C -n prdcluster -t cluster -a name=prdcluster

Here we are naming our cluster prdcluster.  The cluster itself doesn’t know anything about nodes until we add them in the next step.

Add Nodes to the OCFS2 Cluster Configuration

Create an entry for each node using the commands below. We will need each node's IP address, the port, the cluster name we defined before, and the hostname of each node.

# o2cb_ctl -C -n ocfs01 -t node -a number=0 -a ip_address=172.16.88.131 -a ip_port=11111 -a cluster=prdcluster
# o2cb_ctl -C -n ocfs02 -t node -a number=1 -a ip_address=172.16.88.132 -a ip_port=11111 -a cluster=prdcluster

The IP address and port are used for the cluster heartbeat. The node name is used to verify a cluster member when it attempts to join the cluster, and it needs to match the system's hostname.

Review the OCFS2 Cluster Configuration

Now we can take a peek at the cluster.conf which our o2cb_ctl command created.

# cat /etc/ocfs2/cluster.conf
node:
        name = ocfs01
        cluster = prdcluster
        number = 0
        ip_address = 172.16.88.131
        ip_port = 11111

node:
        name = ocfs02
        cluster = prdcluster
        number = 1
        ip_address = 172.16.88.132
        ip_port = 11111

cluster:
        name = prdcluster
        heartbeat_mode = local
        node_count = 2

Configure the O2CB Service

In order to have the cluster start with the correct information we need to update the o2cb service and include the name of our cluster.

# service o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: prdcluster
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Setting cluster stack "o2cb": OK
Registering O2CB cluster "prdcluster": OK
Setting O2CB cluster timeouts : OK

Offline and Online the O2CB Service

To ensure that everything is working as we expect, I like to offline and online the service.

# service o2cb offline
Clean userdlm domains: OK
Stopping O2CB cluster prdcluster: Unregistering O2CB cluster "prdcluster": OK

We just want to verify that it unregisters and registers the correct cluster, in this case prdcluster.

# service o2cb online
Setting cluster stack "o2cb": OK
Registering O2CB cluster "prdcluster": OK
Setting O2CB cluster timeouts : OK

Repeat for All Nodes

All of the above actions need to be performed on every node in the cluster, with no variations. Once every node reports Registering O2CB cluster "prdcluster": OK, you can move on.

Format Our Shared Disk

This part is no different from any other format. Keep in mind that once you have formatted the disk on one cluster node, it does not need to be done on the other node.

# mkfs.ocfs2 /dev/xvdb
mkfs.ocfs2 1.8.0
Cluster stack: classic o2cb
Label:
Features: sparse extended-slotmap backup-super unwritten inline-data strict-journal-super xattr indexed-dirs refcount discontig-bg
Block size: 4096 (12 bits)
Cluster size: 4096 (12 bits)
Volume size: 53687091200 (13107200 clusters) (13107200 blocks)
Cluster groups: 407 (tail covers 11264 clusters, rest cover 32256 clusters)
Extent allocator size: 8388608 (2 groups)
Journal size: 268435456
Node slots: 8
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 3 block(s)
Formatting Journals: done
Growing extent allocator: done
Formatting slot map: done
Formatting quota files: done
Writing lost+found: done
mkfs.ocfs2 successful
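
Note that the format above leaves the label blank and creates the default 8 node slots. mkfs.ocfs2 accepts -L to set a volume label and -N to set the number of node slots; the values below are illustrative, not something taken from this setup.

# mkfs.ocfs2 -L ocfs2share -N 4 /dev/xvdb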

Mount Our OCFS2 Volume

You can either run the mount command manually or create an entry in /etc/fstab.

# mount -t ocfs2 /dev/xvdb /d01/share
# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Feb 27 13:44:01 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_system-lv_root /                       ext4    defaults        1 1
UUID=4b397e61-7954-40e9-943f-8385e46d263d /boot                   ext4    defaults        1 2
/dev/mapper/vg_system-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/xvdb        /d01/share        ocfs2    defaults    1 1
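
Since OCFS2 needs the network and the o2cb service up before it can mount, an fstab entry along these lines may be safer. The _netdev option and the 0 0 dump/fsck fields are common OCFS2 practice rather than something taken from the listing above; you can see _netdev showing up in the mount output later in this post.

/dev/xvdb        /d01/share        ocfs2    _netdev,defaults    0 0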

Then mount our entry from the /etc/fstab.

# mount /d01/share

Mounts will need to be configured on all cluster nodes.

Check Our Mounts

Once we have mounted our devices we need to ensure that they are showing up correctly.

# mount
/dev/mapper/vg_system-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/xvda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/xvdb on /d01/share type ocfs2 (rw,_netdev,heartbeat=local)

Notice that /d01/share is mounted as ocfs2, and that it is mounted with rw, _netdev, and heartbeat=local. These are the expected options, gathered from the previous configuration.

Check Service Status

Finally we can check the status of the o2cb service and see information about our cluster, the heartbeat, and the various other mounts that are needed to maintain the cluster (configfs and ocfs2_dlmfs).

# service o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "prdcluster": Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Heartbeat mode: Local
Checking O2CB heartbeat: Active