June 17th, 2013

I have been doing a bit of cleanup around my home file server, and have noticed that while I have a really beautiful collection of family photographs, there is no logical organization to these photos.  I have many duplicate file names, which would collide if I ever tried to merge multiple directories, and even worse, I have many duplicate copies of pictures from various uploads from various devices.

Today we are going to focus on one aspect of that problem: renaming our JPG images based on when each photo was taken.  This requires that the camera which took the photo wrote this metadata to the image in the EXIF format, or that you have manually coded the correct dates into the files.  It also requires that your camera had the correct time set when the picture was taken.  This isn't so much an issue in the smartphone era, but it was an issue with some of my older photos.

I am doing these actions on my Fedora 18 box, though given the correct tooling you should be able to accomplish them on any Linux distribution.  I used two different tools: jhead and exiftool.  I started with jhead as it seemed simpler, however it didn't work on any of the photos taken with one of my old Android phones, so I switched to exiftool.  Both tools worked fine with iPhone photos.

Install Jhead and Exiftool

# yum install jhead perl-Image-ExifTool
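
Before renaming anything, it can be worth spot-checking that your images actually carry an EXIF date stamp, since jhead falls back to the file date when the stamp is missing (as you will see in the output below).  A quick sketch, using IMG_0001.JPG as a placeholder file name:

$ jhead IMG_0001.JPG | grep -i date
$ exiftool -DateTimeOriginal -CreateDate IMG_0001.JPG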

Execute JHead to Rename Images

We are going to use a naming convention of YYYY-MM-DD_HH.MM.SS.jpg.  I included a snippet of the output as well.

$ jhead -autorot -nf%Y-%m-%d_%H.%M.%S *.jpg
...
IMG_0023.JPG --> 2012-08-27_18.14.10.jpg
File 'IMG_0024.JPG' contains no exif date stamp. Using file date
IMG_0024.JPG --> 2013-05-27_20.28.44.jpg
IMG_1351.JPG --> 2011-03-19_11.12.16.jpg
Modified: IMG_1352.JPG
IMG_1352.JPG --> 2011-03-25_19.39.30.jpg
...

Some of my Android images threw the error below.  For those I switched to exiftool.

Corrupt JPEG data: 233 extraneous bytes before marker 0xd9

Execute Exiftool to Rename Images

Here is the exiftool equivalent of our jhead command above.

$ exiftool -r '-FileName<CreateDate' -d %Y-%m-%d_%H.%M.%S.%%e *.jpg
 630 image files updated
 63 image files unchanged
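
If you would rather preview the renames before committing to them, exiftool can write to the special TestName tag instead of FileName, which performs a dry run and prints what each file would be renamed to.  A sketch of that approach:

$ exiftool -r '-TestName<CreateDate' -d %Y-%m-%d_%H.%M.%S.%%e *.jpg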


June 3rd, 2013

Quite often I find it necessary to create a recursive snapshot across a large number of ZFS file systems.  I frequently have little long-term need for these snapshots, so I want a quick and easy way of disposing of them without spending large amounts of time deleting each one manually.

Creating Some Snapshots

Here we will create some test snapshots.  Make sure you do this on a test system, in a throwaway area, so that if something is mistyped it doesn't cause any problems.

# zfs snapshot -r tank/zones@testsnap

You can of course generate a date-based name for your snapshots too.

# zfs snapshot -r tank/zones@`date +%m%d%Y.%H%M`
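
To confirm what was just created, you can list all of the snapshots below the same point in the tree:

# zfs list -H -o name -t snapshot -r tank/zones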

Bulk Deleting Snapshots

Now we can create a simple for loop to identify the snapshots and then perform actions against them.

# for snap in `zfs list -H -t snapshot -r tank/zones | grep "@testsnap" | cut -f 1`; do echo -n "Destroying $snap..."; zfs destroy $snap; echo "  Done."; done

In this case our zfs list finds all the snapshots below tank/zones which are named testsnap.  Once we have identified these, the for loop takes over and executes a zfs destroy against each snapshot.
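
As an aside, when every snapshot shares the same name, as with testsnap here, a single recursive destroy accomplishes the same thing; the loop above is more flexible when you need to match snapshots by pattern.  A sketch of the simple case:

# zfs destroy -r tank/zones@testsnap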

Today we are going to go through the process of creating a clustered file system on a pair of Oracle Linux 6.3 nodes.  This exercise is not very resource intensive.  I am using two VMs, each with 1GB of RAM, a single CPU, and a shared virtual disk file in addition to the OS disk.

The Basic Concepts

Now why is a clustered file system important?  If you need a shared volume between two hosts, you can provision the same disk to both machines and everything may appear to work; however, if writes ever happen to the same areas of the disk at the same time, you will end up with data corruption.  The key is that you need a way to track locks from multiple nodes.  This is called a Distributed Lock Manager, or DLM.  To provide this DLM functionality, OCFS2 creates a cluster, and valid cluster nodes can then mount the disk and interact with it as a normal disk.  As part of OCFS2 two file systems are mounted, /sys/kernel/config and /dlm; the former is used for the cluster configuration, and the latter is for the distributed lock manager.
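
Once the o2cb service is loaded (shown below), you can confirm that both of these pseudo file systems are present with a quick check along these lines:

# mount | egrep 'configfs|ocfs2_dlmfs'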

Requirements

OCFS2 has been in the mainline Linux kernel for years, so it is widely available, though if you compile your own kernels you will need to include support for it.  Other than that, all you need are the userland tools to configure and interact with it.
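
If you want to verify that your running kernel has OCFS2 support before going further, something like the following sketch should do it on a stock Oracle Linux kernel:

# grep -i ocfs2 /boot/config-$(uname -r)
# modinfo ocfs2 | head -3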

Install OCFS2 Tools

# yum install ocfs2-tools

Load and Online the O2CB Service

# service o2cb load
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading stack plugin "o2cb": OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
# service o2cb online
Setting cluster stack "o2cb": OK
Checking O2CB cluster configuration : Failed

Notice that when we online o2cb, it fails when checking the O2CB cluster configuration.  This is expected, as there is no cluster configuration to check at this point.

Create the OCFS2 Cluster Configuration

Now we need to create /etc/ocfs2/cluster.conf.  This can be done with o2cb_ctl or manually, though it is considerably easier with o2cb_ctl.

# o2cb_ctl -C -n prdcluster -t cluster -a name=prdcluster

Here we are naming our cluster prdcluster.  The cluster itself doesn’t know anything about nodes until we add them in the next step.

Add Nodes to the OCFS2 Cluster Configuration

Create an entry for each node using the command below.  We will need the IP address of each node, the port, the cluster name we defined before, and the host name of each node.

# o2cb_ctl -C -n ocfs01 -t node -a number=0 -a ip_address=172.16.88.131 -a ip_port=11111 -a cluster=prdcluster
# o2cb_ctl -C -n ocfs02 -t node -a number=1 -a ip_address=172.16.88.132 -a ip_port=11111 -a cluster=prdcluster

The IP address and port are used for the cluster heartbeat.  The node name is used to verify a cluster member when it attempts to join the cluster, and it needs to match the system's host name.
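
Since the node name must match what the system itself reports, it is worth double-checking each node before adding its entry.  For example, on the first node:

# uname -n
ocfs01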

Review the OCFS2 Cluster Configuration

Now we can take a peek at the cluster.conf which our o2cb_ctl command created.

# cat /etc/ocfs2/cluster.conf
node:
        name = ocfs01
        cluster = prdcluster
        number = 0
        ip_address = 172.16.88.131
        ip_port = 11111

node:
        name = ocfs02
        cluster = prdcluster
        number = 1
        ip_address = 172.16.88.132
        ip_port = 11111

cluster:
        name = prdcluster
        heartbeat_mode = local
        node_count = 2

Configure the O2CB Service

In order to have the cluster start with the correct information, we need to update the o2cb service configuration to include the name of our cluster.

# service o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: prdcluster
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Setting cluster stack "o2cb": OK
Registering O2CB cluster "prdcluster": OK
Setting O2CB cluster timeouts : OK

Offline and Online the O2CB Service

To ensure that everything is working as we expect, I like to offline and then online the service.

# service o2cb offline
Clean userdlm domains: OK
Stopping O2CB cluster prdcluster: Unregistering O2CB cluster "prdcluster": OK

We just want to verify that it is unregistering and registering the correct cluster, in this case prdcluster.

# service o2cb online
Setting cluster stack "o2cb": OK
Registering O2CB cluster "prdcluster": OK
Setting O2CB cluster timeouts : OK

Repeat for All Nodes

All of the above actions need to be performed on all nodes in the cluster, with no variations.  Once all nodes report Registering O2CB cluster "prdcluster": OK, you can move on.
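
Rather than rerunning the o2cb_ctl commands on every node, you can also copy the finished cluster.conf into place on the remaining nodes and then run the configure and online steps there.  A sketch, assuming the second node is reachable over SSH as ocfs02:

# scp /etc/ocfs2/cluster.conf ocfs02:/etc/ocfs2/cluster.conf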

Format Our Shared Disk

This part is no different from any other format.  Keep in mind that once you have formatted the disk on one cluster node, it does not need to be done on the other nodes.

# mkfs.ocfs2 /dev/xvdb
mkfs.ocfs2 1.8.0
Cluster stack: classic o2cb
Label:
Features: sparse extended-slotmap backup-super unwritten inline-data strict-journal-super xattr indexed-dirs refcount discontig-bg
Block size: 4096 (12 bits)
Cluster size: 4096 (12 bits)
Volume size: 53687091200 (13107200 clusters) (13107200 blocks)
Cluster groups: 407 (tail covers 11264 clusters, rest cover 32256 clusters)
Extent allocator size: 8388608 (2 groups)
Journal size: 268435456
Node slots: 8
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 3 block(s)
Formatting Journals: done
Growing extent allocator: done
Formatting slot map: done
Formatting quota files: done
Writing lost+found: done
mkfs.ocfs2 successful
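
As a side note, mkfs.ocfs2 also accepts a volume label and a node slot count at format time; the output above shows the defaults (an empty label and 8 node slots).  Here is a sketch of a more explicit invocation, with values chosen purely for illustration:

# mkfs.ocfs2 -L share01 -N 4 /dev/xvdb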

Mount Our OCFS2 Volume

You can either issue the mount command manually, or you can create an entry in /etc/fstab.

# mount -t ocfs2 /dev/xvdb /d01/share
# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Feb 27 13:44:01 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_system-lv_root /                       ext4    defaults        1 1
UUID=4b397e61-7954-40e9-943f-8385e46d263d /boot                   ext4    defaults        1 2
/dev/mapper/vg_system-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/xvdb        /d01/share        ocfs2    defaults    1 1

Then mount our entry from the /etc/fstab.

# mount /d01/share

Mounts will need to be configured on all cluster nodes.

Check Our Mounts

Once we have mounted our devices we need to ensure that they are showing up correctly.

# mount
/dev/mapper/vg_system-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/xvda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/xvdb on /d01/share type ocfs2 (rw,_netdev,heartbeat=local)

Notice that /d01/share is mounted as ocfs2, and that it is mounted with rw, _netdev, heartbeat=local.  These are the expected options (they are derived from the configuration we performed earlier).

Check Service Status

Finally we can check the status of the o2cb service, which shows information about our cluster, the heartbeat, and the various other mounts that are needed to maintain the cluster (configfs and ocfs2_dlmfs).

# service o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "prdcluster": Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Heartbeat mode: Local
Checking O2CB heartbeat: Active

February 13th, 2013

Starting in Oracle VM 3.2.1, the built-in database of the Oracle VM Manager is MySQL.  I had hoped that this change would also signal a change in the database schema; in prior versions of OVM 3.x all data was populated in the database in a completely useless longblob form.  Let's connect to the database and find out, starting with the default socket.

# mysql ovs  -u root -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

As we can see, it is attempting and failing to use /var/lib/mysql/mysql.sock as the connection socket.  So let's take a look at the process and see if it offers any clues.

# ps -ef | grep mysql
oracle    2234  1778  1 Jan23 ?        00:47:09 /usr/sbin/mysqld --defaults-file=/u01/app/oracle/mysql/data/my.cnf --basedir=/usr --datadir=/u01/app/oracle/mysql/data --plugin-dir=/usr/lib64/mysql/plugin --user=oracle --log-error=/u01/app/oracle/mysql/data/mysqld.err --pid-file=/u01/app/oracle/mysql/data/mysqld.pid --socket=/u01/app/oracle/mysql/data/mysqld.sock --port=49500

Above we see a couple of key pieces of information: the socket is /u01/app/oracle/mysql/data/mysqld.sock, and the configuration file is /u01/app/oracle/mysql/data/my.cnf.
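
As a quick sanity check, the same socket path should also appear in that my.cnf; something along these lines will confirm it:

# grep -i socket /u01/app/oracle/mysql/data/my.cnf

With the socket confirmed, we can attempt to connect to MySQL again.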

# mysql ovs -S /u01/app/oracle/mysql/data/mysqld.sock -u root -p
Enter password:
mysql>

Now we are connected to the backend, and here comes the bad news: the database is still completely worthless for direct queries, as they are still using longblobs for everything.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| ovs                |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

Use the ovs database so we can look at its content.

mysql> use ovs;
Database changed

Next we will show all tables so that we can get an idea of what the schema looks like.

mysql> show tables;
+--------------------------------+
| Tables_in_ovs                  |
+--------------------------------+
| Mgr_AbcStore                   |
| Mgr_AccessManager              |
| Mgr_ActionEngineProperties     |
| Mgr_ActionManager              |
| Mgr_ArchiveManager             |
| Mgr_BackupManager              |
| Mgr_BalancerControl            |
| Mgr_BindingMismatchEvent       |
| Mgr_BondPort                   |
| Mgr_BusinessManager            |
| Mgr_Cluster                    |
| Mgr_Coherence                  |
| Mgr_ControlDomain              |
| Mgr_CpuCompatibilityGroup      |
| Mgr_CreateStatisticLog         |
| Mgr_CreatedEvent               |
| Mgr_DeletedEvent               |
| Mgr_DiscoverEngineProperties   |
| Mgr_DiscoverManager            |
| Mgr_EthernetNetwork            |
| Mgr_EthernetPort               |
| Mgr_EventEngineProperties      |
| Mgr_EventLog                   |
| Mgr_EventManager               |
| Mgr_FibreChannelStorageArray   |
| Mgr_FileManager                |
| Mgr_FileSystemMount            |
| Mgr_FileSystemPlugin           |
| Mgr_Foundry                    |
| Mgr_HashMap                    |
| Mgr_InformationalEvent         |
| Mgr_InternalJob                |
| Mgr_InternalPort               |
| Mgr_InternalSystemLog          |
| Mgr_InternalTaggingObject      |
| Mgr_IscsiStorageArray          |
| Mgr_IscsiStorageInitiator      |
| Mgr_Iterator                   |
| Mgr_JobConstructingEvent       |
| Mgr_JobDoneEvent               |
| Mgr_JobRunningEvent            |
| Mgr_LinkedList                 |
| Mgr_LocalFileServer            |
| Mgr_LocalFileSystem            |
| Mgr_LocalStorageArray          |
| Mgr_LocalStorageInitiator      |
| Mgr_LocalStoragePath           |
| Mgr_LogEngineProperties        |
| Mgr_LogManager                 |
| Mgr_LogStore                   |
| Mgr_ModelEngineProperties      |
| Mgr_ModelManager               |
| Mgr_NetworkFileServer          |
| Mgr_NetworkFileSystem          |
| Mgr_NetworkSelectionManager    |
| Mgr_ObjectChangeEvent          |
| Mgr_ObjectCheckerTask          |
| Mgr_OdofManager                |
| Mgr_OvfAssembly                |
| Mgr_PathDownEvent              |
| Mgr_PathUpEvent                |
| Mgr_PerfManager                |
| Mgr_PortDownEvent              |
| Mgr_PortUpEvent                |
| Mgr_Processor                  |
| Mgr_Properties                 |
| Mgr_QueuedJobCreateEvent       |
| Mgr_QueuedServerUpdateNtpServe |
| Mgr_QueuedServerYumRepositoryU |
| Mgr_RasEngineProperties        |
| Mgr_RasManager                 |
| Mgr_RefreshRepoFileSystemsTask |
| Mgr_Repository                 |
| Mgr_RestoreManager             |
| Mgr_RoleService                |
| Mgr_RootStatisticLog           |
| Mgr_RulesEngineProperties      |
| Mgr_RulesManager               |
| Mgr_SchedulableTaskProperties  |
| Mgr_Server                     |
| Mgr_ServerClusterStateDownEven |
| Mgr_ServerDefaultInfo          |
| Mgr_ServerDisconnectErrorEvent |
| Mgr_ServerDiscoverScanEvent    |
| Mgr_ServerNotification         |
| Mgr_ServerOfflineEvent         |
| Mgr_ServerOutofDateEvent       |
| Mgr_ServerPool                 |
| Mgr_ServerPoolMasterMissingEve |
| Mgr_ServerRunningEvent         |
| Mgr_ServerSelectionManager     |
| Mgr_ServerStartingEvent        |
| Mgr_ServerStoppedEvent         |
| Mgr_ServerUserMissingEvent     |
| Mgr_ServerVersionMismatchWarni |
| Mgr_ServerYumRepositoryInforma |
| Mgr_ServerYumUpdateCheckingEve |
| Mgr_SeverityChangeEvent        |
| Mgr_StatisticManager           |
| Mgr_StatisticSubjectLog        |
| Mgr_StatisticTypeLog           |
| Mgr_StatsIntervalAdjusterTask  |
| Mgr_StorageArrayPlugin         |
| Mgr_StorageDeviceUpEvent       |
| Mgr_StorageElement             |
| Mgr_StorageSelectionManager    |
| Mgr_Tag                        |
| Mgr_TaskEngineProperties       |
| Mgr_TaskManager                |
| Mgr_TreeMap                    |
| Mgr_TreeStore                  |
| Mgr_User                       |
| Mgr_UserAccount                |
| Mgr_UserStore                  |
| Mgr_VirtualCdrom               |
| Mgr_VirtualDisk                |
| Mgr_VirtualMachine             |
| Mgr_VirtualMachineCfgFile      |
| Mgr_VirtualMachineDisconnectEr |
| Mgr_VirtualMachineRunningEvent |
| Mgr_VirtualMachineStartingEven |
| Mgr_VirtualMachineStoppedEvent |
| Mgr_VirtualMachineStoppingEven |
| Mgr_VirtualMachineSuspendedEve |
| Mgr_VirtualMachineTemplate     |
| Mgr_VmApiMessages              |
| Mgr_VmCloneDefinition          |
| Mgr_VmCloneNetworkMapping      |
| Mgr_VmCloneStorageMapping      |
| Mgr_VmDiskMapping              |
| Mgr_VmSelectionManager         |
| Mgr_Vnic                       |
| Mgr_VnicManager                |
| Mgr_VnicManagerProperties      |
| Mgr_VolumeGroup                |
| Mgr_XenHypervisor              |
| Mgr_YumRepoOutofDateEvent      |
| Mgr_YumUpdateCheckerTask       |
| Odof_id_to_type                |
| Odof_not_tabled                |
| Odof_sys_properties            |
| Odof_type_to_class             |
| WL_LLR_ADMINSERVER             |
+--------------------------------+
143 rows in set (0.00 sec)

Now let's look at the columns of the Mgr_VirtualMachine table.

mysql> describe Mgr_VirtualMachine;
+--------+------------+------+-----+---------+-------+
| Field  | Type       | Null | Key | Default | Extra |
+--------+------------+------+-----+---------+-------+
| m_id   | bigint(20) | NO   | PRI | 0       |       |
| m_data | longblob   | YES  |     | NULL    |       |
+--------+------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

Now let's look at the columns of the Mgr_Server table.

mysql> describe Mgr_Server;
+--------+------------+------+-----+---------+-------+
| Field  | Type       | Null | Key | Default | Extra |
+--------+------------+------+-----+---------+-------+
| m_id   | bigint(20) | NO   | PRI | 0       |       |
| m_data | longblob   | YES  |     | NULL    |       |
+--------+------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

Here is a command to pull the whole schema; every single table has the same two columns, m_id and m_data, with m_data being a longblob.

mysqldump --no-data ovs -S /u01/app/oracle/mysql/data/mysqld.sock -u root -p
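
If you want to convince yourself that there is nothing directly queryable in those blobs, looking at the blob size per row is about as far as plain SQL will take you.  A quick sketch against the Mgr_VirtualMachine table we described earlier:

mysql> SELECT m_id, LENGTH(m_data) FROM Mgr_VirtualMachine LIMIT 5;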

February 12th, 2013

Recently I have been spending some time learning about database technologies (Oracle Database at Keste, as well as MySQL on my own).  I have decided to carry part of this over into my existing work with Solaris and go through the MySQL installation process using the Image Packaging System (IPS).  IPS does all of the heavy lifting for us, but we still need to know how to utilize the package manager to get the desired result.

Searching for the MySQL Package

When we search using the command below, notice the -r parameter: it tells pkg to search the remote repository in addition to the local one, which allows us to find software that is not yet installed on the machine.

# pkg search -r mysql
INDEX       ACTION VALUE                                                                   PACKAGE
pkg.summary set    A MySQL database adapter for the Python programming language            pkg:/library/python-2/python-mysql-26@1.2.2-0.175.1.0.0.11.0
pkg.summary set    Apache Portable Runtime Utility (APR-util) 1.3 DBD Driver for MySQL 5.0 pkg:/library/apr-util-13/dbd-mysql@1.3.9-0.175.1.0.0.24.0
pkg.summary set    MySQL Database Management System (Base)                                 pkg:/database/mysql-common@0.5.11-0.175.1.0.0.24.0
pkg.summary set    MySQL extension module for PHP                                          pkg:/web/php-53/extension/php-mysql@5.3.14-0.175.1.0.0.24.0
pkg.summary set    MySQL extension module for PHP                                          pkg:/web/php-52/extension/php-mysql@5.2.17-0.175.1.0.0.24.0
pkg.summary set    MySQL 5.1 Database Management System                                    pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
pkg.summary set    MySQL 5.1 libraries                                                     pkg:/database/mysql-51/library@5.1.37-0.175.1.0.0.24.0
pkg.summary set    MySQL 5.1 tests                                                         pkg:/database/mysql-51/tests@5.1.37-0.175.1.0.0.24.0
basename    file   usr/mysql/5.1/bin/amd64/mysql                                           pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
basename    file   usr/mysql/5.1/bin/mysql                                                 pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
basename    file   usr/mysql/5.1/bin/sparcv9/mysql                                         pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
name        group  mysql                                                                   pkg:/database/mysql-common@0.5.11-0.175.1.0.0.24.0
basename    link   usr/bin/mysql                                                           pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
name        user   mysql                                                                   pkg:/database/mysql-common@0.5.11-0.175.1.0.0.24.0
basename    dir    etc/mysql                                                               pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql                                                               pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql/5.1/include/mysql                                             pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql/5.1/share/mysql                                               pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
basename    dir    var/mysql                                                               pkg:/database/mysql-51@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql                                                               pkg:/database/mysql-51/library@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql/5.1/lib/amd64/mysql                                           pkg:/database/mysql-51/library@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql/5.1/lib/mysql                                                 pkg:/database/mysql-51/library@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql/5.1/lib/sparcv9/mysql                                         pkg:/database/mysql-51/library@5.1.37-0.175.1.0.0.24.0
basename    dir    usr/mysql                                                               pkg:/database/mysql-51/tests@5.1.37-0.175.1.0.0.24.0

In the output we are looking at the pkg.summary entries, which identify the software we are after, in our case pkg:/database/mysql-51, or simply mysql-51.

Once we think we have the right package, I like to do a pkg info to make sure that it is what I expect; again we want to look against the remote repositories with the -r parameter.

# pkg info -r mysql-51
Name: database/mysql-51
Summary: MySQL 5.1 Database Management System
Category: Development/Databases
State: Not installed
Publisher: solaris
Version: 5.1.37
Build Release: 5.11
Branch: 0.175.1.0.0.24.0
Packaging Date: September  4, 2012 05:09:22 PM
Size: 147.23 MB
FMRI: pkg://solaris/database/mysql-51@5.1.37,5.11-0.175.1.0.0.24.0:20120904T170922Z

Install the MySQL Package

Here we can install MySQL 5.1 via the IPS repositories.

# pkg install mysql-51
Packages to install:  2
Create boot environment: No
Create backup boot environment: No
Services to change:  2

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                2/2       252/252    52.2/52.2 16.3M/s

PHASE                                          ITEMS
Installing new actions                       343/343
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done 

Enable the MySQL Service

Now let's take a look at the service.  We can see that it is installed but disabled.

# svcs -a | grep mysql
disabled       10:28:40 svc:/application/database/mysql:version_51

Enable the service.

# svcadm enable mysql
# svcs -a | grep mysql
online         10:30:26 svc:/application/database/mysql:version_51

Connect to MySQL

# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.37 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Please note this is not a secure MySQL configuration.  You will need to secure this before use.
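
At a minimum you will want to set a root password, and MySQL ships a helper script for tightening the remaining defaults.  A sketch of both; note that the script path under /usr/mysql/5.1/bin is my assumption based on where this package installs its binaries:

# mysqladmin -u root password 'yournewpassword'
# /usr/mysql/5.1/bin/mysql_secure_installation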
