May 19th, 2014

In this article we are going to go over datalink multipathing (DLMP) aggregations, available in Solaris.  DLMP is similar to IPMP; however, there are some key differences, the biggest being the layer at which it operates.  DLMP operates at the datalink layer of the OSI model, while IPMP operates at the network layer.  Because of that difference, DLMP opens up a lot of possibilities that were not practical with IPMP.  For example, if you had a requirement for redundant networking for a service, either IPMP or DLMP would be able to meet that requirement very well.  However, if you had a requirement that the service run inside of a zone or a logical domain, the level of work with IPMP becomes much higher.  This is because these hypervisors assign datalink devices to their guests; since IPMP builds its redundancy a layer higher, we need to assign multiple non-redundant interfaces to each guest and then build the interfaces and IPMP groups inside each of the guests.

Benefits of DLMP

  1. Virtualization friendly: you configure the aggregation in the control domain (or global zone) and hand out a single redundant interface to a guest.
  2. Single command to configure a DLMP aggregation group.
  3. More portable: no switch-side configuration or support is required.

Drawbacks of DLMP

  1. Requires the same media speed for all members (interfaces with differing speeds are put into standby and will not receive a failover).
  2. Requires a switch to mediate the connections, so no direct server-to-server connections.

Create a DLMP Aggregation

Creating a DLMP-based aggregation is very similar to creating an LACP aggregation; simply change the mode to dlmp.

# dladm create-aggr -m dlmp -l net0 -l net1 aggr0

Change an Existing Aggregation to DLMP

If you forgot to include the mode flag, that can be added using modify-aggr.

# dladm modify-aggr -m dlmp aggr0

Add Additional Interfaces to an Aggregation

Add additional interfaces (net2 and net3) to the existing aggregation group aggr0.

# dladm add-aggr -l net2 -l net3 aggr0

Remove Interfaces from an Aggregation

Remove interfaces (net2 and net3) from the existing aggregation group aggr0.  You cannot remove the last interface using this method.

# dladm remove-aggr -l net2 -l net3 aggr0

Delete an Aggregation Group

To delete an existing aggregation group, you can use the following command.

# dladm delete-aggr aggr0

Show Detailed Aggregation Information

The command below shows extended information about the aggregations.  I find the speed, duplex, and port state fields particularly helpful.  Additionally, you can see the MAC address of each interface.

# dladm show-aggr -x
LINK       PORT           SPEED    DUPLEX  STATE     ADDRESS            PORTSTATE
xgaggr1    --             10000Mb  full    up        0:10:e0:2d:ec:a4   --
           net0           10000Mb  full    up        0:10:e0:2d:ec:a4   attached
           net1           10000Mb  full    up        0:10:e0:2d:ec:a5   attached
aggr1      --             1000Mb   full    up        a0:36:9f:1e:b5:9c  --
           net8           1000Mb   full    up        a0:36:9f:1e:b5:88  attached
           net4           1000Mb   full    up        a0:36:9f:1e:b5:9c  attached


May 15th, 2014

One of the biggest benefits of migrating to the Service Management Facility is that we can introduce dependencies to a service.  These dependencies can be other services, which is very valuable, but they can also be file based.  This can be used in a number of ways.  Say, for example, you have application trees that live on NFS mounts: if a mount hasn't been successfully mounted, SMF will still attempt to start the service.  However, if we make our service dependent on a file on that mount, then when the file doesn't exist it won't even attempt to start the service.


We can use svccfg to navigate the service tree and set the properties that we require.  The big gotcha comes when you are defining the file (the config_file/entities property below): notice we include file://localhost/ as part of the path.  If you don't include this, SMF will be unable to locate the file and your test will fail.  In this example we are assuming that this configuration file is needed for the service to start.

# svccfg -s application/xvfb
svc:/application/xvfb> select default
svc:/application/xvfb:default> addpg config_file dependency
svc:/application/xvfb:default> setprop config_file/grouping = astring: require_all
svc:/application/xvfb:default> setprop config_file/entities = fmri: file://localhost/etc/xvfb.conf
svc:/application/xvfb:default> setprop config_file/type = astring: path
svc:/application/xvfb:default> setprop config_file/restart_on = astring: refresh
svc:/application/xvfb:default> end


We can also execute the same actions without entering the interactive svccfg shell.

# svccfg -s application/xvfb:default addpg config_file dependency
# svccfg -s application/xvfb:default setprop config_file/grouping = astring: require_all
# svccfg -s application/xvfb:default setprop config_file/entities = fmri: file://localhost/etc/xvfb.conf
# svccfg -s application/xvfb:default setprop config_file/type = astring: path
# svccfg -s application/xvfb:default setprop config_file/restart_on = astring: refresh


Now of course, if your service doesn't exist yet, the best way is to include the dependency in the service definition.  Dependencies belong inside of the instance tags (shown below).

<instance name='default' enabled='true'>

Here is how the above example of a file-based dependency would look inside of a service definition.

<dependency name='config_file' grouping='require_all' restart_on='refresh' type='path'>
<service_fmri value='file://localhost/etc/xvfb-securitypolicy.conf'/>
</dependency>

Also keep in mind that when using this approach it is invaluable to use the svccfg utility to validate the structure of your service definition.

# svccfg validate xvfb.xml


Now of course, all of this is of no value if we don't test the outcome.  The best way to do this is to move your file (so that it doesn't exist) and restart the service.  It goes without saying that this will cause downtime (that is the point of the test), so please ensure that you have coordinated everything necessary.

May 14th, 2014

Frequently we want a service to execute as a non-root user; this is pretty trivial in the context of the Service Management Facility in Solaris.  This article will go over exactly what goes into that.  The one complexity is the environment: if you have environment variables that the service depends on, they will need to be set up inside of SMF.  I will not be going into that in this article.


I am making the following assumptions as part of writing this article.

  • Your user is ebsdev (feel free to substitute your own)
  • Your group is ebsdev (feel free to substitute your own)
  • Both the user and group have been created and have permissions to execute the application binaries
  • Your service is application/xvfb
  • Your service is single_instance, and the instance name is default


We will be using svccfg to navigate the SMF tree to find our service.

# svccfg
svc:> select application/xvfb

Now that we have found our service, we can look at all of the available instances.  In this case we are using the default instance, as it is a single-instance service.

svc:/application/xvfb> list
svc:/application/xvfb> select default

Now let's set our user.

svc:/application/xvfb:default> setprop method_context/user = astring: ebsdev

Let's also set the group for our user.

svc:/application/xvfb:default> setprop method_context/group = astring: ebsdev

If you have additional groups that need to be set up, you can use the method_context/supp_groups property to do so.  I am not covering that in this article.

svc:/application/xvfb:default> end


Here we can add our user using a single command.

# svccfg -s application/xvfb:default setprop method_context/user = astring: ebsdev

Here we can add our group using a single command.

# svccfg -s application/xvfb:default setprop method_context/group = astring: ebsdev


Assuming that you have not yet defined your service, you can simply include the following in the service definition.  I insert it after the property_group section, or, if you don't use one, after the start/stop/refresh methods.

<instance enabled="true" name="default">
<method_context>
<method_credential user='ebsdev' group='ebsdev'/>
</method_context>

Now the user/group will be defined when you import the service.

May 13th, 2014

In many of the environments that I deliver to clients we have the concept of an “environment account”: this account is the credential set that the environment runs as.  Over time, with the maturation of the Service Management Facility (SMF), we have been able to simplify this offering and increase its capabilities.  Today we are going to talk about allowing the environment user to manage a service controlled by SMF.

So let's start by defining the problem.  SMF is what we use to control services in Solaris.  It is incredibly robust and allows for very fine-grained dependency management as well as managing service failure.  One of the primary differences between SMF and init.d is that init.d will only start or stop a service during a runlevel change or on a request by a user; SMF will notice the failure of a service and request an immediate restart, resolving simple service failures.  So we have a lot of good reasons to use SMF, but where is the problem?  The problem here is simple: once we add a service into SMF, only root (or a user who can act on behalf of root) can issue commands to change the state of that service.

In order to overcome this we can use Role-Based Access Control (RBAC) to delegate control over just that one service.  Obviously you could use sudo with a granular policy allowing only certain commands to be executed, but this approach is far more elegant, and it doesn't require prefixing sudo at time of invocation.

I am assuming that you already have an SMF service created; if you do not, you can refer to my article on creating one for your service.  In this example we are going to use our ebsdev user to control an xvfb service.


We are using RBAC to create authorizations in /etc/security/auth_attr; these can go anywhere within the file.  On Solaris 11 the file will be blank other than some comments, while on Solaris 10 it will be a huge file that already includes a bunch of system entries.  I prefer to include mine at the end to make a clear delineation between the system and the custom rules, but this doesn't matter from a functionality perspective.

# grep xvfb /etc/security/auth_attr
solaris.smf.manage.xvfb:::Manage Xvfb Service States::
solaris.smf.value.xvfb:::Change Xvfb Service Properties::
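Appending the two entries can be scripted with a here-document.  A minimal sketch, using a scratch copy of the file so it is safe to run anywhere; on a real system you would edit /etc/security/auth_attr itself:

```shell
# Scratch stand-in for /etc/security/auth_attr (illustrative path)
auth_attr=/tmp/auth_attr
# Entry syntax is name:::short description:: — append at the end of the file
cat >> "$auth_attr" <<'EOF'
solaris.smf.manage.xvfb:::Manage Xvfb Service States::
solaris.smf.value.xvfb:::Change Xvfb Service Properties::
EOF
# Confirm both entries landed
grep xvfb "$auth_attr"
```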


This authorization allows a user to change the state of the service.  Changing the state does not include enabling or disabling; we will be adding that separately.  This only allows for actions such as service restart.

# svccfg -s application/xvfb setprop general/action_authorization = astring: solaris.smf.manage.xvfb


This authorization allows a user to change values associated with the service.  This is where we are able to allow the disabling and enabling of the service.

# svccfg -s application/xvfb setprop general/value_authorization = astring: solaris.smf.value.xvfb


Here we need to associate our user with the authorizations we have created.  It is important to note that this will overwrite all existing authorizations, so if the user already has any, you will need to include them, comma delimited, in this command.

# usermod -A solaris.smf.manage.xvfb,solaris.smf.value.xvfb ebsdev
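Since -A overwrites, the safe pattern is to merge the new authorizations into whatever the user already has.  The merge itself is just comma-joining; a portable sketch (the pre-existing authorization shown is hypothetical — on Solaris you would obtain the real list with auths(1)):

```shell
# Hypothetical authorization the user already holds
existing="solaris.admin.example"
new="solaris.smf.manage.xvfb,solaris.smf.value.xvfb"
# Join with a comma only when there is something to keep
merged="${existing:+$existing,}$new"
echo "$merged"
# On the Solaris host you would then run: usermod -A "$merged" ebsdev
```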


Here we are just going to give it a quick test to make sure that our user can interact with the service in the way we expect based on our security policies.  Below we test solaris.smf.manage.xvfb.

# su - ebsdev
$ svcadm restart xvfb
$ svcs xvfb
STATE          STIME    FMRI
online         May_08   svc:/application/xvfb:default

Now to test solaris.smf.value.xvfb

# su - ebsdev
$ svcadm disable xvfb
$ svcs xvfb
STATE          STIME    FMRI
disabled       12:16:10 svc:/application/xvfb:default
$ svcadm enable xvfb
$ svcs xvfb
STATE          STIME    FMRI
online         12:17:07 svc:/application/xvfb:default


If you already have a service definition but haven't yet imported the service, you can also include the authorizations in the service definition to reduce the number of steps that need to be manually executed.  This step is not required for the manual configuration we did above; this is a more streamlined approach for new services.

# sed -n -e '/property_group/,/\/property_group/ p' xvfb.xml
<property_group name='general' type='framework'>
<propval name='action_authorization' type='astring' value='solaris.smf.manage.xvfb'/>
<propval name='value_authorization' type='astring' value='solaris.smf.value.xvfb'/>
</property_group>

Now when you import it you will be able to skip the two svccfg authorization commands above (setting action_authorization and value_authorization).

After you import you can validate that everything got put into the correct place by executing the following.

# svccfg -s application/xvfb listprop general/value_authorization
general/value_authorization  astring  solaris.smf.value.xvfb
# svccfg -s application/xvfb listprop general/action_authorization
general/action_authorization  astring  solaris.smf.manage.xvfb


September 24th, 2013

ZFS gives us the ability to move data with ZFS send and receive.  This can be used to combine snapshots, create full or incremental backups, or replicate data between servers on your LAN or even over the WAN.

Local Backup with Send

This is very simple: perform a send of a snapshot and redirect the output to a file.  I have complicated it a bit by including a folder named after the current date (as is helpful with backups).

# zfs send tank/filesystem@now > /backup/`date +%m%d%Y`/tank-filesystem.zsnd
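One gotcha with the dated path: the shell will not create the directory for a redirection target, so build it first.  A minimal sketch, using /tmp/backup as an illustrative stand-in for the real backup location:

```shell
# date +%m%d%Y yields e.g. 03152013; create the dated folder before the send
backup_dir="/tmp/backup/$(date +%m%d%Y)"
mkdir -p "$backup_dir"
# On the Solaris host the send would then target that folder:
# zfs send tank/filesystem@now > "$backup_dir/tank-filesystem.zsnd"
ls -d "$backup_dir"
```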

Keep in mind zfs send only makes the backups.  In order to restore them we will need to be familiar with zfs receive as well.

Local Restore with Receive

A basic restore is very easy: simply specify the file system to restore to, and then the file to pull it from.

# zfs receive tank/restoredfilesystem < /backup/03152013/tank-filesystem.zsnd

Notice here we are redirecting from the file back into the zfs receive command.  It is also important to note that the target file system is created by the receive and must not already exist.

Local Copy of a File System – Full Clone

Occasionally you may have the need to perform a full-copy clone.  With a normal zfs clone you end up with a zero-copy clone (copy-on-write), and you also have a dependency between the clone and its origin snapshot, which requires a little extra caution.  If you use zfs send and receive to make the copy, there is no such interdependency.

# zfs send tank/originalfilesystem@now | zfs receive tank/copyfilesystem

Here we are able to structure the command with just a simple pipe separating the send and receive statements.

Local Compressed Backup with Send

Perhaps you want to store your backups in a compressed form.  All we need to do is insert gzip into the mix.

# zfs send tank/test@now | gzip > /backup/`date +%m%d%Y`/tank-originalfilesystem.zsnd.gz

This method combines a pipe with a redirect, and will give us a compressed file.

Local Compressed Restore with Receive

Of course backups don’t do any good for us if we cannot restore them.

# gunzip -c -d /backup/03152013/tank-originalfilesystem.zsnd.gz | zfs receive tank/copyfilesystem

Here we unzip the file, and pipe that into the zfs receive statement.
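The pipe-and-redirect round trip can be sanity-checked on any system, with an ordinary file standing in for the zfs send stream (all paths here are illustrative):

```shell
# Stand-in for a `zfs send` stream; any byte stream behaves the same way
printf 'pretend-zfs-stream' > /tmp/stream.raw
# "Backup": stream | gzip > file
gzip < /tmp/stream.raw > /tmp/stream.zsnd.gz
# "Restore": gunzip -c -d file | consumer
gunzip -c -d /tmp/stream.zsnd.gz > /tmp/stream.restored
# gzip is lossless, so the restored stream is byte-identical to the original
cmp /tmp/stream.raw /tmp/stream.restored && echo identical
```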

Local Encrypted (and Compressed) Backup with Send

Some workloads have serious security requirements.  This will encrypt and compress the contents of the file system into a backup file.  In this example I am using encrypt/decrypt to perform the encryption with AES; I was also unable to get the piping and redirection working without injecting gzip as well.

# zfs send tank/test@now | encrypt -a aes | gzip > /backup/`date +%m%d%Y`/tank-originalfilesystem.zsnd.aes.gz
Enter passphrase:
Re-enter passphrase:

Since we aren’t using any keys with encrypt, it will prompt us for a passphrase.  Remember, when using encryption your backups become worthless if you lose the passphrase.

Local Encrypted (and Compressed) Restore with Receive

Decrypting and restoring the backup file is pretty straightforward: unzip the file, pipe it through decrypt, and pipe that into zfs receive.

# gunzip -c -d /backup/03152013/tank-originalfilesystem.zsnd.aes.gz | decrypt -a aes | zfs receive tank/copyfilesystem
Enter passphrase:

Use the passphrase that you used when encrypting the backup to decrypt it.
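The Solaris encrypt(1) and decrypt(1) commands are not available on other platforms.  As a portable sanity check of the same pipeline shape, openssl enc can stand in — this is a substitute used only for illustration, not the command from the article — with a file replacing the zfs send stream:

```shell
# Stand-in stream in place of `zfs send`
printf 'pretend-zfs-stream' > /tmp/fs.raw
# Backup: stream | encrypt | gzip > file (openssl enc replaces encrypt -a aes)
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo < /tmp/fs.raw \
    | gzip > /tmp/fs.zsnd.aes.gz
# Restore: gunzip | decrypt | consumer (openssl enc -d replaces decrypt -a aes)
gunzip -c -d /tmp/fs.zsnd.aes.gz \
    | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo > /tmp/fs.restored
# The decrypted, decompressed stream matches the original byte for byte
cmp /tmp/fs.raw /tmp/fs.restored && echo identical
```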

Remote Copy of a File System – Full Clone

Sometimes a full clone is needed from one server to another, perhaps for data migration.

# zfs send tank/originalfilesystem@now | ssh root@remotehost "zfs receive tank/copyfilesystem"