
How to Use the Rules Command With VIO Servers

I do a lot of customizations on my VIO servers, and I was asked recently how the rules command could be used to deploy some of those changes so they can be consistently applied to all VIO servers. The rules command is primarily used to tune and modify device settings on the VIO server, and it comes with a predefined set of rules that reflect device-configuration best practices for VIO servers. I don’t tend to use rules, as the MPIO drivers usually install the correct settings for the devices; however, in this article I will cover how to use rules if you want to do so.

Rules Command Checks

The rules command has been around since version 2.2.4 and is used to set up rules for your VIO server. It can be used to capture, deploy, change, compare and view VIO server rules. When making changes you should always compare the rules command recommendations with those recommended by your disk vendor.

Rules management consists of two rules files in XML format. There is a default rules file and a current rules file. The default rules file is provided by IBM and contains the critical suggested device rules for VIO server best practice. It has read-only permissions. The current rules file has the current system settings based on default rules and can be used to customize device settings. It is modified using the rules command.

The default rules file should NEVER be modified. The current rules file is at /home/padmin/rules/vios_current_rules.xml.

Depending on your VIO server level, this file contains around 575 lines. It should only be changed using the modify or add operation (avoid delete so you don’t break things), or you can create a new .xml file with just your changes and then deploy it.
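To see how large the file is at your level, you can count its lines directly (the count will vary by release):

wc -l /home/padmin/rules/vios_current_rules.xml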

To get help, you type in:

rules -h

The basic syntax for the rules command is:
rules -o operation [-l deviceName | -t class/subclass/type] [-a attribute=value] [-f rulesFile]
“operation” is one of capture, deploy, import, list, diff, add, modify or delete.

For all the commands, if you use no flag then it shows the current rules, -d shows default rules and -s shows system rules.

To list the current rules in place:

rules -o list

To list the default rules:

rules -o list -d

To list the system rules:

rules -o list -s

On my VIO at 3.1.4.21, the number of rules for current, default and system is the same.

$ rules -o list | wc -l
     150
$ rules -o list -d | wc -l
     150
$ rules -o list -s | wc -l
     150

On my VIO at 4.1.0.10, where I have never customized rules, there are two rules files – tmp_rules_deploy.xml and vios_current_rules.xml. The tmp one has 477 lines and the current one has 601 lines.

There is also a difference between the number of default rules compared to the current and system rules.

# rules -o list | wc -l
     158
# rules -o list -s | wc -l
     158
# rules -o list -d | wc -l
     123

In order to check specific settings, you need to know which adapters, etc. you are looking for. As an example, I wanted to check the settings in rules for my fiber adapters, so I started with lsmcode.

$ ioslevel
4.1.0.10
 
# lsmcode -A | grep fcs
fcs0!df1000e314101506.00014000000057400007
fcs1!df1000e314101506.00014000000057400007

I then checked fcs0 settings:

# lsattr -El fcs0
io_dma          512       IO_DMA
max_xfer_size   0x400000  Maximum Transfer Size
num_cmd_elems   2048      Maximum number of COMMANDS to queue to the adapter
num_io_queues   8         Desired number of IO queues
num_nvme_queues 8         Desired number of NVMe queues

Now I can check the rules settings for those fiber adapters. For some reason you have to drop the last two digits off the adapter type shown in lsmcode, so instead of using grep for df1000e314101506, I had to use grep for df1000e3141015. I had set these adapters using chdev rather than the rules command, so num_cmd_elems on the actual adapter shows as 2048, which is what it is currently set to, but as you can see below, the rules and system settings look different from what we are actually using:

rules -o list | grep df1000e3141015
adapter/pciex/df1000e31410150  max_xfer_size    0x100000
adapter/pciex/df1000e31410150  num_cmd_elems    1024
adapter/pciex/df1000e31410150  num_io_queues    8
adapter/pciex/df1000e31410150  num_nvme_queues  8
adapter/pciex/df1000e31410150  io_dma           512

rules -o list -s | grep df1000e3141015
adapter/pciex/df1000e31410150  max_xfer_size    0x400000
adapter/pciex/df1000e31410150  num_cmd_elems    6144
adapter/pciex/df1000e31410150  num_io_queues    8
adapter/pciex/df1000e31410150  num_nvme_queues  8
adapter/pciex/df1000e31410150  io_dma           512

rules -o list -d | grep df1000e3141015
adapter/pciex/df1000e31410150  max_xfer_size    0x100000
adapter/pciex/df1000e31410150  num_cmd_elems    1024
adapter/pciex/df1000e31410150  num_io_queues    8
adapter/pciex/df1000e31410150  num_nvme_queues  8
adapter/pciex/df1000e31410150  io_dma           512
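If you wanted the current rules to match what chdev has already set on this adapter type, a modify along these lines should do it (shown only as a sketch; the type string is taken from the rules output above and the values from lsattr):

rules -o modify -t adapter/pciex/df1000e31410150 -a num_cmd_elems=2048
rules -o modify -t adapter/pciex/df1000e31410150 -a max_xfer_size=0x400000

Remember that modify only updates the current rules file; the adapters themselves are not touched until the rules are deployed and the VIO server is rebooted.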

Because I only have internal disks in the VIO server, I also checked the settings for those disks.

# lsmcode -A | grep disk
pdisk0!ST300MP.A1800013.38463148
 
# lsdev -Ccdisk
hdisk0 Available 01-00-00 SAS 4K RAID 0 Disk Array
 
# lsattr -El pdisk0
reserve_policy  no_reserve                                                Reserve Policy

To check the internal disks you need to look at either sisarray or nvme, depending on whether or not they are NVMe disks.

When I check sisarray on this system it only shows reserve_policy as a rule:

$: rules -o list  | grep sisarray
disk/sas/sisarray              reserve_policy       no_reserve
$: rules -o list -d  | grep sisarray
disk/sas/sisarray              reserve_policy       no_reserve
$: rules -o list -s  | grep sisarray
disk/sas/sisarray              reserve_policy       no_reserve

You can also check for specific settings such as reserve_policy or num_cmd_elems as follows:

rules -o list | grep reserve_policy
rules -o list -d | grep reserve_policy
rules -o list -s | grep reserve_policy
 
rules -o list | grep reserve_policy
disk/iscsi/mpioosdisk          reserve_policy       single_path
disk/sas/sisarray              reserve_policy       no_reserve
disk/sas/scsd                  reserve_policy       no_reserve
disk/sas/mpioosdisk            reserve_policy       no_reserve
disk/fcp/aixmpiods8k           reserve_policy       single_path
disk/fcp/nonmpiodisk           reserve_policy       single_path
disk/fcp/mpioosdisk            reserve_policy       single_path
disk/fcp/mpioapdisk            reserve_policy       single_path
disk/sas/mpioapdisk            reserve_policy       single_path
 
$: rules -o list | grep num_cmd_elems | wc -l
      42
$: rules -o list -d | grep num_cmd_elems | wc -l
      26
$: rules -o list -s | grep num_cmd_elems | wc -l
      42

On my 3.1.4.21 POWER10 system I have NVMe disks.

# lsdev -Ccdisk
hdisk0 Available 03-00 NVMe 4K Flash Disk
hdisk1 Available 02-00 NVMe 4K Flash Disk
 
# lsmcode -A | grep nvme
nvme0!A1800110.534e3536
nvme1!A1800110.534e3536
 
$: rules -o list | grep nvm
adapter/pciex/df1000e31410150   num_nvme_queues  8
pseudo/vios/npiv                num_per_nvme     8
pseudo/vios/npiv                dflt_enabl_nvme  no
adapter/vdevice/IBM,vfc-server  num_per_nvme     0
adapter/pciex/df1000f51410c10   num_nvme_queues  8
adapter/pciex/df1000f51410c20   num_nvme_queues  8

$: rules -o list -d | grep nvm
adapter/pciex/df1000e31410150   num_nvme_queues  8
pseudo/vios/npiv                num_per_nvme     8
pseudo/vios/npiv                dflt_enabl_nvme  no
adapter/vdevice/IBM,vfc-server  num_per_nvme     0
adapter/pciex/df1000f51410c10   num_nvme_queues  8
adapter/pciex/df1000f51410c20   num_nvme_queues  8

$: rules -o list -s | grep nvm
adapter/pciex/df1000e31410150   num_nvme_queues  8
pseudo/vios/npiv                num_per_nvme     8
pseudo/vios/npiv                dflt_enabl_nvme  no
adapter/vdevice/IBM,vfc-server  num_per_nvme     0
adapter/pciex/df1000f51410c10   num_nvme_queues  8
adapter/pciex/df1000f51410c20   num_nvme_queues  8

Below are the settings for the fiber adapters that will be used for NPIV on the Power10. In this case, the adapter had been set using chdev (not rules) so max_xfer_size and num_cmd_elems on the actual adapter do not match all the rules values.

# lsmcode -A | grep fcs
fcs0!7710712014109e06.070006
 
# lsattr -El fcs0
io_dma          512       IO_DMA
max_xfer_size   0x200000  Maximum Transfer Size
num_cmd_elems   2048      Maximum number of COMMANDS to queue to the adapter  True
num_io_queues   8         Desired number of IO queues
 
$: rules -o list | grep 7710712014109e
adapter/pciex/7710712014109e0  max_xfer_size  0x400000
adapter/pciex/7710712014109e0  num_cmd_elems  2048
adapter/pciex/7710712014109e0  num_io_queues  8
adapter/pciex/7710712014109e0  io_dma         512

$: rules -o list -d | grep 7710712014109e
adapter/pciex/7710712014109e0  max_xfer_size  0x400000
adapter/pciex/7710712014109e0  num_cmd_elems  2048
adapter/pciex/7710712014109e0  num_io_queues  8
adapter/pciex/7710712014109e0  io_dma         512

$: rules -o list -s | grep 7710712014109e
adapter/pciex/7710712014109e0  max_xfer_size  0x100000
adapter/pciex/7710712014109e0  num_cmd_elems  1024
adapter/pciex/7710712014109e0  num_io_queues  8
adapter/pciex/7710712014109e0  io_dma         512

Looking at Differences

The diff operation is used to list the differences between two sets of rules, depending on which flags you choose. To list just the number of differences, add -n to the end of any of the rules -o diff commands.

To list the mismatched devices and attributes between the VIOS system settings and the current rules, use the command below. It looks at vios_current_rules.xml, compares it to the values currently defined in the system (ODM) and lists any differences.

rules -o diff -s
$: rules -o diff -s -n
0

To list the mismatched devices and attributes between current rules and factory default rules:

rules -o diff -d
$: rules -o diff -d -n
98

There were 98 of these. If you take the -n off you can list those differences to determine if there is an issue.

To view the differences between the system settings and the recommended default settings, run the following:

rules -o diff -s -d
$: rules -o diff -s -d -n
63

There were 63 of these. If you take the -n off you can list those differences to determine if there is an issue.

On the 3.1.4.21 Power10 the differences were:

$: rules -o diff -s -n
98
$: rules -o diff -d -n
0
$: rules -o diff -s -d -n
98

You can also use rules to check and set some of the virtual buffer settings:

rules -o list | grep buf
rules -o list -d | grep buf
rules -o list -s | grep buf
 
adapter/vdevice/IBM,l-lan      max_buf_tiny         4096
adapter/vdevice/IBM,l-lan      min_buf_tiny         4096
adapter/vdevice/IBM,l-lan      max_buf_small        4096
adapter/vdevice/IBM,l-lan      min_buf_small        4096

I have a script that sets these, which I run when I set up a VIO server, but you can use rules if you prefer to do so.
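If you would rather use rules than a script, a modify per attribute along these lines should work (a sketch only; the device type and values are taken from the listing above):

rules -o modify -t adapter/vdevice/IBM,l-lan -a max_buf_tiny=4096
rules -o modify -t adapter/vdevice/IBM,l-lan -a min_buf_tiny=4096
rules -o modify -t adapter/vdevice/IBM,l-lan -a max_buf_small=4096
rules -o modify -t adapter/vdevice/IBM,l-lan -a min_buf_small=4096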

Deployment

When changing or deploying rules it should be noted that this does not change individual adapters: the change is applied to all adapters of the same type. So before making any changes to the current rules, save a copy of the current rules file and then capture the current system settings into the current rules file. The default rules file should NEVER be modified.

The current rules file is at /home/padmin/rules/vios_current_rules.xml
Copy this file somewhere safe and then capture the rules as well:

rules -o capture
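A simple way to keep that safe copy is to take a copy of the file first (the backup name here is only an example):

cp /home/padmin/rules/vios_current_rules.xml /home/padmin/rules/vios_current_rules.xml.backup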

Now you can deploy rules as needed. The deploy operation applies the settings in the current rules file to the system. If there is no current rules file, then the factory default rules are used. If you specify -d, the default rules overwrite the current rules and are then deployed.

To deploy the VIO current rules you use the rules -o deploy command. This will apply the suggested factory rules or the current rules. The new settings do not take effect until after a reboot.

rules -o deploy

To deploy the recommended default rules:

rules -o deploy -d

Then reboot. I highly recommend taking a clone and/or a mksysb backup prior to modifying rules in any way.
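For the mksysb backup, one option from the padmin shell is backupios with a file target (the path here is only an example):

backupios -file /home/padmin/vios_backup.mksysb -mksysb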

Current rules can be deleted or modified and new ones can be added. Since I don’t use rules and all my VIO servers are in production, I have included some examples from the man pages:

To add a rule to the current rules for reserve policy on MPIO disks:

rules -o add -t disk/fcp/mpioosdisk  -a reserve_policy=no_reserve

To add a rule to the current rules for reserve policy for hdisk0:

rules -o add -l hdisk0 -a reserve_policy=no_reserve

You can delete the rule above by using:

rules -o delete -l hdisk0 -a reserve_policy

It is recommended that, rather than deleting rules, you use the modify command below to set them back to the original setting.

And you can modify current rules as follows:

rules -o modify -t adapter/pciex/df1000fe -a num_cmd_elems=2048

You can add, delete or modify rules in a specific file by adding the following to the end of the command:

-f /tmp/myrules.xml
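For instance, the reserve policy rule from earlier could be written to a separate file rather than the current rules (the file name is only an example):

rules -o add -t disk/fcp/mpioosdisk -a reserve_policy=no_reserve -f /tmp/myrules.xml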

If you want to create a user-specified rules file and then import it into the current rules, you use the import operation. This operation merges the imported rules with the current rules; the user-specified rules take precedence over the current rules during the merge. When a rule is not supported on the specific VIO server level, the import operation fails and displays a message indicating that the VIO server level does not support a rule specified in the import file. You must remove the unsupported rule entries before attempting the import operation again.

By default, the system will do an ioslevel compatibility check to ensure the rules can be safely applied at the current VIO server level:

rules -o import -f myrules.xml
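Putting the pieces together, a possible workflow with the user-specified file built earlier would be to import it and then deploy (again just a sketch, using the example file name from above), followed by a reboot:

rules -o import -f /tmp/myrules.xml
rules -o deploy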

Summary

The rules command is designed to allow you to set up specific, consistent settings for devices such as fiber adapters and disks so that they can be deployed across all your VIO servers. Additionally, you can use rules to set network buffer settings along with many other device settings. I recommend running “rules -o list -d” to see what the options are on your specific VIO server, as they vary depending on the level you are at. Just make sure you take a backup before doing this, and remember rules are not activated until you reboot after making changes.

References

Rules Command

Managing VIOS Rules Files

Rob McNelly – Applying VIOS Rules Post Install 2018

Demystifying the Rules Command

sg248535 – Introduction to PowerVM (Pages 153–154)

Rob McNelly – Info on VIO Commands 2017