12 Power VIO best practices to help you achieve the greatest total system availability and highest performance possible

I’ve been doing quite a bit of work with VIO servers lately and it got me thinking. Maybe it’s a good time to review some best practices related to VIO servers.

Here are some best practices to help you achieve simplicity and flexibility while maintaining maximum redundancy in your environment:

1. Keep your VIO servers updated

VIOS is treated more like firmware than an operating system, and with few exceptions, the only level that receives additional hardware support and bug fixes is the latest one.

IBM’s general recommendation for VIOS is to upgrade to the latest Fix Pack and its latest Service Pack.
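
As a minimal sketch (the fix pack directory is a placeholder), you can check the current level and apply an update from the VIOS padmin shell:

$ ioslevel
$ updateios -accept -install -dev /home/padmin/fixpack
$ shutdown -restart

ioslevel reports the running VIOS level, updateios applies the fix pack staged in the given directory, and the restart activates the new level.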

2. Implement dual (or quad) VIO servers

Having VIO server pairs provides greater redundancy and flexibility than a single VIO server. I typically implement two VIO servers per system, but I have seen four-VIOS implementations in which one pair handles network I/O and another pair handles disk I/O. I have also seen four-VIOS implementations in which one pair manages network and disk I/O for production partitions and another pair manages network and disk I/O for non-production partitions.

3. Implement a fully virtualized environment

Use full virtualization and shared CPU resources everywhere. This increases utilization without sacrificing security, performance, or scalability.

4. Add all virtual adapters to each VIO server (and client partitions) as desired

This will allow the most flexibility and will enable Dynamic LPAR operations on all virtual adapters.

5. Enable VIO servers for Live Partition Mobility (with PowerVM Enterprise Edition)

Live Partition Mobility lets you migrate partitions to a new platform by leveraging a fully virtualized environment (live migration requires PowerVM Enterprise Edition; with PowerVM Standard Edition a manual migration is still possible).
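
As an illustrative sketch (system and partition names are placeholders), a mobility operation can first be validated and then run from the HMC command line:

migrlpar -o v -m SOURCE_SYSTEM -t TARGET_SYSTEM -p LPAR_NAME
migrlpar -o m -m SOURCE_SYSTEM -t TARGET_SYSTEM -p LPAR_NAME

The first command validates that the partition can move between the two managed systems; the second performs the migration.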

6. “Right size” client partition CPU entitlements

Use virtual CPUs to cater for load spikes, combined with an entitled capacity close to average utilization where possible. For production workloads, it should be considered normal for the sum of virtual CPUs to be around 2x-5x the number of physical processor cores.
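
For example (the values and names here are hypothetical), a partition that averages around two cores but peaks near six could be given an entitlement of 2.0 processing units and six virtual CPUs in its HMC profile:

chsyscfg -r prof -m MANAGED_SYSTEM -i "name=normal,lpar_name=proddb1,desired_proc_units=2.0,desired_procs=6"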

7. Implement the following weight settings based on partition type

IBM recommends the following settings:

  • VIOS: 255
  • Production DB partition: 128
  • Production App/Web partition: 128
  • Dev/test DB partition: 25
  • Dev/test App/Web partition: 5

This gives the most important partitions the best opportunity to acquire available resources when they need them.
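
As a sketch (system, partition, and profile names are placeholders), the uncapped weight can be set in a partition profile from the HMC command line:

chsyscfg -r prof -m MANAGED_SYSTEM -i "name=normal,lpar_name=prodapp1,uncap_weight=128"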

8. Configure your systems with high-function network adapters.

Some examples of these type of adapters are EN0H, EN0J, EN11, EN15, EN16, EL38 and EL56.

Note that these cards DO NOT need to be connected to FCoE switches. They typically connect to regular Ethernet switches.

9. Optimize clock speed.

With firmware levels 740, 760, or 770 and above on POWER7 systems, and on all POWER7+ and POWER8 models, the ASMI interface includes the "favor performance" setting.

With POWER8 and HMC V8, this setting can be accessed directly from the HMC configuration panels; ASMI is not required. A new "fixed maximum frequency" option is also available.

Engage "favor performance" in the "Power Management Mode" menu as a mandatory modification for most environments.

This safely boosts system clock speed by 4-9%.
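
To confirm the mode change took effect, you can check the processor clock speed reported from AIX (output varies by model and AIX level; this is only a quick sanity check):

# pmcycles -m
# prtconf | grep "Processor Clock Speed"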

10. Configure your VIO servers with vNIC failover if SR-IOV adapters are present, or with SEA failover with load sharing if they are not.

There are currently four options for setting up virtual networks on your VIO servers:

  • SR-IOV
  • vNIC
  • vNIC failover
  • SEA failover with load sharing

SR-IOV: Shares parts of a dedicated network adapter among several partitions. Works with all current IBM AIX, IBM i and Linux distributions. Restrictions: prevents Live Partition Mobility and limits EtherChannel use. Maximum of 20 VMs per network port.

vNIC: SR-IOV enhanced with Live Partition Mobility support. No EtherChannel or bonding support. Maximum of 20 VMs per network port.

vNIC failover: Provides a server-side high availability solution (similar to SEA failover). In a vNIC failover configuration, a vNIC client adapter can be backed by multiple logical ports, preferably allocated from different SR-IOV adapters and hosted by different VIO servers to avoid a single point of failure. At any time, only one logical port is connected to the vNIC adapter. If the active backing device or its hosting VIOS fails, a new backing device is selected to serve the client. The active backing device is selected by the Power Hypervisor; in contrast to SEA failover, vNIC failover does not rely on any communication protocol between the backing devices. The Power Hypervisor acts as the decision maker because it has a complete view, and receives real-time status, of all the backing devices, and so is best placed to select the right logical port. Without a communication protocol to implement, vNIC failover is a simpler and more robust solution.

Shared Ethernet Adapter:

  • Deploy Shared Ethernet Adapter failover with load sharing to greatly reduce LPAR configuration complexity. Use link aggregation on the 1 Gbit or 10 Gbit ports on the VIO servers (a configuration sketch follows this list)
  • Create as many Shared Ethernet Adapters as deemed necessary from a performance, separation and security perspective
  • Connect and configure one virtual network adapter per VLAN on the client partition
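
A minimal sketch of creating the SEA with failover and load sharing on one VIOS (device names, the PVID and the control channel are placeholders; the partner VIOS is configured the same way with opposite trunk priorities):

$ mkvdev -sea ent0 -vadapter ent4 ent5 -default ent4 -defaultid 10 -attr ha_mode=sharing ctl_chan=ent6

Here ent0 is the physical adapter or link aggregation device, ent4 and ent5 are the trunk virtual Ethernet adapters, and ent6 is the control channel adapter.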

11. Tune your VIO server virtual ethernet adapters for optimum performance.

Large Send and Large Receive:

Enable largesend for each virtual Ethernet adapter and each SEA. This reduces CPU consumption and increases network throughput. It is not the default setting for SEA and virtual Ethernet interfaces and has to be adjusted manually on each VIOS and each virtual machine.

a) For all SEA ent(X) devices on all VIO servers, use "chdev".

b) For all SEA devices, the setting applied with chdev -l entX -a largesend=1 survives a reboot.

A "large_receive" parameter was introduced for 10 Gbit adapters. Enable large_receive on the SEA when using 10 Gbit network adapters.

a) For all SEA ent(X) devices on all VIO servers, use "chdev".

b) For all SEA devices, the setting applied with chdev -l entX -a large_receive=yes survives a reboot.
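
On the VIOS, these settings can also be applied from the padmin shell, and on AIX client partitions largesend over virtual Ethernet is enabled with the mtu_bypass attribute on the network interface. A hedged sketch (adapter names are placeholders; mtu_bypass assumes a reasonably current AIX level):

$ chdev -dev entX -attr largesend=1          (VIOS padmin shell, SEA device)
$ chdev -dev entX -attr large_receive=yes    (VIOS padmin shell, SEA device, 10 Gbit)
# chdev -l enX -a mtu_bypass=on              (AIX client partition, en interface)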

Hypervisor receive buffer:

For typical virtualized setups, the default hypervisor Ethernet receive buffers might become congested. The buffers are maintained per interface, and the defaults are the same for VIOS and client partition virtual Ethernet interfaces.

Default:

  Buffer Type    Tiny   Small   Medium   Large   Huge
  Min Buffers     512     512      128      24     24
  Max Buffers    2048    2048      256      64     64

Change to:

  Min Buffers    4096    4096     1024     128     96
  Max Buffers    4096    4096     1024     128     96

Performance is better when buffers are pre-allocated rather than allocated dynamically when needed. As a rule of thumb, increase the "Tiny", "Small", "Medium", "Large" and "Huge" minimum buffer settings to the maximum as a starting point on each VIOS virtual Ethernet interface.

For each virtual Ethernet interface on the VIOS and on large partitions, execute:

# chdev -l entX -a min_buf_small=4096 -a max_buf_small=4096 -P

Repeat for tiny, medium, large, and huge buffers.
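
For reference, a sketch of the remaining commands for one adapter, using the values from the table above (the -P flag stores the change in the ODM so it takes effect at the next reboot or device reconfiguration):

# chdev -l entX -a min_buf_tiny=4096 -a max_buf_tiny=4096 -P
# chdev -l entX -a min_buf_medium=1024 -a max_buf_medium=1024 -P
# chdev -l entX -a min_buf_large=128 -a max_buf_large=128 -P
# chdev -l entX -a min_buf_huge=96 -a max_buf_huge=96 -P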

12. Keep an eye on VIO server performance.

Periodically run VIOS Advisor.

The VIOS Performance Advisor tool provides advisory reports that are based on the key performance metrics on various partition resources collected from the VIOS environment.

Starting with Virtual I/O Server (VIOS) Version 2.2.2.0, you can use the VIOS Performance Advisor tool. Use this tool to provide health reports that have proposals for making configurational changes to the VIOS environment and to identify areas to investigate further. On the VIOS command line, enter the part command to start the VIOS Performance Advisor tool.
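
As a usage sketch, the advisor can monitor the system for a set interval (in minutes) and produce a report you can download and open in a browser:

$ part -i 30

This collects data for 30 minutes and writes a tar file containing the advisor report; part -f can instead post-process an existing nmon recording.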

The following are the types of advisory reports that are generated by the VIOS Performance Advisor tool:

  • System configuration advisory report
  • CPU (central processing unit) advisory report
  • Memory advisory report
  • Disk advisory report
  • Disk adapter advisory report
  • I/O activities (disk and network) advisory report

Mike Sly

Mike Sly, Solutions Architect, has over 25 years of experience with IBM. He specializes in AIX, PowerVM virtualization and PowerHA.

Like what you read? Follow Evolving Solutions on LinkedIn at https://www.linkedin.com/company/evolving-solutions
