Getting the Best Oracle performance on XtremIO

XtremIO is EMC’s all-flash, scale-out storage array designed to deliver the full performance of flash. The array is designed for 4k random I/O, low latency, inline data reduction, and even distribution of data blocks. This even distribution of data blocks leads to maximum performance and minimal flash wear. You can find all sorts of information on the architecture of the array, but I haven’t seen much talking about achieving maximum performance from an Oracle database on XtremIO.

The nature of XtremIO ensures that any Oracle workload (OLTP, DSS, or hybrid) will have high performance and low latency; however, we can maximize performance with some configuration options. Most of what I’ll be talking about is around RAC and ASM on Red Hat Linux 6.x in a Fibre Channel storage area network.

A single XtremIO X-Brick has two storage controllers, and each storage controller has two Fibre Channel ports. The best practice is to have two HBAs in your host and to zone each initiator to all targets in the fabric. This changes as you add X-Bricks to your configuration.

While we’re on the subject of HBAs, we need to set the LUN queue depth. If we have a single host connected to the X-Brick, we can set this to the maximum supported by the HBA (256 for QLogic and 128 for Emulex). Each host we add after that cuts the setting in half, until we reach the minimum of 32.
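
On Linux this is typically done through the HBA driver’s module options. Here’s a rough sketch for the single-host case; the file name is made up, and you should verify the qla2xxx/lpfc parameter names against your driver version and the XtremIO host configuration guide:

```
# /etc/modprobe.d/lun-queue-depth.conf (hypothetical file name)
# QLogic: single-host maximum of 256
options qla2xxx ql2xmaxqdepth=256
# Emulex: single-host maximum of 128
options lpfc lpfc_lun_queue_depth=128
```

Rebuild the initramfs and reboot (or reload the driver) for the change to take effect.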

For the number of disk groups we’ll be making a deviation from Oracle’s best practices. Oracle recommends separating disk groups into three parts: Data, FRA/Redo, and System. Due to the nature of redo, we’ll be dedicating a disk group to it. While the XtremIO array will perform great using a single LUN in a single disk group, we can use multi-threading and parallelism to maximize performance for our database. To that end it’s best to use 4 LUNs for the data disk group, allowing the host to use simultaneous threads at different queuing points. That means the RAC system will have 4 LUNs dedicated to control files and data files; 1 for redo; 1 for archive logs, flashback logs, and RMAN backups; and 1 for your system files.
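
From the ASM instance, that layout might look like the sketch below. The multipath device paths and disk group names are placeholders for your own, and external redundancy is used because the array already protects the data:

```sql
-- Data disk group striped across 4 LUNs for simultaneous queuing points
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xtremio_data1',
       '/dev/mapper/xtremio_data2',
       '/dev/mapper/xtremio_data3',
       '/dev/mapper/xtremio_data4';

-- Dedicated redo disk group
CREATE DISKGROUP redo EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xtremio_redo1';

-- Archive logs, flashback logs, and RMAN backups
CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xtremio_fra1';

-- System files
CREATE DISKGROUP sysdg EXTERNAL REDUNDANCY
  DISK '/dev/mapper/xtremio_sys1';
```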

Let’s talk about block size. Oracle’s default block size of 8k works just fine with XtremIO. This setting provides a great balance between IOPS and bandwidth, but it can be improved on in the right conditions. If your data rows fit into a 4k block you can see an IOPS improvement of over 20% by using a 4KB request size. If your rows don’t fit nicely in 4k blocks, it’s best to stick with the default setting.
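
If you do go the 4k route, the database needs a 4k buffer cache before you can create a 4k tablespace. A minimal sketch, with a made-up tablespace name and cache size:

```sql
-- Carve out a buffer cache for the non-default block size
ALTER SYSTEM SET db_4k_cache_size = 512M SCOPE=BOTH;

-- Create a 4k-block tablespace for tables whose rows fit in 4k
CREATE TABLESPACE small_rows_4k
  DATAFILE '+DATA' SIZE 10G
  BLOCKSIZE 4K;
```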

The default block size for Oracle redo logs is 512 bytes. This default causes redo log entries to be encapsulated in larger I/O requests that (likely) do not align to the 4k boundary. In order to avoid extra computational work and I/O subroutines on the array back-end, we need to set the redo log block size to 4k. In order to create redo logs with a non-default block size, you’ll need to add the option “_disk_sector_size_override=TRUE” to the parameter file of the database instance.
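
The whole sequence looks something like the sketch below. The group numbers, sizes, and the +REDO disk group name are examples, and as with any underscore parameter, it’s wise to involve Oracle Support before setting it:

```sql
-- Allow a non-default redo block size (takes effect after a restart)
ALTER SYSTEM SET "_disk_sector_size_override" = TRUE SCOPE=SPFILE;

-- After the restart, add redo log groups with a 4k block size
ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO') SIZE 2G BLOCKSIZE 4096;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO') SIZE 2G BLOCKSIZE 4096;

-- Then switch out of the old 512-byte groups and drop them
```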

Oracle controls the maximum number of blocks read in one I/O operation during a full scan using the DB_FILE_MULTIBLOCK_READ_COUNT parameter. This parameter is specified in database blocks, and the default generally works out to 1MB per I/O. Generally you set this value to the maximum effective I/O size divided by the database block size. If you have a lot of tables with a parallel degree set, you’ll want to drop the effective I/O size to 64k or 128k; with the default 8k block size, that means setting DB_FILE_MULTIBLOCK_READ_COUNT to 8 or 16.
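
For example, with an 8k block size the arithmetic and the setting (assuming an spfile) look like this:

```sql
-- 64KB / 8KB = 8   or   128KB / 8KB = 16
ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE=BOTH;
```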

These few settings will ensure you get the best performance from XtremIO in an Oracle RAC environment. Next post I’ll share my tips for getting the best possible deduplication ratio with Oracle and XtremIO. If you are interested in saving capacity with XtremIO, and who isn’t, check out this related post.
