In my last post I covered some settings to ensure you get the best performance from your XtremIO when running an Oracle workload. When running any type of workload on XtremIO it’s a given that it will perform well, but what about helping us manage capacity as well? XtremIO offers inline deduplication, which we can leverage to save capacity without sacrificing performance. Any LUN created on XtremIO will be thin provisioned at a 4k granularity. That’s all well and good, but we know Oracle and thin provisioning aren’t exactly best friends, so this won’t buy us a lot. XtremIO, however, does have inline global deduplication that we can leverage to our benefit. Like thin provisioning, this feature also operates at a 4k granularity. What that means is only unique 4k blocks consume physical capacity. How can we best use that to our benefit? The short answer is multiple copies of the same data. These copies can be… Read More →
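To make that idea concrete, here is a minimal Python sketch (nothing XtremIO-specific) of what 4k-granular deduplication means: hash every 4k block across a set of files and count how many are unique. The datafile names are hypothetical, and SHA-1 is used purely for illustration, not as the array’s actual fingerprinting scheme.

```python
import hashlib

BLOCK_SIZE = 4096  # XtremIO deduplicates at 4k granularity


def unique_block_count(paths):
    """Hash every 4k block in the given files and count the unique ones.

    A rough stand-in for what an inline global dedupe engine sees:
    only the unique 4k blocks would consume physical capacity.
    """
    seen = set()
    total = 0
    for path in paths:
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                total += 1
                seen.add(hashlib.sha1(block).digest())
    return total, len(seen)


# Hypothetical example: two identical copies of the same datafile
total, unique = unique_block_count(["users01.dbf", "users01_copy.dbf"])
print(f"logical 4k blocks: {total}, unique 4k blocks: {unique}")
print(f"approx. dedupe ratio: {total / unique:.1f}:1")
```

Two identical copies should come out at roughly 2:1, which is exactly why extra copies of the same data cost almost nothing in physical capacity on the array.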

XtremIO is EMC’s all-flash scale-out storage array designed to deliver the full performance of flash. The array is designed for 4k random I/O, low latency, inline data reduction, and even distribution of data blocks. This even distribution of data blocks leads to maximum performance and minimal flash wear. You can find all sorts of information on the architecture of the array, but I haven’t seen much about achieving maximum performance from an Oracle database on XtremIO. The nature of XtremIO ensures that any Oracle workload (OLTP, DSS, or hybrid) will have high performance and low latency; however, we can maximize performance with some configuration options. Most of what I’ll be talking about is around RAC and ASM on Red Hat Linux 6.x in a Fibre Channel storage area network. A single XtremIO X-Brick has two storage controllers. Each storage controller has two Fibre Channel ports. Best practice is to have two HBAs in your host and… Read More →
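To picture the path fan-out that zoning buys you, here’s a quick Python sketch (the port labels are hypothetical) enumerating the initiator/target combinations for a dual-HBA host zoned to every port on a single X-Brick: two HBAs against four array ports gives eight paths per LUN, spread evenly across both storage controllers.

```python
from itertools import product

# Hypothetical labels: one X-Brick has two storage controllers,
# each with two Fibre Channel ports; the host has two HBAs.
hba_ports = ["hba0", "hba1"]
array_ports = ["X1-SC1-fc1", "X1-SC1-fc2", "X1-SC2-fc1", "X1-SC2-fc2"]

# Zoning every HBA to every array port yields 2 x 4 = 8 paths per LUN.
paths = list(product(hba_ports, array_ports))
for initiator, target in paths:
    print(f"{initiator} -> {target}")
print(f"total paths per LUN: {len(paths)}")
```

How you actually carve that up into zones depends on your fabric design; the point is simply that both storage controllers end up serving I/O for every LUN.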

SMB Change Notification is a concept that allows clients to keep up with file and directory changes. The idea is to prevent clients from seeing stale content or having to constantly refresh their view. The server looks for changes to files and directories and, when it detects one, sends a notification to inform the client of the change. Isilon supports three settings related to change notification. The first of these, and also the default, is “All”. With this setting, Isilon will send unnecessary change notifications to far too many clients. Why? Consider a scenario where 300 users connect to an Isilon share called “Applications”. This share has a folder structure with a depth of 6 folders. Let’s say a user goes into the deepest folder, “Folder 6”, and changes a file on the share. The server will notice the change and attempt to notify the clients. When “All” is set, all… Read More →
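Here’s a rough Python model of that fan-out, not Isilon’s actual implementation: with a recursive (“All”-style) watch, every client watching an ancestor directory is notified about a change six levels down, while a non-recursive setting only notifies clients watching the changed folder itself. The watcher counts and paths are made up for illustration.

```python
# Toy model of SMB change-notify fan-out. Clients register a watch on the
# directory they have open; a recursive setting also notifies clients
# watching any ancestor of the changed directory.

watchers = {
    "/Applications": 300,                          # users browsing the share root
    "/Applications/F1/F2/F3/F4/F5/Folder6": 3,     # users working in the deep folder
}

changed_dir = "/Applications/F1/F2/F3/F4/F5/Folder6"


def notified(watchers, changed_dir, recursive):
    count = 0
    for watched, clients in watchers.items():
        if watched == changed_dir:
            count += clients
        elif recursive and changed_dir.startswith(watched + "/"):
            count += clients
    return count


print("recursive ('All'): ", notified(watchers, changed_dir, recursive=True))   # 303 clients
print("non-recursive:     ", notified(watchers, changed_dir, recursive=False))  # 3 clients
```

The gap between 303 and 3 notifications for a single file change is the problem the other two settings are meant to address.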

I’ve always been an EMC Celerra guy since I cut my teeth on it so many years ago, and its support for de-duplication (single instancing) left a lot to be desired, mainly that it could not work across filesystems. When I first started investigating Isilon I had high hopes for de-dupe across the entire array because it didn’t have separate file systems. Today’s announcements finally bring that idea to light. It is certainly not a surprise that OneFS will support de-dupe, but the fact that it works across the entire IFS is a huge benefit. Data written to the cluster will be written at full size, but a post-process job will de-dupe objects and files using an 8k block size. EMC is suggesting you’ll see a 30% reduction in storage consumption, but your mileage will vary. This is great news for the Isilon TCO; it’s already low overhead in raw… Read More →
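As a quick back-of-the-envelope, here’s what that suggested reduction would look like in Python; the cluster size is made up and the 30% figure is vendor guidance, not a guarantee.

```python
# Rough post-dedupe capacity estimate using the suggested ~30% reduction.
logical_tb = 200.0          # data currently written to the cluster (hypothetical)
suggested_reduction = 0.30  # EMC's suggested reduction; your mileage will vary

physical_tb = logical_tb * (1 - suggested_reduction)
print(f"logical: {logical_tb:.0f} TB -> physical after dedupe: {physical_tb:.0f} TB "
      f"(saves {logical_tb - physical_tb:.0f} TB)")
```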