You can achieve a 5-to-1 reduction in disks, power, and space because Avere separates performance scaling from capacity scaling and more efficiently delivers both. Avere OS moves your application’s active data to FXT Edge filers and as a result minimizes the data access requirements on your Core filer. Because your Core filer is lightly loaded, it can be optimized for cost, power, and space by using low-power and high-density SATA disks.
By contrast, the performance of traditional NAS is limited by the quantity and performance of the attached disks. For even modest application performance requirements, traditional NAS uses large numbers of high-performance FC or SAS disks. This results in costly over-provisioning of storage capacity, inefficient use of limited data center space, and excessive power consumption.
Our posting of SPECsfs2008_nfs.v3 performance results provides a good example of the disk, power, and space savings that can be achieved with Avere. As of February 9, 2011, the SPECsfs2008 website lists nine solutions that achieved greater than 120,000 ops/sec throughput. See the table below for the complete list. The table also includes the number of disks used by each vendor and a calculation of ops/sec per disk, the best measure of the performance per dollar each vendor delivers. Based on these results, Avere delivers its performance with not five but seven times fewer disks than the other vendors.
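The ops/sec-per-disk metric is simple to compute yourself. Below is a minimal sketch of the calculation; the vendor names and figures are hypothetical placeholders, so substitute the published numbers from the spec.org table.

```python
# Illustrative ops/sec-per-disk calculation; all figures below are
# hypothetical placeholders, not published SPECsfs2008 results.
results = {
    # system: (reported ops/sec throughput, total disks in system under test)
    "Vendor A": (130_000, 1_040),
    "Vendor B": (125_000, 900),
    "Hybrid system": (131_000, 150),
}

for system, (ops, disks) in results.items():
    print(f"{system}: {ops / disks:,.1f} ops/sec per disk")
```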
Power and space savings with Avere are even more dramatic than the disk savings. A closer look at the results shows that all the other vendors use only high-power, low-capacity 15k FC/SAS disks for storing data. Avere, on the other hand, uses a more efficient, hybrid approach. In the Avere test runs, the Core filer is optimized for capacity and uses only low-power, high-capacity SATA disks. The Avere FXT Edge filers, which are optimized for performance, are the only place where 15k SAS disks are used.
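To see why the hybrid approach saves power, consider a rough back-of-the-envelope comparison. The sketch below assumes typical per-drive power draws and hypothetical disk counts; it is illustrative only, not a vendor specification.

```python
# Rough power comparison of an all-15k configuration vs. a hybrid one.
# Wattages and disk counts are assumptions for illustration only.
FC_15K_WATTS = 15.0   # assumed typical draw of a 15k FC/SAS drive
SATA_WATTS = 8.0      # assumed typical draw of a high-capacity SATA drive

def array_power_watts(disks: int, watts_per_disk: float) -> float:
    """Aggregate drive power for an array of identical disks."""
    return disks * watts_per_disk

# Hypothetical configurations delivering comparable throughput:
all_15k = array_power_watts(1_000, FC_15K_WATTS)
hybrid = array_power_watts(48, FC_15K_WATTS) + array_power_watts(96, SATA_WATTS)

print(f"all-15k: {all_15k / 1000:.1f} kW, hybrid: {hybrid / 1000:.2f} kW")
```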
For an estimate on the disk, power, and space reduction you can achieve with Avere, please consult the Avere savings calculator.
SPEC® and the benchmark name SPECsfs®2008 are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results stated above reflect results published on www.spec.org as of February 9, 2011. Above we compare all SPECsfs2008_nfs.v3 results that achieved greater than 120k ops/sec throughput. For the comparison we calculate ops/sec per disk by dividing the reported ops/sec throughput by the total number of disks used in the system under test. For the latest SPECsfs2008 benchmark results, visit http://www.spec.org/sfs2008.
There are two dimensions to consider when evaluating performance. First, there’s the application workload. Avere solutions provide high performance across a wide range of workloads by accelerating read, write, and metadata operations. In addition, these operations are accelerated across the full range of access patterns, including random access to small files, sequential access to large files, and a mix of both.
The second dimension is the amount of performance (in ops/sec) or throughput (in MB/sec) required for a given workload. The Avere FXT Series meets the needs of most applications because it provides high performance on a single node and scales performance linearly as nodes are added to a cluster. In small-file, random-access tests, FXT clusters can achieve millions of ops/sec. In large-file, sequential tests, FXT clusters can achieve tens of gigabytes/sec of throughput.
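A simple way to reason about this scaling is to model cluster throughput as single-node throughput times node count times a scaling-efficiency factor. The sketch below is a minimal estimator under that assumption; the input values are hypothetical, so measure your own single-node throughput and efficiency for real sizing.

```python
# Minimal linear-scaling estimator. The inputs below are hypothetical;
# measure single-node throughput and scaling efficiency for your workload.
def cluster_throughput(single_node_ops: float, nodes: int,
                       efficiency: float = 1.0) -> float:
    """Estimated cluster ops/sec if each added node contributes
    `efficiency` times the single-node throughput."""
    return single_node_ops * nodes * efficiency

# e.g. a node doing 20,000 ops/sec, scaled to 8 nodes at 98% efficiency:
print(f"{cluster_throughput(20_000, 8, efficiency=0.98):,.0f} ops/sec")
```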
Our posting of SPECsfs2008 results provides a good example of how the FXT Series provides high performance, low latency, and linear performance scaling through clustering. SPECsfs2008 is an excellent benchmark for testing the throughput and latency of read, write, and metadata operations on an NFS file server. SPECsfs2008 emulates a typical, large-scale file server environment and executes a mix of operations as follows: 18% read data, 10% write data, 25% read directory, 2% write directory, 41% read metadata, 4% write metadata. Among all SPECsfs2008_nfs.v3 results in the neighborhood of 20,000 ops/sec, the Avere FXT 2500 (1 Node) result, 22,025 ops/sec with 1.30ms overall response time (ORT), provides the lowest latency and uses the fewest disks, demonstrating Avere as an efficient solution for accelerating read, write, and metadata performance. The Avere FXT 2500 (6 Node Cluster) result, 131,591 ops/sec with 1.38ms ORT, demonstrates the linear performance scaling achieved when clustering FXT nodes. Comparing the six-node result to the one-node result on a per-node basis, each of the six nodes delivers 99.6% of the single-node throughput with only a 6% increase in latency. For more discussion of SPECsfs2008 performance, see the Avere blog posting SPECsfs2008 - A Year in Review.
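These per-node scaling figures follow directly from the published numbers. A quick check, using only the results quoted above:

```python
# Recomputing the scaling figures from the published FXT 2500 results.
one_node_ops, one_node_ort = 22_025, 1.30      # ops/sec, ms ORT
six_node_ops, six_node_ort = 131_591, 1.38     # ops/sec, ms ORT

per_node_retention = (six_node_ops / 6) / one_node_ops
ort_increase = six_node_ort / one_node_ort - 1

print(f"per-node throughput retained: {per_node_retention:.1%}")  # 99.6%
print(f"ORT increase: {ort_increase:.1%}")                        # 6.2%
```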
The best way to size an Avere system for your performance requirements is to work with an Avere systems engineer. To contact an Avere systems engineer, please submit a request.
SPEC® and the benchmark name SPECsfs®2008 are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results stated above reflect results published on www.spec.org as of October 12, 2009. For the latest SPECsfs2008 benchmark results, visit http://www.spec.org/sfs2008.
The FXT 4000 Series was designed to accelerate performance in two types of NAS environments. The first type is environments where high-performance (10k or 15k) FC or SAS disks are already in use but cannot keep up with the performance demand. The FXT 4000 Series complements these environments by adding even higher-performance solid-state storage tiers, including up to 144GB of DRAM and up to 3TB of Flash SSD per FXT node. Adding an FXT 4000 Series cluster as a consolidated SSD tier to your NAS environment is less expensive, less disruptive, and more scalable than adding Flash storage directly into each of your NAS filers. Up to 50 FXT 4000 Series nodes can be clustered, providing up to 7.2TB of DRAM and 150TB of Flash SSD.
The second type is environments where the primary application workload is random IO against a large amount of data. Each FXT 4000 Series node can support up to 3TB of randomly accessed data, making it the best performance/$ solution for such environments. Core filers with SATA disks can be used in this case, provided the FXT 4000 Series cluster is sized to store the complete working set so that accesses to the Core filer are minimized.
Selection between the FXT 4200 and FXT 4500 models depends on the size of your application's working set. The FXT 4200 supports a working set of 1.6TB raw per node, scaling in 1.6TB increments to a maximum working set of 80TB raw in a 50-node cluster. The FXT 4500 supports a working set almost twice as large: 3TB raw per node, scaling in 3TB increments to a maximum working set of 150TB raw in a 50-node cluster.
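Given those per-node capacities, a first-pass node count for a known working set is simple division rounded up. The sketch below is a minimal sizing aid under that assumption; an Avere systems engineer should validate any real configuration.

```python
# First-pass FXT 4000 Series sizing: nodes needed for a given working set.
# A rough aid only; it ignores redundancy, growth, and workload mix.
import math

PER_NODE_TB = {"FXT 4200": 1.6, "FXT 4500": 3.0}  # raw working set per node
MAX_NODES = 50                                     # cluster maximum

def nodes_needed(working_set_tb: float, model: str) -> int:
    nodes = math.ceil(working_set_tb / PER_NODE_TB[model])
    if nodes > MAX_NODES:
        raise ValueError(f"{model}: working set exceeds a {MAX_NODES}-node cluster")
    return max(nodes, 2)  # an Avere cluster is two or more FXT nodes

print(nodes_needed(24, "FXT 4200"))  # 15 nodes
print(nodes_needed(24, "FXT 4500"))  # 8 nodes
```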
The FXT 3000 Series is the best performance/$ choice for most other datacenter applications and the primary product line to use when implementing a cloud solution with Avere.
In the datacenter, the FXT 3000 Series is ideal for NAS environments where many high-capacity SATA disks are in use. The FXT 3000 Series supports large working sets, which typically results in many "cache hits" on the FXT cluster and less performance demanded of the backend SATA storage. The FXT 3000 Series is an especially good fit when the application workload is not entirely random IO but rather a mix of large sequential IO and random IO.
The FXT 3000 Series is also well suited for cloud solutions. The primary role of the FXT cluster in cloud applications is to store data at remote locations, provide remote clients with low-latency access to that data, and avoid the need for data to traverse the high-latency WAN back to the datacenter. Cache hits on the FXT cluster at the remote site result in very low (<0.1msec) latency because data traverses only the LAN at the remote site. Cache misses, on the other hand, result in latency that is often 100 to 1,000 times greater (10-100msec) because data must traverse the WAN back to the datacenter. The FXT 3000 Series supports large working sets and therefore increases cache hit rates and reduces accesses over the high-latency WAN.
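The effect of the hit rate on what clients actually experience can be seen with a simple weighted average of the LAN and WAN latencies above. A minimal sketch, assuming a 50ms WAN round trip as the midpoint of the 10-100msec range:

```python
# Effective remote-site latency as a weighted average of LAN-speed cache
# hits and WAN-speed misses. wan_ms = 50 is an assumed midpoint of the
# 10-100 msec range cited above; adjust for your actual WAN.
def effective_latency_ms(hit_rate: float,
                         lan_ms: float = 0.1,
                         wan_ms: float = 50.0) -> float:
    return hit_rate * lan_ms + (1 - hit_rate) * wan_ms

for rate in (0.90, 0.99, 0.999):
    print(f"hit rate {rate:.1%}: {effective_latency_ms(rate):.2f} ms")
```

Even the step from a 99% to a 99.9% hit rate cuts average latency by roughly 4x in this model, which is why sizing the FXT cluster to hold the full working set matters so much at remote sites.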
Selection between the FXT 3200 and FXT 3500 models depends on the size of your application's working set. The FXT 3200 supports a working set of 4.8TB raw per node, scaling in 4.8TB increments to a maximum working set of 240TB raw in a 50-node cluster. The FXT 3500 supports a working set almost twice as large: 9TB raw per node, scaling in 9TB increments to a maximum working set of 450TB raw in a 50-node cluster. The same divide-and-round-up sizing arithmetic shown above for the FXT 4000 Series applies here, with these per-node capacities.
The best way to select an FXT model for your requirements is to work with an Avere systems engineer. To contact an Avere systems engineer, please submit a request.
How Does It Work?
An Avere cluster of two or more FXT nodes is placed at the edge near the clients and provides access to NAS filers in the core of the storage network. The FXT cluster performs read/write cache functions, dynamically tiers data across storage devices independent of location, and creates a global namespace across it all.
Each Avere FXT Edge filer contains multiple types of storage media. As clients read or write data, algorithms examine how frequently the data is accessed, the characteristics of the data, and how it is accessed (i.e., randomly or sequentially). Blocks of data are then placed on the optimal tier within the FXT Edge filer for delivery to clients or, when access is infrequent, sent to the Core filers. As demand patterns change, data is moved in real time from one tier to another.
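To make the idea concrete, here is a deliberately simplified placement heuristic in the spirit of the tiering just described. The thresholds and decision logic are illustrative assumptions, not Avere OS's actual algorithm.

```python
# A toy tier-placement heuristic based on access frequency and pattern.
# Thresholds are invented for illustration; this is not Avere's algorithm.
def place_block(accesses_per_hour: float, sequential: bool) -> str:
    """Choose a tier for a block from its heat and access pattern."""
    if accesses_per_hour >= 100:
        return "DRAM"                    # hottest data served from memory
    if accesses_per_hour >= 10:
        # SSD excels at random IO; sequential runs stream well from SAS
        return "SAS" if sequential else "SSD"
    if accesses_per_hour >= 1:
        return "SAS"
    return "Core filer (SATA)"           # cold data stays on the Core filer

print(place_block(250, sequential=False))  # DRAM
print(place_block(12, sequential=True))    # SAS
print(place_block(0.1, sequential=False))  # Core filer (SATA)
```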