Relieving I/O Bottlenecks with Clustered Storage

In the quest to provide better customer service, many companies now use automated technical support software to cut costs and respond quickly to customers. One such cutting-edge application is real-time speech recognition: when a customer calls in to say, “I need help,” the system understands the request and quickly routes the caller to a solution.

One large telecommunications company is using historical customer calls to improve the accuracy of their speech recognition software. Says the company’s principal technical architect:

“Eventually we plan to have speech recognition for all the technical support calls on our network, so in real time we can find patterns of words and get customers routed quickly to the help they’re seeking. The faster and more accurate we are, the happier our customers will be.”

To consolidate storage and simplify management of the research system, the company shares the stored voice data that the speech-recognition software analyzes with other applications. But with hundreds of technology researchers and a compute farm of more than 100 nodes accessing the shared data, problems emerged. Because all of the applications used the same storage pool and NFS servers, running the speech-recognition application often brought down the servers, affecting every application.

“We wanted to run 800 jobs simultaneously per cluster, but NFS performance limited the number of jobs we could run. Typically we could run only 50 jobs at a time. The I/O bottleneck just kept us from getting there.”

They investigated several open-source, scalable file systems to create a single storage pool and file system across multiple physical servers, but ran into performance problems with small-file workloads due to metadata limitations. Those problems could be overcome by copying the small-file data sets to specialized servers with direct-attached storage (DAS), but that approach introduced additional management and data-sharing problems and unnecessary complexity.

To improve performance, keep management simple, and leverage their existing storage servers, they added a three-node cluster of Avere FXT 2550s between the applications and the data (see Figure 1).

“The tiered storage of the Avere cluster allows us to complete twice as many jobs as before, reduces demand on the NFS filers, and lets them keep operating with more traffic. Now we can run speech recognition without affecting the performance of other applications and still use NFS for management simplicity.

“Avere was the natural choice—I already had existing storage, a file system, and the data. I didn’t want to have to build a new system totally from scratch. Avere was the simplest way to boost performance without disturbing our existing environment.”

Avere has provided several other advantages as well, such as reducing the ongoing cost of the entire system and helping to visualize the operations of the entire environment.

“Previously, we had to decide which type of disk to use, sometimes low-cost SATA, sometimes more expensive Fibre Channel. But now with Avere, going forward we will use all low-cost SATA to reduce the total cost of the system.

“The Avere user interface is very useful and provides a powerful management tool. Before Avere, I didn’t know which files or clients were hottest, so I had to do some research to figure it out. Now, with the Avere UI it’s very easy to see what is happening and take action to alleviate the problem. 

“Looking ahead, demand on this system and storage will only increase. The FXT nodes are currently being used to improve access to 10TB of storage, but plans are in the works for 20 to 40 more terabytes supporting more projects. Avere can easily scale to maintain that performance with its clustered architecture designed for scalability. As I add new nodes to the cluster, my performance scales linearly.”

Other research teams within the service provider organization are impressed with the results delivered by Avere. 

“The I/O bottleneck is a common problem for other projects at our company,” says the principal technical architect. “I expect Avere to be able to help there too.”

Customer Challenges

  • Poor NFS performance limited the number of jobs
  • Constant I/O bottleneck
  • Server uptime impacted

Avere Benefits

  • The tiered storage of the Avere cluster enables the completion of twice as many jobs, reduces demand on NFS filers and lets the storage system operate with more traffic.
  • Future capacity growth using all low-cost SATA drives
  • The Avere UI quickly identifies hot clients and files

About Telecom Service Provider

Automated technical support software enables many service providers to cut costs and provide quick service to their customers. One telecom service provider uses Avere FXT appliances to improve application performance and reduce storage complexity.