Red Hat is outlining its five big data "must haves" with a new strategy that advances its plans for cloud-based big data analytics workloads over the next year or so. With a new Apache Hadoop plugin that connects Red Hat's storage products to open Hadoop platforms, the company aims to help partners offer customers a comprehensive big data solutions portfolio.

"Red Hat is uniquely positioned to excel in enterprise big data solutions, a market that IDC expects to grow from $6 billion in 2011 to $23.8 billion in 2016.2 Red Hat is one of the very few infrastructure providers that can deliver a comprehensive big data solution because of the breadth of its infrastructure solutions and application platforms for on-premises or cloud delivery models. As a leading contributor to open source communities developing essential technologies for the big data IT stack – from Linux to OpenStack Origin and Gluster – Red Hat will continue to play a pivotal role in in Big Data," said Ashish Nadkarni, research director of storage systems and co-lead of Big Data Global Overview at IDC, in a prepared statement.

One of the biggest aspects of the announcement is a Hadoop plugin for the Red Hat Storage portfolio that will be available to customers later this year. According to Red Hat, the technology is currently in "preview" and will provide new storage options for enterprise Hadoop deployments. That approach fits a company that has focused on open source computing since its founding. Red Hat plans to add enterprise storage features while maintaining the API compatibility and local data access the Hadoop community expects.
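
That API compatibility matters because it means existing Hadoop jobs should not need rewriting when the underlying storage changes: the backend is swapped in through configuration, not code. The Java sketch below illustrates the idea by listing a directory through the standard Hadoop FileSystem API. Note that the glusterfs:// scheme, the storage-node hostname, and the GlusterFileSystem implementation class are assumptions about how such a plugin might register itself, not confirmed details of Red Hat's preview.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GlusterListing {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical: point Hadoop at a Gluster-backed volume instead of HDFS.
            conf.set("fs.defaultFS", "glusterfs://storage-node:9000");
            // Hypothetical implementation class; an actual plugin would ship its own.
            conf.set("fs.glusterfs.impl",
                     "org.apache.hadoop.fs.glusterfs.GlusterFileSystem");

            // Everything below is the stock Hadoop FileSystem API. The code is
            // identical whether the backend is HDFS or a plugin-provided store,
            // which is the compatibility the plugin is meant to preserve.
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/data"))) {
                System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
            }
            fs.close();
        }
    }

The takeaway is that the application code touches only stock Hadoop interfaces; only the two configuration properties change when the storage layer does.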

The new features will connect into Red Hat Enterprise Virtualization 3.1, which was announced in December 2012, as well as Red Hat JBoss Middleware. The company is also thinking about channel partners with this latest announcement, hoping to equip them with big data solutions they can take to customers.