Your first Big Data success: Why choosing the right use case matters

May 6, 2015 Ben Sharma


Why choosing the right use case is the difference between Hadoop success and failure.

Here’s a typical scenario: you finally get the green light to implement Hadoop, and you want to “go big” and prove to everyone that Hadoop is a good investment. So your team chooses to tackle the most complex business problem it can find, in order to demonstrate how powerful Hadoop can be at delivering new, business-transforming insights.

While this impulse is understandable – using Hadoop for big data is exciting stuff – we suggest instead that you go small. Very small. Start with one low-key business problem and a few easily accessible datasets. If you don’t, you could unknowingly be heading down the path to Hadoop failure.

Here’s why. In our experience, we’ve worked with very smart, ambitious partners who had brilliant ideas for use cases. The problem: permission to use the data took six months to arrive, and by that time the project was dead in the water. We’ve also seen projects so complex that they took nearly a year to demonstrate results.

When implementing Hadoop, the key is to score some quick wins by showing business value (e.g., cost savings or new insights) on smaller projects. This will bolster confidence in your Hadoop data strategy and motivate business partners to come on board.

Do I really need a use case at all?

If you’re asking, “Why do I need a use case?”, let’s back up. You should always build your Hadoop strategy around a specific business problem. Too often, companies build the Hadoop cluster first and only later figure out what business problems they’ll solve with it. We call this the Field of Dreams approach.

However, what often happens on the Field of Dreams is that you spend the resources to build it – and they don’t come. And for a very good reason: your business partners want proof that their time and money will show a return on investment, so they typically don’t want to be the first Hadoop guinea pig.

That’s why my strong advice for Hadoop is to go big by first “going small.” By starting small – with the support of one business partner, one manageable business problem, and one dataset – you can quickly show success. Building from there, you’ll have the evidence you need to bring other business partners along.

Relevant reading: see our recent post on how to prepare your IT team for Big Data success.


Let us help you choose your first use case!


About the Author

Ben Sharma

Ben Sharma is CEO and co-founder of Zaloni. He is a passionate technologist with experience in business development, solutions architecture, and service delivery of big data, analytics, and enterprise infrastructure solutions. Having previously held management positions at NetApp, Fujitsu, and others, Ben’s expertise ranges from business development to production deployment across a wide array of technologies, including Hadoop, HBase, databases, virtualization, and storage. Ben is the co-author of Java in Telecommunications and holds two patents. He received his MS in Computer Science from the University of Texas at Dallas.

