Today we continue our Meet Zaloni series with Raj Nadipalli, Senior Software Development Manager.
Raj is originally from Mumbai, India, where he developed his passion for cricket and watching Bollywood movies. He currently resides in Morrisville, North Carolina with his wife and two-year-old daughter. He is an active volunteer for several local non-profit organizations, and donates blood on a regular basis.
Prior to moving to the US, Raj received his Bachelor's degree in Electrical Engineering from Mumbai University. He then moved to the US, where he received his MBA from North Carolina State.
Raj spent the early part of his career as a Programmer at CMC in Mumbai, India. Following that, he spent time as a Software Consultant for GE Healthcare, and prior to Zaloni he was a Technical Leader for Cisco Systems. He joined Zaloni in 2012 as a Senior Software Development Manager.
Raj is the author of HDInsight Essentials (1st edition 2012; 2nd edition 2015).
He has been a Certified Oracle Database Administrator for 5 years.
I spoke with Raj recently and he shared his thoughts on his career, Zaloni, Bedrock, and what he’s expecting from his team in 2015.
Q. Where did you begin your career?
I started my software career in 1996 at CMC, Mumbai, India. My first job was to build a dividend processing system that printed and mailed checks to clients of the firm.
Q. When did you join Zaloni?
I joined Zaloni in July 2012.
Q. Please tell us about your role at Zaloni
My current role is Senior Software Development Manager for the Bedrock product. My responsibilities include driving technical architecture, collaborating with Product Management to define the product vision, coordinating with clients on release schedules, managing resources, and mentoring employees for career development. Additionally, I deliver Big Data training for our clients on Hive, Pig, and MapReduce technologies.
Q. What does a typical day in your life at Zaloni look like?
My work day starts with a 7:30 AM daily scrum call with the development team. This is followed by a series of additional meetings with the development leads and managers in the India Development Center. During the afternoon hours, I typically have working sessions with architects in the US regarding new feature development. Additionally, depending on where we are in the release cycle, I arrange sessions with Product Management, the CEO, VPs, and other architects.
Q. What advantages do you find to working at a smaller startup vs a large enterprise organization?
Before Zaloni, I worked for Cisco IT, and there were several things that were different. The key benefit of a small organization is the flexibility and freedom. Additionally, because it is a startup, the company attracts a lot of new college graduates who bring in fresh perspectives.
Q. Bedrock manages the ingestion, organization and preparation stages of data management. Can you break each of those stages down for us?
The best way to understand this is in context: as organizations move from relational environments to Hadoop, Bedrock is positioned as the single tool that reduces the pain of that transition.
Bedrock has a simple user interface to configure how you want to load data into Hadoop. This is called "ingestion".
Bedrock then provides an inventory view and allows the user to organize data in the new Data Lake. This allows consumers to later find what they are looking for. This is "organization".
Finally, Bedrock allows data in the Data Lake to be transformed into new useful insights. This data can then be used for reporting or further analysis. This is "preparation".
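The three stages described above can be sketched in miniature. This is a hypothetical illustration in plain Python on toy data; the names (`data_lake`, `inventory`, the `raw/sales` path) are made up for the example and are not Bedrock's actual API.

```python
# Toy raw input, as it might arrive from a source system.
raw_records = ["2015-01-05,acme,1200", "2015-01-06,globex,950"]

# "Ingestion": load the raw records into the (toy) data lake.
data_lake = {"raw/sales": [line.split(",") for line in raw_records]}

# "Organization": register an inventory entry so consumers can
# later find the dataset and understand its shape.
inventory = {
    "raw/sales": {
        "columns": ["date", "customer", "amount"],
        "rows": len(data_lake["raw/sales"]),
    }
}

# "Preparation": transform the raw data into a derived dataset
# that is ready for reporting or further analysis.
data_lake["prepared/sales_by_customer"] = {
    customer: sum(int(amt) for _, c, amt in data_lake["raw/sales"] if c == customer)
    for _, customer, _ in data_lake["raw/sales"]
}

print(inventory["raw/sales"]["rows"])                   # 2
print(data_lake["prepared/sales_by_customer"]["acme"])  # 1200
```

The point of the sketch is only the separation of concerns: loading, cataloging, and transforming are distinct steps over the same store.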
Q. What makes Bedrock unique?
Bedrock is a single tool that enables data providers to publish data, data consumers to extract data, the operations team to manage data pipelines, and the data governance team to manage quality.
This is a unique combination and is not present in any single tool in the marketplace. There are tools that only handle certain aspects of what Bedrock does.
Q. How does a solution like Bedrock provide a competitive advantage for a customer?
It reduces the time to develop and implement a production Hadoop-based solution. Additionally, it provides consistency across development and operations, and avoids one-off implementations that become hard to manage over time.
Q. What are some common use cases you've seen?
Enterprise Data Lake and RDBMS offload are the two most common use cases I have seen with my clients.
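To make the RDBMS-offload use case concrete, here is a hypothetical sketch: rows are copied out of a relational table into one flat file per partition key, the kind of output a Hadoop cluster could then ingest. It uses Python's built-in `sqlite3` as a stand-in source, and in-memory buffers in place of HDFS files; the `orders` table and region names are invented for the example.

```python
import csv
import io
import sqlite3

# Stand-in relational source with a small orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 10.0), (2, "west", 20.0), (3, "east", 5.0)],
)

# Offload each region to its own "file" (an in-memory buffer here),
# mimicking one CSV file per partition in the data lake.
partitions = {}
for (region,) in conn.execute("SELECT DISTINCT region FROM orders"):
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in conn.execute(
        "SELECT id, region, total FROM orders WHERE region = ?", (region,)
    ):
        writer.writerow(row)
    partitions[region] = buf.getvalue()

print(sorted(partitions))  # ['east', 'west']
```

In a real offload the partitioned extracts would land in HDFS and be registered in the lake's inventory rather than held in memory.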
Q. What do you hope your team will accomplish in 2015?
We have a good foundation on the ingest, organize, and prepare stages for structured datasets. In 2015, we will focus on unstructured datasets, provide lineage of data as it goes through the various stages, and improve self-service capabilities for teams extracting data out of the Data Lake.