We’re excited to announce that Bedrock 3.1 is now available! We’ve been working hard since the release of Bedrock 3.0 to bring 3.1 to you. Bedrock 3.1 delivers new data quality capabilities, operational improvements, and security enhancements.
A few highlights of Bedrock 3.1 include:
Hadoop Data Quality feature set: The Bedrock managed data pipeline now checks data for compliance with a user-defined set of quality rules, strengthening Bedrock’s ability to ensure that data in a Bedrock-managed Hadoop data lake is always organized, prepared, and ready for users to consume.
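Conceptually, rule-based quality checking works by applying each user-defined rule to every record and reporting the failures. The sketch below is a minimal illustration of that general pattern; the rule names, record fields, and helper function are hypothetical, not Bedrock’s actual rule format:

```python
# Illustrative sketch of rule-based data quality checking (hypothetical
# layout, not Bedrock's implementation): each rule is a named predicate
# applied to every record, and failing records are collected per rule.

def check_quality(records, rules):
    """Return a dict mapping rule name -> list of records that fail it."""
    failures = {name: [] for name in rules}
    for record in records:
        for name, predicate in rules.items():
            if not predicate(record):
                failures[name].append(record)
    return failures

# Example user-defined rules (hypothetical field names).
rules = {
    "id_not_null": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

records = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 2, "amount": -3.0},
]

failures = check_quality(records, rules)
```

In a managed pipeline, a per-rule failure report like this is what lets bad records be quarantined or flagged before the dataset is published to consumers.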
Hadoop Inventory Dashboard: Finding and managing access to datasets that have been ingested into Hadoop can become difficult, or in some cases impossible, if a Hadoop inventory is not captured and made available to data consumers. The Bedrock Inventory Dashboard now resolves this challenge by providing a familiar faceted search interface with both graphical and tabular views of Hadoop data inventory.
Spark and SparkSQL Support: Bedrock users now have full support for Spark and SparkSQL in their data preparation workflows. This brings real-time, in-memory Spark processing into the array of Hadoop workflow options available to the Bedrock user.
Bedrock CDC (Hadoop Change Data Capture): As incremental updates are ingested into Hadoop, they often need to be integrated into existing datasets. Bedrock Hadoop CDC is completely metadata driven and automates the entire process of ingesting and integrating incremental updates to Hadoop datasets.
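A metadata-driven CDC merge generally means the dataset’s metadata (for example, its primary key) drives how change records are applied to the existing data. The sketch below illustrates that general upsert/delete pattern in plain Python; the metadata layout, the `op` field, and the field names are hypothetical, not Bedrock’s implementation:

```python
# Illustrative sketch of metadata-driven change data capture (hypothetical
# layout, not Bedrock's implementation): the metadata names the dataset's
# primary key, and incremental change records are merged into the existing
# dataset by that key (inserts/updates are upserts; deletes are flagged
# via a hypothetical "op" field).

def apply_cdc(existing, updates, metadata):
    """Merge incremental change records into an existing dataset."""
    key = metadata["primary_key"]
    merged = {row[key]: row for row in existing}
    for change in updates:
        if change.get("op") == "delete":
            merged.pop(change[key], None)
        else:  # insert or update: replace the row for this key
            merged[change[key]] = {k: v for k, v in change.items() if k != "op"}
    return list(merged.values())

metadata = {"primary_key": "id"}
existing = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}]
updates = [
    {"id": 2, "val": "b2", "op": "update"},
    {"id": 3, "val": "c", "op": "insert"},
    {"id": 1, "op": "delete"},
]
result = apply_cdc(existing, updates, metadata)
```

The point of driving this from metadata is that the same merge logic serves every dataset: only the declared primary key changes, not the code.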
Bedrock Query Engine: The Bedrock Query Engine provides users with advanced tools to build and design queries, then submit them for execution. With 3.1, the Query Engine has been enhanced to support Spark and Impala as execution targets. This broadens the options for users who want to use the latest and fastest Hadoop SQL query environments in conjunction with Bedrock’s Hadoop metadata management.
HDFS Landing Zone: Release 3.1 extends the managed data ingestion capabilities of Bedrock to support HDFS Landing Zones. While there are many tools that can copy data into Hadoop, most lack managed data ingestion or metadata capture and management capabilities. The Bedrock managed landing zone architecture has been extended to include directories inside HDFS. This means other tools can be used to move data into Hadoop; once the data lands, Bedrock detects it as it arrives and begins the managed data pipeline for it.
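The core of any landing-zone approach is detecting files that have newly arrived in a watched directory. The sketch below shows that generic polling pattern using a local directory as a stand-in for an HDFS landing zone; it is a minimal illustration of the concept, not Bedrock’s watcher, and the file name is hypothetical:

```python
# Illustrative sketch of a landing-zone watcher (a generic polling
# pattern, not Bedrock's implementation): scan the landing directory
# and treat any file not seen on a previous scan as newly arrived
# data that should enter the managed pipeline.

import os
import tempfile

def scan_for_new_files(landing_dir, seen):
    """Return files that appeared since the last scan and update `seen`."""
    current = set(os.listdir(landing_dir))
    new_files = sorted(current - seen)
    seen |= current
    return new_files

# Usage: a temporary local directory stands in for the HDFS landing zone.
with tempfile.TemporaryDirectory() as landing:
    seen = set()
    scan_for_new_files(landing, seen)          # initial scan: nothing yet
    # Another tool drops a file into the landing zone (hypothetical name).
    open(os.path.join(landing, "orders.csv"), "w").close()
    arrived = scan_for_new_files(landing, seen)  # the new file is detected
```

In a real deployment the equivalent scan would run against HDFS paths, and each detected file would trigger the managed pipeline (metadata capture, quality checks, and so on) for that data.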
The latest Bedrock release is now available through the company’s direct sales representatives.
For a complete list of updates, read the full press release.