What Is Metadata and Why Is It Critical in Today’s Data Environment?

February 15, 2017 Scott Gidley

Excerpt from ebook, Understanding Metadata: Create the Foundation for a Scalable Data Architecture, by Federico Castanedo and Scott Gidley.

Modern data architectures promise the ability to enable access to more and different types of data to an increasing number of data consumers within an organization. Without proper governance, enabled by a strong foundation of metadata, these architectures often show initial promise, but ultimately fail to deliver.

Let’s take logistics distribution as an analogy to explain metadata and why it’s critical to managing data in today’s business environment. When you ship a package to an international destination, you want to know where it is along the route in case something goes wrong with the delivery. Logistics companies keep manifests to track each package’s movement and confirm its successful delivery at every step of the shipping process.

Metadata provides this same type of visibility into today’s data-rich environment. Data moves in and out of companies, as well as within them. Without information about the data and how it moves, it is hard to track data changes or to detect which process is causing problems during analysis. Today, even a change to a single column in a source table can impact hundreds of reports that use that data, making it extremely important to know beforehand which columns will be affected.
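Knowing beforehand which reports a column change will affect amounts to traversing a lineage graph from the changed column to everything derived from it. Here is a minimal sketch of that idea, assuming lineage has already been captured as simple (upstream, downstream) pairs; the dataset and report names are hypothetical, not from any real catalog:

```python
from collections import defaultdict, deque

# Hypothetical lineage edges: each pair records that the downstream
# asset is derived from the upstream column or dataset.
LINEAGE = [
    ("orders.customer_id", "report_sales_by_region"),
    ("orders.customer_id", "dataset_customer_360"),
    ("dataset_customer_360", "report_churn_risk"),
    ("orders.order_total", "report_sales_by_region"),
]

def downstream_impact(changed, edges):
    """Return every asset reachable downstream of a changed column."""
    graph = defaultdict(list)
    for up, down in edges:
        graph[up].append(down)
    affected, queue = set(), deque([changed])
    while queue:  # breadth-first walk of the lineage graph
        node = queue.popleft()
        for child in graph[node]:
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected
```

With these sample edges, a change to `orders.customer_id` would flag not only the sales report that reads it directly but also the churn report built on a derived dataset, which is exactly the transitive visibility the text describes.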

Metadata provides information about each dataset, such as its size, format, database schema, last modified time, access control lists, usage, and so on. The use of metadata enables the management of a scalable data lake platform and architecture, as well as data governance. Metadata is commonly stored in a central catalog that gives users information about the available datasets.
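As a concrete illustration of such a catalog, here is a minimal in-memory sketch. The entry fields mirror the attributes listed above; the field names and the example dataset are assumptions for illustration, not the schema of any particular catalog product:

```python
# Hypothetical central catalog: one entry per dataset, keyed by name.
CATALOG = {
    "sales_2016": {
        "size_bytes": 10_485_760,
        "format": "avro",
        "schema": ["order_id", "customer_id", "order_total"],
        "last_modified": "2017-01-31T08:00:00Z",
        "acl": ["analytics", "finance"],  # groups allowed to read
    },
}

def find_datasets(catalog, column):
    """Discovery: list datasets whose schema contains a given column."""
    return [name for name, meta in catalog.items()
            if column in meta["schema"]]
```

A query like `find_datasets(CATALOG, "customer_id")` shows how even a trivial catalog lets consumers discover which datasets carry the data they need.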

Metadata can be classified into three groups:

  • Technical metadata captures the form and structure of each dataset, such as the size and structure of the schema or type of data (text, images, JSON, Avro, etc.). The structure of the schema includes the names of fields, their data types, their lengths, whether they can be empty, and so on. Structure is commonly provided by a relational database or the heading in a spreadsheet, but may also be added during ingestion and data preparation. Some basic technical metadata can be obtained directly from the datasets (e.g., size), but other metadata must be derived.
  • Operational metadata captures the lineage, quality, profile, and provenance (e.g., when did the data elements arrive, where are they located, where did they arrive from, what is the quality of the data, etc.). It may also contain how many records were rejected during data preparation or a job run, and the success or failure of that run itself. Operational metadata also identifies how often the data may be updated or refreshed.
  • Business metadata captures what the data means to the end user, making data fields easier to find and understand; examples include business names, descriptions, tags, quality, and masking rules. These tie into the business attribute definitions so that everyone interprets the same data consistently, according to a set of rules and concepts defined by the business users. A business glossary is a central location that provides a business description for each data element through the use of metadata.

Metadata can be obtained in different ways. Sometimes it is encoded within the datasets; other times it can be inferred by reading the content of the datasets; or the information can be spread across log files written by the processes that access those datasets.
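The three groups above can be sketched as one structured record per dataset. This is an illustrative model only, assuming made-up field names for each class of metadata, not a standard or vendor schema:

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalMetadata:
    format: str           # e.g. "text", "json", "avro"
    size_bytes: int
    schema: dict          # field name -> data type

@dataclass
class OperationalMetadata:
    source: str           # provenance: where the data arrived from
    arrived_at: str       # when the data elements arrived
    records_rejected: int # rejected during data preparation
    refresh_interval: str # how often the data is updated

@dataclass
class BusinessMetadata:
    business_name: str
    description: str
    tags: list = field(default_factory=list)

@dataclass
class DatasetMetadata:
    """One catalog record combining all three metadata classes."""
    technical: TechnicalMetadata
    operational: OperationalMetadata
    business: BusinessMetadata
```

Grouping the record this way makes the division of responsibility explicit: ingestion jobs populate the technical and operational parts, while business users curate the business part through a glossary.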

In all cases, metadata is a key element in the management of the data lake, and is the foundation that allows for the following data lake characteristics and capabilities to be achieved:

  • Data visibility
  • Data reliability
  • Data profiling
  • Data lifecycle/age
  • Data security and privacy
  • Democratized access to useful data
  • Data lineage and change data capture

Data lakes must be architected properly to leverage metadata and to integrate with existing metadata tools; otherwise, a hole is created in the organization’s data governance process, because knowledge of how data is used, transformed, and related outside the data lake can be lost. An incorrect metadata architecture can prevent a data lake from making the transition from an analytical sandbox to an enterprise data platform.

Ultimately, most of the time spent on data analysis goes into preparing and cleaning the data. Metadata helps reduce the time to insight by making it easy to discover what data is available and by maintaining a full data tracking map (data lineage).

Download the full ebook here.

About the Author

Scott Gidley

Scott Gidley is Vice President of Product Management for Zaloni, where he is responsible for the strategy and roadmap of existing and future products within the Zaloni portfolio. Scott is a nearly 20 year veteran of the data management software and services market. Prior to joining Zaloni, Scott served as senior director of product management at SAS and was previously CTO and cofounder of DataFlux Corporation.

