---
layout: post
title: Avoiding The Mess In the Hadoop Cluster (Part 1)
date: '2015-05-26T00:00:00+00:00'
categories: falcon
---
<div style="clear: both;"></div>
<h2 class="singletitle"><a href="http://getindata.com/blog/post/avoiding-the-mess-from-the-hadoop-cluster-part-1/"><img width="645" height="300" alt="elephant-238106_1280" class="main-single wp-post-image" src="http://getindata.com/wp-content/uploads/2015/03/elephant-238106_1280-645x300.jpg" /></a></h2>
<h2 class="singletitle"><a href="http://getindata.com/blog/post/avoiding-the-mess-from-the-hadoop-cluster-part-1/">Avoiding The Mess In The Hadoop Cluster (Part 1)</a></h2>
<p style="text-align: justify; font-size: 16px;">Author: <a href="http://getindata.com/author/adam/">Adam Kawa</a> (Orignally appeared in http://getindata.com/blog/post/avoiding-the-mess-from-the-hadoop-cluster-part-1/)<br /></p>
<div class="hrline"><span></span></div>
<div class="entry">
<div style="text-align: justify; font-size: 16px;">This blog series is based on the talk <a target="_blank" href="http://slideshare.net/getindata/simplified-data-management-and-process-scheduling-in-hadoop">“Simplified Data Management and Process Scheduling in Hadoop”</a> that we gave at <a target="_blank" href="http://bigdatatech.pl">Big Data Technical Conference</a> in Poland in February 2015. Because the talk was very well received by the audience, we decided to convert it into blog series.
<p style="text-align: justify; font-size: 16px;">In the first part we describe possible open-source solutions for data cataloguing, data discovery and process scheduling such as Apache Hive, HCatalog and Apache Falcon.</p>
<h2>Mess in the Hadoop cluster</h2>
<p style="text-align: justify; font-size: 16px;">We start with a true story that happened around year ago. That day I wanted to use Hive to analyse terabytes of data available in the 3-year old Hadoop cluster to find answer to my simple, but important question. My query was straightforward – just joining three production tables and counting the number of occurrences of some event.</p>
<p style="text-align: justify; font-size: 16px;"><img width="481" class="alignleft" src="http://getindata.com/wp-content/uploads/2015/03/be-optimist-find-dataset.png" /> I estimated that roughly 10 minutes are needed to implement this simple query. Unfortunately, the problems arose very quickly and slowed me down badly. First of all, I wasn’t able to quickly discover what datasets I should process and where they are located. When looking at HDFS and Hive, I saw many directories and tables with the names that suggested me that it’s the data that I needed, but I wasn’t 100% sure of that. Some of them looked like junk, like samples, like duplicates and like production datasets. To learn what a given dataset is about I had to send a group email to all analysts because it wasn’t stated anywhere who is the exact and true owner of a given dataset. There was no good documentation available, so that I simply guessed the meaning of the fields and their relationships with the fields from related tables. After several hours, a few emails and confirmations from +2 analysts, I found the data that I needed!</p>
<p style="text-align: justify; font-size: 16px;">I implemented my HiveQL query very quickly, but it was running very slowly. It eventually finished successfully and returned the numbers that I wanted to see. I thought that it would be nice to schedule this query every week to check later if the numbers are still great. In this case, the frequent scheduling of that query would require me to implement a small piece of Python code in the framework called <a target="_blank" href="https://github.com/spotify/luigi">Luigi</a> and modify the Unix’s crontab files that trigger the computation. It’s not that difficult, but slightly time-consuming, so that I gave up again and decided that I could simply run this query manually whenever it’s needed.</p>
<h2>Wild Wild West</h2>
<p style="text-align: justify; font-size: 16px;">This made me sad. I had all data in Hadoop needed to solve a (simple) business problem, but I wasted not only my time by searching for and understanding the input data, but also somebody elses time by asking where the input data is.</p>
<p style="text-align: justify; font-size: 16px;"><img width="481" class="alignright" src="http://getindata.com/wp-content/uploads/2015/03/cowboy-283449_1280.jpg" />The core reason for the above is the lack of good practices that are introduced early enough and continuously enforced. Its surprising, but proper data management and easy scheduling of processes are serious challenges that all data-driven companies face, but many of them under-prioritise. Although turning a blind eye to these aspects might not cause troubles when your data is small, it definitely becomes a nightmare when the number of your datasets, processes and data analyst is large. At scale, this kind of big data technical debt becomes more expensive!</p>
<p style="text-align: justify; font-size: 16px;">Knowing some of the possible problems caused by bad practices, my colleagues and I started thinking how to avoid them and what open-source tools could possibly address them. Thankfully, there is a number of useful (but still less adopted) tools from the Hadoop ecosystem that help you to discover datasets quicker, schedule processes easier, smoothly migrate from one file format to another (without even touching your parsing code) and many others. We will cover them in this blog series.</p>
<h2>Data catalogue</h2>
<p style="text-align: justify; font-size: 16px;"><img width="481" class="alignleft" src="http://getindata.com/wp-content/uploads/2015/03/hive-description.png" />First of all, your should give identity to your datasets, so that analysts can easily find the data with no time-consuming investigation, understand the meaning of their fields and be sure that what a given dataset is about.</p>
<p style="text-align: justify; font-size: 16px;">Perhaps you might want to hear about an innovative tool that automatically makes HDFS self-documented? Well, the tool that we propose isnt revolutionary at all. Our simple idea is to add each production dataset to Hive. By creating Hive table on top of your data in HDFS, you can supply a lot of information about it: its name, fields, types and comments, the location, the data format, various properties in form of the key-value pairs and meaningful description of the dataset.</p>
<p style="text-align: justify; font-size: 16px;">A nice side effect of adding your production datasets to Hive is that everyone can process them using SQL-like queries in Hive, Presto, Impala, Spark SQL and take advantage of BI tools that often integrate with Hadoop through Hive.</p>
<h2>Reading data from Hive tables</h2>
<p style="text-align: justify; font-size: 16px;">When your datasets are nicely abstracted by Hive tables, you should benefit from this abstraction as often as possible. Therefore, the Hive Metastore becomes increasingly important and should be always up and running. Apart from that, all frameworks that wish to process your data should integrate with Hive to learn where a given dataset is located, what its file format is and so on. Obviously, HiveQL queries integrate with Hive perfectly, but in case of other tools like Spark, Scalding, Pig, MapReduce or Sqoop, you need some kind of adapter that knows how to talk to Hive.</p>
<p style="text-align: justify; font-size: 16px;"><img width="481" class="alignright" alt="hcatalog" src="http://getindata.com/wp-content/uploads/2015/03/hcatalog.jpg" />In case of <a href="https://cwiki.apache.org/confluence/display/Hive/HCatalog+LoadStore">Pig</a>, <a href="http://scalding.io/2014/06/reading-data-from-a-external-partitioned-hive-table-in-scalding/">Scalding</a> and MapReduce (and some others), HCatalog becomes this adapter. Thanks to HCatalog, you don’t need to hardcode or parametrize the path or format of the dataset – just specify the name of the dataset which is the same as its Hive table.</p>
<p style="text-align: justify; font-size: 16px;">Currently, Spark Core doesn’t integrate with HCatalog well. This is not a big reason to worry about, though, because you can use Spark SQL in the Hive context instead. Spark SQL allows you to fetch the interesting data from the Hive table using an SQL-like query. Later, you can seamlessly process this data using the Spark Core API in Scala, Java or Python.</p>
<p style="text-align: justify; font-size: 16px;"> </p>
<p style="text-align: justify; font-size: 16px;"> </p>
<h2>Web UI to your data</h2>
<p style="text-align: justify; font-size: 16px;">Thanks to Hive, you have a central and nicely documented repository of our datasets. Thanks to HCatalog (or Spark SQL), your Hive tables can be accessed by the most popular Big Data frameworks. Unfortunately, neither Hive nor HCatalog offer out-of-the box web UI to search, discover and learn about your datasets.</p>
<p style="text-align: justify; font-size: 16px;">There is a small temptation to implement such a web UI by yourself because it doesn’t seem to be that hard. Alternative approach is to use some ready-to-use solution and one of them is <a target="_blank" href="http://falcon.apache.org">Apache Falcon</a>.</p>
<p style="text-align: justify; font-size: 16px;"><img width="330" class="alignleft" alt="falcon-feed" src="http://getindata.com/wp-content/uploads/2015/03/falcon-feed.png" />Falcon allows you to define datasets and submit them to the Falcon server, so it can manage them. In Falcons nomenclature, datasets are called feeds. For each feed, you can specify which Hive table (or HDFS dataset) it corresponds to. You can tag dataset and later use these tags when searching. You can define who is the owner of of the dataset, write its description, specify the group that a given dataset belongs to etc. This information is somewhat redundant to what you specify in Hive tables, but there is the <a target="_blank" href="https://issues.apache.org/jira/browse/FALCON-1096">idea</a> that Falcon could scan the Hive Metastore and automatically create feeds for each Hive table and inherit its properties. As we also see later, Falcon lets you supply additional properties of your datasets that mean nothing to Hive, but are used by Falcon for popular data management tasks (e.g. retention) which we cover later.</p>
<p style="text-align: justify; font-size: 16px;">In Falcon, a dataset can be described by writing relatively short XML file, or filling a simple form in <a target="_blank" href="https://issues.apache.org/jira/browse/FALCON-790">the web UI</a>. This new (and improved) web UI is currently in the code-review phase under the <a target="_blank" href="https://issues.apache.org/jira/browse/FALCON-790">FALCON-790</a> ticket, but it should be committed to the trunk before the end of 2015Q1 (hopefully).</p>
<p style="text-align: justify; font-size: 16px;"> </p>
<p style="text-align: justify; font-size: 16px;"> </p>
<p style="text-align: justify; font-size: 16px;"><img width="330" class="alignright" alt="falcon-search-webui" src="http://getindata.com/wp-content/uploads/2015/03/falcon-search-webui.png" />Once feeds are added to Falcon, you can see and search them by name, tag and status using the web UI and CLI. Currently, the web UI isn’t mind-blowing, but probably there isn’t anything more suitable in open-source projects today. Naturally, it can be extended by the community, so that additional search criteria are possible e.g. field name, field comment, partition field, file format, ownership, compression codec, directory name, total size, the last modification or access time, the number of applications that process the feed.</p>
<h2>Automatic data retention</h2>
<p style="text-align: justify; font-size: 16px;">Apart from the nice web UI, Falcon offers multiple useful features related to data management. For example, you can define the retention period for a dataset to enforce that each instance of this dataset should be automatically removed from Hive/HDFS after a specified period of time since its creation or <a target="_blank" href="https://issues.apache.org/jira/browse/FALCON-870">last access</a>. Thanks to that you won’t keep old or unused datasets on disks, and you won’t have to schedule the spontaneous “HDFS cleaning day” to urgently remove useless data when the free HDFS disk space falls below the critical threshold (let’s say, 10% of total disk space) and/or implement own cleaning scripts for that.</p>
<h2>Process scheduling</h2>
<p style="text-align: justify; font-size: 16px;">You <a target="_blank" href="https://github.com/kawaa/Beetest">tested your Hive query</a>, run it once and saw that it works. Now you want to schedule it periodically.</p>
<p style="text-align: justify; font-size: 16px;">There are a couple of scheduling tools especially built for Hadoop. While Oozie, Azkaban and Luigi (which is often combined with cron) are probably the most popular and widely-adopted ones, the project that we found interesting is Falcon.</p>
<p style="text-align: justify; font-size: 16px;"><img width="380" class="alignright" alt="falcon-new-process" src="http://getindata.com/wp-content/uploads/2015/03/falcon-new-process.png" />Assuming that your datasets are described as feeds, Falcon allows you to define and schedule processes that process these feeds. For each process, you define what the input and output feeds are, how often a process should be executed, how to retry it when something fails (i.e. how many times to retry and what time intervals are), what the type of the process is, and many others. Out of the box, Falcon can schedule Hive queries, Pig scripts and Oozie workflows (a native support for other frameworks like Spark or MapReduce is to be added soon, but you can still schedule them through Oozie shell/ssh actions).</p>
<p style="text-align: justify; font-size: 16px;">The integration between Falcon and Oozie is very close. Falcon isnt actually a scheduler built from scratch, it simply delegates most of the scheduling responsibilities to Oozie, but gives us a bit nicer, shorter and more powerful scheduling API.</p>
<p style="text-align: justify; font-size: 16px;"> </p>
<p style="text-align: justify; font-size: 16px;"> </p>
<p style="text-align: justify; font-size: 16px;"><img width="330" class="alignleft" alt="process-instances-falcon" src="http://getindata.com/wp-content/uploads/2015/03/process-instances-falcon.png" />Falcon also helps you to keep track of the execution of the recent instances of your process. It shows you which instances finished successfully, which failed, which still wait for the execution time and/or the input dataset, which time-outed and so on. For these events, Falcon sends JMS messages <a href="http://mail-archives.apache.org/mod_mbox/falcon-dev/201503.mbox/%3CCAHodO=+=TPaaHn9+3-x0hE8nY0Hrm2xCa23e06ERwLJfhRY6Gg@mail.gmail.com%3E">you can consume them from ActiveMQ</a> and possibly convert into email and/or alert to let yourself or colleagues know when something important happens.</p>
<p style="text-align: justify; font-size: 16px;">Please note that I dont claim that scheduling a process in Falcon is easier than in Luigi, Azkaban or Oozie. It definitely requires some effort, time, practice and discipline, but once you do so, you get many benefits for free (some of them will be described in the remaining part of this blog post series).</p>
<p style="text-align: justify; font-size: 16px;"><img width="962" class="alignleft" alt="falcon-landing-page" src="http://getindata.com/wp-content/uploads/2015/03/falcon-landing-page.jpg" /></p>
<h2>Data lineage</h2>
<p style="text-align: justify; font-size: 16px;">Since Falcon has a wealth of information about your processes and their input and output feeds, it becomes easy to keep track how your data is transformed through the pipeline. Falcon displays this information in form of graph that shows the lineage”. You can navigate through this graph by clicking each vertex. The lineage helps you quickly answer questions such as where the data came from, where and how it is currently used, what the consequences of removing it are everything without the need of sending emails to your colleagues.</p>
<p style="text-align: justify; font-size: 16px;"><img width="962" class="alignleft" alt="falcon-lineage" src="http://getindata.com/wp-content/uploads/2015/03/falcon-lineage-e1425760613986.png" /></p>
<p style="text-align: justify; font-size: 16px;">Another useful (but still in the code-review phase) feature is <a target="_blank" href="https://issues.apache.org/jira/browse/FALCON-796">“triaging issues related to data-processing</a>. Imagine that your dataset hasn’t been generated yet and you want to discover why. In this scenario, Falcon could easily walk up the dependency tree to identify the root cause such as a failed or still running parent process and display it in a consumable form to the users.</p>
<h2>The second part</h2>
<p style="text-align: justify; font-size: 16px;">In the second (and last) part of this blog series, we explain how to smoothly change the format of your datasets, highlight move advanced features of Falcon such as SLA, data backups, disaster recovery, late data handling as well as future enhancements and ideas. We will describe some disadvantages of Falcon as well. Stay tuned!
</p>
</div>
<h2>Acknowledgements</h2>
<p style="text-align: justify; font-size: 16px;">I would like to thank Josh Baer and Piotr Krewski for a technical review of this article.</p>
<p> </p>
<p><i>This blog is reproduced from </i><a href="http://getindata.com/blog/post/avoiding-the-mess-from-the-hadoop-cluster-part-1/">http://getindata.com/blog/post/avoiding-the-mess-from-the-hadoop-cluster-part-1/</a> with <a href="http://mail-archives.us.apache.org/mod_mbox/falcon-dev/201503.mbox/%3CCAHodO=+QOrKvXn0j45H7-xyegRenS57HbGbqLpTnV0NOz8cNig@mail.gmail.com%3E">permission</a> from the author. Images used in the blog are from <a href="http://pixabay.com/en">http://pixabay.com/en</a> and licensed under &quot;<a href="http://pixabay.com/service/terms/#download_terms">CC0 Public Domain</a>&quot;.</p>
<div style="clear: both;"></div>
<div class="hrline"><span></span></div>
</div>