| <!DOCTYPE html> |
| <html lang="en"> |
| <head> |
| <meta charset="utf-8" /> |
| <meta http-equiv="X-UA-Compatible" content="IE=edge" /> |
| <meta name="viewport" content="width=device-width, initial-scale=1" /> |
| <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags --> |
| <meta name="description" content="A new open source Apache Hadoop ecosystem project, Apache Kudu completes Hadoop's storage layer to enable fast analytics on fast data" /> |
| <meta name="author" content="Cloudera" /> |
| <title>Apache Kudu - Building Near Real-time Big Data Lake</title> |
| <!-- Bootstrap core CSS --> |
| <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" |
| integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" |
| crossorigin="anonymous"> |
| |
| <!-- Custom styles for this template --> |
| <link href="/css/kudu.css" rel="stylesheet"/> |
| <link href="/css/asciidoc.css" rel="stylesheet"/> |
| <link rel="shortcut icon" href="/img/logo-favicon.ico" /> |
| <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.6.1/css/font-awesome.min.css" /> |
| |
| |
| <link rel="alternate" type="application/atom+xml" |
| title="RSS Feed for Apache Kudu blog" |
| href="/feed.xml" /> |
| |
| |
| <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries --> |
| <!--[if lt IE 9]> |
| <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script> |
| <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script> |
| <![endif]--> |
| </head> |
| <body> |
| <div class="kudu-site container-fluid"> |
| <!-- Static navbar --> |
| <nav class="navbar navbar-default"> |
| <div class="container-fluid"> |
| <div class="navbar-header"> |
| <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar"> |
| <span class="sr-only">Toggle navigation</span> |
| <span class="icon-bar"></span> |
| <span class="icon-bar"></span> |
| <span class="icon-bar"></span> |
| </button> |
| |
| <a class="logo" href="/"><img |
| src="//d3dr9sfxru4sde.cloudfront.net/i/k/apachekudu_logo_0716_80px.png" |
| srcset="//d3dr9sfxru4sde.cloudfront.net/i/k/apachekudu_logo_0716_80px.png 1x, //d3dr9sfxru4sde.cloudfront.net/i/k/apachekudu_logo_0716_160px.png 2x" |
| alt="Apache Kudu"/></a> |
| |
| </div> |
| <div id="navbar" class="collapse navbar-collapse"> |
| <ul class="nav navbar-nav navbar-right"> |
| <li > |
| <a href="/">Home</a> |
| </li> |
| <li > |
| <a href="/overview.html">Overview</a> |
| </li> |
| <li > |
| <a href="/docs/">Documentation</a> |
| </li> |
| <li > |
| <a href="/releases/">Releases</a> |
| </li> |
| <li class="active"> |
| <a href="/blog/">Blog</a> |
| </li> |
| <!-- NOTE: this dropdown menu does not appear on Mobile, so don't add anything here |
| that doesn't also appear elsewhere on the site. --> |
| <li class="dropdown"> |
| <a href="/community.html" role="button" aria-haspopup="true" aria-expanded="false">Community <span class="caret"></span></a> |
| <ul class="dropdown-menu"> |
| <li class="dropdown-header">GET IN TOUCH</li> |
| <li><a class="icon email" href="/community.html">Mailing Lists</a></li> |
| <li><a class="icon slack" href="https://getkudu-slack.herokuapp.com/">Slack Channel</a></li> |
| <li role="separator" class="divider"></li> |
| <li><a href="/community.html#meetups-user-groups-and-conference-presentations">Events and Meetups</a></li> |
| <li><a href="/committers.html">Project Committers</a></li> |
| <li><a href="/ecosystem.html">Ecosystem</a></li> |
| <!--<li><a href="/roadmap.html">Roadmap</a></li>--> |
| <li><a href="/community.html#contributions">How to Contribute</a></li> |
| <li role="separator" class="divider"></li> |
| <li class="dropdown-header">DEVELOPER RESOURCES</li> |
| <li><a class="icon github" href="https://github.com/apache/incubator-kudu">GitHub</a></li> |
| <li><a class="icon gerrit" href="http://gerrit.cloudera.org:8080/#/q/status:open+project:kudu">Gerrit Code Review</a></li> |
| <li><a class="icon jira" href="https://issues.apache.org/jira/browse/KUDU">JIRA Issue Tracker</a></li> |
| <li role="separator" class="divider"></li> |
| <li class="dropdown-header">SOCIAL MEDIA</li> |
| <li><a class="icon twitter" href="https://twitter.com/ApacheKudu">Twitter</a></li> |
| <li><a href="https://www.reddit.com/r/kudu/">Reddit</a></li> |
| <li role="separator" class="divider"></li> |
| <li class="dropdown-header">APACHE SOFTWARE FOUNDATION</li> |
| <li><a href="https://www.apache.org/security/" target="_blank">Security</a></li> |
| <li><a href="https://www.apache.org/foundation/sponsorship.html" target="_blank">Sponsorship</a></li> |
| <li><a href="https://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a></li> |
| <li><a href="https://www.apache.org/licenses/" target="_blank">License</a></li> |
| </ul> |
| </li> |
| <li > |
| <a href="/faq.html">FAQ</a> |
| </li> |
| </ul><!-- /.nav --> |
| </div><!-- /#navbar --> |
| </div><!-- /.container-fluid --> |
| </nav> |
| |
| <div class="row header"> |
| <div class="col-lg-12"> |
| <h2><a href="/blog">Apache Kudu Blog</a></h2> |
| </div> |
| </div> |
| |
| <div class="row-fluid"> |
| <div class="col-lg-9"> |
| <article> |
| <header> |
| <h1 class="entry-title">Building Near Real-time Big Data Lake</h1> |
| <p class="meta">Posted 30 Jul 2020 by Boris Tyukin</p> |
| </header> |
| <div class="entry-content"> |
<p>Note: This is a cross-post from Boris Tyukin&#8217;s personal blog: <a href="https://boristyukin.com/building-near-real-time-big-data-lake-part-2/">Building Near Real-time Big Data Lake: Part 2</a></p>
| |
| <p>This is the second part of the series. In <a href="https://boristyukin.com/building-near-real-time-big-data-lake-part-i/">Part 1</a> |
| I wrote about our use-case for the Data Lake architecture and shared our success story.</p> |
| |
| <!--more--> |
| <h2 id="requirements">Requirements</h2> |
<p>Before we embarked on our journey, we had identified high-level requirements and guiding principles.
It is crucial to think this through and envision who will use your Data Lake, and how. Identify your
first three projects and keep them in mind while you are building the Data Lake.</p>
| |
<p>The best way is to start with a few smaller proof-of-concept projects: play with various distributed
engines and tools, run tons of benchmarks, and learn from others who have implemented a similar solution
successfully. Do not forget to learn from others&#8217; mistakes too.</p>
| |
| <p>We had settled on these 7 guiding principles before we started looking at technology and architecture:</p> |
| <ol> |
| <li>Scale-out, not scale-up.</li> |
| <li>Design for resiliency and availability.</li> |
| <li>Support both real-time and batch ingestion into a Data Lake.</li> |
| <li>Enable both ad-hoc exploratory analysis as well as interactive queries.</li> |
| <li>Replicate in near real-time 300+ Cerner Millennium tables from 3 remote-hosted Cerner Oracle RAC |
| instances with average latency less than 10 seconds (time between a change made in Cerner EHR system |
| by clinicians and data ingested and ready for consumption in Data Lake).</li> |
| <li>Have robust logging and monitoring processes to ensure reliability of the pipeline and to simplify |
| troubleshooting.</li> |
<li>Greatly reduce manual work and ease ongoing support.</li>
| </ol> |
| |
<p>We decided to embrace the benefits and scalability of Big Data technology. In fact, it was a pretty
easy sell, as our leadership was tired of constantly buying expensive software and hardware from
big-name vendors and still not being able to scale out to support an avalanche of new projects and requests.</p>
| |
| <p>We started looking at Change Data Capture (CDC) products to mine and ship database logs from Oracle.</p> |
| |
| <p>We knew we had to implement a metadata- or code-as-configuration driven solution to manage hundreds |
| of tables, without expanding our team.</p> |
| |
| <p>We needed a flexible orchestration and scheduling tool, designed with real-time workloads in mind.</p> |
| |
| <p>Finally, we engaged our and Cerner’s leadership early, as it would take time to hash out all the |
| details, and to make their DBAs confident that we were not going to break their production systems |
| by streaming 1000s of messages every second 24x7. In fact, one of the goals was to relieve production |
| systems from analytical workloads.</p> |
| <h2 id="platform-selection">Platform selection</h2> |
<p>First off, we had to decide on the actual platform. After 3 months of research, 4 options
emerged, given the realities of our organization:</p>
| <ol> |
| <li>On-premises virtualized cluster, using preferred vendors, recommended by our infrastructure team.</li> |
| <li>On-premises Big Data appliance (bundled hardware and software, optimized for Big Data workloads).</li> |
<li>Big Data cluster in the cloud, managed by ourselves (the IaaS model, which just means renting a bunch of
VMs and running a Cloudera or Hortonworks Big Data distribution).</li>
  <li>A fully managed cloud data platform and native cloud data warehouse (Snowflake, Google BigQuery,
Amazon Redshift, etc.)</li>
| </ol> |
| |
<p>Each option had a long list of pros and cons, but ultimately we went with option 2. The price was
really attractive, it was a capital expense (our finance people rightfully hate subscriptions), and it
offered the best performance, security, and control.</p>
| |
<p>We made that decision in 2017. While we could not provision cluster resources and add nodes
with the click of a button, and we learned that software and hardware upgrades were a real chore, it was
still very much worth it: we saved the organization a seven-figure sum while getting the
performance we needed.</p>
| |
<p>Owning hardware also made a lot of sense for us, as we could not forecast our needs far enough into the
future, and we could get a really powerful 6-node cluster for a fraction of the cost we would end
up paying in subscription fees over the next 12 months. Of course, it did help that we already had a
state-of-the-art data center and people managing it.</p>
| |
<p>Fully managed or serverless architecture was not really an option back then, but if I had to build a
data lake today, it would be the first thing I would look at
(definitely check AWS Lake Formation, AWS Athena, Amazon Redshift, Azure Synapse, Snowflake and
Google BigQuery).</p>
| |
<p>Your organization, goals, projects and situation could be very different, and you should definitely
evaluate cloud solutions, especially in 2020, when prices are decreasing, cloud providers are
extremely competitive, and there are attractive new pricing options with 3-year commitments. Make sure
you understand the cost and billing model. Or hire a company (there are plenty now) that will
explain your cloud bills before you get a horrifying check.</p>
| |
| <p>Some of the things to consider:</p> |
| |
| <ol> |
<li>Existing data center infrastructure and access to the people supporting it.</li>
  <li>Integration with current tools (BI, ETL, Advanced Analytics, etc.). Do they stay on-premises, or can
they be moved into the cloud to avoid network lag or charges for data egress?</li>
| <li>Total ownership cost and cost to performance ratio.</li> |
<li>Do you really need elasticity? This is the first thing that cloud advocates preach, but
think about if and how it applies to you.</li>
<li>Is time-to-market so crucial for you, or can you wait a few months to build Big Data
infrastructure on-premises to save some money and get much better performance and control of the
physical hardware?</li>
<li>Are you okay with locking yourself into vendor XYZ&#8217;s solution? This is an especially crucial
question if you are selecting a fully managed platform.</li>
| <li>Can you easily change your cloud provider? Or can you even afford to put all your trust and faith |
| in a single cloud provider?</li> |
| </ol> |
| |
| <p>Do your homework, spend a lot of time reading and talking to other people (engineers and architects, |
| not sales reps), and make sure you understand what you are signing up for.</p> |
| |
| <p>And remember, there is no magic! You still need to architect, design, build, support, test, and make |
| good choices and use common sense. No matter what your favorite vendor tells you. You might save |
| time by spinning up a cluster in minutes, but you still need people to manage all that. You still |
| need great architects and engineers to realize benefits from all that hot new tech.</p> |
| |
| <h2 id="building-blocks">Building blocks</h2> |
| |
<p>Once we agreed on the platform of our choice, powered by Cloudera Enterprise Data Hub, we started
prototyping and benchmarking the various engines and tools that came with it. We also looked at other
open-source projects, as nothing really prevents you from installing and using any open-source
product you desire and trust. One of these products for us was Apache NiFi, which proved to be of
tremendous value.</p>
| |
<p>After a lot of trial and error, we decided on this architecture:</p>
| |
| <p><img src="https://boristyukin.com/content/images/pipelinearchitecture.png" width="100%" /></p> |
| |
<p>One of the toughest challenges we faced right away was that most Big Data engines were designed
for immutable, append-only data rather than mutable data. None of the workarounds we tried worked
for us, and no matter what we did with partitioning strategies, we simply needed the ability to
update and delete data, not only insert it. Anyone who has worked with an RDBMS or a legacy columnar
database takes this capability for granted, but surprisingly it is a very difficult task in the
Big Data world.</p>
| |
<p>We considered Apache HBase, but the performance of analytics-style ETL and interactive queries was
really bad. We were blown away by Apache Impala&#8217;s performance on HDFS - no matter what we threw at
Impala, it was hundreds of times faster&#8230;but we could not update data in place.</p>
| |
<p>At about the same time, Cloudera released and open-sourced the Apache Kudu project, which became part of
its official distribution. We got very excited about it (refer to our benchmarks <a href="http://boristyukin.com/benchmarking-apache-kudu-vs-apache-impala/">here</a>), and decided
to proceed with Kudu as the storage engine, while using Apache Impala as the SQL query engine. One of the
ambitious goals of Apache Kudu is to eliminate the need for the infamous <a href="https://en.wikipedia.org/wiki/Lambda_architecture">Lambda architecture</a>.</p>
| |
<p>After talking to 7 vendors and playing with our top picks, we selected a Change Data Capture product
(the Oracle GoldenGate for Big Data edition). It deserves a separate post, but let&#8217;s just say it was the
only product out of the 7 that was able to handle the complexities of the source Oracle RAC systems and
offer great performance without the need to install any agents or software on the actual production
database. Other solutions had a very long list of limitations for Oracle systems; make sure to read
and understand those limitations.</p>
| |
<p>Our homegrown tool <a href="http://boristyukin.com/how-to-ingest-a-large-number-of-tables-into-a-big-data-lake-or-why-i-built-metazoo/">MetaZoo</a>
has been instrumental in bringing order and peace, and that&#8217;s why it earned
its own blog post!</p>
| |
| <h2 id="how-it-works">How it works</h2> |
<p>Initial ingest is pretty typical - we use Sqoop to extract data from Cerner Oracle databases, and
NiFi helps orchestrate the initial load for hundreds of tables. In fact, the NiFi flow below can
handle the initial ingest of hundreds of tables!</p>
| |
| <p><img src="https://boristyukin.com/content/images/nifi_initial.png" width="100%" /></p> |
| |
<p>Our secret sauce, though, is <a href="http://boristyukin.com/how-to-ingest-a-large-number-of-tables-into-a-big-data-lake-or-why-i-built-metazoo/">MetaZoo</a>.
MetaZoo generates optimal parameters for Sqoop (such as the number of mappers, the split-by column, and so
forth), DDLs for staging and final tables, and SQL commands to transform data before it
lands in the Data Lake. MetaZoo also provides control tables to record the status of every table.</p>
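<p>To make this concrete, here is a rough sketch of what metadata-driven generation can look like. This is not MetaZoo&#8217;s actual interface - the metadata fields, mapper threshold, and generated DDL are illustrative assumptions:</p>

```python
# Sketch of a metadata-driven generator in the spirit of MetaZoo.
# The metadata fields and generated artifacts are illustrative
# assumptions, not MetaZoo's actual interface.

def sqoop_params(meta):
    """Derive Sqoop parallelism from table metadata: large tables get
    more mappers and therefore need a split-by column."""
    mappers = 1 if meta["row_count"] < 1_000_000 else 8
    params = ["--num-mappers", str(mappers)]
    if mappers > 1:
        params += ["--split-by", meta["split_by"]]
    return params

def staging_ddl(meta):
    """Generate DDL for a text-format staging table."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in meta["columns"])
    return (f"CREATE TABLE staging.{meta['name']} (\n  {cols}\n) "
            "STORED AS TEXTFILE;")

# Example metadata record for one table (hypothetical values).
meta = {
    "name": "clinical_event",
    "row_count": 20_000_000_000,
    "split_by": "clinical_event_id",
    "columns": [("clinical_event_id", "BIGINT"), ("event_cd", "BIGINT"),
                ("event_end_dt_tm", "TIMESTAMP")],
}
```

<p>Generating these artifacts from metadata, rather than writing them by hand, is what makes managing hundreds of tables possible without expanding the team.</p>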
| |
<p>The throughput of Sqoop is nothing but amazing. Gone are the days when we had to ask Cerner to dump
tables on a hard drive and ship them by snail mail (do not ask how much that cost us!). And we like how
YARN queues help to limit the load on production databases.</p>
| |
<p>To give you one example, a few years ago it took us 4 weeks to reload the <code class="language-plaintext highlighter-rouge">clinical_event</code> table from
Cerner using Informatica into our local Oracle database. With Sqoop and Big Data, it was done in 11
hours!</p>
| |
| <p>This is what happens during the initial ingest.</p> |
| |
<p>First, MetaZoo gathers relevant metadata about the tables to ingest from the source system and, based on
that metadata, generates DDL scripts, SQL command snippets, Sqoop parameters, and more. It also
initializes the tables in the MetaZoo control tables.</p>
| |
<p>Then NiFi picks the list of tables to ingest from the MetaZoo control tables and runs the following
steps:</p>
| |
| <ul> |
| <li>Execute and wait for Sqoop to finish.</li> |
<li>Apply some basic rules to map data types to the corresponding data types in the lake. We also convert
timestamps to a proper time zone. While you do not want to do any heavy processing or any
data modeling in a Data Lake, and should keep data as close to raw format as you can, some light
processing upfront goes a long way and makes it easier for analysts and developers to use these
tables later.</li>
| <li>Load processed data into final tables after some basic validation.</li> |
| <li>Compute Impala statistics.</li> |
| <li>Set initial ingest status to completed in MetaZoo control tables so it is ready for real-time |
| streaming.</li> |
| </ul> |
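<p>As an illustration of the &#8220;basic rules&#8221; step above, a minimal sketch might look like this (the type mappings and the fixed target time-zone offset are assumptions for illustration, not our production rules):</p>

```python
from datetime import datetime, timedelta, timezone

# Sketch of the light type mapping and timestamp normalization applied
# before data lands in the lake. The specific mappings and the target
# time-zone offset are illustrative assumptions.

ORACLE_TO_IMPALA = {
    "NUMBER": "DECIMAL(38,6)",
    "VARCHAR2": "STRING",
    "DATE": "TIMESTAMP",
    "CLOB": "STRING",
}

def map_type(oracle_type):
    """Map an Oracle column type to a lake type, defaulting to STRING."""
    return ORACLE_TO_IMPALA.get(oracle_type, "STRING")

def to_local(ts_utc, offset_hours=-5):
    """Convert a UTC source timestamp to the lake's local time zone
    (a fixed offset here purely for illustration)."""
    local = timezone(timedelta(hours=offset_hours))
    return ts_utc.replace(tzinfo=timezone.utc).astimezone(local)
```

<p>Keeping these rules small and centralized means every table gets the same treatment during both the initial ingest and the real-time stream.</p>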
| |
<p>Before we kick off the initial ingest process, we start the Oracle GoldenGate extracts and replicats
(that&#8217;s the actual term) to begin capturing changes from a database and sending them to Kafka. Every
message, depending on the database operation type and GoldenGate configuration, might have before/after
table row values, the operation type, and the database commit transaction time (GoldenGate only extracts
changes for committed transactions). Once the initial ingest is finished, and because GoldenGate has been
sending changes continuously since the moment we started it, we can start the real-time ingest flow in NiFi.</p>
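<p>A sketch of unpacking such a change record (the field names follow GoldenGate&#8217;s JSON output format with its before/after row images, but the exact payload shape here is an assumption):</p>

```python
import json

# Sketch of unpacking a GoldenGate-style change record consumed from
# Kafka. The field names follow GoldenGate's JSON output ("op_type",
# "before"/"after" images), but the exact payload shape is an assumption.

def parse_change(raw):
    msg = json.loads(raw)
    return {
        "table": msg["table"],
        "op": {"I": "INSERT", "U": "UPDATE", "D": "DELETE"}[msg["op_type"]],
        "commit_time": msg["op_ts"],   # commit time of the source transaction
        "before": msg.get("before"),   # row image before the change (if any)
        "after": msg.get("after"),     # row image after the change (if any)
    }

# A hypothetical UPDATE message: a patient encounter gets a discharge time.
raw = json.dumps({
    "table": "MILL.ENCOUNTER",
    "op_type": "U",
    "op_ts": "2020-07-30 12:00:01.000000",
    "before": {"encntr_id": 42, "disch_dt_tm": None},
    "after": {"encntr_id": 42, "disch_dt_tm": "2020-07-30 11:59:58"},
})
change = parse_change(raw)
```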
| |
<p>A side benefit of decoupling GoldenGate, Kafka, NiFi, and Kudu from one another is that it makes the
process resilient to failures. It also allows us to bring any one of these systems down for maintenance
without much impact.</p>
| |
<p>Below is the NiFi flow that handles real-time streaming from Oracle/GoldenGate/Kafka and persists
data into Kudu:</p>
| |
| <p><img src="https://boristyukin.com/content/images/nifi_rt.png" width="100%" /></p> |
| |
| <ol> |
<li>The NiFi flow consumes Kafka messages produced by GoldenGate. Every table from every domain has
its own Kafka topic. Topics have only one partition to preserve the original order of messages.</li>
  <li>New messages are queued in NiFi using a simple first-in-first-out pattern and grouped by
table. It is important to preserve the order of messages while still processing tables concurrently.</li>
| <li>Messages are transformed, using the same basic rules we apply during the initial ingest.</li> |
<li>Finally, messages are persisted into Kudu. Some of them represent INSERT operations, which
result in brand-new rows added to Kudu tables. Other messages are UPDATE and DELETE operations.
And we have to deal with an exotic PK_UPDATE operation, when a primary key was changed for some
reason in the source system (e.g. PK=111 was renamed to 222). We had to write a custom Kudu client
to handle all these cases, using the Java Kudu API, which was fun to use. NiFi allowed us to write
custom processors and integrate that custom Kudu code directly into our flow.</li>
<li>Useful metrics are stored in a separate Kudu table. We collect the number of messages processed,
the operation type (insert, update, delete, or primary key update), latency, and important timestamps.
Using this data, we can optimize and tweak the performance of the pipeline, and monitor it by
visualizing the data on a dashboard.</li>
| </ol> |
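<p>The apply logic in step 4 can be sketched like this, using a plain dict as a stand-in for a Kudu table (the message shape is an assumption, and the real implementation is a custom client built on the Java Kudu API; the point is the dispatch over operation types, with PK_UPDATE handled as delete-old-key plus insert-new-row):</p>

```python
# Sketch of applying CDC operations, with a dict standing in for a
# Kudu table keyed by primary key. The message shape is an assumption.

def apply_message(table, msg):
    op, before, after = msg["op"], msg.get("before"), msg.get("after")
    if op == "INSERT":
        table[after["pk"]] = dict(after)
    elif op == "UPDATE":
        table[after["pk"]].update(after)
    elif op == "DELETE":
        del table[before["pk"]]
    elif op == "PK_UPDATE":
        # Primary key changed in the source (e.g. 111 renamed to 222):
        # remove the row under the old key, insert it under the new one.
        del table[before["pk"]]
        table[after["pk"]] = dict(after)
    else:
        raise ValueError(f"unknown operation: {op}")

table = {}
apply_message(table, {"op": "INSERT", "after": {"pk": 111, "val": "a"}})
apply_message(table, {"op": "PK_UPDATE",
                      "before": {"pk": 111}, "after": {"pk": 222, "val": "a"}})
```

<p>Because each table&#8217;s messages arrive from a single-partition topic in commit order, applying them one at a time per table like this is enough to keep the Kudu copy consistent with the source.</p>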
| |
| <p>The entire flow handles 900+ tables today (as we capture 300 tables from 3 Cerner domains).</p> |
| |
<p>We process ~2,000 messages per second, or 125MM messages per day. GoldenGate accumulates 150 GB worth
of database changes per day. In Kudu, we store over 120B rows of data.</p>
| |
| <p>Our average latency is 6 seconds and the pipeline is running 24x7.</p> |
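<p>The latency we track is simply the gap between the database commit time carried in the GoldenGate message and the moment the change is applied in Kudu; a minimal sketch (the timestamp format is assumed):</p>

```python
from datetime import datetime

# Sketch of the end-to-end latency metric: time from the source database
# commit (carried in the change message) to the moment the change is
# applied in Kudu. The timestamp format is an assumption.

FMT = "%Y-%m-%d %H:%M:%S"

def latency_seconds(commit_ts, applied_ts):
    return (datetime.strptime(applied_ts, FMT)
            - datetime.strptime(commit_ts, FMT)).total_seconds()
```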
| |
| <h2 id="user-experience">User experience</h2> |
| <p>I am biased, but I think this is a game-changer for analysts, BI developers, or any data people. |
| What they get is an ability to access near real-time production data, with all the benefits and |
| scalability of Big Data technology.</p> |
| |
<p>Here, I run a query in Impala to count patients admitted to our hospitals within the last 7 days
who are still in the hospital (not discharged yet):</p>
| |
| <p><img src="https://boristyukin.com/content/images/query1.png" alt="query 1" /></p> |
| |
<p>Then, 5 seconds later, I run the same query again and see the numbers have changed - more patients got
admitted and discharged:</p>
| |
<p><img src="https://boristyukin.com/content/images/query2.png" alt="query 2" /></p>
| |
<p>The query below counts certain clinical events in a 20B-row Kudu table (which is updated in near
real-time). While it takes 28 seconds to finish, this query would never even finish if I ran it against
our Oracle database. It found 13.7B events:</p>
| |
| <p><img src="https://boristyukin.com/content/images/query3.png" alt="query 3" /></p> |
| |
| <h2 id="credits">Credits</h2> |
<p>Apache Impala, Apache Kudu and Apache NiFi were the pillars of our real-time pipeline. Back in 2017,
Impala was already a rock-solid, battle-tested project, while NiFi and Kudu were relatively new. We
did have some reservations about using them and were concerned about support if/when we needed it
(and we did need it a few times).</p>
| |
<p>We were amazed by all the help, dedication, knowledge sharing, friendliness, and openness of the Impala,
NiFi and Kudu developers. A huge thank you to all of you who helped us along the way. You guys are
amazing and you are building fantastic products!</p>
| |
| <p>To be continued…</p> |
| |
| </div> |
| </article> |
| |
| |
| </div> |
| <div class="col-lg-3 recent-posts"> |
| <h3>Recent posts</h3> |
| <ul> |
| |
| <li> <a href="/2020/09/21/apache-kudu-1-13-0-release.html">Apache Kudu 1.13.0 released</a> </li> |
| |
| <li> <a href="/2020/08/11/fine-grained-authz-ranger.html">Fine-Grained Authorization with Apache Kudu and Apache Ranger</a> </li> |
| |
| <li> <a href="/2020/07/30/building-near-real-time-big-data-lake.html">Building Near Real-time Big Data Lake</a> </li> |
| |
| <li> <a href="/2020/05/18/apache-kudu-1-12-0-release.html">Apache Kudu 1.12.0 released</a> </li> |
| |
| <li> <a href="/2019/11/20/apache-kudu-1-11-1-release.html">Apache Kudu 1.11.1 released</a> </li> |
| |
| <li> <a href="/2019/11/20/apache-kudu-1-10-1-release.html">Apache Kudu 1.10.1 released</a> </li> |
| |
| <li> <a href="/2019/07/09/apache-kudu-1-10-0-release.html">Apache Kudu 1.10.0 Released</a> </li> |
| |
| <li> <a href="/2019/04/30/location-awareness.html">Location Awareness in Kudu</a> </li> |
| |
| <li> <a href="/2019/04/22/fine-grained-authorization-with-apache-kudu-and-impala.html">Fine-Grained Authorization with Apache Kudu and Impala</a> </li> |
| |
| <li> <a href="/2019/03/19/testing-apache-kudu-applications-on-the-jvm.html">Testing Apache Kudu Applications on the JVM</a> </li> |
| |
| <li> <a href="/2019/03/15/apache-kudu-1-9-0-release.html">Apache Kudu 1.9.0 Released</a> </li> |
| |
| <li> <a href="/2019/03/05/transparent-hierarchical-storage-management-with-apache-kudu-and-impala.html">Transparent Hierarchical Storage Management with Apache Kudu and Impala</a> </li> |
| |
| <li> <a href="/2018/12/11/call-for-posts.html">Call for Posts</a> </li> |
| |
| <li> <a href="/2018/10/26/apache-kudu-1-8-0-released.html">Apache Kudu 1.8.0 Released</a> </li> |
| |
| <li> <a href="/2018/09/26/index-skip-scan-optimization-in-kudu.html">Index Skip Scan Optimization in Kudu</a> </li> |
| |
| </ul> |
| </div> |
| </div> |
| |
| <footer class="footer"> |
| <div class="row"> |
| <div class="col-md-9"> |
| <p class="small"> |
| Copyright © 2020 The Apache Software Foundation. |
| </p> |
| <p class="small"> |
| Apache Kudu, Kudu, Apache, the Apache feather logo, and the Apache Kudu |
| project logo are either registered trademarks or trademarks of The |
| Apache Software Foundation in the United States and other countries. |
| </p> |
| </div> |
| <div class="col-md-3"> |
| <a class="pull-right" href="https://www.apache.org/events/current-event.html"> |
| <img src="https://www.apache.org/events/current-event-234x60.png"/> |
| </a> |
| </div> |
| </div> |
| </footer> |
| </div> |
| <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script> |
| <script> |
| // Try to detect touch-screen devices. Note: Many laptops have touch screens. |
| $(document).ready(function() { |
| if ("ontouchstart" in document.documentElement) { |
| $(document.documentElement).addClass("touch"); |
| } else { |
| $(document.documentElement).addClass("no-touch"); |
| } |
| }); |
| </script> |
| <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js" |
| integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" |
| crossorigin="anonymous"></script> |
| <script> |
| (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ |
| (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), |
| m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) |
| })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); |
| |
| ga('create', 'UA-68448017-1', 'auto'); |
| ga('send', 'pageview'); |
| </script> |
| <script src="https://cdnjs.cloudflare.com/ajax/libs/anchor-js/3.1.0/anchor.js"></script> |
| <script> |
| anchors.options = { |
| placement: 'right', |
| visible: 'touch', |
| }; |
| anchors.add(); |
| </script> |
| </body> |
| </html> |
| |