<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="Hadoop Ozone Documentation">
<title>Documentation for Apache Hadoop Ozone</title>
<link href="css/bootstrap.min.css" rel="stylesheet">
<link href="css/ozonedoc.css" rel="stylesheet">
</head>
<body>
<nav class="navbar navbar-inverse navbar-fixed-top">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#sidebar" aria-expanded="false" aria-controls="navbar">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand hidden-xs" href="#">Apache Hadoop Ozone/HDDS documentation</a>
<a class="navbar-brand visible-xs-inline" href="#">Hadoop Ozone</a>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav navbar-right">
<li><a href="https://github.com/apache/hadoop">Source</a></li>
<li><a href="https://hadoop.apache.org">Apache Hadoop</a></li>
<li><a href="https://apache.org">ASF</a></li>
</ul>
</div>
</div>
</nav>
<div class="container-fluid">
<div class="row">
<div class="col-sm-3 col-md-2 sidebar" id="sidebar">
<img src="ozone-logo.png" style="max-width: 100%;"/>
<ul class="nav nav-sidebar">
<li class="">
<a href="index.html">
<span>Ozone Overview</span>
</a>
</li>
<li class="">
<a href="runningviadocker.html">
<span>Getting Started</span>
</a>
<ul class="nav">
<li class="">
<a href="./runningviadocker.html">Alpha Cluster</a>
</li>
<li class="">
<a href="./settings.html">Configuration</a>
</li>
<li class="">
<a href="./realcluster.html">Starting an Ozone Cluster</a>
</li>
<li class="">
<a href="./ozonefs.html">Ozone File System</a>
</li>
<li class="">
<a href="./runningwithhdfs.html">Running concurrently with HDFS</a>
</li>
<li class="">
<a href="./buildingsources.html">Building from Sources</a>
</li>
</ul>
</li>
<li class="">
<a href="commandshell.html">
<span>Client</span>
</a>
<ul class="nav">
<li class="">
<a href="./commandshell.html">
<span>Ozone CLI</span>
</a>
<ul class="nav">
<li class="">
<a href="./volumecommands.html">Volume Commands</a>
</li>
<li class="">
<a href="./bucketcommands.html">Bucket Commands</a>
</li>
<li class="">
<a href="./keycommands.html">Key Commands</a>
</li>
</ul>
</li>
<li class="">
<a href="./s3.html">S3</a>
</li>
<li class="">
<a href="./javaapi.html">Java API</a>
</li>
</ul>
</li>
<li class="">
<a href="dozone.html">
<span>Tools</span>
</a>
<ul class="nav">
<li class="">
<a href="./auditparser.html">Audit Parser</a>
</li>
<li class="">
<a href="./dozone.html">Dozone &amp; Dev Tools</a>
</li>
<li class="">
<a href="./freon.html">Freon</a>
</li>
<li class="">
<a href="./genconf.html">Generate Configurations</a>
</li>
<li class="">
<a href="./scmcli.html">SCMCLI</a>
</li>
</ul>
</li>
<li class="">
<a href="prometheus.html">
<span>Recipes</span>
</a>
<ul class="nav">
<li class="">
<a href="./prometheus.html">Monitoring with Prometheus</a>
</li>
<li class="">
<a href="./sparkozonefsk8s.html">Spark in Kubernetes with OzoneFS</a>
</li>
</ul>
</li>
<li class="">
<a href="./concepts.html">
<span>Architecture</span>
</a>
<ul class="nav">
<li class="">
<a href="./hdds.html">Hadoop Distributed Data Store</a>
</li>
<li class="">
<a href="./ozonemanager.html">Ozone Manager</a>
</li>
<li class="active">
<a href="./ozonesecurityarchitecture.html">Ozone Security Overview</a>
</li>
<li class="">
<a href="./setupsecureozone.html">Setup secure ozone cluster</a>
</li>
</ul>
</li>
<li class="visible-xs"><a href="#">References</a>
<ul class="nav">
<li><a href="https://github.com/apache/hadoop"><span class="glyphicon glyphicon-new-window" aria-hidden="true"></span> Source</a></li>
<li><a href="https://hadoop.apache.org"><span class="glyphicon glyphicon-new-window" aria-hidden="true"></span> Apache Hadoop</a></li>
<li><a href="https://apache.org"><span class="glyphicon glyphicon-new-window" aria-hidden="true"></span> ASF</a></li>
</ul></li>
</ul>
</div>
<div class="col-sm-9 col-sm-offset-3 col-md-10 col-md-offset-2 main">
<h1>Ozone Security Overview</h1>
<div class="col-md-9">
<!---
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<h1 id="security-in-ozone">Security in Ozone</h1>
<p>Starting with the Badlands release (ozone-0.4.0-alpha), an Ozone cluster can be secured against external threats. Specifically, it can be configured for the following security features:</p>
<ol>
<li>Authentication</li>
<li>Authorization</li>
<li>Audit</li>
<li>Transparent Data Encryption (TDE)</li>
</ol>
<h2 id="authentication">Authentication</h2>
<h3 id="kerberos">Kerberos</h3>
<p>Similar to Hadoop, Ozone supports Kerberos-based authentication. One way to set up identities for all the daemons and clients is therefore to create Kerberos keytabs and configure them just like for any other Hadoop service.</p>
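<p>For illustration, principals and keytabs for an Ozone daemon can be created with the standard MIT Kerberos tooling. The principal name, hostname, realm and keytab path below are placeholders; use whatever matches your environment and the Kerberos-related properties in your Ozone configuration:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># Hypothetical example: create a principal and keytab for an OzoneManager host
kadmin.local -q &#34;addprinc -randkey om/om-host.example.com@EXAMPLE.COM&#34;
kadmin.local -q &#34;ktadd -k /etc/security/keytabs/om.keytab om/om-host.example.com@EXAMPLE.COM&#34;</code></pre></div>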
<h3 id="tokens">Tokens</h3>
<p>Tokens are widely used in Hadoop to achieve lightweight authentication without compromising on security. The main motivation for using tokens inside Ozone is to prevent unauthorized access while keeping the protocol lightweight and avoiding the need to share secrets over the wire. Ozone uses three types of tokens:</p>
<h4 id="delegation-token">Delegation token</h4>
<p>Once a client has established its identity via Kerberos, it can request a delegation token from the OzoneManager. This token can be used by the client to prove its identity until the token expires. Like Hadoop delegation tokens, an Ozone delegation token has three important fields:</p>
<ol>
<li><strong>Renewer</strong>: User responsible for renewing the token.</li>
<li><strong>Issue date</strong>: Time at which token was issued.</li>
<li><strong>Max date</strong>: Time after which token can’t be renewed.</li>
</ol>
<p>Token operations like get, renew and cancel can only be performed over a Kerberos-authenticated connection. Clients can use a delegation token to establish a connection with the OzoneManager and perform any file system/object store operation, such as listing the objects in a bucket or creating a volume.</p>
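<p>For illustration, newer Ozone releases expose delegation token operations through the Ozone shell. The following is only a sketch; the exact subcommand names and options may differ between releases:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># Illustrative only: delegation token operations over a Kerberos-authenticated session
ozone sh token get      # request a delegation token from the OzoneManager
ozone sh token renew    # renew a previously obtained token (as its designated renewer)
ozone sh token cancel   # cancel a token that is no longer needed</code></pre></div>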
<h4 id="block-tokens">Block Tokens</h4>
<p>Block tokens are similar to delegation tokens in the sense that they are also signed by the OzoneManager. Block tokens are created by the OM (OzoneManager) when a client request involves interaction with DataNodes, such as reading or writing Ozone keys.</p>
<p>Unlike delegation tokens, there is no client API to request block tokens. Instead, they are handed transparently to the client along with the key/block locations. Block tokens are validated by DataNodes when they receive read/write requests from clients. A block token can&rsquo;t be renewed explicitly by the client; a client with an expired block token needs to refetch the key/block locations to get new block tokens.</p>
<h4 id="s3token">S3Token</h4>
<p>Like block tokens, S3Tokens are handled transparently for clients. An S3Token is signed with the S3 secret created by the client, and the S3Gateway creates this token for every S3 client request. To create an S3Token, a user must therefore have an S3 secret.</p>
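<p>As an example, an S3 secret can be obtained from a Kerberos-authenticated session and then used as ordinary AWS-style credentials against the S3Gateway. The endpoint host and the way the returned values are exported below are placeholders:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># Obtain an access key id / secret pair for the authenticated Kerberos user
ozone s3 getsecret

# Use the returned values like regular AWS credentials, e.g. with the aws CLI
export AWS_ACCESS_KEY_ID=&lt;awsAccessKey printed above&gt;
export AWS_SECRET_ACCESS_KEY=&lt;awsSecret printed above&gt;
aws s3api --endpoint-url http://s3g-host:9878 list-buckets</code></pre></div>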
<h3 id="certificates">Certificates</h3>
<p>Apart from Kerberos and tokens, Ozone uses certificate-based authentication for Ozone service components. To enable this, the SCM (StorageContainerManager) bootstraps itself as a Certificate Authority when security is enabled. This allows every daemon inside Ozone to have an SCM-signed certificate. Below is a brief description of the steps involved:</p>
<ol>
<li>DataNodes and OzoneManagers submit a CSR (certificate signing request) to the SCM.</li>
<li>The SCM verifies the identity of the DN (DataNode) or OM via Kerberos and generates a certificate.</li>
<li>This certificate is used by the OM and DN to prove their identities.</li>
<li>DataNodes use the OzoneManager certificate to validate block tokens. This is possible because both the OzoneManager and the DataNodes trust SCM-signed certificates.</li>
</ol>
<h2 id="authorization">Authorization</h2>
<p>Ozone provides a pluggable API to control authorization of all client-related operations. The default implementation allows every request and is clearly not meant for production environments. To configure a more fine-grained policy, one may configure the Ranger plugin for Ozone. Since authorization is a pluggable module, clients can also implement their own custom authorization policy and configure it using <code>ozone.acl.authorizer.class</code>.</p>
<h2 id="audit">Audit</h2>
<p>Ozone provides the ability to audit all read &amp; write operations to the OM, SCM and DataNodes. Ozone audit leverages the log4j2 Marker feature, which enables users to selectively audit only READ or WRITE operations through a simple config change, without restarting the service(s).</p>
<p>To enable or disable auditing of READ operations, set <code>filter.read.onMatch</code> to NEUTRAL or DENY respectively. Similarly, auditing of WRITE operations can be controlled using <code>filter.write.onMatch</code>.</p>
<p>Generating audit logs is only half the job, so Ozone also provides AuditParser, a sqlite-based command line utility to parse and query audit logs with predefined templates (e.g. top 5 commands) and options for custom queries. Once the log file has been loaded into AuditParser, one can simply run a template as shown below:
<code>ozone auditparser &lt;path to db file&gt; template top5cmds</code></p>
<p>Similarly, users can also execute a custom query using:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">ozone auditparser &lt;path to db file&gt; query <span style="color:#e6db74">&#34;select * from audit where level==&#39;FATAL&#39;&#34;</span></code></pre></div>
<h2 id="transparent-data-encryption">Transparent Data Encryption</h2>
<p>The Ozone TDE setup process and usage are very similar to HDFS TDE. The major difference is that Ozone TDE is enabled at the bucket level, when a bucket is created.</p>
<p>To create an encrypted bucket, a client needs to:</p>
<ul>
<li>Create a bucket encryption key with the hadoop key CLI (the same way as for an HDFS encryption zone key)</li>
</ul>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">hadoop key create key1</code></pre></div>
<ul>
<li>Create an encrypted bucket with the <code>-k</code> option</li>
</ul>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">ozone sh bucket create -k key1 /vol1/ez1</code></pre></div>
<p>After that, usage is transparent to the client and end users, i.e., all data written to the encrypted bucket is encrypted at the DataNodes.</p>
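<p>For instance, writing and reading a key in the encrypted bucket uses the ordinary key commands; encryption and decryption happen behind the scenes. The key name and local file paths below are just examples:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># Data is encrypted on write and decrypted on read, transparently to the user
ozone sh key put /vol1/ez1/mykey ./localfile.txt
ozone sh key get /vol1/ez1/mykey ./localfile.copy</code></pre></div>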
<p>To learn more about how to set up a secure Ozone cluster, refer to <a href="./setupsecureozone.html">How to setup secure Ozone cluster</a>.</p>
<p>For a deeper dive into the Ozone security architecture, see the <a href="https://issues.apache.org/jira/secure/attachment/12911638/HadoopStorageLayerSecurity.pdf">Ozone security architecture document</a>.</p>
</div>
</div>
</div>
</div>
<script src="./js/jquery.min.js"></script>
<script src="./js/ozonedoc.js"></script>
<script src="./js/bootstrap.min.js"></script>
</body>
</html>