<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Ozone Overview on Ozone</title>
<link>/</link>
<description>Recent content in Ozone Overview on Ozone</description>
<generator>Hugo -- gohugo.io</generator>
<language>en-us</language>
<lastBuildDate>Thu, 14 Sep 2017 00:00:00 +0000</lastBuildDate>
<atom:link href="/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Alpha Cluster</title>
<link>/runningviadocker.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/runningviadocker.html</guid>
<description>This is an alpha release of Ozone. Please don&amp;rsquo;t use this release in production. Please check the roadmap page for features under development.
The easiest way to run Ozone is to download the release tarball and launch Ozone via Docker. Docker will create a small Ozone cluster on your machine, including the data nodes and the Ozone services.
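For example (a sketch; the directory layout may differ between releases), after untarring the release you can bring the cluster up with cd compose/ozone followed by docker-compose up -d.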
Running Ozone via Docker: this assumes that you have Docker installed on the machine.</description>
</item>
<item>
<title>Building from Sources</title>
<link>/buildingsources.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/buildingsources.html</guid>
<description>This is a guide on building the Ozone sources. If you are not planning to build the sources yourself, you can safely skip this page.
If you are a Hadoop ninja, wise in the ways of Apache, you already know that a real Apache release is a source release. To build from sources, untar the source tarball and run the Ozone build command.
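A typical Maven invocation (a sketch; the exact profiles and flags vary between Ozone releases) looks like mvn clean package -Pdist -DskipTests -Dtar.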
These instructions assume that you have all the dependencies needed to build Hadoop on your build machine.</description>
</item>
<item>
<title>Configuration</title>
<link>/settings.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/settings.html</guid>
<description>If you are feeling adventurous, you can set up Ozone in a real cluster. Setting up a real cluster requires us to understand the components of Ozone. Ozone is designed to work concurrently with HDFS, but it is also capable of running independently. The components of Ozone are the same in both approaches.
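The key settings live in ozone-site.xml; a minimal configuration (a sketch; the property names assume this release) sets ozone.enabled, ozone.metadata.dirs, ozone.scm.names and ozone.om.address.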
Ozone Components: the Ozone Manager is the server in charge of the namespace of Ozone.</description>
</item>
<item>
<title>Running concurrently with HDFS</title>
<link>/runningwithhdfs.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/runningwithhdfs.html</guid>
<description>Ozone is designed to work with HDFS, so it is easy to deploy Ozone in an existing HDFS cluster.
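To enable it (a sketch based on this release; property names may differ in later versions), set ozone.enabled to true in ozone-site.xml and add the HDDS data node plugin, org.apache.hadoop.ozone.HddsDatanodeService, to dfs.datanode.plugins in hdfs-site.xml.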
Ozone does not support security today. This is a work in progress, tracked in HDDS-4. If you enable Ozone in a secure HDFS cluster, Ozone will refuse to work, for your own protection.
In other words, until the Ozone security work is done, Ozone will not work in any secure cluster.</description>
</item>
<item>
<title>Starting an Ozone Cluster</title>
<link>/realcluster.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/realcluster.html</guid>
<description>Before we boot up the Ozone cluster, we need to initialize both SCM and Ozone Manager.
The command ozone scm --init allows SCM to create the cluster identity and initialize its state. The init command is similar to a Namenode format; it is executed only once and creates all the on-disk structures SCM needs to work correctly. SCM is then started with ozone --daemon start scm.
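The Ozone Manager is initialized and started the same way (a sketch; the exact OM subcommand has varied across releases): ozone om --init followed by ozone --daemon start om.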
Once we know SCM is up and running, we can create an Object Store for our use.</description>
</item>
<item>
<title>Hadoop Distributed Data Store</title>
<link>/hdds.html</link>
<pubDate>Thu, 14 Sep 2017 00:00:00 +0000</pubDate>
<guid>/hdds.html</guid>
<description>SCM Overview: Storage Container Manager, or SCM, is a very important component of Ozone. SCM offers block and container-based services to the Ozone Manager. A container is a collection of unrelated blocks under Ozone. SCM and the data nodes work together to maintain the replication levels needed by the cluster.
It is easiest to understand the role that SCM plays by walking through a putKey operation.
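In outline (a simplified sketch of the flow): the client asks OM to allocate the key, OM asks SCM for a block inside a container, SCM returns the block and the pipeline of data nodes that host it, the client writes the data to those data nodes, and finally the key is committed to OM.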
To put a key, a client makes a call to OM with the following arguments.</description>
</item>
<item>
<title>Ozone Manager</title>
<link>/ozonemanager.html</link>
<pubDate>Thu, 14 Sep 2017 00:00:00 +0000</pubDate>
<guid>/ozonemanager.html</guid>
<description>OM Overview: Ozone Manager, or OM, is the namespace manager for Ozone. The clients (RPC clients, REST proxy, Ozone file system, etc.) communicate with OM to create and delete various Ozone objects.
Each Ozone volume is the root of a namespace under OM. This is very different from HDFS, which provides a single rooted file system.
Ozone&amp;rsquo;s namespace is a collection of volumes, a forest rather than the single rooted tree of HDFS.</description>
</item>
<item>
<title>Architecture</title>
<link>/concepts.html</link>
<pubDate>Tue, 10 Oct 2017 00:00:00 +0000</pubDate>
<guid>/concepts.html</guid>
<description>Ozone is a redundant, distributed object store built by leveraging primitives present in HDFS. The primary design point of Ozone is scalability; it aims to scale to billions of objects.
Ozone consists of volumes, buckets, and keys. A volume is similar to a home directory in the Ozone world, and only an administrator can create one. Volumes are used to store buckets. Once a volume is created, users can create as many buckets as needed.</description>
</item>
<item>
<title>Java API</title>
<link>/javaapi.html</link>
<pubDate>Thu, 14 Sep 2017 00:00:00 +0000</pubDate>
<guid>/javaapi.html</guid>
<description>Introduction: Ozone ships with its own client library, which supports both RPC (Remote Procedure Call) and REST (Representational State Transfer). This library is the primary user interface to Ozone.
It is trivial to switch from RPC to REST, or vice versa, by setting the property ozone.client.protocol in the configuration or by calling the appropriate factory method.
Creating an Ozone client: the Ozone client factory creates the Ozone client and allows the user to specify the protocol of communication.
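A minimal sketch in Java (assuming the factory methods present in this release): OzoneConfiguration conf = new OzoneConfiguration(); OzoneClient client = OzoneClientFactory.getClient(conf); ObjectStore store = client.getObjectStore();</description>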
</item>
<item>
<title>Ozone File System</title>
<link>/ozonefs.html</link>
<pubDate>Thu, 14 Sep 2017 00:00:00 +0000</pubDate>
<guid>/ozonefs.html</guid>
<description>There are many Hadoop compatible file systems under Hadoop. Hadoop compatible file systems ensure that storage backends like Ozone can easily be integrated into the Hadoop ecosystem.
Setting up the Ozone file system: to create an Ozone file system, we have to choose a bucket where the file system will live. This bucket is used as the backing store for OzoneFileSystem; all files and directories are stored as keys in this bucket.
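As a sketch (the URI scheme and implementation class have changed across Ozone releases), core-site.xml would set fs.o3.impl to org.apache.hadoop.fs.ozone.OzoneFileSystem and fs.defaultFS to o3://bucket.volume/, after which commands such as hdfs dfs -ls / operate on keys in that bucket.</description>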
</item>
<item>
<title>Freon</title>
<link>/freon.html</link>
<pubDate>Sat, 02 Sep 2017 23:58:17 -0700</pubDate>
<guid>/freon.html</guid>
<description>Overview: Freon is a load generator for Ozone. This tool is used for testing the functionality of Ozone.
Random keys: in randomkeys mode, the data written into the Ozone cluster is randomly generated, and each key will be of size 10 KB.
The number of volumes, buckets, and keys can be configured, as can the replication type and factor (e.g. replicate with Ratis to 3 nodes).
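A hypothetical invocation (flag names vary between releases): ozone freon randomkeys --numOfVolumes=1 --numOfBuckets=10 --numOfKeys=100.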
For more information use
bin/ozone freon --help</description>
</item>
<item>
<title>Dozone &amp; Dev Tools</title>
<link>/dozone.html</link>
<pubDate>Thu, 10 Aug 2017 00:00:00 +0000</pubDate>
<guid>/dozone.html</guid>
<description>Dozone stands for Docker for Ozone. Ozone supports Docker to make it easy to develop and test Ozone, and starting a Docker-based Ozone cluster is simple.
In the compose/ozone directory there are two files that define the Docker and Ozone settings.
Developers can
cd compose/ozone and simply run
docker-compose up -d to run an Ozone cluster on Docker.
This command will launch a Namenode, an OM, an SCM, and a data node.
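If more data nodes are needed, docker-compose can scale the cluster; for example (a sketch using the standard docker-compose scale feature): docker-compose scale datanode=3.</description>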
</item>
<item>
<title>SCMCLI</title>
<link>/scmcli.html</link>
<pubDate>Thu, 10 Aug 2017 00:00:00 +0000</pubDate>
<guid>/scmcli.html</guid>
<description>SCM is the block service for Ozone and also its workhorse, but user processes never talk to SCM directly. However, being able to read the state of SCM is useful.
SCMCLI allows a developer to access SCM directly. Please note: improper usage of this tool can destroy your cluster. Unless you know exactly what you are doing, please do not use this tool; in other words, this is a developer-only tool.
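A hypothetical invocation (subcommand names vary between releases): ozone scmcli container list, to list the containers known to SCM.</description>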
</item>
<item>
<title>Bucket Commands</title>
<link>/bucketcommands.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/bucketcommands.html</guid>
<description>The Ozone shell supports the following bucket commands: create, delete, info, list, and update.
Create: the bucket create command allows a user to create a bucket.
Params: Uri, the name of the bucket in /volume/bucket format. For example, ozone sh bucket create /hive/jan creates a bucket called jan in the hive volume. Since no scheme was specified, this command defaults to the O3 (RPC) protocol.
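The other commands follow the same URI convention; for example (same format, assuming the bucket exists), ozone sh bucket info /hive/jan prints information about the jan bucket.</description>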
</item>
<item>
<title>Key Commands</title>
<link>/keycommands.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/keycommands.html</guid>
<description>The Ozone shell supports the following key commands: get, put, delete, info, and list.
Get: the key get command downloads a key from the Ozone cluster to the local file system.
Params: Uri, the name of the key in /volume/bucket/key format; FileName, the local file to download the key to. For example, ozone sh key get /hive/jan/sales.orc sales.orc downloads the key to the local file sales.orc.
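The corresponding upload takes the same arguments in the same order: ozone sh key put /hive/jan/sales.orc sales.orc uploads the local file sales.orc as that key.</description>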
</item>
<item>
<title>Ozone CLI</title>
<link>/commandshell.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/commandshell.html</guid>
<description>Ozone has a set of command line tools that can be used to manage ozone.
All these commands are invoked via the ozone script.
The commands supported by ozone are:
classpath prints the class path needed to get the hadoop jar and the required libraries; fs runs a command on the Ozone file system; datanode starts or stops the HDDS data nodes via the daemon command.
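For example (assuming an Ozone bucket is configured as the default file system), ozone classpath prints that class path and ozone fs -ls / lists the root of the Ozone file system.</description>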
</item>
<item>
<title>REST API</title>
<link>/rest.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/rest.html</guid>
<description>The Ozone REST APIs allow users to access Ozone via the REST protocol.
Authentication and Authorization: for the time being, the default authentication mode of the REST API is an insecure access mode called Simple mode. Under this mode, the Ozone server trusts the user name specified by the client and does not perform any authentication.
The user name can be specified in an HTTP header as x-ozone-user: {USER_NAME}. For example, if the header x-ozone-user: bilbo is added to the HTTP request, the operation will be executed as the user bilbo.
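As a sketch with curl (the host and port here are illustrative and depend on your deployment): curl -i -H "x-ozone-user: bilbo" http://localhost:9880/ issues a request that is executed as bilbo.</description>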
</item>
<item>
<title>S3</title>
<link>/s3.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/s3.html</guid>
<description>Ozone provides an S3-compatible REST interface, so the object store data can be used with any S3-compatible tools.
Getting started: the S3 Gateway is a separate component that provides the S3-compatible endpoint. It must be started in addition to the regular Ozone components.
You can start a Docker-based cluster, including the S3 gateway, from the release package.
Go to the compose/ozones3 directory, and start the server:
docker-compose up -d. You can access the S3 gateway at http://localhost:9878.
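From there, standard S3 tooling works against the gateway; for example (assuming the AWS CLI is installed): aws s3api --endpoint http://localhost:9878 create-bucket --bucket bucket1.</description>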
</item>
<item>
<title>Volume Commands</title>
<link>/volumecommands.html</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>/volumecommands.html</guid>
<description>Volume commands generally need administrator privileges. The Ozone shell supports the following volume commands: create, delete, info, list, and update.
Create: the volume create command allows an administrator to create a volume and assign it to a user.
Params: -q, --quota, optional, specifies the maximum size this volume can use in the Ozone cluster; -u, --user, required, the name of the user who owns this volume.
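For example (illustrative names): ozone sh volume create --quota=1TB --user=bilbo /hive creates the volume hive, owned by bilbo, with a one terabyte quota.</description>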
</item>
</channel>
</rss>