---++ Building & Installing Apache Atlas
---+++ Building Atlas
<verbatim>
git clone https://git-wip-us.apache.org/repos/asf/incubator-atlas.git atlas
cd atlas
export MAVEN_OPTS="-Xmx1536m -XX:MaxPermSize=512m" && mvn clean install
</verbatim>
Once the build successfully completes, artifacts can be packaged for deployment.
<verbatim>
mvn clean package -Pdist
</verbatim>
NOTE:
1. Use option '-DskipTests' to skip running unit and integration tests
2. Use option '-P perf' to instrument Atlas to collect performance metrics
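For example, to build the distribution while skipping the unit and integration tests:
<verbatim>
mvn clean package -Pdist -DskipTests
</verbatim>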
To build a distribution that configures Atlas for external HBase and Solr, build with the external-hbase-solr profile.
<verbatim>
mvn clean package -Pdist,external-hbase-solr
</verbatim>
Note that when the external-hbase-solr profile is used, the following steps need to be completed to make Atlas functional; a sketch of these settings follows the list.
* Configure atlas.graph.storage.hostname (see "Graph persistence engine - HBase" in the [[Configuration][Configuration]] section).
* Configure atlas.graph.index.search.solr.zookeeper-url (see "Graph Search Index - Solr" in the [[Configuration][Configuration]] section).
* Set HBASE_CONF_DIR to point to a valid HBase config directory (see "Graph persistence engine - HBase" in the [[Configuration][Configuration]] section).
* Create the SOLR indices (see "Graph Search Index - Solr" in the [[Configuration][Configuration]] section).
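A minimal sketch of these settings, with placeholder host names and paths (adjust for your environment):
<verbatim>
# in ATLAS_HOME/conf/atlas-application.properties (hosts are placeholders)
atlas.graph.storage.hostname=zk-host1,zk-host2,zk-host3
atlas.graph.index.search.solr.zookeeper-url=zk-host1:2181,zk-host2:2181

# in the environment before starting Atlas (path is a placeholder)
export HBASE_CONF_DIR=/etc/hbase/conf
</verbatim>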
To build a distribution that packages HBase and Solr, build with the embedded-hbase-solr profile.
<verbatim>
mvn clean package -Pdist,embedded-hbase-solr
</verbatim>
Using the embedded-hbase-solr profile configures Atlas so that an HBase instance and a Solr instance are started
and stopped along with the Atlas server by default.
Atlas also supports building a distribution that can use BerkeleyDB and Elasticsearch as the graph and index backends.
To build a distribution that is configured for these backends, build with the berkeley-elasticsearch profile.
<verbatim>
mvn clean package -Pdist,berkeley-elasticsearch
</verbatim>
An additional step is required before the binary built with this profile can be used with the Atlas distribution.
Due to licensing requirements, Atlas does not bundle BerkeleyDB Java Edition in the tarball.
You can download the BerkeleyDB jar file from the URL: <verbatim>http://download.oracle.com/otn/berkeley-db/je-5.0.73.zip</verbatim>
and copy je-5.0.73.jar to the ${atlas_home}/libext directory.
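A minimal sketch of this step, assuming the downloaded zip unpacks to a je-5.0.73 directory with the jar under lib (the layout inside the zip may differ):
<verbatim>
unzip je-5.0.73.zip
cp je-5.0.73/lib/je-5.0.73.jar ${atlas_home}/libext/
</verbatim>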
The tarball can be found in atlas/distro/target/apache-atlas-${project.version}-bin.tar.gz and is structured as follows:
<verbatim>
|- bin
   |- atlas_start.py
   |- atlas_stop.py
   |- atlas_config.py
   |- quick_start.py
   |- cputil.py
|- conf
   |- atlas-application.properties
   |- atlas-env.sh
   |- hbase
      |- hbase-site.xml.template
   |- log4j.xml
   |- solr
      |- currency.xml
      |- lang
         |- stopwords_en.txt
      |- protowords.txt
      |- schema.xml
      |- solrconfig.xml
      |- stopwords.txt
      |- synonyms.txt
|- docs
|- hbase
   |- bin
   |- conf
   ...
|- server
   |- webapp
      |- atlas.war
|- solr
   |- bin
   ...
|- README
|- NOTICE
|- LICENSE
|- DISCLAIMER.txt
|- CHANGES.txt
</verbatim>
Note that if the embedded-hbase-solr profile is specified for the build, then HBase and Solr are included in the
distribution.
In this case, a standalone instance of HBase can be started as the default storage backend for the graph repository.
During Atlas installation, the conf/hbase/hbase-site.xml.template gets expanded and moved to hbase/conf/hbase-site.xml
for the initial standalone HBase configuration. To configure Atlas
graph persistence for a different HBase instance, please see "Graph persistence engine - HBase" in the
[[Configuration][Configuration]] section.
Also, a standalone instance of Solr can be started as the default search indexing backend. To configure Atlas search
indexing for a different Solr instance, please see "Graph Search Index - Solr" in the
[[Configuration][Configuration]] section.
---+++ Installing & Running Atlas
---++++ Installing Atlas
<verbatim>
tar -xzvf apache-atlas-${project.version}-bin.tar.gz
cd atlas-${project.version}
</verbatim>
---++++ Configuring Atlas
By default, the config directory used by Atlas is {package dir}/conf. To override this, set the environment
variable ATLAS_CONF to the path of the conf dir.
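For example (the path shown is illustrative):
<verbatim>
export ATLAS_CONF=/etc/atlas/conf
</verbatim>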
atlas-env.sh is included in the Atlas conf directory. This file can be used to set various environment
variables that you need for your services. In addition, you can set any other environment
variables you might need. This file is sourced by the Atlas scripts before any commands are
executed. The following environment variables are available to set:
<verbatim>
# The java implementation to use. If JAVA_HOME is not found we expect java and jar to be in path
#export JAVA_HOME=
# any additional java opts you want to set. This will apply to both client and server operations
#export ATLAS_OPTS=
# any additional java opts that you want to set for client only
#export ATLAS_CLIENT_OPTS=
# java heap size we want to set for the client. Default is 1024MB
#export ATLAS_CLIENT_HEAP=
# any additional opts you want to set for atlas service.
#export ATLAS_SERVER_OPTS=
# java heap size we want to set for the atlas server. Default is 1024MB
#export ATLAS_SERVER_HEAP=
# What is considered as atlas home dir. Default is the base location of the installed software
#export ATLAS_HOME_DIR=
# Where log files are stored. Default is logs directory under the base install location
#export ATLAS_LOG_DIR=
# Where pid files are stored. Default is logs directory under the base install location
#export ATLAS_PID_DIR=
# where the atlas titan db data is stored. Default is logs/data directory under the base install location
#export ATLAS_DATA_DIR=
# Where do you want to expand the war file. By default it is in /server/webapp dir under the base install dir.
#export ATLAS_EXPANDED_WEBAPP_DIR=
</verbatim>
*Settings to support a large number of metadata objects*
If you plan to store several tens of thousands of metadata objects, it is recommended that you use values
tuned for better GC performance of the JVM.
The following values are common server side options:
<verbatim>
export ATLAS_SERVER_OPTS="-server -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+PrintTenuringDistribution -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dumps/atlas_server.hprof -Xloggc:logs/gc-worker.log -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1m -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps"
</verbatim>
The =-XX:SoftRefLRUPolicyMSPerMB= option was found to be particularly helpful to regulate GC performance for
query heavy workloads with many concurrent users.
The following values are recommended for JDK 7:
<verbatim>
export ATLAS_SERVER_HEAP="-Xms15360m -Xmx15360m -XX:MaxNewSize=3072m -XX:PermSize=100M -XX:MaxPermSize=512m"
</verbatim>
The following values are recommended for JDK 8:
<verbatim>
export ATLAS_SERVER_HEAP="-Xms15360m -Xmx15360m -XX:MaxNewSize=5120m -XX:MetaspaceSize=100M -XX:MaxMetaspaceSize=512m"
</verbatim>
*NOTE for Mac OS users*
If you are using Mac OS, you will need to configure ATLAS_SERVER_OPTS (explained above).
In {package dir}/conf/atlas-env.sh, uncomment the following line
<verbatim>
#export ATLAS_SERVER_OPTS=
</verbatim>
and change it to the following
<verbatim>
export ATLAS_SERVER_OPTS="-Djava.awt.headless=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
</verbatim>
*HBase as the Storage Backend for the Graph Repository*
By default, Atlas uses Titan as the graph repository; it is currently the only graph repository implementation available.
The HBase versions currently supported are 1.1.x. For configuring Atlas graph persistence on HBase, please see "Graph persistence engine - HBase" in the [[Configuration][Configuration]] section
for more details.
Pre-requisites for running HBase as a distributed cluster
* 3 or 5 !ZooKeeper nodes
* At least 3 !RegionServer nodes. It would be ideal to run the !DataNodes on the same hosts as the !RegionServers for data locality.
The HBase table names used by Atlas can be set using the following configurations in ATLAS_HOME/conf/atlas-application.properties:
<verbatim>
atlas.graph.storage.hbase.table=apache_atlas_titan
atlas.audit.hbase.tablename=apache_atlas_entity_audit
</verbatim>
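As a quick sanity check after the Atlas server has started against HBase, you can list the tables from the HBase shell (assuming the hbase client is on the path):
<verbatim>
echo "list" | hbase shell
# the output should include apache_atlas_titan and apache_atlas_entity_audit
</verbatim>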
*Configuring SOLR as the Indexing Backend for the Graph Repository*
By default, Atlas uses Titan as the graph repository. For configuring Titan to work with Solr, please follow the instructions below.
* Install Solr if not already running. The version of Solr supported is 5.2.1. It can be installed from http://archive.apache.org/dist/lucene/solr/5.2.1/solr-5.2.1.tgz
* Start Solr in cloud mode.
!SolrCloud mode uses a !ZooKeeper service as a highly available, central location for cluster management.
For a small cluster, running with an existing !ZooKeeper quorum should be fine. For larger clusters, you would want to run a separate !ZooKeeper quorum with at least 3 servers.
Note: Atlas currently supports Solr in "cloud" mode only. "http" mode is not supported. For more information, refer to the Solr documentation - https://cwiki.apache.org/confluence/display/solr/SolrCloud
* For example, to bring up a Solr node listening on port 8983 on a machine, you can use the command:
<verbatim>
$SOLR_HOME/bin/solr start -c -z <zookeeper_host:port> -p 8983
</verbatim>
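To verify that the node started, you can check its status (a quick sanity check; the exact output varies by Solr version):
<verbatim>
$SOLR_HOME/bin/solr status
</verbatim>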
* Run the following commands from the SOLR_BIN (e.g. $SOLR_HOME/bin) directory to create collections in Solr corresponding to the indexes that Atlas uses. If the Atlas and Solr instances are on two different hosts,
first copy the required configuration files from ATLAS_HOME/conf/solr on the Atlas host to the Solr host. SOLR_CONF in the commands below refers to the directory where the Solr configuration files
have been copied to on the Solr host:
<verbatim>
$SOLR_BIN/solr create -c vertex_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
$SOLR_BIN/solr create -c edge_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
$SOLR_BIN/solr create -c fulltext_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
</verbatim>
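For example, for a three-node !SolrCloud cluster with two-way replication, the copy and create steps might look like this (hosts, paths and numbers are illustrative):
<verbatim>
# copy the Atlas-supplied Solr configuration to the Solr host
scp -r ATLAS_HOME/conf/solr solr-host:/tmp/atlas-solr-conf

# on the Solr host, create the three collections Atlas needs
$SOLR_HOME/bin/solr create -c vertex_index -d /tmp/atlas-solr-conf -shards 3 -replicationFactor 2
$SOLR_HOME/bin/solr create -c edge_index -d /tmp/atlas-solr-conf -shards 3 -replicationFactor 2
$SOLR_HOME/bin/solr create -c fulltext_index -d /tmp/atlas-solr-conf -shards 3 -replicationFactor 2
</verbatim>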
Note: If numShards and replicationFactor are not specified, they default to 1, which suffices if you are trying out Solr with Atlas on a single node instance.
Otherwise, specify numShards according to the number of hosts that are in the Solr cluster and the maxShardsPerNode configuration.
The number of shards cannot exceed the total number of Solr nodes in your !SolrCloud cluster.
The number of replicas (replicationFactor) can be set according to the redundancy required.
Also note that Atlas will automatically invoke Solr to create the indexes when the Atlas server is started, if the
SOLR_BIN and SOLR_CONF environment variables are set and the search indexing backend is set to 'solr5'.
* Change the Atlas configuration to point to the Solr instance. Please make sure the following configurations are set to the values below in ATLAS_HOME/conf/atlas-application.properties:
<verbatim>
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=<the ZK quorum setup for solr as comma separated value> eg: 10.1.6.4:2181,10.1.6.5:2181
atlas.graph.index.search.solr.zookeeper-connect-timeout=<SolrCloud Zookeeper Connection Timeout>. Default value is 60000 ms
atlas.graph.index.search.solr.zookeeper-session-timeout=<SolrCloud Zookeeper Session Timeout>. Default value is 60000 ms
</verbatim>
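A filled-in example of these properties (the !ZooKeeper hosts are illustrative):
<verbatim>
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=10.1.6.4:2181,10.1.6.5:2181
atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
atlas.graph.index.search.solr.zookeeper-session-timeout=60000
</verbatim>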
* Restart Atlas
For more information on Titan Solr configuration, please refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/solr.htm
Pre-requisites for running Solr in cloud mode
* Memory - Solr is both memory and CPU intensive. Make sure the server running Solr has adequate memory, CPU and disk.
Solr works well with 32GB RAM. Plan to provide as much memory as possible to the Solr process.
* Disk - If the number of entities that need to be stored is large, plan to have at least 500 GB free space in the volume where Solr is going to store the index data.
* !SolrCloud has support for replication and sharding. It is highly recommended to use !SolrCloud with at least two Solr nodes running on different servers with replication enabled.
If using !SolrCloud, then you also need !ZooKeeper installed and configured with 3 or 5 !ZooKeeper nodes.
*Configuring Kafka Topics*
Atlas uses Kafka to ingest metadata from other components at runtime. This is described in the [[Architecture][Architecture page]]
in more detail. Depending on the configuration of Kafka, sometimes you might need to set up the topics explicitly before
using Atlas. To do so, Atlas provides a script =bin/atlas_kafka_setup.py= which can be run from the Atlas server. In some
environments, the hooks might start getting used before the Atlas server itself is set up. In such cases, the topics
can be created on the hosts where the hooks are installed using a similar script =hook-bin/atlas_kafka_setup_hook.py=. Both scripts
use the configuration in =atlas-application.properties= for setting up the topics. Please refer to the [[Configuration][Configuration page]]
for these details.
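For example, to create the topics ahead of time (both scripts read =atlas-application.properties= for the Kafka settings):
<verbatim>
# on the Atlas server host
bin/atlas_kafka_setup.py

# or, on a host where only the hooks are installed
hook-bin/atlas_kafka_setup_hook.py
</verbatim>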
---++++ Setting up Atlas
There are a few steps that set up dependencies of Atlas. One such example is setting up the Titan schema
in the storage backend of choice. In a simple single server setup, these are automatically setup with default
configuration when the server first accesses these dependencies.
However, there are scenarios when we may want to run setup steps explicitly as one time operations. For example, in a
multiple server scenario using [[HighAvailability][High Availability]], it is preferable to run setup steps from one
of the server instances the first time, and then start the services.
To run these steps one time, execute the command =bin/atlas_start.py -setup= from a single Atlas server instance.
However, the Atlas server does take care of parallel executions of the setup steps. Also, running the setup steps multiple
times is idempotent. Therefore, if one chooses to run the setup steps as part of server startup, for convenience,
they can enable the configuration option =atlas.server.run.setup.on.start= by defining it with the value =true=
in the =atlas-application.properties= file.
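For example, to opt into running setup at every server start, add the following to =atlas-application.properties=:
<verbatim>
atlas.server.run.setup.on.start=true
</verbatim>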
---++++ Starting Atlas Server
<verbatim>
bin/atlas_start.py [-port <port>]
</verbatim>
By default, the Atlas server starts on port 21000 and uses the conf from {package dir}/conf.
* To change the port, use the -port option.
* To override the conf location (e.g., to use the same conf across multiple Atlas upgrades), set the environment variable ATLAS_CONF to the path of the conf dir.
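For example, to start the server on a non-default port with an external conf directory (the port and path are illustrative):
<verbatim>
export ATLAS_CONF=/etc/atlas/conf
bin/atlas_start.py -port 21001
</verbatim>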
---+++ Using Atlas
* Quick start model - sample model and data
<verbatim>
bin/quick_start.py [<atlas endpoint>]
</verbatim>
* Verify if the server is up and running
<verbatim>
curl -v http://localhost:21000/api/atlas/admin/version
{"Version":"v0.1"}
</verbatim>
* List the types in the repository
<verbatim>
curl -v http://localhost:21000/api/atlas/types
{"results":["Process","Infrastructure","DataSet"],"count":3,"requestId":"1867493731@qtp-262860041-0 - 82d43a27-7c34-4573-85d1-a01525705091"}
</verbatim>
* List the instances for a given type
<verbatim>
curl -v http://localhost:21000/api/atlas/entities?type=hive_table
{"requestId":"788558007@qtp-44808654-5","list":["cb9b5513-c672-42cb-8477-b8f3e537a162","ec985719-a794-4c98-b98f-0509bd23aac0","48998f81-f1d3-45a2-989a-223af5c1ed6e","a54b386e-c759-4651-8779-a099294244c4"]}
curl -v http://localhost:21000/api/atlas/entities/list/hive_db
</verbatim>
* Search for entities (instances) in the repository
<verbatim>
curl -v http://localhost:21000/api/atlas/discovery/search/dsl?query="from hive_table"
</verbatim>
*Dashboard*
Once Atlas is started, you can view the status of Atlas entities using the web-based dashboard. Open your browser at the port the server is running on (http://localhost:21000 by default) to use the web UI.
---+++ Stopping Atlas Server
<verbatim>
bin/atlas_stop.py
</verbatim>
---+++ Known Issues
---++++ Setup
If the setup of the Atlas service fails for any reason, the next run of setup (either by an explicit invocation of
=atlas_start.py -setup= or by enabling the configuration option =atlas.server.run.setup.on.start=) will fail with
a message such as =A previous setup run may not have completed cleanly.=. In such cases, you would need to manually
ensure that setup can run, and delete the !ZooKeeper node at =/apache_atlas/setup_in_progress= before attempting to
run setup again; one way to do this is sketched below.
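One way to delete that node, assuming the standard !ZooKeeper CLI is available (the !ZooKeeper address is a placeholder):
<verbatim>
zkCli.sh -server <zookeeper_host:port> delete /apache_atlas/setup_in_progress
</verbatim>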
If the setup failed due to HBase Titan schema setup errors, it may be necessary to repair the HBase schema. If no
data has been stored, one can also disable and drop the 'titan' table in HBase to let setup run again.
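A sketch of that cleanup from the HBase shell (destructive; only do this if no data has been stored, and adjust the table name if it was overridden):
<verbatim>
echo -e "disable 'titan'\ndrop 'titan'" | hbase shell
</verbatim>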