AMBARI-25710. Update Ambari website (#1)
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..9fdd47b
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,28 @@
+# Dependencies
+/node_modules
+
+# Production
+/build
+
+# Generated files
+.docusaurus
+.cache-loader
+
+# Misc
+.DS_Store
+.env.local
+.env.development.local
+.env.test.local
+.env.production.local
+
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+
+.vscode
+package-lock.json
+
+static/javadoc
+static/old
+static/swagger
+target
\ No newline at end of file
diff --git a/README.md b/README.md
index 2cbad70..f063204 100644
--- a/README.md
+++ b/README.md
Binary files differ
diff --git a/babel.config.js b/babel.config.js
new file mode 100644
index 0000000..e00595d
--- /dev/null
+++ b/babel.config.js
@@ -0,0 +1,3 @@
+module.exports = {
+ presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
+};
diff --git a/docs/ambari-alerts.md b/docs/ambari-alerts.md
new file mode 100644
index 0000000..04288af
--- /dev/null
+++ b/docs/ambari-alerts.md
@@ -0,0 +1,13 @@
+# Ambari Alerts
+
+Help page for Alerts defined in Ambari.
+
+## Ambari Agent Heartbeat
+
+**Service**: Ambari
+**Component**: Ambari Server
+**Type**: SERVER
+**Groups**: AMBARI Default
+**Description**: This alert is triggered if the server has lost contact with an agent.
+
+If this alert is generated then the alert text will contain the host name (e.g. c6401.ambari.apache.org is not sending heartbeats.). Check that the agent is running and if its running tail the log to see if its receiving and heartbeat response from the server. Check if the server host name is correct in /etc/ambari-agent/conf/ambari-agent.ini file and is reachable.
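+
+A hedged sketch of the checks described above, using the standard agent commands and log locations (adjust paths and the server hostname to your installation):
+
+```bash
+# Check that the agent process is running
+ambari-agent status
+
+# Tail the agent log to see whether heartbeat responses are being received
+tail -n 100 /var/log/ambari-agent/ambari-agent.log
+
+# Verify the server hostname configured for the agent and that it is reachable
+grep hostname /etc/ambari-agent/conf/ambari-agent.ini
+ping -c 3 your.ambari.server
+```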
\ No newline at end of file
diff --git a/docs/ambari-design/alerts.md b/docs/ambari-design/alerts.md
new file mode 100644
index 0000000..95655a2
--- /dev/null
+++ b/docs/ambari-design/alerts.md
@@ -0,0 +1,208 @@
+---
+title: Alerts
+---
+WEB and METRIC alert types include a `connection_timeout` property on the alert definition (see `AlertDefinition : source : uri : connection_timeout` below). The value is in seconds and defaults to 5.0. If you need to modify the connection timeout, update the `source` block via the Ambari REST API.
+
+```json
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/alert_definitions/42",
+ "AlertDefinition" : {
+ "cluster_name" : "MyCluster",
+ "component_name" : "APP_TIMELINE_SERVER",
+ "description" : "This host-level alert is triggered if the App Timeline Server Web UI is unreachable.",
+ "enabled" : true,
+ "id" : 42,
+ "ignore_host" : false,
+ "interval" : 1,
+ "label" : "App Timeline Web UI",
+ "name" : "yarn_app_timeline_server_webui",
+ "scope" : "ANY",
+ "service_name" : "YARN",
+ "source" : {
+ "reporting" : {
+ "ok" : {
+ "text" : "HTTP {0} response in {2:.3f}s"
+ },
+ "warning" : {
+ "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
+ },
+ "critical" : {
+ "text" : "Connection failed to {1} ({3})"
+ }
+ },
+ "type" : "WEB",
+ "uri" : {
+ "http" : "{{yarn-site/yarn.timeline-service.webapp.address}}",
+ "https" : "{{yarn-site/yarn.timeline-service.webapp.https.address}}",
+ "https_property" : "{{yarn-site/yarn.http.policy}}",
+ "https_property_value" : "HTTPS_ONLY",
+ "kerberos_keytab" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.keytab}}",
+ "kerberos_principal" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.principal}}",
+ "default_port" : 0.0,
+ "connection_timeout" : 5.0
+ }
+ }
+ }
+}
+```
+
+To update the `connection_timeout` with the API, you must PUT the entire contents of the `source` block in your API call. For example, PUT the following to update the `connection_timeout` to **6.5** seconds.
+
+```
+PUT /api/v1/clusters/MyCluster/alert_definitions/42
+
+{
+"AlertDefinition" : {
+ "source" : {
+ "reporting" : {
+ "ok" : {
+ "text" : "HTTP {0} response in {2:.3f}s"
+ },
+ "warning" : {
+ "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
+ },
+ "critical" : {
+ "text" : "Connection failed to {1} ({3})"
+ }
+ },
+ "type" : "WEB",
+ "uri" : {
+ "http" : "{{yarn-site/yarn.timeline-service.webapp.address}}",
+ "https" : "{{yarn-site/yarn.timeline-service.webapp.https.address}}",
+ "https_property" : "{{yarn-site/yarn.http.policy}}",
+ "https_property_value" : "HTTPS_ONLY",
+ "kerberos_keytab" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.keytab}}",
+ "kerberos_principal" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.principal}}",
+ "default_port" : 0.0,
+ "connection_timeout" : 6.5
+ }
+ }
+ }
+}
+```
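+
+As a concrete sketch, the PUT above could be issued with `curl` (the admin credentials, hostname, and the saved request body file are placeholders):
+
+```bash
+curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
+  -d @alert_definition_42_source.json \
+  http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/alert_definitions/42
+```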
+
+## Creating a Script-based Alert Dispatcher
+
+This document describes how to enable a custom script-based alert dispatcher that is capable of responding to Ambari alerts. The dispatcher invokes a script with the parameters of the alert as command line arguments.
+
+The dispatcher must know the location of the script that is being executed. This is configured through `ambari.properties` by setting either:
+
+* `notification.dispatch.alert.script`
+* a custom property key that points to the script, such as `foo.bar.alert.dispatch.script`
+
+Some examples of this are:
+
+```
+notification.dispatch.alert.script=/contrib/ambari-alerts/scripts/default_logger.py
+com.mycompany.dispatch.syslog.script=/contrib/ambari-alerts/scripts/legacy_sys_logger.py
+com.mycompany.dispatch.shell.script=/contrib/ambari-alerts/scripts/shell_logger.sh
+```
+
+When an alert instance changes state and Ambari needs to dispatch that alert state change, the custom script will be invoked:
+
+```python
+import sys
+
+def main():
+    """
+    Main method which is called when invoked on the command line.
+
+    Positional arguments:
+      definitionName  - the alert definition unique ID
+      definitionLabel - the human-readable alert definition label
+      serviceName     - the service that the alert definition belongs to
+      alertState      - the state of the alert (OK, WARNING, etc.)
+      alertText       - the text of the alert
+    """
+    definitionName = sys.argv[1]
+    definitionLabel = sys.argv[2]
+    serviceName = sys.argv[3]
+    alertState = sys.argv[4]
+    alertText = sys.argv[5]
+
+if __name__ == "__main__":
+    main()
+```
+
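+To register a script dispatcher with Ambari, create an alert target with the ALERT_SCRIPT notification type:
+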
+```
+POST api/v1/alert_targets
+
+ {
+ "AlertTarget": {
+ "name": "syslogger",
+ "description": "Syslog Target",
+ "notification_type": "ALERT_SCRIPT",
+ "global": true
+ }
+ }
+```
+
+The above call will create a global alert target that will dispatch all alerts across all alert groups. Without specifying `ambari.dispatch-property.script` as a property of the alert target, Ambari will look for the default configuration key of `notification.dispatch.alert.script` in `ambari.properties`.
+
+```
+POST api/v1/alert_targets
+
+ {
+ "AlertTarget": {
+ "name": "syslogger",
+ "description": "Syslog Target",
+ "notification_type": "ALERT_SCRIPT",
+ "global": true,
+ "properties": {
+ "ambari.dispatch-property.script": "com.mycompany.dispatch.syslog.script"
+ }
+ }
+ }
+```
+
+The above call also creates a global alert target. However, a specific script key is being specified. The result is that `ambari.properties` should contain something similar to:
+
+```
+com.mycompany.dispatch.syslog.script=/contrib/ambari-alerts/scripts/legacy_sys_logger.py
+```
+
+## Customizing the Alert Template
+Ambari 2.0 leverages its own alerting system to convey the state of various aspects of managed clusters. The notification template content produced by Ambari is tightly coupled to a notification type. Email and SNMP both have customizable templates that can be used to generate content. This document describes the steps necessary to change the template used by Ambari 2.0 when creating alert notifications.
+
+This procedure is targeted at Ambari Administrators who have access to the Ambari Server file system and the `ambari.properties` file. Those Administrators can create new templates or change the existing templates that are used when generating alert notification content. At this time, there is no mechanism to expose this flexibility via the APIs or the web client to end users.
+
+By default, an [alert-templates.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/alert-templates.xml) ships with Ambari 2.0, bundled inside the Ambari Server JAR. This file contains the templates for every known type of notification (for example, EMAIL and SNMP). Because the file is bundled in the Ambari Server JAR, the template is not exposed on disk, but you can use that file as a reference example.
+
+When you customize the alert template, you are effectively overriding the template bundled by default: save your custom template file on the Ambari Server host, point the `alerts.template.file` property in `ambari.properties` at its location, and restart Ambari Server.
+
+Some alert notification types, such as EMAIL, automatically combine all pending alerts into a single outbound notification ("**digest**"). Others, like SNMP, never combine pending alerts and will always create a 1:1 notification for every alert in the system ("**individual**"). All alert notification types are specified in the same alert templates file, but the specific alert template for each notification type will most likely vary greatly.
+
+The structure of the template file is defined as follows. Each `<alert-template></alert-template>` element declares what type of alert notification it should be used for. The template uses Apache Velocity to render all tokenized content, and the following variables are available for use in your template:
+
+Variable |Description
+---------------------------------------------|-------------------------------------------------
+$alert.getAlertDefinition() |The definition that the alert is an instance of.
+$alert.getAlertName() |The name of the alert.
+$alert.getAlertState() |The alert state (OK\|WARNING\|CRITICAL\|UNKNOWN)
+$alert.getAlertText() |The specific alert text.
+$alert.getComponentName() |The component, if any, that the alert is defined for.
+$alert.getHostName() |The hostname, if any, that the alert was triggered for.
+$alert.getServiceName() |The name of the service that the alert is defined for.
+$alert.hasComponentName() |True if the alert is for a specific service component.
+$alert.hasHostName() |True if the alert was triggered for a specific host.
+$ambari.getServerHostName() |The Ambari Server hostname.
+$ambari.getServerUrl() |The Ambari Server URL.
+$ambari.getServerVersion() |The Ambari Server version.
+$dispatch.getTargetDescription() |The notification target description.
+$dispatch.getTargetName() |The notification target name.
+$summary.getAlerts() |A list of all of the alerts in the notification.
+$summary.getAlerts(service,alertState) |A list of all alerts for a given service or alert state (OK\|WARNING\|CRITICAL\|UNKNOWN).
+$summary.getCriticalCount() |The CRITICAL alert count.
+$summary.getOkCount() |The OK alert count.
+$summary.getServices() |A list of all services that are reporting an alert in the notification.
+$summary.getServicesByAlertState(alertState) |A list of all services for a given alert state (OK\|WARNING\|CRITICAL\|UNKNOWN).
+$summary.getTotalCount() |The total alert count.
+$summary.getUnknownCount() |The UNKNOWN alert count.
+$summary.getWarningCount() |The WARNING alert count.
+
+For example, `$summary.getTotalCount()` renders the total number of alerts in the notification, while `$summary.getAlerts()` returns a list that can be iterated over to render each alert individually.
+
+The following example illustrates how to change the subject line of all outbound email notifications to include a hard coded identifier:
+
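+A hedged sketch of what that could look like, assuming the `<alert-templates>` structure of the bundled alert-templates.xml (the "My Company" prefix is a hypothetical identifier):
+
+```xml
+<alert-templates>
+  <alert-template type="EMAIL">
+    <subject>
+      <![CDATA[My Company Alerts: $summary.getTotalCount() alert(s)]]>
+    </subject>
+    <body>
+      <![CDATA[#foreach($alert in $summary.getAlerts()) $alert.getAlertText() #end]]>
+    </body>
+  </alert-template>
+</alert-templates>
+```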
diff --git a/docs/ambari-design/ambari-architecture.md b/docs/ambari-design/ambari-architecture.md
new file mode 100644
index 0000000..042fb32
--- /dev/null
+++ b/docs/ambari-design/ambari-architecture.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 1
+---
+
+# Ambari Architecture
+
+ <embed src="/pdf/Ambari_Architecture.pdf" type="application/pdf" width="960px" height="700px"></embed>
+
+
+
diff --git a/docs/ambari-design/blueprints/blueprint-ha.md b/docs/ambari-design/blueprints/blueprint-ha.md
new file mode 100644
index 0000000..7f6d869
--- /dev/null
+++ b/docs/ambari-design/blueprints/blueprint-ha.md
@@ -0,0 +1,550 @@
+
+# Blueprint Support for HA Clusters
+
+## Summary
+
+As of Ambari 2.0, Blueprints are able to deploy the following components with HA:
+
++ HDFS NameNode HA
++ YARN ResourceManager HA
++ HBase RegionServers HA
+
+As of Ambari 2.1, Blueprints are able to deploy the following components with HA:
+
++ Hive Components ([AMBARI-10489](https://issues.apache.org/jira/browse/AMBARI-10489))
++ Storm Nimbus ([AMBARI-11087](https://issues.apache.org/jira/browse/AMBARI-11087))
++ Oozie Server ([AMBARI-6683](https://issues.apache.org/jira/browse/AMBARI-6683))
+
+This functionality currently requires providing fine-grained configurations. This document provides examples.
+
+### FAQ
+
+#### Compatibility with Ambari UI
+
+While this feature does not require the Ambari UI to function, the Blueprints HA feature is completely compatible with the Ambari UI. An HA cluster created via Blueprints can be monitored and configured via the Ambari UI, just as any other Blueprints cluster would function.
+
+#### Supported Stack Versions
+
+This feature is supported as of HDP 2.1 and newer releases. Previous versions of HDP have not been tested with this feature.
+
+### Examples
+
+#### Blueprint Example: HDFS NameNode HA Cluster
+
+HDFS NameNode HA allows a cluster to be configured such that a NameNode is not a single point of failure.
+
+For more details on [HDFS NameNode HA see the Apache Hadoop documentation](http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).
+
+In an Ambari-deployed HDFS NameNode HA cluster:
+
++ 2 NameNodes are deployed: an “active” and a “passive” namenode.
++ If the active NameNode should stop functioning properly, the passive node’s Zookeeper client will detect this case, and the passive node will become the new active node.
++ HDFS relies on Zookeeper to manage the details of failover in these cases.
+
+#### How it Works
+
+The Blueprints HA feature will automatically invoke all required commands and setup steps for an HDFS NameNode HA cluster, provided that the correct configuration is provided in the Blueprint. The shared edit logs of each NameNode are managed by the Quorum Journal Manager, rather than NFS shared storage. The use of NFS shared storage in an HDFS HA setup is not supported by Ambari Blueprints, and is generally not encouraged.
+
+The following HDFS stack components should be included in any host group in a Blueprint that supports an HA HDFS NameNode:
+
+1. NAMENODE
+
+2. ZKFC
+
+3. ZOOKEEPER_SERVER
+
+4. JOURNALNODE
+
+
+#### Configuring Active and Standby NameNodes
+
+The HDFS “NAMENODE” component must be assigned to two servers, either via two separate host groups, or to a host group that maps to two physical servers in the Cluster Creation Template for this cluster.
+
+By default, the Blueprint processor will assign the “active” NameNode to one host, and the “standby” NameNode to another. The user of an HA Blueprint does not need to configure the initial status of each NameNode, since this can be assigned automatically.
+
+If desired, the user can configure the initial state of each NameNode by adding the following configuration properties in the “hadoop-env” namespace:
+
+1. `dfs_ha_initial_namenode_active` - This property should contain the hostname of the "active" NameNode in this cluster.
+
+2. `dfs_ha_initial_namenode_standby` - This property should contain the hostname of the "standby" NameNode in this cluster.
+
+:::caution
+These properties should only be used when the initial state of the active or standby NameNodes needs to be configured to a specific node. This setting is only guaranteed to be accurate in the initial state of the cluster. Over time, the active/standby state of each NameNode may change as failover events occur in the cluster.
+
+The active or standby status of a NameNode is not recorded or expressed when an HDFS HA Cluster is being exported to a Blueprint, using the Blueprint REST API endpoint. Since clusters change over time, this state is only accurate in the initial startup of the cluster.
+
+Generally, it is assumed that most users will not need to choose the active or standby status of each NameNode, so the default behavior in Blueprints HA is to assign the status of each node automatically.
+:::
+
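+For example, a minimal sketch of pinning the initial NameNode states in a Blueprint's `configurations` block (the hostnames are placeholders):
+
+```json
+{
+  "hadoop-env" : {
+    "properties" : {
+      "dfs_ha_initial_namenode_active" : "c6402.ambari.apache.org",
+      "dfs_ha_initial_namenode_standby" : "c6403.ambari.apache.org"
+    }
+  }
+}
+```
+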
+#### Example Blueprint
+
+This is a minimal blueprint with HDFS HA: [hdfs_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/hdfs_ha_blueprint.json?version=4&modificationDate=1434548806000&api=v2)
+
+These are the base configurations required. See the blueprint above for more details:
+```json
+ "configurations":
+ { "core-site": {
+ "properties" : {
+ "fs.defaultFS" : "hdfs://mycluster",
+ "ha.zookeeper.quorum" : "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_3%:2181"
+ }}
+ },
+ { "hdfs-site": {
+ "properties" : {
+ "dfs.client.failover.proxy.provider.mycluster" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+ "dfs.ha.automatic-failover.enabled" : "true",
+ "dfs.ha.fencing.methods" : "shell(/bin/true)",
+ "dfs.ha.namenodes.mycluster" : "nn1,nn2",
+ "dfs.namenode.http-address" : "%HOSTGROUP::master_1%:50070",
+ "dfs.namenode.http-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50070",
+ "dfs.namenode.http-address.mycluster.nn2" : "%HOSTGROUP::master_3%:50070",
+ "dfs.namenode.https-address" : "%HOSTGROUP::master_1%:50470",
+ "dfs.namenode.https-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50470",
+ "dfs.namenode.https-address.mycluster.nn2" : "%HOSTGROUP::master_3%:50470",
+ "dfs.namenode.rpc-address.mycluster.nn1" : "%HOSTGROUP::master_1%:8020",
+ "dfs.namenode.rpc-address.mycluster.nn2" : "%HOSTGROUP::master_3%:8020",
+ "dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::master_2%:8485;%HOSTGROUP::master_3%:8485/mycluster",
+ "dfs.nameservices" : "mycluster"
+ }}
+ }
+ ]
+ ```
+
+#### HostName Topology Substitution in Configuration Property Values
+
+The host-related properties should be set using the “HOSTGROUP” syntax to refer to a given Blueprint’s host group, in order to map each NameNode’s actual host (defined in the Cluster Creation Template) to the properties in hdfs-site that require these host mappings.
+
+The syntax for these properties should be:
+
+```
+%HOSTGROUP::HOST_GROUP_NAME%:PORT
+```
+
+For example, consider the following property from the snippet above:
+
+```
+"dfs.namenode.http-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50070"
+```
+
+This property value is interpreted by the Blueprint processor to refer to the host that maps to the “master_1” host group, which should include a “NAMENODE” among its list of components. The address property listed above should map to the host for “master_1”, at the port “50070”.
+
+Using this syntax is the most portable way to define host-specific properties within a Blueprint, since no direct host name references are used. This will allow a Blueprint to be applied in a wider variety of cluster deployments, and not be tied to a specific set of hostnames.
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint as the "blueprint-hdfs-ha" resource to the Ambari Server.
+
+```
+POST /api/v1/blueprints/blueprint-hdfs-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
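+
+As a sketch, the same registration can be performed with `curl` (the credentials, hostname, and blueprint file name are placeholders); the other registration and cluster creation calls in this document follow the same pattern:
+
+```bash
+curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
+  -d @hdfs_ha_blueprint.json \
+  http://c6401.ambari.apache.org:8080/api/v1/blueprints/blueprint-hdfs-ha
+```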
+
+#### Example Cluster Creation Template
+
+```json
+{
+ "blueprint": "blueprint-hdfs-ha",
+ "default_password": "changethis",
+ "host_groups": [
+ { "hosts": [
+ { "fqdn": "c6401.ambari.apache.org" }
+ ], "name": "gateway"
+ },
+ { "hosts": [
+ { "fqdn": "c6402.ambari.apache.org" }
+ ], "name": "master_1"
+ },
+ { "hosts": [
+ { "fqdn": "c6403.ambari.apache.org" }
+ ], "name": "master_2"
+ },
+ { "hosts": [
+ { "fqdn": "c6404.ambari.apache.org" }
+ ], "name": "master_3"
+ },
+ { "hosts": [
+ { "fqdn": "c6405.ambari.apache.org" }
+ ],
+ "name": "slave_1"
+ }
+ ]
+}
+```
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+```
+POST /api/v1/clusters/my-hdfs-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-hdfs-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
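+
+A sketch of polling the request resource returned above (credentials and hostname are placeholders):
+
+```bash
+curl -u admin:admin -H "X-Requested-By: ambari" \
+  http://c6401.ambari.apache.org:8080/api/v1/clusters/my-hdfs-ha-cluster/requests/1
+```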
+
+### Blueprint Example: Yarn ResourceManager HA Cluster
+
+#### Summary
+
+Yarn ResourceManager High Availability (HA) adds support for deploying two Yarn ResourceManagers in a given Yarn cluster. This support removes the single point of failure that occurs when a single ResourceManager is used.
+
+The Yarn ResourceManager support for HA is somewhat similar to HDFS NameNode HA in that an “active/standby” architecture is adopted, with Zookeeper used to handle the failover scenarios between the two ResourceManager instances.
+
+The following Apache Hadoop documentation describes the steps required in order to setup Yarn ResourceManager HA manually:
+
+[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html)
+
+:::caution
+Ambari Blueprints will handle much of the details of server setup listed in this documentation, but the user of Ambari will need to define the various configuration properties listed in this article (yarn.resourcemanager.ha.enabled, yarn.resourcemanager.hostname.$NAME_OF_RESOURCE_MANAGER, etc). The example Blueprints listed below will demonstrate the configuration properties that must be included in the Blueprint for this feature to startup the HA cluster properly.
+:::
+
+The following stack components should be included in any host group in a Blueprint that supports an HA Yarn ResourceManager:
+
+1. RESOURCEMANAGER
+
+2. ZOOKEEPER_SERVER
+
+
+#### Initial setup of Active and Standby ResourceManagers
+
+The Yarn ResourceManager HA feature depends upon Zookeeper in order to manage the details of the “active” or “standby” status of a given ResourceManager. When the two ResourceManagers are starting up initially, the first ResourceManager instance to acquire a Zookeeper lock, called a “znode”, will become the “active” ResourceManager for the cluster, with the other instance assuming the role of the “standby” ResourceManager.
+
+:::caution
+The Blueprints HA feature does not support configuring the initial “active” or “standby” status of the ResourceManagers deployed in a Yarn HA cluster. The first instance to obtain the Zookeeper lock will become the “active” node. This allows the user to specify the host groups that contain the 2 ResourceManager instances, but also shields the user from the need to select the first “active” node.
+
+After the cluster has started up initially, the state of the “active” and “standby” ResourceManagers may change over time. The initial “active” server is not guaranteed to remain the “active” node over the lifetime of the cluster. During a failover event, the “standby” node may be required to fulfill the role of the “active” server.
+
+The active or standby status of a ResourceManager is not recorded or expressed when a Yarn cluster is being exported to a Blueprint, using the Blueprint REST API endpoint. Since clusters change over time, this state is only accurate in the initial startup of the cluster.
+:::
+
+#### Example Blueprint
+
+The following link includes an example Blueprint for a 3-node Yarn ResourceManager HA Cluster:
+
+[yarn_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/yarn_ha_blueprint.json?version=2&modificationDate=1432208770000&api=v2)
+
+```json
+{
+ "Blueprints": {
+ "stack_name": "HDP",
+ "stack_version": "2.2"
+ },
+ "host_groups": [
+ {
+ "name": "gateway",
+ "cardinality" : "1",
+ "components": [
+ { "name": "HDFS_CLIENT" },
+ { "name": "MAPREDUCE2_CLIENT" },
+ { "name": "METRICS_COLLECTOR" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "TEZ_CLIENT" },
+ { "name": "YARN_CLIENT" },
+ { "name": "ZOOKEEPER_CLIENT" }
+ ]
+ },
+ {
+ "name": "master_1",
+ "cardinality" : "1",
+ "components": [
+ { "name": "HISTORYSERVER" },
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "NAMENODE" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "master_2",
+ "cardinality" : "1",
+ "components": [
+ { "name": "APP_TIMELINE_SERVER" },
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "RESOURCEMANAGER" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "master_3",
+ "cardinality" : "1",
+ "components": [
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "RESOURCEMANAGER" },
+ { "name": "SECONDARY_NAMENODE" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "slave_1",
+ "components": [
+ { "name": "DATANODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "NODEMANAGER" }
+ ]
+ }
+ ],
+ "configurations": [
+ {
+ "core-site": {
+ "properties" : {
+ "fs.defaultFS" : "hdfs://%HOSTGROUP::master_1%:8020"
+ }}
+ },{
+ "yarn-site" : {
+ "properties" : {
+ "hadoop.registry.rm.enabled" : "false",
+ "hadoop.registry.zk.quorum" : "%HOSTGROUP::master_3%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_1%:2181",
+ "yarn.log.server.url" : "http://%HOSTGROUP::master_2%:19888/jobhistory/logs",
+ "yarn.resourcemanager.address" : "%HOSTGROUP::master_2%:8050",
+ "yarn.resourcemanager.admin.address" : "%HOSTGROUP::master_2%:8141",
+ "yarn.resourcemanager.cluster-id" : "yarn-cluster",
+ "yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
+ "yarn.resourcemanager.ha.enabled" : "true",
+ "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
+ "yarn.resourcemanager.hostname" : "%HOSTGROUP::master_2%",
+ "yarn.resourcemanager.recovery.enabled" : "true",
+ "yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::master_2%:8025",
+ "yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::master_2%:8030",
+ "yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
+ "yarn.resourcemanager.webapp.address" : "%HOSTGROUP::master_2%:8088",
+ "yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::master_2%:8090",
+ "yarn.timeline-service.address" : "%HOSTGROUP::master_2%:10200",
+ "yarn.timeline-service.webapp.address" : "%HOSTGROUP::master_2%:8188",
+ "yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::master_2%:8190"
+ }
+ }
+ }
+ ]
+}
+```
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint as the "blueprint-yarn-ha" resource to the Ambari Server.
+
+```
+POST /api/v1/blueprints/blueprint-yarn-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+
+```
+
+#### Example Cluster Creation Template
+
+```json
+{
+ "blueprint": "blueprint-yarn-ha",
+ "default_password": "changethis",
+ "configurations": [
+ { "yarn-site" : {
+ "yarn.resourcemanager.zk-address" : "c6402.ambari.apache.org:2181,c6403.ambari.apache.org:2181,c6404.ambari.apache.org:2181”,
+ ”yarn.resourcemanager.hostname.rm1" : "c6403.ambari.apache.org",
+ "yarn.resourcemanager.hostname.rm2" : "c6404.ambari.apache.org"
+ }}
+ ],
+ "host_groups": [
+ { "hosts": [
+ { "fqdn": "c6401.ambari.apache.org" }
+ ], "name": "gateway"
+ },
+ { "hosts": [
+ { "fqdn": "c6402.ambari.apache.org" }
+ ], "name": "master_1"
+ },
+ { "hosts": [
+ { "fqdn": "c6403.ambari.apache.org" }
+ ], "name": "master_2"
+ },
+ { "hosts": [
+ { "fqdn": "c6404.ambari.apache.org" }
+ ], "name": "master_3"
+ },
+ { "hosts": [
+ { "fqdn": "c6405.ambari.apache.org" }
+ ],
+ "name": "slave_1"
+ }
+ ]
+}
+```
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/my-yarn-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-yarn-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
+
+
+### Blueprint Example: HBase RegionServer HA Cluster
+
+#### Summary
+
+HBase provides a High Availability feature for reads across HBase Region Servers.
+
+The following link to the Apache HBase documentation provides more information on HA support in HBase:
+
+[http://hbase.apache.org/book.html#arch.timelineconsistent.reads](http://hbase.apache.org/book.html#arch.timelineconsistent.reads)
+
+:::caution
+The documentation listed here explains how to deploy an HBase RegionServer HA cluster via a Blueprint, but there are separate application-specific steps that must be taken in order to enable this feature for a specific table in HBase. A table must be created with replication enabled, so that multiple Region Servers can handle the keys for this table.
+:::
+
+For more information on how to define an HBase table with replication enabled (after the cluster has been created), please refer to the following HBase documentation:
+
+[http://hbase.apache.org/book.html#_creating_a_table_with_region_replication](http://hbase.apache.org/book.html#_creating_a_table_with_region_replication)
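+
+As a sketch, such a table could be created from the HBase shell once the cluster is running ('t1' and 'cf1' are placeholder names):
+
+```
+create 't1', 'cf1', {REGION_REPLICATION => 2}
+```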
+
+The following stack components should be included in any host group in a Blueprint that supports an HA HBase RegionServer:
+
+1. HBASE_REGIONSERVER
+
+
+At least two “HBASE_REGIONSERVER” components must be deployed in order to enable this feature, so that table information can be replicated across more than one Region Server.
+
+#### Example Blueprint
+
+The following link includes an example Blueprint for a 2-node HBase RegionServer HA Cluster:
+
+[hbase_rs_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/hbase_rs_ha_blueprint.json?version=1&modificationDate=1427136904000&api=v2)
+
+The following JSON snippet includes the “hbase-site” configuration typically required for a cluster that utilizes the HBase RegionServer HA feature:
+
+```json
+{
+ "configurations" : [
+ {
+ "hbase-site" : {
+ ...
+ "hbase.regionserver.global.memstore.lowerLimit" : "0.38",
+ "hbase.regionserver.global.memstore.upperLimit" : "0.4",
+ "hbase.regionserver.handler.count" : "60",
+ "hbase.regionserver.info.port" : "60030",
+ "hbase.regionserver.storefile.refresh.period" : "20",
+ "hbase.rootdir" : "hdfs://%HOSTGROUP::host_group_1%:8020/apps/hbase/data",
+ "hbase.security.authentication" : "simple",
+ "hbase.security.authorization" : "false",
+ "hbase.superuser" : "hbase",
+ "hbase.tmp.dir" : "/hadoop/hbase",
+ "hbase.zookeeper.property.clientPort" : "2181",
+ "hbase.zookeeper.quorum" : "%HOSTGROUP::host_group_1%,%HOSTGROUP::host_group_2%",
+ "hbase.zookeeper.useMulti" : "true",
+ "hfile.block.cache.size" : "0.40",
+ "zookeeper.session.timeout" : "30000",
+ "zookeeper.znode.parent" : "/hbase-unsecure"
+ }
+
+ }
+ ]
+}
+```
+:::caution
+The JSON example above is not a complete set of “hbase-site” configurations, but rather shows the configuration settings that are relevant to HBase RegionServer HA. In particular, the “hbase.regionserver.storefile.refresh.period” setting is the most relevant to HBase RegionServer HA, since this property must be set to a value greater than zero in order for the HA feature to be enabled.
+:::
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint as the "blueprint-hbase-rs-ha" resource to the Ambari Server.
+
+```
+POST /api/v1/blueprints/blueprint-hbase-rs-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+#### Example Cluster Creation Template
+```json
+{
+ "blueprint" : "blueprint-hbase-rs-ha",
+ "default_password" : "default",
+ "host_groups" :[
+ {
+ "name" : "host_group_1",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ },
+ {
+ "name" : "host_group_2",
+ "hosts" : [
+ {
+ "fqdn" : "c6402.ambari.apache.org"
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/my-hbase-rs-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-hbase-rs-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
\ No newline at end of file
diff --git a/docs/ambari-design/blueprints/blueprint-ranger.md b/docs/ambari-design/blueprints/blueprint-ranger.md
new file mode 100644
index 0000000..e14f468
--- /dev/null
+++ b/docs/ambari-design/blueprints/blueprint-ranger.md
@@ -0,0 +1,423 @@
+---
+title: Blueprint support for Ranger
+---
+Starting with HDP 2.3, Ranger can be deployed using Blueprints in two ways: either using the stack advisor, or setting all the needed properties in the Blueprint.
+
+## Deploy Ranger with the use of stack advisor
+
+The stack advisor simplifies the deployment of Ranger by setting the needed properties automatically, so the user has to provide only a minimal set of configurations. The configuration properties that must be provided in either the Blueprint or the cluster creation template are:
+
+* admin-properties:
+ - DB_FLAVOR - the default is MYSQL. There is no need to provide this if MYSQL is to be used as the database server for the Ranger databases. Consult the Ranger documentation for supported database servers. Also ensure the ambari-server has the appropriate JDBC driver installed for the selected database server type (e.g.: ambari-server setup --jdbc-driver)
+ - db_host - set the host:port of the database server that Ranger Admin will use
+ - db_root_user - the db user with root access that will be used during deployment to create the databases used by Ranger. By default, root is used if this property is not specified.
+ - db_root_password - the password for the root user
+ - db_password - the password that will be used to access the Ranger database
+ - audit_db_password - the password that will be used to access the Ranger audit db
+* ranger-env
+ - ranger_admin_password - the Ambari user password created for creating repositories and policies in Ranger Admin for each plugin
+ - ranger-yarn-plugin-enabled - Enable/Disable the YARN Ranger plugin. The default is Disable.
+ - ranger-hdfs-plugin-enabled - Enable/Disable the HDFS Ranger plugin. The default is Disable.
+ - ranger-hbase-plugin-enabled - Enable/Disable the HBase Ranger plugin. The default is Disable.
+ - ... - check the Ranger documentation for the list of supported Ranger plugins
+* kms-properties
+ - DB_FLAVOR - the default is MYSQL. There is no need to provide this if MYSQL is to be used as the database server for the Ranger KMS databases. Consult the Ranger KMS documentation for supported database servers. Also ensure the ambari-server has the appropriate JDBC driver installed for the selected database server type (e.g.: ambari-server setup --jdbc-driver)
+ - SQL_CONNECTOR_JAR - the default is /usr/share/java/mysql-connector-java.jar
+ - KMS_MASTER_KEY_PASSWD - the Ranger KMS master key password
+ - db_host - the host:port of the database server that Ranger KMS will use
+ - db_root_user - the db user with root access that will be used during deployment to create the databases used by Ranger KMS. By default, root is used if this property is not specified.
+ - db_root_password - the database password for the root user
+ - db_password - the database password for the Ranger KMS schema
+
+* hadoop-env
+ - keyserver_port - the port on which the Key Management Server is available
+ - keyserver_host - the host on which the Key Management Server is installed
+
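+As a hedged sketch, the minimal stack advisor-driven configuration could look like this in a Blueprint or cluster creation template (the hosts and passwords are placeholders, and only a subset of the properties above is shown):
+
+```json
+{
+  "configurations" : [
+    {
+      "admin-properties" : {
+        "properties" : {
+          "db_host" : "db.example.com:3306",
+          "db_root_user" : "root",
+          "db_root_password" : "rootpassword",
+          "db_password" : "rangerpassword",
+          "audit_db_password" : "auditpassword"
+        }
+      }
+    },
+    {
+      "ranger-env" : {
+        "properties" : {
+          "ranger_admin_password" : "adminpassword",
+          "ranger-hdfs-plugin-enabled" : "Yes"
+        }
+      }
+    }
+  ]
+}
+```
+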
+## Deploy Ranger without the use of stack advisor
+
+Without the stack advisor, all the configs related to Ranger, Ranger KMS, and the Ranger plugins that don't have default values must be set in the Blueprint or cluster creation template. Consult the Ranger and Ranger plugin documentation for all properties.
+
+An example of such a Blueprint where everything is set manually (note that this covers just a subset of the currently supported configuration properties and Ranger plugins):
+
+```json
+{
+ "configurations" : [
+ {
+ "admin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "DB_FLAVOR" : "MYSQL",
+ "audit_db_name" : "ranger_audit",
+ "db_name" : "ranger",
+ "audit_db_user" : "rangerlogger",
+ "SQL_CONNECTOR_JAR" : "/usr/share/java/mysql-connector-java.jar",
+ "db_user" : "rangeradmin",
+ "policymgr_external_url" : "http://%HOSTGROUP::host_group_1%:6080",
+ "db_host" : "172.17.0.9:3306",
+ "db_root_user" : "root"
+ }
+ }
+ },
+ {
+ "ranger-kms-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.kms.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient",
+ "ranger.plugin.kms.service.name" : "{{repo_name}}",
+ "ranger.plugin.kms.policy.rest.url" : "{{policymgr_mgr_url}}"
+ }
+ }
+ },
+ {
+ "kms-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "hadoop.kms.security.authorization.manager" : "org.apache.ranger.authorization.kms.authorizer.RangerKmsAuthorizer",
+ "hadoop.kms.key.provider.uri" : "dbks://http@localhost:9292/kms"
+ }
+ }
+ },
+ {
+ "ranger-hdfs-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "hadoop",
+ "ranger-hdfs-plugin-enabled" : "Yes",
+ "common.name.for.certificate" : "",
+ "policy_user" : "ambari-qa",
+ "hadoop.rpc.protection" : ""
+ }
+ }
+ },
+ {
+ "ranger-admin-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.ldap.group.searchfilter" : "{{ranger_ug_ldap_group_searchfilter}}",
+ "ranger.ldap.group.searchbase" : "{{ranger_ug_ldap_group_searchbase}}",
+ "ranger.sso.enabled" : "false",
+ "ranger.externalurl" : "{{ranger_external_url}}",
+ "ranger.sso.browser.useragent" : "Mozilla,chrome",
+ "ranger.service.https.attrib.ssl.enabled" : "false",
+ "ranger.ldap.ad.referral" : "ignore",
+ "ranger.jpa.jdbc.url" : "jdbc:mysql://172.17.0.9:3306/ranger",
+ "ranger.https.attrib.keystore.file" : "/etc/ranger/admin/conf/ranger-admin-keystore.jks",
+ "ranger.ldap.user.searchfilter" : "{{ranger_ug_ldap_user_searchfilter}}",
+ "ranger.jpa.jdbc.driver" : "com.mysql.jdbc.Driver",
+ "ranger.authentication.method" : "UNIX",
+ "ranger.service.host" : "{{ranger_host}}",
+ "ranger.jpa.audit.jdbc.user" : "{{ranger_audit_db_user}}",
+ "ranger.ldap.referral" : "ignore",
+ "ranger.jpa.audit.jdbc.credential.alias" : "rangeraudit",
+ "ranger.service.https.attrib.keystore.pass" : "SECRET:ranger-admin-site:2:ranger.service.https.attrib.keystore.pass",
+ "ranger.audit.solr.username" : "ranger_solr",
+ "ranger.sso.query.param.originalurl" : "originalUrl",
+ "ranger.service.http.enabled" : "true",
+ "ranger.audit.source.type" : "solr",
+ "ranger.ldap.url" : "{{ranger_ug_ldap_url}}",
+ "ranger.service.https.attrib.clientAuth" : "want",
+ "ranger.ldap.ad.domain" : "",
+ "ranger.ldap.ad.bind.dn" : "{{ranger_ug_ldap_bind_dn}}",
+ "ranger.credential.provider.path" : "/etc/ranger/admin/rangeradmin.jceks",
+ "ranger.jpa.audit.jdbc.driver" : "{{ranger_jdbc_driver}}",
+ "ranger.audit.solr.urls" : "",
+ "ranger.sso.publicKey" : "",
+ "ranger.ldap.bind.dn" : "{{ranger_ug_ldap_bind_dn}}",
+ "ranger.unixauth.service.port" : "5151",
+ "ranger.ldap.group.roleattribute" : "cn",
+ "ranger.jpa.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.sso.cookiename" : "hadoop-jwt",
+ "ranger.service.https.attrib.keystore.keyalias" : "rangeradmin",
+ "ranger.audit.solr.zookeepers" : "NONE",
+ "ranger.jpa.jdbc.user" : "{{ranger_db_user}}",
+ "ranger.jpa.jdbc.credential.alias" : "rangeradmin",
+ "ranger.ldap.ad.user.searchfilter" : "{{ranger_ug_ldap_user_searchfilter}}",
+ "ranger.ldap.user.dnpattern" : "uid={0},ou=users,dc=xasecure,dc=net",
+ "ranger.ldap.base.dn" : "dc=example,dc=com",
+ "ranger.service.http.port" : "6080",
+ "ranger.jpa.audit.jdbc.url" : "{{audit_jdbc_url}}",
+ "ranger.service.https.port" : "6182",
+ "ranger.sso.providerurl" : "",
+ "ranger.ldap.ad.url" : "{{ranger_ug_ldap_url}}",
+ "ranger.jpa.audit.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.unixauth.remote.login.enabled" : "true",
+ "ranger.ldap.ad.base.dn" : "dc=example,dc=com",
+ "ranger.unixauth.service.hostname" : "{{ugsync_host}}"
+ }
+ }
+ },
+ {
+ "dbks-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.ks.jpa.jdbc.url" : "jdbc:mysql://172.17.0.9:3306/rangerkms",
+ "hadoop.kms.blacklist.DECRYPT_EEK" : "hdfs",
+ "ranger.ks.jpa.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.ks.jdbc.sqlconnectorjar" : "{{ews_lib_jar_path}}",
+ "ranger.ks.jpa.jdbc.user" : "{{db_user}}",
+ "ranger.ks.jpa.jdbc.credential.alias" : "ranger.ks.jdbc.password",
+ "ranger.ks.jpa.jdbc.credential.provider.path" : "/etc/ranger/kms/rangerkms.jceks",
+ "ranger.ks.masterkey.credential.alias" : "ranger.ks.masterkey.password",
+ "ranger.ks.jpa.jdbc.driver" : "com.mysql.jdbc.Driver"
+ }
+ }
+ },
+ {
+ "kms-env" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "kms_log_dir" : "/var/log/ranger/kms",
+ "create_db_user" : "true",
+ "kms_group" : "kms",
+ "kms_user" : "kms",
+ "kms_port" : "9292"
+ }
+ }
+ },
+ {
+ "ranger-hdfs-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.hdfs.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+
+ {
+ "ranger-env" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "xml_configurations_supported" : "true",
+ "ranger_user" : "ranger",
+ "xasecure.audit.destination.hdfs.dir" : "hdfs://ambari-agent-1.node.dc1.consul:8020/ranger/audit",
+ "create_db_dbuser" : "true",
+ "ranger-hdfs-plugin-enabled" : "Yes",
+ "ranger_privelege_user_jdbc_url" : "jdbc:mysql://172.17.0.9:3306",
+ "ranger-knox-plugin-enabled" : "No",
+ "is_solrCloud_enabled" : "false",
+ "bind_anonymous" : "false",
+ "ranger-yarn-plugin-enabled" : "Yes",
+ "ranger-kafka-plugin-enabled" : "No",
+ "xasecure.audit.destination.hdfs" : "true",
+ "ranger-hive-plugin-enabled" : "No",
+ "xasecure.audit.destination.solr" : "false",
+ "xasecure.audit.destination.db" : "true",
+ "ranger_group" : "ranger",
+ "ranger_admin_username" : "amb_ranger_admin",
+ "ranger-hbase-plugin-enabled" : "Yes",
+ "admin_username" : "admin"
+ }
+ }
+ },
+
+ {
+ "kms-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "keyadmin",
+ "KMS_MASTER_KEY_PASSWD" : "SECRET:kms-properties:1:KMS_MASTER_KEY_PASSWD",
+ "DB_FLAVOR" : "MYSQL",
+ "db_name" : "rangerkms",
+ "SQL_CONNECTOR_JAR" : "/usr/share/java/mysql-connector-java.jar",
+ "db_user" : "rangerkms",
+ "db_host" : "172.17.0.9:3306",
+ "db_root_user" : "root"
+ }
+ }
+ },
+
+ {
+ "ranger-yarn-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.yarn.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+
+ {
+ "usersync-properties" : {
+ "properties_attributes" : { },
+ "properties" : { }
+ }
+ },
+
+ {
+ "ranger-hbase-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.hbase.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+ {
+ "hdfs-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "dfs.encryption.key.provider.uri" : "kms://http@%HOSTGROUP::host_group_1%:9292/kms",
+ "dfs.namenode.inode.attributes.provider.class" : "org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer"
+ }
+ }
+ },
+ {
+ "ranger-yarn-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "yarn",
+ "common.name.for.certificate" : "",
+ "ranger-yarn-plugin-enabled" : "Yes",
+ "policy_user" : "ambari-qa",
+ "hadoop.rpc.protection" : ""
+ }
+ }
+ },
+ {
+ "ranger-hbase-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "hbase",
+ "common.name.for.certificate" : "",
+ "ranger-hbase-plugin-enabled" : "Yes",
+ "policy_user" : "ambari-qa"
+ }
+ }
+ }
+ ],
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "RANGER_ADMIN"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "HBASE_CLIENT"
+ },
+ {
+ "name" : "HBASE_MASTER"
+ },
+ {
+ "name" : "RANGER_USERSYNC"
+ },
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "RANGER_KMS_SERVER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_2",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "APP_TIMELINE_SERVER"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_3",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "HBASE_CLIENT"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.3"
+ }
+}
+```
+## Deploy Ranger in HA mode
+
+The difference from deploying Ranger in non-HA mode is:
+
+* Deploy the RANGER_ADMIN component to multiple hosts
+* Set up a load balancer and configure it to front all RANGER_ADMIN instances (the URL of a Ranger Admin instance is `http://host:port`; the default port is 6080)
+* admin-properties
+ - policymgr_external_url - override the value of this configuration property with the URL of the load balancer. Each component interacting with Ranger uses the value of this property to connect to Ranger, so all of them will connect via the load balancer. A sketch follows below.
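+
+A sketch of the override (the load balancer address is a placeholder):
+
+```json
+{
+  "admin-properties" : {
+    "properties" : {
+      "policymgr_external_url" : "http://ranger-lb.example.com:6080"
+    }
+  }
+}
+```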
diff --git a/docs/ambari-design/blueprints/index.md b/docs/ambari-design/blueprints/index.md
new file mode 100644
index 0000000..4acb444
--- /dev/null
+++ b/docs/ambari-design/blueprints/index.md
@@ -0,0 +1,1003 @@
+---
+slug: /blueprints
+---
+# Blueprints
+
+## Introduction
+
+Ambari Blueprints are a declarative definition of a cluster. With a Blueprint, you specify a [Stack](../stack-and-services/overview.mdx), the Component layout and the Configurations to materialize a Hadoop cluster instance (via a REST API) **without** having to use the Ambari Cluster Install Wizard.
+
+### **Notable JIRAs**
+JIRA | Description
+------------------------------------------------------------------|---------------------------------------------
+[AMBARI-4467](https://issues.apache.org/jira/browse/AMBARI-4467) |Blueprints REST resource.
+[AMBARI-5077](https://issues.apache.org/jira/browse/AMBARI-5077) |Provision cluster with blueprint.
+[AMBARI-4786](https://issues.apache.org/jira/browse/AMBARI-4786) |Export blueprints from running/existing cluster.
+[AMBARI-5114](https://issues.apache.org/jira/browse/AMBARI-5114) |Configurations with blueprints.
+[AMBARI-6275](https://issues.apache.org/jira/browse/AMBARI-6275) |Add hosts using blueprints.
+[AMBARI-10750](https://issues.apache.org/jira/browse/AMBARI-10750)|2.1 blueprint changes.
+
+## API Resources and Syntax
+
+The following table lists the basic Blueprints API resources.
+
+The API calls on this page include the HTTP Method (for example: `GET, PUT, POST`) and a sample URI (for example: `/api/v1/blueprints`). When actually calling the Ambari REST API, be sure to set the `X-Requested-By` header and provide authentication information as appropriate. For example, calling the API using `curl`:
+
+```bash
+curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://c6401.ambari.apache.org:8080/api/v1/blueprints
+```
+
+## Blueprint Usage Overview
+
+#### Step 0: Prepare Ambari Server and Agents
+
+Install the Ambari Server, run setup and start. Install the Ambari Agents on all hosts and perform manual registration.
+
+#### Step 1: Create Blueprint
+
+A blueprint can be created by hand, or by exporting a blueprint from an existing cluster.
+
+To export a blueprint from an existing cluster: `GET /api/v1/clusters/:clusterName?format=blueprint`
+
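+As a sketch, the export can be performed with `curl` (credentials and hostname are placeholders):
+
+```bash
+curl -u admin:admin -H "X-Requested-By: ambari" \
+  "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster?format=blueprint" \
+  -o exported-blueprint.json
+```
+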
+#### Step 2: Register Blueprint with Ambari
+
+`POST /api/v1/blueprints/:blueprintName`
+
+Request body is blueprint created in **Step 1**.
+
+To disable topology validation and register a blueprint:
+
+`POST /api/v1/blueprints/:blueprintName?validate_topology=false`
+
+Disabling topology validation allows a user to force registration of a blueprint that fails topology validation.
+
+#### Step 3: Create Cluster Creation Template
+
+Map Physical Hosts to Blueprint: Create the mapping between blueprint host groups and physical hosts.
+
+Provide Cluster-Specific Configuration Overrides: Configuration can be applied at the cluster and host group scope, and overrides any configurations specified in the blueprint.
+
+#### Step 4: Setup Stack Repositories (Optional)
+
+There are scenarios where the public Stack repositories may not be accessible during creation of a cluster via blueprint, or where an alternate repository is required for the Stack.
+
+To use a local or alternate repository:
+
+```
+PUT /api/v1/stacks/:stack/versions/:stackVersion/operating_systems/:osType/repositories/:repoId
+
+{
+ "Repositories" : {
+ "base_url" : "",
+ "verify_base_url" : true
+ }
+}
+```
+
+This API may be invoked multiple times to set the Base URL for multiple OS types or Stack versions. If this step is not performed, by default, blueprints will use the latest Base URL defined in the Stack.
+
+#### Step 5: Create Cluster
+
+`POST /api/v1/clusters/:clusterName`
+
+Request body includes blueprint name, host mappings and configurations from **Step 3**.
+
+Request is asynchronous and returns a `/requests` URL which can be used to monitor progress.
+
+#### Step 6: Monitor Cluster Creation Progress
+
+Using the `/requests` URL returned in **Step 5**, monitor the progress of the tasks associated with cluster creation.
+
+#### Limitations
+
+Prior to Ambari 2.0, Ambari Blueprints did not support creating clusters reflecting a High Availability topology.
+
+Ambari 2.0 adds support for deploying High Availability clusters in Blueprints. Please see [Blueprint Support for HA Clusters](./blueprint-ha.md) for more information on this topic.
+
+## Blueprint Details
+
+### Prepare Ambari Server and Agents
+
+1. Perform your Ambari Server install and setup.
+
+```bash
+yum install ambari-server
+ambari-server setup
+```
+
+2. After setup completes, start your Ambari Server.
+
+```bash
+ambari-server start
+```
+
+3. Install Ambari Agents on all of the hosts you plan to include in your cluster.
+
+```bash
+yum install ambari-agent
+```
+
+4. Set the Ambari Server on the Ambari Agents.
+
+```bash
+vi /etc/ambari-agent/conf/ambari-agent.ini
+```
+
+5. Set hostname= to the Fully Qualified Domain Name for the Ambari Server. Save and exit.
+
+```bash
+hostname=c6401.ambari.apache.org
+```
+
+6. Start the Agents to initiate registration to Server.
+
+```bash
+ambari-agent start
+```
+
+7. Confirm the Agent hosts are registered with the Server.
+[http://your.ambari.server:8080/api/v1/hosts](http://your.ambari.server:8080/api/v1/hosts)
+
+
+### Blueprint Structure
+
+A blueprint document is in JSON format and has the following structure:
+
+```json
+{
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value",
+ "property-name2" : "property-value"
+ }
+ },
+ {
+ "configuration-type2" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "host_groups" : [
+ {
+ "name" : "host-group-name",
+ "components" : [
+ {
+ "name" : "component-name"
+ },
+ {
+ "name" : "component-name2",
+ "provision_action" : "(INSTALL_AND_START | INSTALL_ONLY)"
+ }
+ ...
+
+ ],
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "settings" : [
+ "deployment_settings": [
+ {"skip_failure":"true"}
+ ],
+ "repository_settings":[
+ {
+ "override_strategy":"ALWAYS_APPLY",
+ "operating_system":"redhat7",
+ "repo_id":"HDP",
+ "base_url":"http://myserver/hdp"
+ },
+ {
+ "override_strategy":"APPLY_WHEN_MISSING",
+ "operating_system":"redhat7",
+ "repo_id":"HDP-UTIL-1.1",
+ "base_url":"http://myserver/hdp-util"
+ }
+ ],
+ "recovery_settings":[
+ {"recovery_enabled":"true"}
+ ],
+ "service_settings":[
+ {
+ "name":"SERVICE_ONE",
+ "recovery_enabled":"true"
+ },
+ {
+ "name":"SERVICE_TWO",
+ "recovery_enabled":"true"
+ }
+ ],
+ "component_settings":[
+ {
+ "name":"COMPONENT_A_OF_SERVICE_ONE"
+ "recover_enabled":"true"
+ },
+ {
+ "name":"COMPONENT_B_OF_SERVICE_TWO",
+ "recover_enabled":"true"
+ }
+ ]
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.1",
+ "security" : {
+ "type" : "(NONE | KERBEROS)",
+ "kerberos_descriptor" : {
+ ...
+
+ }
+ }
+ }
+}
+```
+
+#### **Blueprint Field Descriptions**
+
+* **configurations** : A list of configuration maps keyed by configuration type. An example of a configuration type is "core-site". When specified at the top level, configurations are applied at cluster scope and override default values for the cluster. When specified within a "host_groups" element, configurations are applied at the host level for all hosts mapped to the host group. Host scoped configurations override cluster scoped configuration for all hosts mapped to the host group. The configurations element is optional at both levels.
+
+* **host_groups** : A list of host groups which define topology (components) and configuration for all hosts which are mapped to the host group. At least one host group must be specified.
+
+ - **name** : The name of the host group. Mandatory field which is referred to in the cluster creation body when mapping physical hosts to host groups.
+
+ - **components** : A list of components which will run on all hosts that are mapped to the host group. At least one component must be specified for each host group.
+
+ - **provision_action** : A cluster-wide provision action can be specified in the Cluster Creation Template (see below), but optionally this can be overridden at the component level by specifying a different provision_action here. The default provision_action is INSTALL_AND_START.
+
+ - **cardinality** : This field is optional and intended to provide a hint to the deployer as to how many instances of a particular host_group can be instantiated; it has no bearing on how the cluster is deployed. When a blueprint is exported for an existing cluster, this field will indicate the number of hosts that correspond to the host group in that cluster.
+
+* **Blueprints** : Blueprint and stack information
+ - **stack_name** : The name of the stack. All stacks currently shipped with Ambari have the name "HDP". This is a required field.
+
+ - **stack_version** : The version of the stack. For example: "1.3.2" or "2.1". This is a required field. When deploying a cluster using a blueprint, the stack definition identified in the blueprint must be available to the Ambari instance in the new cluster.
+
+ - **blueprint_name** : Optional field which specifies the name of the blueprint. Typically the name of the blueprint is specified in the URL of the REST invocation. The only reason to specify the name in the blueprint is when creating multiple blueprints via a single REST invocation. **Be aware that the name specified in this field will override the name specified in the URL.**
+ - **security** : Optional block to specify security settings for the blueprint. Supported security types are **NONE** and **KERBEROS**. In the case of KERBEROS, users have the option to embed a valid kerberos descriptor - to override default values defined for the HDP stack - in the **kerberos_descriptor** field, or, as an alternative, they may reference a previously saved kerberos descriptor using the **kerberos_descriptor_reference** field.
+
+If **KERBEROS** is selected as the security_type, it is mandatory to add the **kerberos-env** and **krb5-conf** config types (see the configurations section in **Blueprint example with KERBEROS** on this page).
+Be aware that Kerberos client packages need to be installed on the host running the Ambari server, and krb5.conf needs to be configured properly to contain your realm (admin_server and kdc).
+
+The [Automated Kerberization](../kerberos/index.md) page describes the structure of the kerberos_descriptor.
+
+* **settings**: An optional section to provide additional configuration for cluster behavior during and after the blueprint deployment. You can provide configurations for the following properties:
+ - recovery_settings: A section to specify whether all services (globally) should be set to auto restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart). For example:
+ ```json
+ "settings": [
+ "recovery_settings":[
+ {
+ "recovery_enabled":"true"
+ }
+ ]
+ ],
+ ```
+ - service_settings: A section to specify whether individual services should be set to auto restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart).
+ For example:
+ ```json
+ "settings": [
+ "service_settings":[
+ {
+ "name":"HDFS",
+ "recovery_enabled":"true"
+ },
+ {
+ "name":"ZOOKEEPER",
+ "recovery_enabled":"true"
+ }
+ ]
+  ],
+ ```
+ - component_settings: A section to specify whether individual components should be set to auto restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart).
+ For example:
+ ```json
+ "settings": [
+ "component_settings":[
+ {
+      "name":"KAFKA_CLIENT",
+      "recovery_enabled":"true"
+ },
+ {
+ "name":"METRICS_MONITOR",
+      "recovery_enabled":"true"
+ }
+ ]
+ ],
+ ```
+ - deployment_settings: A section to specify whether the blueprint deployment should automatically skip install and start failures. To configure this behavior, set the "skip_failure" property to either "true" (auto skip failures) or "false" (do not auto skip failures). Blueprint deployment will fail on the very first deployment failure if the blueprint file does not contain the "deployment_settings" section.
+
+ For example:
+
+ ```json
+ "settings": [
+    "deployment_settings":[
+      {
+        "skip_failure":"true"
+ }
+ ]
+ ],
+ ```
+
+ - repository_settings: A section to specify custom repository URLs for the blueprint deployment. This section allows you to provide custom URLs to override the default ones. Without this section, you will need to update the repository URLs via the REST API before deploying the cluster with the blueprint. "override_strategy" can be "ALWAYS_APPLY" (always override the default one) or "APPLY_WHEN_MISSING" (only add it if no repository exists with the specific operating system and repository id information). Repository URLs stored in the Ambari server database will be used if the blueprint does not have the "repository_settings" section.
+ For example:
+ ```json
+  "settings": [
+ "repository_settings":[
+ {
+      "override_strategy":"ALWAYS_APPLY",
+ "operating_system":"redhat7",
+ "repo_id":"HDP",
+      "base_url":"http://myserver/hdp"
+ },
+ {
+ "override_strategy":"APPLY_WHEN_MISSING",
+ "operating_system":"redhat7",
+ "repo_id":"HDP-UTIL-1.1",
+      "base_url":"http://myserver/hdp-util"
+ }
+ ]
+ ]
+ ```
+
+### Cluster Creation Template Structure
+
+A Cluster Creation Template is in JSON format and has the following structure:
+
+```json
+{
+ "blueprint" : "blueprint-name",
+ "default_password" : "super-secret-password",
+  "provision_action" : "(INSTALL_AND_START | INSTALL_ONLY)",
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "host_groups" :[
+ {
+ "name" : "host-group-name",
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ],
+ "hosts" : [
+ {
+ "fqdn" : "host.domain.com"
+ },
+ {
+ "fqdn" : "host2.domain.com"
+ }
+ ...
+
+ ]
+ }
+ ...
+
+ ],
+ "credentials" : [
+ {
+ "alias" : "kdc.admin.credential",
+ "principal" : "{PRINCIPAL}",
+ "key" : "{KEY}",
+ "type" : "(TEMPORARY | PERSISTED)"
+ }
+ ],
+ "security" : {
+ "type" : "(NONE | KERBEROS)",
+ "kerberos_descriptor" : {
+ ...
+
+ }
+ }
+}
+```
+
+Starting in Ambari version 2.1.0, it is possible to specify a host count and a host predicate in the cluster creation template host group section instead of a host name.
+
+```json
+{
+ "name" : "group-using-host-count",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+}
+```
+
+Starting in Ambari version 2.2.0, it is possible to specify configuration recommendation strategy in the cluster creation template.
+
+```json
+{
+ "blueprint" : "blueprint-name",
+ "config_recommendation_strategy" : "ONLY_STACK_DEFAULTS_APPLY",
+ ...
+
+}
+```
+
+Starting in Ambari version 2.2.1, it is possible to specify the host rack info in the cluster creation template ([AMBARI-14600](https://issues.apache.org/jira/browse/AMBARI-14600)).
+
+```json
+"hosts" : [
+ {
+ "fqdn" : "amb2.service.consul",
+ "rack_info": "/dc1/rack1"
+ }
+ ]
+```
+
+**Cluster Creation Template Structure: Host Mappings and Configuration Field Descriptions**
+
+* **blueprint** : Name of the blueprint that defines the cluster to be deployed. Blueprint must already exist. Required field.
+
+* **default_password** : Optional field which specifies a default password for all required passwords which are not specified in the blueprint or cluster creation template configurations.
+
+* **provision_action** : The default provision_action is INSTALL_AND_START. Optionally, this can be overridden at the component level by specifying a different provision_action for a given component.
+
+* **configurations** : A list of configuration maps keyed by configuration type. An example of a configuration type is "core-site". When specified at the top level, configurations are applied at cluster scope and override default values for the cluster. When specified within a "host_groups" element, configurations are applied at the host level for all hosts mapped to the host group. Host scoped configurations override cluster scoped configuration for all hosts mapped to the host group. All cluster scoped and host group scoped configurations specified here override configurations specified in the corresponding blueprint. The configurations element is optional at both levels.
+
+* **config_recommendation_strategy** : Optional field which specifies the strategy for applying configuration recommendations to a cluster. Recommended configurations are gathered from the stack advisor response; depending on the selected strategy, they may partly or completely override user-defined custom configurations. A property value is considered a custom configuration if it has a value other than the stack default. Available from Ambari 2.2.0.
+
+ - **NEVER_APPLY** : Configuration recommendations are ignored with this option. (This is the default)
+ - **ONLY_STACK_DEFAULTS_APPLY** : Configuration recommendations are applied only for properties defined in the HDP stack by default.
+
+ - **ALWAYS_APPLY** : All configuration recommendations are applied; they may override custom configurations provided by the user in the Blueprint and/or Cluster Creation Template.
+
+ - **ALWAYS_APPLY_DONT_OVERRIDE_CUSTOM_VALUES** : All configuration recommendations are applied; however, custom configurations defined by the user in the Blueprint and/or Cluster Creation Template are not overridden by recommended configuration values. Available as of Ambari 2.4.0.
+
+* **host_groups** : A list of host groups being deployed to the cluster. At least one host group must be specified.
+
+ - **name** : Required field which must correspond to a name of a host group in the associated blueprint.
+
+ - **hosts** : List of host mapping information
+ + **fqdn** : Fully qualified domain name for each host that is being mapped to the host group. At least one host must be specified.
+ - **host_count** : The number of hosts that should be mapped to this host group. This can be specified instead of concrete host names. If no host_predicate is specified, any host that isn't explicitly mapped to another host group is available to be mapped to this host group. Available as of Ambari 2.1.0.
+
+ - **host_predicate** : Optional field which is used together with host_count to control which hosts are mapped to the host group. This is useful in supporting host 'flavors' where different host groups require different host types. The default predicate matches all hosts which aren't explicitly mapped to another host group. The syntax of the predicate is the standard Ambari API query syntax applied against the "api/v1/hosts" endpoint. Available as of Ambari 2.1.0.
+
+* **credentials** : Optional block to create credentials. The kdc.admin.credential is required when setting up KERBEROS security. The store type can be **PERSISTED**
+or **TEMPORARY**. Temporary admin credentials are valid for 90 minutes or until server restart.
+
+* **security** : Optional block to override security settings defined in the Blueprint. Supported security types are **NONE** and **KERBEROS**. In the case of KERBEROS, users have the option to embed a valid Kerberos descriptor in the **kerberos_descriptor** field - to override default values defined for the HDP stack - or, as an alternative, they may reference a previously saved Kerberos descriptor using the **kerberos_descriptor_reference** field. Security settings defined here will override Blueprint settings; however, overriding the security type used in the Blueprint to a less secure mode is not possible (e.g. setting security.type=NONE in a cluster template when the Blueprint has security.type=KERBEROS). When selecting **KERBEROS** as the security_type, it is mandatory to add the **kerberos-env** and **krb5-conf** config types. (See the configurations section in **Blueprint example with KERBEROS** on this page.)
+The [Automated Kerberization](../kerberos/index.md) page describes the structure of the kerberos_descriptor.
+
+### Configurations
+
+#### Default Values and Overrides
+
+* **Stack Defaults**: Each Stack provides configurations for all included services which serve as defaults for all clusters deployed via Blueprints.
+
+* **Blueprint Cluster Scoped**: Configurations provided at the top level of a Blueprint override the corresponding default values for the entire cluster.
+
+* **Blueprint Host Group Scoped**: Configurations provided within a host_group element of a Blueprint override both the corresponding default values and blueprint cluster scoped values only for hosts mapped to the host group.
+
+* **Cluster Creation Template Cluster Scoped**: Configurations provided at the top level of the Cluster Creation Template override both the corresponding default and blueprint cluster scoped values for the entire cluster.
+
+* **Cluster Creation Template Host Group Scoped**: Configurations provided within a host_group element of a Cluster Creation Template override all other values for hosts mapped to the host group.
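+
+For illustration, suppose `core-site/fs.trash.interval` defaults to "360" in the stack and a blueprint sets it to "4320" at cluster scope. A host group override in the Cluster Creation Template - a sketch with hypothetical values - then wins for hosts mapped to that host group, while all other hosts keep "4320":
+
+```json
+"host_groups" : [
+  {
+    "name" : "host_group_1",
+    "configurations" : [
+      {
+        "core-site" : {
+          "fs.trash.interval" : "1440"
+        }
+      }
+    ]
+  }
+]
+```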
+
+#### Required Configurations
+
+* Not all configuration properties have valid defaults
+* Required properties must be specified by the Blueprint user
+* Come in two categories, passwords and non passwords
+* Non password required properties are validated at Blueprint creation time
+* Required password properties are validated at cluster creation time
+* For required password properties, they may be explicitly set in either the Blueprint or Cluster Creation Template configurations or a default password may be specified in the Cluster Creation Template which will be applied to all passwords which have not been explicitly set
+ - "default_password" : "super-secret-password"
+* If required configuration validation fails, a 400 response is returned indicating which properties must be specified
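+
+As a sketch, an explicit password setting in the Blueprint or Cluster Creation Template configurations might look like the following (the property shown is merely one example of a required password-type property):
+
+```json
+"configurations" : [
+  {
+    "hive-site" : {
+      "javax.jdo.option.ConnectionPassword" : "my-hive-db-password"
+    }
+  }
+]
+```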
+
+## Blueprint Examples
+
+## Blueprint Example: Single-Node HDP 2.4 Cluster
+
+* Single-node cluster (c6401.ambari.apache.org)
+* HDP 2.4 Stack
+* Install Core Hadoop Services (HDFS, YARN, MapReduce2, ZooKeeper)
+
+### Example Blueprint
+
+```json
+{
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "components" : [
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "blueprint_name" : "single-node-hdfs-yarn",
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Register blueprint with Ambari Server
+
+Post the blueprint to the "single-node-hdfs-yarn" resource on the Ambari Server.
+
+```
+POST /api/v1/blueprints/single-node-hdfs-yarn
+
+...
+
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
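+
+For example, using curl (assuming the blueprint above is saved locally as `single-node-blueprint.json` and the default admin credentials are in use):
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -X POST -d @single-node-blueprint.json http://AMBARI_SERVER:8080/api/v1/blueprints/single-node-hdfs-yarn
+```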
+
+### Example Cluster Creation Template
+
+We are performing a single-node install and the blueprint above has **one** host group. Therefore, for our cluster instance, we define **one** host in **host_group_1** and reference the **single-node-hdfs-yarn** blueprint.
+
+**Explicit Host Name Example**
+
+```json
+{
+ "blueprint" : "single-node-hdfs-yarn",
+ "host_groups" :[
+ {
+ "name" : "host_group_1",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/MySingleNodeCluster
+
+...
+
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MySingleNodeCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+```
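+
+The returned `href` can be polled to monitor provisioning progress, for example:
+
+```
+curl -u admin:admin http://c6401.ambari.apache.org:8080/api/v1/clusters/MySingleNodeCluster/requests/1
+```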
+
+## Blueprint Example: Multi-Node HDP 2.4 Cluster
+
+* Multi-node cluster (three hosts)
+* Host Groups: "master", "slaves" (one master host, two slave hosts)
+* Use HDP 2.4 Stack
+* Install Core Hadoop Services (HDFS, YARN, MapReduce2, ZooKeeper)
+
+### Example Blueprint
+
+The blueprint ("multi-node-hdfs-yarn") below defines with **two** host groups (a "master" and the "slaves") which hosts the various Service components (masters, slaves and clients).
+
+```json
+{
+ "host_groups" : [
+ {
+ "name" : "master",
+ "components" : [
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "slaves",
+ "components" : [
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ }
+ ],
+ "cardinality" : "1+"
+ }
+ ],
+ "Blueprints" : {
+ "blueprint_name" : "multi-node-hdfs-yarn",
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Register blueprint with Ambari Server
+
+Post the blueprint to the "multi-node-hdfs-yarn" resource on the Ambari Server.
+
+```
+POST /api/v1/blueprints/multi-node-hdfs-yarn
+...
+
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+### Example Cluster Creation Template
+
+We are performing a multi-node install and the blueprint above has **two** host groups. Therefore, for our cluster instance, we define **one** host in **master**, **two** hosts in **slaves**, and reference the **multi-node-hdfs-yarn** blueprint.
+
+The multi-node cluster creation template below uses the "host_count" and "host_predicate" syntax for the "slaves" host group, which is available as of Ambari 2.1.0. For older versions of Ambari, the "hosts/fqdn" syntax must be used.
+
+```json
+{
+ "blueprint" : "multi-node-hdfs-yarn",
+ "default_password" : "my-super-secret-password",
+ "host_groups" :[
+ {
+ "name" : "master",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ },
+ {
+ "name" : "slaves",
+      "host_count" : 2,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+ }
+ ]
+}
+```
+
+### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/MyThreeNodeCluster
+...
+
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyThreeNodeCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+```
+
+## Adding Hosts to an Existing Cluster
+
+After creating a cluster using the Ambari Blueprint API, you may scale up the cluster using the API.
+
+There are two forms of the API, one for adding a single host and another for adding multiple hosts.
+
+The blueprint add hosts API is available as of Ambari 2.0.
+
+Currently, only clusters originally provisioned via the blueprint API may be scaled using this API.
+
+### Example Add Host Template
+
+#### Single Host Example
+
+The host is specified in the URL.
+
+```json
+{
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves"
+}
+```
+
+#### Multiple Host Form
+
+Hosts are specified in the request body.
+
+```json
+[
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_name" : "c6403.ambari.apache.org"
+ },
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_name" : "c6404.ambari.apache.org"
+ }
+]
+```
+
+#### Multiple Host Form using host_count
+
+Starting with Ambari 2.1, the fields "host_count" and "host_predicate" can also be used when adding a host.
+
+These fields behave exactly the same as they do when specified in the cluster creation template.
+
+```json
+[
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+ }
+]
+```
+
+### Add Host Request
+
+#### Single Host
+
+```
+POST /api/v1/clusters/myExistingCluster/hosts/c6403.ambari.apache.org
+...
+
+[ Request Body is above Single Host Add Host Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/myExistingCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "Pending"
+ }
+}
+```
+
+#### Multiple Hosts
+
+```
+POST /api/v1/clusters/myExistingCluster/hosts
+...
+
+[ Request Body is above Multiple Host Add Host Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/myExistingCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "Pending"
+ }
+}
+```
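+
+For example, using curl (assuming the multiple-host template above is saved locally as `add-hosts.json`):
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -X POST -d @add-hosts.json http://AMBARI_SERVER:8080/api/v1/clusters/myExistingCluster/hosts
+```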
+
+## Blueprint Example : Provisioning Multi-Node HDP 2.3 Cluster to use KERBEROS
+
+The blueprint below could be used to set up a cluster containing three host groups with KERBEROS security. Overriding the default Kerberos descriptor is not necessary; however, specifying a few Kerberos-specific properties in kerberos-env and krb5-conf is a must to set up services to use Kerberos. Note: prior to Ambari 2.4.0, use "kdc_host" instead of "kdc_hosts".
+
+```json
+{
+ "configurations" : [
+ {
+ "kerberos-env": {
+ "properties_attributes" : { },
+ "properties" : {
+ "realm" : "AMBARI.APACHE.ORG",
+ "kdc_type" : "mit-kdc",
+ "kdc_hosts" : "(kerberos_server_name)",
+ "admin_server_host" : "(kerberos_server_name)"
+ }
+ }
+ },
+ {
+ "krb5-conf": {
+ "properties_attributes" : { },
+ "properties" : {
+ "domains" : "AMBARI.APACHE.ORG",
+ "manage_krb5_conf" : "true"
+ }
+ }
+ }
+ ],
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_2",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "KERBEROS_CLIENT"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_3",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "KERBEROS_CLIENT"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.3",
+ "security" : {
+ "type" : "KERBEROS"
+ }
+ }
+}
+```
+
+The **Cluster Creation Template** below could be used to set up a cluster containing hosts with KERBEROS security, using the Blueprint from above. Overriding the default Kerberos descriptor is not necessary; however, specifying the kdc.admin credentials is a must.
+
+```json
+{
+ "blueprint": "kerberosBlueprint",
+ "default_password": "admin",
+ "host_groups": [
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-1" }
+ ],
+ "name": "host_group_1",
+ "configurations" : [ ]
+ },
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-2" }
+ ],
+ "name": "host_group_2",
+ "configurations" : [ ]
+ },
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-3" }
+ ],
+ "name": "host_group_3",
+ "configurations" : [ ]
+ }
+ ],
+ "credentials" : [
+ {
+ "alias" : "kdc.admin.credential",
+ "principal" : "admin/admin",
+ "key" : "admin",
+ "type" : "TEMPORARY"
+ }
+ ],
+ "security" : {
+ "type" : "KERBEROS"
+ },
+ "Clusters" : {"cluster_name":"kerberosCluster"}
+}
+```
+
+## Blueprint Support for High Availability Clusters
+
+Support for deploying HA clusters for HDFS, YARN, and HBase was added in Ambari 2.0. Please see the following link for more information:
+
+[Blueprint Support for HA Clusters](./blueprint-ha.md)
diff --git a/docs/ambari-design/enhanced-configs/imgs/create_theme.png b/docs/ambari-design/enhanced-configs/imgs/create_theme.png
new file mode 100644
index 0000000..ab504a3
--- /dev/null
+++ b/docs/ambari-design/enhanced-configs/imgs/create_theme.png
Binary files differ
diff --git a/docs/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png b/docs/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png
new file mode 100644
index 0000000..df0bed0
--- /dev/null
+++ b/docs/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png
Binary files differ
diff --git a/docs/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png b/docs/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png
new file mode 100644
index 0000000..1b96243
--- /dev/null
+++ b/docs/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png
Binary files differ
diff --git a/docs/ambari-design/enhanced-configs/index.md b/docs/ambari-design/enhanced-configs/index.md
new file mode 100644
index 0000000..52a2279
--- /dev/null
+++ b/docs/ambari-design/enhanced-configs/index.md
@@ -0,0 +1,342 @@
+---
+title: Enhanced Configs
+---
+
+Introduced in Ambari 2.1.0, the Enhanced Configs feature makes it possible for service providers to extensively customize their service's configs and to determine which configs are prominently shown to users, without making any UI code changes. Customization includes providing a service-friendly layout, better controls (sliders, combos, lists, toggles, spinners, etc.), better validation (minimum, maximum, enums), automatic unit conversion (MB, GB, seconds, milliseconds, etc.), configuration dependencies, and improved dynamic recommendations of default values.
+
+A service provider can accomplish all of the above just by changing their service definition in the _stacks_ folder.
+
+Example: HBase Enhanced Configs
+
+
+
+## Features
+
+* Define theme with custom layout of configs
+ - Tabs
+ - Sections
+ - Sub-sections
+* Place selected configs in the layout defined above
+* Associate UI widget to use for a config
+ - Radio Buttons
+ - Slider
+ - Combo
+ - Time Interval Spinner
+ - Toggle
+ - Directory
+ - Directories
+ - List
+ - Password
+ - Text Field
+ - Checkbox
+ - Text Area
+* Automatic unit conversion for configs which have to be shown in units different from the units in which they are saved.
+
+ - Memory - B, KB, MB, GB, TB, PB
+ - Time - milliseconds, seconds, minutes, hours, days, months, years
+ - Percentage - float, percentage
+* Ability to define dependencies between configurations across services (depends-on, depended-by).
+
+* Ability to dynamically update values of other depended-by configs when a config is changed.
+
+## Enable Enhanced Configs - Steps
+
+### Step 1 - Create Theme (UI Metadata)
+
+The first step is to create a theme for your service in the stack definition folder. A theme provides the UI information needed to construct the enhanced configs: layout (tabs, sections, sub-sections), placement of configs in the sub-sections, and which widgets and units to use for each config.
+
+
+
+1. Modify metainfo.xml to define a theme by including a themes block.
+
+```xml
+<themes-dir>themes</themes-dir>
+<themes>
+  <theme>
+    <fileName>theme.json</fileName>
+    <default>true</default>
+  </theme>
+</themes>
+```
+2. The optional `<themes-dir>` element can be used if the default theme folder of '_themes_' is not desired, or is taken by another service in the same _metainfo.xml_.
+
+3. Multiple themes can be defined, however only the first _default_ theme will be used for the service.
+
+4. Each theme points to a theme JSON file (via the _fileName_ element) in the _themes-dir_ folder.
+
+5. The _theme.json_ file contains one _configuration_ block containing three main keys:
+ 1. _layouts_ - specify tabs, sections and sub-section layout
+ 2. _placement_ - specify configurations to place in sub-sections
+ 3. _widgets_ - specify which UI widgets to use for each config
+
+```json
+{
+ "configuration": {
+ "layouts": [ ... ],
+ "placement": { ... },
+ "widgets": [ ... ]
+ }
+}
+```
+6. Layouts - Multiple layouts can be defined in a theme. Currently only the first layout will be used while rendering. A _layout_ has the following content:
+ 1. Tabs: Multiple tabs can be defined in a layout. Each tab can have its contents laid out using a simple grid-layout with the _tab-columns_ and _tab-rows_ keys.
+
+In the example below, the _Settings_ tab has a grid of 3 rows and 2 columns in which sections can be placed.
+
+```json
+"layouts": [
+ {
+ "name": "default",
+ "tabs": [
+ {
+ "name": "settings",
+ "display-name": "Settings",
+ "layout": {
+ "tab-columns": "2",
+ "tab-rows": "3",
+ "sections": [ ... ]
+ }
+ }
+ ]
+ }
+]
+```
+ 2. Sections: Each section is defined inside a tab and specifies its location and size inside the tab's grid-layout using the _row-index_, _column-index_, _row-span_ and _column-span_ keys. Being a container itself, it can further define a grid-layout for the sub-sections it contains using the _section-rows_ and _section-columns_ keys.
+
+In the example below, the _MapReduce_ section occupies the first cell of the _Settings_ tab grid, and itself has a grid-layout of 1 row and 3 columns.
+
+```json
+"sections": [
+  {
+    "name": "section-mr-scheduler",
+    "display-name": "MapReduce",
+    "row-index": "0",
+    "column-index": "0",
+    "row-span": "1",
+    "column-span": "1",
+    "section-columns": "3",
+    "section-rows": "1",
+    "subsections": [ ... ]
+  },
+  ...
+]
+```
+ 3. Sub-sections: Each sub-section is defined inside a section and specifies its location and size inside the section's grid-layout using the _row-index_, _column-index_, _row-span_ and _column-span_ keys. Each sub-section also has an optional _border_ boolean key which tells whether a border should encapsulate its content.
+
+```json
+"subsections": [
+  {
+    "name": "subsection-mr-scheduler-row1-col1",
+    "display-name": "MapReduce Framework",
+    "row-index": "0",
+    "column-index": "0",
+    "row-span": "1",
+    "column-span": "1"
+  },
+  ...
+]
+```
+7. Placement: Specifies the order of configurations that are to be placed into each sub-section. Each placement identifies a config and which sub-section it should appear in. The placement specifies which layout it applies to using the _configuration-layout_ key.
+
+```json
+"placement": {
+  "configuration-layout": "default",
+  "configs": [
+    {
+      "config": "mapred-site/mapreduce.map.memory.mb",
+      "subsection-name": "subsection-mr-scheduler-row1-col1"
+    },
+    {
+      "config": "mapred-site/mapreduce.reduce.memory.mb",
+      "subsection-name": "subsection-mr-scheduler-row1-col2"
+    },
+    ...
+  ]
+}
+```
+8. Widgets: The widgets array specifies which UI widget should be used to show a specific config. It also contains extra UI-specific metadata required to show the widget.
+
+In the example below, both configs are using the slider widget. However, the unit varies, resulting in one config being shown in MB and another being shown as a percentage. This unit is purely for display - it can differ from the unit in which the config is actually persisted in Ambari. For example, the percent unit below may be persisted as a _float_, while the MB config below may be persisted in B (bytes).
+
+```json
+"widgets": [
+  {
+    "config": "yarn-site/yarn.nodemanager.resource.memory-mb",
+    "widget": {
+      "type": "slider",
+      "units": [ { "unit-name": "MB" } ]
+    }
+  },
+  {
+    "config": "yarn-site/yarn.nodemanager.resource.percentage-physical-cpu-limit",
+    "widget": {
+      "type": "slider",
+      "units": [ { "unit-name": "percent" } ]
+    }
+  },
+  {
+    "config": "yarn-site/yarn.node-labels.enabled",
+    "widget": {
+      "type": "toggle"
+    }
+  },
+  ...
+]
+```
+
+For a complete reference to what UI widgets are available and what metadata can be specified per widget, please refer to _Appendix A_.
+
+### Step 2 - Annotate stack configs (Non-UI Metadata)
+
+Each configuration that is used by the service's theme has to provide extra metadata about the configuration. The list of available metadata are:
+
+* display-name
+* value-attributes
+ - type
+ + string
+ + value-list
+ + float
+ + int
+ + boolean
+ - minimum
+ - maximum
+ - unit
+ - increment-step
+ - entries
+ + entry
+ * value
+ * description
+* depends-on
+ - property
+ + type
+ + name
+
+The value-attributes provide meta information about the value that can be used as hints by the appropriate widget. For example, the slider widget can make use of the minimum and maximum values.
+
+Examples:
+
+```xml
+<property>
+ <name>namenode_heapsize</name>
+ <value>1024</value>
+ <description>NameNode Java heap size</description>
+ <display-name>NameNode Java heap size</display-name>
+ <value-attributes>
+ <type>int</type>
+ <minimum>0</minimum>
+ <maximum>268435456</maximum>
+ <unit>MB</unit>
+ <increment-step>256</increment-step>
+ </value-attributes>
+ <depends-on>
+ <property>
+ <type>hdfs-site</type>
+ <name>dfs.datanode.data.dir</name>
+ </property>
+ </depends-on>
+</property>
+
+```
+
+```xml
+<property>
+ <name>hive.default.fileformat</name>
+ <value>TextFile</value>
+ <description>Default file format for CREATE TABLE statement.</description>
+ <display-name>Default File Format</display-name>
+ <value-attributes>
+ <type>value-list</type>
+ <entries>
+ <entry>
+ <value>ORC</value>
+        <description>The Optimized Row Columnar (ORC) file format provides a highly efficient way to store Hive data. It was designed to overcome limitations of the other Hive file formats. Using ORC files improves performance when Hive is reading, writing, and processing data.</description>
+      </entry>
+ <entry>
+ <value>TextFile</value>
+ <description>Text file format saves Hive data as normal text.</description>
+ </entry>
+ </entries>
+ </value-attributes>
+</property>
+```
+
+The depends-on element is useful in building a dependency graph between different configs in Ambari. Ambari uses these bi-directional relationships (depends-on and depended-by) to automatically update dependent configs using the stack-advisor functionality.
+
+Dependencies between configurations form a directed-acyclic-graph (DAG). When a configuration is updated, the UI has to determine its effect on other configs in the graph. To determine this, the _/recommendations_ endpoint should be provided an array of the configurations that were just changed in the changed_configurations field. Based on the provided changed configs, only their dependencies are updated in the response.
+
+Example:
+
+The figure below shows some config dependencies - A affects B and C, which affect D and E, and F and G, respectively.
+
+
+
+Now assume the user changes B to B' - a call to _/recommendations_ will only change D and E to D' and E' respectively (AB'CD'E'FG). No other config will be changed. Now assume that C is changed to C' - _/recommendations_ will only change F and G to F' and G' while still keeping the values of B', D', E' intact (AB'C'D'E'F'G'). Now if you change A to A', it will affect all its children (A'B''C''D''E''F''G''). The user will have the chance to pick and choose which changes to apply.
+
+The call to _/recommendations_ happens whenever a configuration with dependencies is changed. The POST call has the action configuration-dependencies - which will only change the configurations and their dependencies identified by the changed_configurations field.
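+
+A hedged sketch of such a request body is shown below; it is POSTed to the stack advisor's recommendations endpoint (e.g. `/api/v1/stacks/HDP/versions/2.3/recommendations`). The exact fields can vary by Ambari version, and the host, service, and property names are illustrative only:
+
+```json
+{
+  "recommend": "configuration-dependencies",
+  "hosts": ["c6401.ambari.apache.org"],
+  "services": ["HDFS", "YARN"],
+  "changed_configurations": [
+    {
+      "type": "yarn-site",
+      "name": "yarn.nodemanager.resource.memory-mb",
+      "old_value": "2048"
+    }
+  ]
+}
+```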
+
+### Step 3 - Restart Ambari server
+
+Restarting ambari-server is required for any changes in the themes or the stack definition to be loaded.
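+
+For example:
+
+```
+ambari-server restart
+```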
+
+## Reference
+
+* HDFS HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/HDFS/themes/theme.json)
+* YARN HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/themes/theme.json)
+* HIVE HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/themes/theme.json)
+* RANGER HDP-2.3 [theme_version_2.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/RANGER/themes/theme_version_2.json)
+
+## Appendix
+
+### Appendix A - Widget Non-UI Metadata
+
+<table>
+ <tr>
+ <th>Widget</th>
+ <th>Metadata Used</th>
+ </tr>
+ <tr>
+ <td>Slider</td>
+ <td>
+ <value-attributes><br></br>
+ <type>int</type><br></br>
+ <minimum>1073741824</minimum><br></br>
+ <maximum>17179869184</maximum><br></br>
+ <unit>B</unit><br></br>
+ <increment-step>1073741824</increment-step><br></br>
+ </value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Combo</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>4</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>8</value><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+ </value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Directory, Directories, Password, Text Field, Text Area</td>
+ <td>No value-attributes required</td>
+ </tr>
+ <tr>
+ <td>List</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>4</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>8</value><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>2+</selection-cardinality><br></br>
+</value-attributes><br></br>
+ </td>
+ </tr>
+ <tr>
+ <td>Radio-Buttons</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>1</value><br></br>
+ <label>Radio Option 1</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ <label>Radio Option 2</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>3</value><br></br>
+ <label>Radio Option 3</label><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+</value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Time Interval Spinner</td>
+ <td>
+<value-attributes> <br></br>
+ <type>int</type><br></br>
+ <minimum>0</minimum><br></br>
+ <maximum>2592000000</maximum><br></br>
+ <unit>milliseconds</unit><br></br>
+</value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Toggle, Checkbox</td>
+ <td>
+ <value-attributes>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>true</value><br></br>
+ <label>Native</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>false</value><br></br>
+ <label>Off</label><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+</value-attributes>
+ </td>
+ </tr>
+</table>
diff --git a/docs/ambari-design/index.md b/docs/ambari-design/index.md
new file mode 100644
index 0000000..c0afb15
--- /dev/null
+++ b/docs/ambari-design/index.md
@@ -0,0 +1,19 @@
+# Ambari Design
+
+Ambari Architecture: https://issues.apache.org/jira/secure/attachment/12559939/Ambari_Architecture.pdf
+
+Ambari Server-Agent Registration Flow: http://www.slideshare.net/hortonworks/ambari-agentregistrationflow-17041261
+
+Ambari Local Repository Setup: http://www.slideshare.net/hortonworks/ambari-using-a-local-repository
+
+API Documentation: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
+
+Technology Stack: [Technology Stack](./technology-stack.md)
+
+Integration: http://developer.teradata.com/viewpoint/articles/viewpoint-integration-with-apache-ambari-for-hadoop-monitoring
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+
+<DocCardList />
+```
\ No newline at end of file
diff --git a/docs/ambari-design/kerberos/enabling_kerberos.md b/docs/ambari-design/kerberos/enabling_kerberos.md
new file mode 100644
index 0000000..2b11190
--- /dev/null
+++ b/docs/ambari-design/kerberos/enabling_kerberos.md
@@ -0,0 +1,373 @@
+---
+title: Enabling Kerberos
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](#enabling-kerberos)
+ - [The Enable Kerberos Wizard](#the-enable-kerberos-wizard)
+ - [The REST API](#the-rest-api)
+ - [Miscellaneous Technical Information](#miscellaneous-technical-information)
+ - [Password Generation](#password-generation)
+
+<a name="enabling-kerberos"></a>
+
+## Enabling Kerberos
+
+Enabling Kerberos on the cluster may be done using the _Enable Kerberos Wizard_ within the Ambari UI
+or using the REST API.
+
+<a name="the-enable-kerberos-wizard"></a>
+
+### The Enable Kerberos Wizard
+
+The _Enable Kerberos Wizard_, in the Ambari UI, provides an easy to use wizard interface that walks
+through the process of enabling Kerberos.
+
+<a name="the-rest-api"></a>
+
+### The REST API
+
+It is possible to enable Kerberos using Ambari's REST API using the following API calls:
+
+**_Notes:_**
+
+- Change the authentication credentials as needed
+ - `curl ... -u username:password ...`
+ - The examples below use
+ - username: admin
+ - password: admin
+- Change the Ambari server host name and port as needed
+ - `curl ... http://HOST:PORT/api/v1/...`
+ - The examples below use
+ - HOST: AMBARI_SERVER
+ - PORT: 8080
+- Change the cluster name as needed
+ - `curl ... http://.../CLUSTER/...`
+ - The examples below use
+ - CLUSTER: CLUSTER_NAME
+- @./payload indicates that the payload data is stored in some file rather than declared inline
+ - `curl ... -d @./payload ...`
+  - The examples below use `./payload`, which should be replaced with the actual file path
+ - The contents of the payload file are indicated below the curl statement
+
+#### Add the KERBEROS Service to cluster
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS
+```
+
+#### Add the KERBEROS_CLIENT component to the KERBEROS service
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS/components/KERBEROS_CLIENT
+```
+
+#### Create and set KERBEROS service configurations
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME
+```
+
+Example payload when using an MIT KDC:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "mit-kdc",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.KDC.SERVER",
+ "master_kdc" : "FQDN.MASTER.KDC.SERVER",
+ "admin_server_host" : "FQDN.ADMIN.KDC.SERVER",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false"
+ }
+ }
+ }
+ }
+]
+```
+
+Example payload when using an Active Directory:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "active-directory",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.AD.SERVER",
+ "master_kdc" : "FQDN.MASTER.AD.SERVER",
+ "admin_server_host" : "FQDN.AD.SERVER",
+ "ldap_url" : "LDAPS://AD_HOST:PORT",
+ "container_dn" : "OU=....,....",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "password_length": "20",
+ "password_min_lowercase_letters": "1",
+ "password_min_uppercase_letters": "1",
+ "password_min_digits": "1",
+ "password_min_punctuation": "1",
+ "password_min_whitespace": "0",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false",
+ "create_attributes_template" : "{\n \"objectClass\": [\"top\", \"person\", \"organizationalPerson\", \"user\"],\n \"cn\": \"$principal_name\",\n #if( $is_service )\n \"servicePrincipalName\": \"$principal_name\",\n #end\n \"userPrincipalName\": \"$normalized_principal\",\n \"unicodePwd\": \"$password\",\n \"accountExpires\": \"0\",\n \"userAccountControl\": \"66048\"}"
+ }
+ }
+ }
+ }
+]
+```
+Example payload when using IPA:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "ipa",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.KDC.SERVER",
+ "master_kdc" : "FQDN.MASTER.KDC.SERVER",
+ "admin_server_host" : "FQDN.ADMIN.KDC.SERVER",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false"
+ }
+ }
+ }
+ }
+]
+```
+
+#### Create the KERBEROS_CLIENT host components
+_Once for each host, replace HOST_NAME_
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d '{"host_components" : [{"HostRoles" : {"component_name":"KERBEROS_CLIENT"}}]}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/hosts?Hosts/host_name=HOST_NAME
+```
+
+#### Install the KERBEROS service and components
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS
+```
+
+#### Stop all services
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services
+```
+
+#### Get the default Kerberos Descriptor
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://AMBARI_SERVER:8080/api/v1/stacks/STACK_NAME/versions/STACK_VERSION/artifacts/kerberos_descriptor
+```
+
+#### Get the customized Kerberos Descriptor (if previously set)
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/artifacts/kerberos_descriptor
+```
+
+#### Set the Kerberos Descriptor
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/artifacts/kerberos_descriptor
+```
+
+Payload:
+
+```
+{
+ "artifact_data" : {
+ ...
+ }
+}
+```
+
+**_Note:_** The Kerberos Descriptor payload may be a complete Kerberos Descriptor or just the updates to overlay on top of the default Kerberos Descriptor.
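+
+For instance, a minimal overlay that only changes the realm might look like the following sketch (see [The Kerberos Descriptor](kerberos_descriptor.md) for the full structure):
+
+```
+{
+  "artifact_data" : {
+    "properties" : {
+      "realm" : "EXAMPLE.COM"
+    }
+  }
+}
+```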
+
+#### Set the KDC administrator credentials
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/credentials/kdc.admin.credential
+```
+
+Payload:
+
+```
+{
+ "Credential" : {
+ "principal" : "admin/admin@EXAMPLE.COM",
+ "key" : "h4d00p&!",
+ "type" : "temporary"
+ }
+}
+```
+
+**_Note:_** the _principal_ and _key_ (password) values should be updated to match the correct credentials
+for the KDC administrator account
+
+**_Note:_** the `type` value may be `temporary` or `persisted`; however the value may only be `persisted`
+if Ambari's credential store has been previously setup.
+
+#### Enable Kerberos
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME
+```
+
+Payload
+
+```
+{
+ "Clusters": {
+ "security_type" : "KERBEROS"
+ }
+}
+```
+
+#### Start all services
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "STARTED"}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services
+```
+
+<a name="miscellaneous-technical-information"></a>
+
+### Miscellaneous Technical Information
+
+<a name="password-generation"></a>
+
+#### Password Generation
+
+When enabling Kerberos using an Active Directory, Ambari must use an internal mechanism to build
+the keytab files. This is because keytab files cannot be requested remotely from an Active Directory.
+In order to create keytab files, Ambari needs to know the password for each relevant Kerberos
+identity. Therefore, Ambari sets or updates the identity's password as needed.
+
+The password for each Ambari-managed account in an Active Directory is randomly generated and
+stored only long enough in memory to set the account's password and generate the keytab file.
+Passwords are generated using the following user-settable parameters:
+
+- Password length (`kerberos-env/password_length`)
+ - Default Value: 20
+- Minimum number of lower-cased letters (`kerberos-env/password_min_lowercase_letters`)
+ - Default Value: 1
+ - Character Set: `abcdefghijklmnopqrstuvwxyz`
+- Minimum number of upper-cased letters (`kerberos-env/password_min_uppercase_letters`)
+ - Default Value: 1
+ - Character Set: `ABCDEFGHIJKLMNOPQRSTUVWXYZ`
+- Minimum number of digits (`kerberos-env/password_min_digits`)
+ - Default Value: 1
+ - Character Set: `1234567890`
+- Minimum number of punctuation characters (`kerberos-env/password_min_punctuation`)
+ - Default Value: 1
+ - Character Set: `?.!$%^*()-_+=~`
+- Minimum number of whitespace characters (`kerberos-env/password_min_whitespace`)
+ - Default Value: 0
+ - Character Set: `(space character)`
+
+The following algorithm is executed:
+
+1. Create an array to store password characters
+2. For each character class (upper-case letter, lower-case letter, digit, ...), randomly select the
+minimum number of characters from the relevant character set and store them in the array
+3. For the number of characters calculated as the difference between the expected password length and
+the number of characters already collected, randomly select a character from a randomly-selected character
+class and store it into the array
+4. For the number of characters expected in the password, randomly pull one from the array and append
+to the password result
+5. Return the generated password
+
+To generate a random integer used to identify an index within a character set, a static instance of
+the `java.security.SecureRandom` class ([http://docs.oracle.com/javase/7/docs/api/java/security/SecureRandom.html](http://docs.oracle.com/javase/7/docs/api/java/security/SecureRandom.html))
+is used.
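+
+For illustration, the following is a minimal Java sketch of the algorithm above; the class and method names are hypothetical, and the default minimums and character sets listed above are hard-coded for brevity:
+
+```java
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.List;
+
+/** Illustrative sketch of the password generation algorithm described above. */
+public class PasswordGeneratorSketch {
+  private static final SecureRandom RANDOM = new SecureRandom();
+
+  // Character classes and their minimum counts (using the default values above)
+  private static final String[] CLASSES = {
+      "abcdefghijklmnopqrstuvwxyz",
+      "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
+      "1234567890",
+      "?.!$%^*()-_+=~"
+  };
+  private static final int[] MINIMUMS = {1, 1, 1, 1};
+
+  public static String generate(int length) {
+    // Step 1: an array to store password characters
+    List<Character> chars = new ArrayList<>();
+    // Step 2: satisfy the minimum count for each character class
+    for (int i = 0; i < CLASSES.length; i++) {
+      for (int j = 0; j < MINIMUMS[i]; j++) {
+        chars.add(randomChar(CLASSES[i]));
+      }
+    }
+    // Step 3: fill the remainder from randomly selected classes
+    while (chars.size() < length) {
+      chars.add(randomChar(CLASSES[RANDOM.nextInt(CLASSES.length)]));
+    }
+    // Step 4: pull characters out of the array in random order
+    StringBuilder password = new StringBuilder(length);
+    while (!chars.isEmpty()) {
+      password.append(chars.remove(RANDOM.nextInt(chars.size())));
+    }
+    // Step 5: return the generated password
+    return password.toString();
+  }
+
+  private static char randomChar(String characterSet) {
+    return characterSet.charAt(RANDOM.nextInt(characterSet.length()));
+  }
+}
+```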
diff --git a/docs/ambari-design/kerberos/index.md b/docs/ambari-design/kerberos/index.md
new file mode 100644
index 0000000..51c56e1
--- /dev/null
+++ b/docs/ambari-design/kerberos/index.md
@@ -0,0 +1,125 @@
+---
+slug: /kerberos
+---
+
+# Ambari Kerberos Automation
+
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+
+- [Introduction](#introduction)
+ - [How it Works](#how-it-works)
+ - [Enabling Kerberos](#enabling-kerberos)
+ - [Adding Components](#adding-components)
+ - [Adding Hosts](#adding-hosts)
+ - [Regenerating Keytabs](#regenerating-keytabs)
+ - [Disabling Kerberos](#disabling-kerberos)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="introduction"></a>
+
+## Introduction
+
+Before Ambari 2.0.0, configuring an Ambari cluster to use Kerberos involved setting up the Kerberos
+client infrastructure on each host, creating the required identities, generating and distributing the
+needed keytabs files, and updating the necessary configuration properties. On a small cluster this may
+not seem to be too large of an effort; however as the size of the cluster increases, so does the amount
+of work that is involved.
+
+This is where Ambari’s Kerberos Automation facility can help. It performs all of these steps and
+also helps to maintain the cluster as new services and hosts are added.
+
+Kerberos automation can be invoked using Ambari’s REST API as well as via the _Enable Kerberos Wizard_
+in the Ambari UI.
+
+<a name="how-it-works"></a>
+
+### How it works
+
+Stacks and services that can utilize Kerberos credentials for authentication must have a Kerberos
+Descriptor declaring required Kerberos identities and how to update configurations. The Ambari
+infrastructure uses this data, and any updates applied by an administrator, to perform Kerberos
+related operations such as initially enabling Kerberos, enabling Kerberos on added hosts and
+components, regenerating credentials, and disabling Kerberos.
+
+It should be noted that the Kerberos service is required to be installed on all hosts of the cluster
+before any automated tasks can be performed. If using the Ambari UI, this should happen as part of the
+Enable Kerberos wizard workflow.
+
+<a name="enabling-kerberos"></a>
+
+### Enabling Kerberos
+
+When enabling Kerberos, all of the services in the cluster are expected to be stopped. The main
+reason for this is to avoid state issues as the services are stopped and then started when the cluster
+is transitioning to use Kerberos.
+
+The following steps are taken to enable Kerberos on the cluster en masse:
+
+1. Create or update accounts in the configured KDC (or Active Directory)
+2. Generate keytab files and distribute them to the appropriate hosts
+3. Update relevant configurations
+
+<a name="adding-components"></a>
+
+### Adding Components
+
+If Kerberos is enabled for the Ambari cluster, whenever new components are added, the new components
+will automatically be configured for Kerberos, and any necessary principals and keytab files will be
+created and distributed as needed.
+
+For each new component, the following steps will occur before the component is installed and started:
+
+1. Update relevant configurations
+2. Create or update accounts in the configured KDC (or Active Directory)
+3. Generate keytab files and distribute them to the appropriate hosts
+
+<a name="adding-hosts"></a>
+
+### Adding Hosts
+
+When adding a new host, the Kerberos client must be installed on it. This does not happen automatically;
+however, the _Add Host Wizard_ in the Ambari UI will perform this step if Kerberos was enabled for
+the Ambari cluster. Once the host is added, generally one or more components are installed on
+it - see [Adding Components](#adding-components).
+
+<a name="regenerating-keytabs"></a>
+
+### Regenerating Keytabs
+
+Once a cluster has Kerberos enabled, it may be necessary to regenerate keytabs. There are two options
+for this:
+
+- `all` - create any missing principals and unconditionally update the passwords for existing principals, then create and distribute all relevant keytab files
+- `missing` - create any missing principals; then create and distribute keytab files for the newly-created principals
+
+In either case, the affected services should be restarted after the regeneration process is complete.
+
+If performed through the Ambari UI, the user will be asked which keytab regeneration mode to use and
+whether services are to be restarted or not.
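+
+Through the REST API, the regeneration mode is expressed as a directive on the same kind of cluster
+update used when enabling Kerberos; an illustrative sketch (the cluster name is a placeholder):
+
+```
+PUT /api/v1/clusters/MyCluster?regenerate_keytabs=all
+
+{
+  "Clusters" : {
+    "security_type" : "KERBEROS"
+  }
+}
+```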
+
+<a name="disabling-kerberos"></a>
+
+### Disabling Kerberos
+
+In the event Kerberos needs to be removed from the Ambari cluster, Ambari will remove the managed
+Kerberos identities, keytab files, and Kerberos-specific configurations. The Ambari UI will perform
+the steps of stopping and starting the services as well as removing the Kerberos service; however,
+these steps must be performed manually when using the Ambari REST API.
diff --git a/docs/ambari-design/kerberos/kerberos_descriptor.md b/docs/ambari-design/kerberos/kerberos_descriptor.md
new file mode 100644
index 0000000..2dd4798
--- /dev/null
+++ b/docs/ambari-design/kerberos/kerberos_descriptor.md
@@ -0,0 +1,855 @@
+---
+title: The Kerberos Descriptor
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](#the-kerberos-descriptor)
+ - [Components of a Kerberos Descriptor](#components-of-a-kerberos-descriptor)
+ - [Stack-level Properties](#stack-level-properties)
+ - [Stack-level Identities](#stack-level-identities)
+ - [Stack-level Auth-to-local-properties](#stack-level-auth-to-local-properties)
+    - [Stack-level Configurations](#stack-level-configurations)
+ - [Services](#services)
+ - [Service-level Identities](#service-level-identities)
+ - [Service-level Auth-to-local-properties](#service-level-auth-to-local-properties)
+ - [Service-level Configurations](#service-level-configurations)
+ - [Components](#service-components)
+ - [Component-level Identities](#component-level-identities)
+ - [Component-level Auth-to-local-properties](#component-level-auth-to-local-properties)
+ - [Component-level Configurations](#component-level-configurations)
+ - [Kerberos Descriptor specifications](#kerberos-descriptor-specifications)
+ - [properties](#properties)
+ - [auth-to-local-properties](#auth-to-local-properties)
+ - [configurations](#configurations)
+ - [identities](#identities)
+ - [principal](#principal)
+ - [keytab](#keytab)
+ - [services](#services)
+ - [components](#components)
+ - [Examples](#examples)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="the-kerberos-descriptor"></a>
+
+## The Kerberos Descriptor
+
+The Kerberos Descriptor is a JSON-formatted text file containing information needed by Ambari to enable
+or disable Kerberos for a stack and its services. This file must be named **_kerberos.json_** and should
+be in the root directory of the relevant stack or service definition. Kerberos Descriptors are meant to
+be hierarchical such that details in the stack-level descriptor can be overwritten (or updated) by details
+in the service-level descriptors.
+
+For the services in a stack to be Kerberized, there must be a stack-level Kerberos Descriptor. This
+ensures that even if a common service has a Kerberos Descriptor, it will not be Kerberized unless the
+relevant stack indicates that it supports Kerberos by having a stack-level Kerberos Descriptor.
+
+For a component of a service to be Kerberized, there must be an entry for it in its containing service's
+service-level descriptor. This allows some of a service's components to be managed and other
+components of that service to be ignored by the automated Kerberos facility.
+
+Kerberos Descriptors are inherited from the base stack or service, but may be overridden as a full
+descriptor - partial descriptors are not allowed.
+
+A complete descriptor (which is built using the stack-level descriptor, the service-level descriptors,
+and any updates from user input) has the following structure:
+
+- Stack-level Properties
+- Stack-level Identities
+- Stack-level Configurations
+- Stack-level Auth-to-local-properties
+- Services
+ - Service-level Identities
+ - Service-level Auth-to-local-properties
+ - Service-level Configurations
+ - Components
+ - Component-level Identities
+ - Component-level Auth-to-local-properties
+ - Component-level Configurations
+
+Each level of the descriptor inherits the data from its parent. This data, however, may be overridden
+if necessary. For example, a component will inherit the configurations and identities of its container
+service; which in turn inherits the configurations and identities from the stack.
+
+<a name="components-of-a-kerberos-descriptor"></a>
+
+### Components of a Kerberos Descriptor
+
+<a name="stack-level-properties"></a>
+
+#### Stack-level Properties
+
+Stack-level properties is an optional set of name/value pairs that can be used in variable replacements.
+For example, if a property named "**_property1_**" exists with the value of "**_value1_**", then any instance of
+"**_${property1}_**" within a configuration property name or configuration property value will be replaced
+with "**_value1_**".
+
+This property is only relevant in the stack-level Kerberos Descriptor and may not be overridden by
+lower-level descriptors.
+
+See [properties](#properties).
+
+<a name="stack-level-identities"></a>
+
+#### Stack-level Identities
+
+Stack-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are common among all services in the stack. An example of such an identity is the
+Ambari smoke test user, which is used by all services to perform service check operations. Service-
+and component-level identities may reference (and specialize) stack-level identities using the
+identity’s name with a forward slash (/) preceding it. For example if there was a stack-level identity
+with the name "smokeuser", then a service or a component may create an identity block that references
+and specializes it by declaring a "**_reference_**" property and setting it to "/smokeuser". Within
+this identity block, details of the identity may be overridden as necessary. This does not alter
+the stack-level identity; it essentially creates a copy of it and updates the copy's properties.
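+
+For example, a minimal sketch of such a reference (the configuration specification is illustrative):
+
+```
+{
+  "name" : "service1_smokeuser",
+  "reference" : "/smokeuser",
+  "principal" : {
+    "configuration" : "service1-site/smokeuser.principal"
+  }
+}
+```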
+
+See [identities](#identities).
+
+<a name="stack-level-auth-to-local-properties"></a>
+
+#### Stack-level Auth-to-local-properties
+
+Stack-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="stack-level-configurations"></a>
+
+#### Stack-level Configurations
+
+Stack-level configurations is an optional configurations block containing a list of zero or more
+configuration descriptors that are common among all services in the stack. Configuration descriptors
+are overridable due to the structure of the data. However, overriding configuration properties may
+create undesired behavior since it is not known until after the Kerberization process is complete
+what value a property will have.
+
+See [configurations](#configurations).
+
+<a name="services"></a>
+
+#### Services
+
+Services is a list of zero or more service descriptors. A stack-level Kerberos Descriptor should not
+list any services; however, a service-level Kerberos Descriptor should contain at least one.
+
+See [services](#services).
+
+<a name="service-level-identities"></a>
+
+#### Service-level Identities
+
+Service-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are common among all components of the service. Component-level identities may
+reference (and specialize) service-level identities by specifying a relative or an absolute path
+to it.
+
+For example if there was a service-level identity with the name "service_identity", then a child
+component may create an identity block that references and specializes it by setting its "reference"
+attribute to "../service_identity" or "/service_name/service_identity" and overriding any values as
+necessary. This does not override the service-level identity, it essentially creates a copy of it and
+updates the copy's properties.
+
+##### Examples
+
+```
+{
+ "name" : "relative_path_example",
+ "reference" : "../service_identity",
+ ...
+}
+```
+
+```
+{
+ "name" : "absolute_path_example",
+ "reference" : "/SERVICE/service_identity",
+ ...
+}
+```
+
+**Note**: By using the absolute path to an identity, any service-level identity may be referenced by
+any other service or component.
+
+See [identities](#identities).
+
+<a name="service-level-auth-to-local-properties"></a>
+
+#### Service-level Auth-to-local-properties
+
+Service-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="service-level-configurations"></a>
+
+#### Service-level Configurations
+
+Service-level configurations is an optional configurations block listing zero or more configuration
+descriptors that are common among all components within a service. Configuration descriptors may be
+overridden due to the structure of the data. However, overriding configuration properties may create
+undesired behavior since it is not known until after the Kerberization process is complete what value
+a property will have.
+
+See [configurations](#configurations).
+
+<a name="service-components"></a>
+
+#### Components
+
+Components is a list of zero or more component descriptor blocks.
+
+See [components](#components).
+
+<a name="component-level-identities"></a>
+
+#### Component-level Identities
+
+Component-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are specific to the component. A Component-level identity may be referenced
+(and specialized) by using the absolute path to it (`/service_name/component_name/identity_name`).
+This does not override the component-level identity, it essentially creates a copy of it and updates
+the copy's properties.
+
+See [identities](#identities).
+
+<a name="component-level-auth-to-local-properties"></a>
+
+#### Component-level Auth-to-local-properties
+
+Component-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="component-level-configurations"></a>
+
+#### Component-level Configurations
+
+Component-level configurations is an optional configurations block listing zero or more configuration
+descriptors that are specific to the component.
+
+See [configurations](#configurations).
+
+<a name="kerberos-descriptor-specifications"></a>
+
+### Kerberos Descriptor Specifications
+
+<a name="properties"></a>
+
+#### properties
+
+The `properties` block is only valid in the stack-level Kerberos Descriptor file. This block is
+a set of name/value pairs as follows:
+
+```
+"properties" : {
+ "property_1" : "value_1",
+ "property_2" : "value_2",
+ ...
+}
+```
+
+<a name="auth-to-local-properties"></a>
+
+#### auth-to-local-properties
+
+The `auth-to-local-properties` block is valid in the stack-, service-, and component-level
+descriptors. This block is a list of configuration specifications
+(`config-type/property_name[|concatenation_scheme]`) indicating which properties contain
+auth-to-local rules that should be dynamically updated based on the identities used within the
+Kerberized cluster.
+
+The specification optionally declares the concatenation scheme to use to append
+the rules into a rule set value. If specified, it must be one of the following schemes:
+
+- **`new_lines`** - rules in the rule set are separated by a new line (`\n`)
+- **`new_lines_escaped`** - rules in the rule set are separated by a `\` and a new line (`\n`)
+- **`spaces`** - rules in the rule set are separated by a whitespace character (effectively placing all rules in a single line)
+
+If not specified, the default concatenation scheme is `new_lines`.
+
+```
+"auth-to-local-properties" : [
+ "core-site/hadoop.security.auth_to_local",
+ "service.properties/http.authentication.kerberos.name.rules|new_lines_escaped",
+ ...
+]
+```
+
+<a name="configurations"></a>
+
+#### configurations
+
+A `configurations` block may exist in stack-, service-, and component-level descriptors.
+This block is a list of one or more configuration blocks containing a single structure named using
+the configuration type and containing values for each relevant property.
+
+Each property name and value may be a concrete value or contain variables to be replaced using values
+from the stack-level `properties` block or any available configuration. Properties from the `properties`
+block are referenced by name (`${property_name}`), configuration properties are referenced by
+configuration specification (`${config-type/property_name}`), and Kerberos principals are referenced
+by the principal path (`principals/SERVICE/COMPONENT/principal_name`).
+
+```
+"configurations" : [
+ {
+ "config-type-1" : {
+ "${cluster-env/smokeuser}_property" : "value1",
+ "some_realm_property" : "${realm}",
+ ...
+ }
+ },
+ {
+ "config-type-2" : {
+ "property-2" : "${cluster-env/smokeuser}",
+ ...
+ }
+ },
+ ...
+]
+```
+
+If `cluster-env/smokeuser` was `"ambari-qa"` and `realm` was `"EXAMPLE.COM"`, the above block would
+effectively be translated to
+
+```
+"configurations" : [
+ {
+ "config-type-1" : {
+ "ambari-qa_property" : "value1",
+ "some_realm_property" : "EXAMPLE.COM",
+ ...
+ }
+ },
+ {
+ "config-type-2" : {
+ "property-2" : "ambari-qa",
+ ...
+ }
+ },
+ ...
+]
+```
+
+<a name="identities"></a>
+
+#### identities
+
+An `identities` descriptor may exist in stack-, service-, and component-level descriptors. This block
+is a list of zero or more identity descriptors. Each identity descriptor is a block containing a `name`,
+an optional `reference` identifier, an optional `principal` descriptor, and an optional `keytab`
+descriptor.
+
+The `name` property of an `identity` descriptor should be a concrete name that is unique within its
+`local` scope (stack, service, or component). However, to maintain backwards-compatibility with
+previous versions of Ambari, it may be a reference identifier to some other identity in the
+Kerberos Descriptor. This feature is deprecated and may not be available in future versions of Ambari.
+
+The `reference` property of an `identity` descriptor is optional. If it exists, it indicates that the
+properties from the referenced identity are to be used as the base for the current identity, and any
+properties specified in the local identity block override the base data. In this scenario, the base data is copied
+to the local identities and therefore changes are realized locally, not globally. Referenced identities
+may be hierarchical, so a referenced identity may reference another identity, and so on. Because of
+this, care must be taken not to create cyclic references. Reference values must be in the form of a
+relative or absolute _path_ to the referenced identity descriptor. Relative _paths_ start with a `../`
+and may be specified in component-level identity descriptors to reference an identity descriptor
+in the parent service. Absolute _paths_ start with a `/` and may be specified at any level as follows:
+
+- **Stack-level** identity reference: `/identity_name`
+- **Service-level** identity reference: `/SERVICE_NAME/identity_name`
+- **Component-level** identity reference: `/SERVICE_NAME/COMPONENT_NAME/identity_name`
+
+```
+"identities" : [
+ {
+ "name" : "local_identity",
+ "principal" : {
+ ...
+ },
+ "keytab" : {
+ ...
+ }
+ },
+ {
+ "name" : "/smokeuser",
+ "principal" : {
+ "configuration" : "service-site/principal_property_name"
+ },
+ "keytab" : {
+ "configuration" : "service-site/keytab_property_name"
+ }
+ },
+ ...
+]
+```
+
+<a name="principal"></a>
+
+#### principal
+
+The `principal` block is an optional block inside an `identity` descriptor block. It declares the
+details about the identity’s principal, including the principal’s `value`, the `type` (user or service),
+the relevant `configuration` property, and a local username mapping. All properties are optional; however
+if no base or default value is available (via the parent identity's `reference` value) for all properties,
+the principal may be ignored.
+
+The `value` property of the principal is expected to be the normalized principal name, including the
+principal’s components and realm. In most cases, the realm should be specified using the realm variable
+(`${realm}` or `${kerberos-env/realm}`). Also, in the case of a service principal, "`_HOST`" should be
+used to represent the relevant hostname. This value is typically replaced on the agent side by either
+the agent-side scripts or the services themselves to be the hostname of the current host. However the
+built-in hostname variable (`${hostname}`) may be used if "`_HOST`" replacement on the agent-side is
+not available for the service. Examples: `smokeuser@${realm}`, `service/_HOST@${realm}`.
+
+The `type` property of the principal may be either `user` or `service`. If not specified, the type is
+assumed to be `user`. This value dictates how the identity is to be created in the KDC or Active Directory.
+It is especially important in the Active Directory case due to how accounts are created. It also
+indicates to Ambari how to handle the principal and relevant keytab file regarding user interface
+behavior and data caching.
+
+The `configuration` property is an optional configuration specification (`config-type/property_name`)
+that is to be set to this principal's `value` (after its variables have been replaced).
+
+The `local_username` property, if supplied, indicates which local user account to use when generating
+auth-to-local rules for this identity. If not specified, no explicit auth-to-local rule will be generated.
+
+```
+"principal" : {
+ "value": "${cluster-env/smokeuser}@${realm}",
+ "type" : "user" ,
+ "configuration": "cluster-env/smokeuser_principal_name",
+ "local_username" : "${cluster-env/smokeuser}"
+}
+```
+
+```
+"principal" : {
+ "value": "component1/_HOST@${realm}",
+ "type" : "service" ,
+ "configuration": "service-site/component1.principal"
+}
+```
+
+<a name="keytab"></a>
+
+#### keytab
+
+The `keytab` block is an optional block inside an `identity` descriptor block. It describes how to
+create and store the relevant keytab file. This block declares the keytab file's path in the local
+filesystem of the destination host, the permissions to assign to that file, and the relevant
+configuration property.
+
+The `file` property declares an absolute path to use to store the keytab file when distributing to
+relevant hosts. If this is not supplied, the keytab file will not be created.
+
+The `owner` property is an optional block indicating the local user account to assign as the owner of
+the file and what access (`"rw"` - read/write; `"r"` - read-only) should
+be granted to that user. By default, the owner will be given read-only access.
+
+The `group` property is an optional block indicating which local group to assign as the group owner
+of the file and what access (`"rw"` - read/write; `"r"` - read-only; `""` - no access) should be granted
+to local user accounts in that group. By default, the group will be given no access.
+
+The `configuration` property is an optional configuration specification (`config-type/property_name`)
+that is to be set to the path of this keytab file (after any variables have been replaced).
+
+```
+"keytab" : {
+ "file": "${keytab_dir}/smokeuser.headless.keytab",
+ "owner": {
+ "name": "${cluster-env/smokeuser}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "${cluster-env/smokeuser_keytab}"
+}
+```
+
+<a name="services"></a>
+
+#### services
+
+A `services` block may exist in the stack-level and the service-level Kerberos Descriptor file.
+This block is a list of zero or more service descriptors to add to the Kerberos Descriptor.
+
+Each service block contains a service `name` and optional `identities`, `auth_to_local_properties`,
+`configurations`, and `components` blocks.
+
+```
+"services": [
+ {
+ "name": "SERVICE1_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ],
+ "components": [
+ ...
+ ]
+ },
+ {
+ "name": "SERVICE2_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ],
+ "components": [
+ ...
+ ]
+ },
+  ...
+]
+```
+
+<a name="components"></a>
+
+#### components
+
+A `components` block may exist within a `service` descriptor block. This block is a list of zero or
+more component descriptors belonging to the containing service descriptor. Each component descriptor
+is a block containing a component `name`, and optional `identities`, `auth_to_local_properties`,
+and `configurations` blocks.
+
+```
+"components": [
+ {
+ "name": "COMPONENT_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ]
+ },
+ ...
+]
+```
+
+<a name="examples"></a>
+
+### Examples
+
+#### Example Stack-level Kerberos Descriptor
+The following example is annotated for descriptive purposes. The annotations are not valid in a real
+JSON-formatted file.
+
+```
+{
+ // Properties that can be used in variable replacement operations.
+ // For example, ${keytab_dir} will resolve to "/etc/security/keytabs".
+ // Since variable replacement is recursive, ${realm} will resolve
+ // to ${kerberos-env/realm}, which in-turn will resolve to the
+ // declared default realm for the cluster
+ "properties": {
+ "realm": "${kerberos-env/realm}",
+ "keytab_dir": "/etc/security/keytabs"
+ },
+ // A list of global Kerberos identities. These may be referenced
+ // using /identity_name. For example the “spnego” identity may be
+ // referenced using “/spnego”
+ "identities": [
+ {
+ "name": "spnego",
+ // Details about this identity's principal. This instance does not
+ // declare any value for configuration or local username. That is
+      // left up to the services and components that wish to reference
+ // this principal and set overrides for those values.
+ "principal": {
+ "value": "HTTP/_HOST@${realm}",
+ "type" : "service"
+ },
+ // Details about this identity’s keytab file. This keytab file
+ // will be created in the configured keytab file directory with
+ // read-only access granted to root and users in the cluster’s
+ // default user group (typically, hadoop). To ensure that only
+ // a single copy exists on the file system, references to this
+ // identity should not override the keytab file details;
+ // however if it is desired that multiple keytab files are
+ // created, these values may be overridden in a reference
+ // within a service or component. Since no configuration
+      // specification is set, the keytab file location will not
+ // be set in any configuration file by default. Services and
+ // components need to reference this identity to update this
+ // value as needed.
+ "keytab": {
+ "file": "${keytab_dir}/spnego.service.keytab",
+ "owner": {
+ "name": "root",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ }
+ }
+ },
+ {
+ "name": "smokeuser",
+ // Details about this identity's principal. This instance declares
+ // a configuration and local username mapping. Services and
+ // components can override this to set additional configurations
+ // that should be set to this principal value. Overriding the
+ // local username may create undesired behavior since there may be
+ // conflicting entries in relevant auth-to-local rule sets.
+ "principal": {
+ "value": "${cluster-env/smokeuser}@${realm}",
+ "type" : "user",
+ "configuration": "cluster-env/smokeuser_principal_name",
+ "local_username" : "${cluster-env/smokeuser}"
+ },
+ // Details about this identity’s keytab file. This keytab file
+ // will be created in the configured keytab file directory with
+ // read-only access granted to the configured smoke user
+ // (typically ambari-qa) and users in the cluster’s default
+ // user group (typically hadoop). To ensure that only a single
+ // copy exists on the file system, references to this identity
+ // should not override the keytab file details; however if it
+ // is desired that multiple keytab files are created, these
+ // values may be overridden in a reference within a service or
+ // component.
+ "keytab": {
+ "file": "${keytab_dir}/smokeuser.headless.keytab",
+ "owner": {
+ "name": "${cluster-env/smokeuser}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "cluster-env/smokeuser_keytab"
+ }
+ }
+ ]
+}
+```
+
+#### Example Service-level Kerberos Descriptor
+The following example is annotated for descriptive purposes. The annotations are not valid in a real
+JSON-formatted file.
+
+```
+{
+ // One or more services may be listed in a service-level Kerberos
+ // Descriptor file
+ "services": [
+ {
+ "name": "SERVICE_1",
+ // Service-level identities to be created if this service is installed.
+ // Any relevant keytab files will be distributed to hosts with at least
+ // one of the components on it.
+ "identities": [
+ // Service-specific identity declaration, declaring all properties
+        // needed to initiate the creation of the principal and keytab files,
+ // as well as setting the service-specific configurations. This may
+ // be referenced by contained components using ../service1_identity.
+ {
+ "name": "service1_identity",
+ "principal": {
+ "value": "service1/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/service1.principal"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/service1.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "service1-site/service1.keytab.file"
+ }
+ },
+ // Service-level identity referencing the stack-level spnego
+ // identity and overriding the principal and keytab configuration
+ // specifications.
+ {
+ "name": "service1_spnego",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/service1.web.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/service1.web.keytab.file"
+ }
+ },
+ // Service-level identity referencing the stack-level smokeuser
+      // identity. No properties are being overridden.
+      // This ensures that the smokeuser principal is created and its
+      // keytab file is distributed to all hosts where components of
+      // this service are installed.
+ {
+ "name": "service1_smokeuser",
+ "reference": "/smokeuser"
+ }
+ ],
+ // Properties related to this service that require the auth-to-local
+      // rules to be dynamically generated based on identities created for
+ // the cluster.
+ "auth_to_local_properties" : [
+ "service1-site/security.auth_to_local"
+ ],
+ // Configuration properties to be set when this service is installed,
+ // no matter which components are installed
+ "configurations": [
+ {
+ "service-site": {
+ "service1.security.authentication": "kerberos",
+ "service1.security.auth_to_local": ""
+ }
+ }
+ ],
+ // A list of components related to this service
+ "components": [
+ {
+ "name": "COMPONENT_1",
+ // Component-specific identities to be created when this component
+ // is installed. Any keytab files specified will be distributed
+ // only to the hosts where this component is installed.
+ "identities": [
+ // An identity "local" to this component
+ {
+ "name": "component1_service_identity",
+ "principal": {
+ "value": "component1/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/comp1.principal",
+ "local_username" : "${service1-env/service_user}"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/s1c1.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": ""
+ },
+ "configuration": "service1-site/comp1.keytab.file"
+ }
+ },
+ // The stack-level spnego identity overridden to set component-specific
+ // configurations
+ {
+ "name": "component1_spnego_1",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/comp1.spnego.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp1.spnego.keytab.file"
+ }
+ },
+ // The stack-level spnego identity overridden to set a different set of component-specific
+ // configurations
+ {
+ "name": "component1_spnego_2",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/comp1.someother.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp1.someother.keytab.file"
+ }
+ }
+ ],
+ // Component-specific configurations to set if this component is installed
+ "configurations": [
+ {
+ "service-site": {
+ "comp1.security.type": "kerberos"
+ }
+ }
+ ]
+ },
+ {
+ "name": "COMPONENT_2",
+ "identities": [
+ {
+ "name": "component2_service_identity",
+ "principal": {
+ "value": "component2/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/comp2.principal",
+ "local_username" : "${service1-env/service_user}"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/s1c2.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": ""
+ },
+ "configuration": "service1-site/comp2.keytab.file"
+ }
+ },
+ // The service-level service1_identity identity overridden to
+ // set component-specific configurations
+ {
+ "name": "component2_service1_identity",
+ "reference": "../service1_identity",
+ "principal": {
+ "configuration": "service1-site/comp2.service.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp2.service.keytab.file"
+ }
+ }
+ ],
+ "configurations" : [
+ {
+ "service-site" : {
+ "comp2.security.type": "kerberos"
+ }
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
diff --git a/docs/ambari-design/kerberos/kerberos_service.md b/docs/ambari-design/kerberos/kerberos_service.md
new file mode 100644
index 0000000..2045aa7
--- /dev/null
+++ b/docs/ambari-design/kerberos/kerberos_service.md
@@ -0,0 +1,341 @@
+---
+title: The Kerberos Service
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](#the-kerberos-service)
+ - [Configurations](#configurations)
+ - [kerberos-env](#kerberos-env)
+ - [krb5-conf](#krb5-conf)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="the-kerberos-service"></a>
+
+## The Kerberos Service
+
+<a name="configurations"></a>
+
+### Configurations
+
+<a name="kerberos-env"></a>
+
+#### kerberos-env
+
+##### kdc_type
+
+The type of KDC being used.
+
+_Possible Values:_
+- `none`
+ - Ambari is not to integrate with a KDC. In this case, it is expected that the Kerberos identities
+will be created and the keytab files distributed manually
+- `mit-kdc`
+ - Ambari is to integrate with an MIT KDC
+- `active-directory`
+ - Ambari is to integrate with an Active Directory
+- `ipa`
+ - Ambari is to integrate with a FreeIPA server
+
+##### manage_identities
+
+Indicates whether the Ambari-specified user and service Kerberos identities (principals and keytab files)
+should be managed (created, deleted, updated, etc...) by Ambari (`true`) or managed manually by the
+user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+##### create_ambari_principal
+
+Indicates whether the Ambari Kerberos identity (principal and keytab file used by Ambari, itself, and
+its views) should be managed (created, deleted, updated, etc...) by Ambari (`true`) or managed manually
+by the user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+This property depends on the value of `manage_identities`: if `manage_identities` is
+`false`, `create_ambari_principal` will be assumed to be `false` as well.
+
+##### manage_auth_to_local
+
+Indicates whether the Hadoop auth-to-local rules should be managed by Ambari (`true`) or managed
+manually by the user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+##### install_packages
+
+Indicates whether Ambari should install the Kerberos client packages (`true`) or not (`false`).
+If not, it is expected that Kerberos utility programs installed by the user (such as kadmin, kinit,
+klist, and kdestroy) are compatible with MIT Kerberos 5 version 1.10.3 in command-line options and
+behaviors.
+
+_Possible Values:_ `true`, `false`
+
+##### ldap_url
+
+The URL to the Active Directory LDAP Interface. This value **must** indicate a secure channel using
+LDAPS since it is required for creating and updating passwords for Active Directory accounts.
+
+_Example:_ `ldaps://ad.example.com:636`
+
+If the `kdc_type` is `active-directory`, this property is mandatory.
+
+##### container_dn
+
+The distinguished name (DN) of the container used to store the Ambari-managed user and service principals
+within the configured Active Directory.
+
+_Example:_ `OU=hadoop,DC=example,DC=com`
+
+If the `kdc_type` is `active-directory`, this property is mandatory.
+
+##### encryption_types
+
+The supported (space-delimited) list of session key encryption types that should be returned by the KDC.
+
+_Default value:_ `aes des3-cbc-sha1 rc4 des-cbc-md5`
+
+##### realm
+
+The default realm to use when creating service principals
+
+_Example:_ `EXAMPLE.COM`
+
+This value is expected to be in all uppercase characters.
+
+##### kdc_hosts
+
+A comma-delimited list of IP addresses or FQDNs for the list of relevant KDC hosts. Optionally a
+port number may be included for each entry.
+
+_Example:_ `kdc.example.com, kdc1.example.com`
+
+_Example:_ `kdc.example.com:88, kdc1.example.com:88`
+
+##### admin_server_host
+
+The IP address or FQDN for the Kerberos administrative host. Optionally a port number may be included.
+
+_Example:_ `kadmin.example.com`
+
+_Example:_ `kadmin.example.com:88`
+
+If the `kdc_type` is `mit-kdc` or `ipa`, the value must be the FQDN of the Kerberos administrative host.
+
+##### master_kdc
+
+The IP address or FQDN of the master KDC host in a master-slave KDC deployment. Optionally a port
+number may be included.
+
+_Example:_ `kadmin.example.com`
+
+_Example:_ `kadmin.example.com:88`
+
+##### executable_search_paths
+
+A comma-delimited list of search paths to use to find Kerberos utilities like kadmin and kinit.
+
+_Default value:_ `/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin`
+
+##### password_length
+
+The required length for generated passwords.
+
+_Default value:_ `20`
+
+##### password_min_lowercase_letters
+
+The minimum number of lowercase letters (a-z) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_uppercase_letters
+
+The minimum number of uppercase letters (A-Z) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_digits
+
+The minimum number of digits (0-9) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_punctuation
+
+The minimum number of punctuation characters (?.!$%^*()-_+=~) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_whitespace
+
+The minimum number of whitespace characters required in generated passwords
+
+_Default value:_ `0`
+
+##### service_check_principal_name
+
+The principal name to use when executing the Kerberos service check
+
+_Example:_ `${cluster_name}-${short_date}`
+
+##### case_insensitive_username_rules
+
+Force principal names to resolve to lowercase local usernames in auth-to-local rules
+
+_Possible values:_ `true`, `false`
+
+_Default value:_ `false`
+
+##### ad_create_attributes_template
+
+A Velocity template to use to generate a JSON-formatted document containing the set of attribute
+names and values needed to create a new Kerberos identity in the relevant Active Directory.
+
+Variables include:
+
+- `principal_name` - the components (primary and instance) portion of the principal
+- `principal_primary` - the _primary component_ of the principal name
+- `principal_instance` - the _instance component_ of the principal name
+- `realm` - the `realm` portion of the principal
+- `realm_lowercase` - the lowercase form of the `realm` of the principal
+- `normalized_principal` - the full principal value, including the component and realm parts
+- `principal_digest` - a binhex-encoded SHA1 digest of the normalized principal
+- `principal_digest_256` - a binhex-encoded SHA256 digest of the normalized principal
+- `principal_digest_512` - a binhex-encoded SHA512 digest of the normalized principal
+- `password` - the generated password
+- `is_service` - `true` if the principal is a _service_ principal, `false` if the principal is a _user_ principal
+- `container_dn` - the `kerberos-env/container_dn` property value
+
+_Note_: A principal is made up of the following parts: primary component, instance component
+(optional), and realm:
+
+* User principal: **_`primary_component`_**@**_`realm`_**
+* Service principal: **_`primary_component`_**/**_`instance_component`_**@**_`realm`_**
+
+_Default value:_
+
+```
+{
+"objectClass": ["top", "person", "organizationalPerson", "user"],
+"cn": "$principal_name",
+#if( $is_service )
+"servicePrincipalName": "$principal_name",
+#end
+"userPrincipalName": "$normalized_principal",
+"unicodePwd": "$password",
+"accountExpires": "0",
+"userAccountControl": "66048"
+}
+```
+
+This property is mandatory and only used if the `kdc_type` is `active-directory`
+
+##### kdc_create_attributes
+
+The set of attributes to use when creating a new Kerberos identity in the relevant (MIT) KDC.
+
+_Example:_ `-requires_preauth max_renew_life=7d`
+
+This property is optional and only used if the `kdc_type` is `mit-kdc`
+
+##### ipa_user_group
+
+The group in IPA that user principals should be a member of.
+
+This property is optional and only used if the `kdc_type` is `ipa`
+
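+Putting a few of the properties above together, an illustrative `kerberos-env` sketch for an MIT KDC
+(the host names and realm are placeholders):
+
+```
+kdc_type = mit-kdc
+manage_identities = true
+realm = EXAMPLE.COM
+kdc_hosts = kdc.example.com
+admin_server_host = kadmin.example.com
+```
+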
+<a name="krb5-conf"></a>
+
+#### krb5-conf
+
+##### manage_krb5_conf
+
+Indicates whether the krb5.conf file should be managed (created, updated, etc...) by Ambari (`true`)
+or managed manually by the user (`false`).
+
+_Possible values:_ `true`, `false`
+
+_Default value:_ `false`
+
+##### domains
+
+A comma-separated list of domain names used to map server host names to the realm name.
+
+_Example:_ `host.example.com, example.com, .example.com`
+
+This property is optional
+
+##### conf_dir
+
+The krb5.conf configuration directory.
+
+_Default value:_ `/etc`
+
+##### content
+
+Customizable krb5.conf template (Jinja template engine)
+
+_Default value:_
+
+```
+[libdefaults]
+ renew_lifetime = 7d
+ forwardable = true
+ default_realm = {{realm}}
+ ticket_lifetime = 24h
+ dns_lookup_realm = false
+ dns_lookup_kdc = false
+ default_ccache_name = /tmp/krb5cc_%{uid}
+ #default_tgs_enctypes = {{encryption_types}}
+ #default_tkt_enctypes = {{encryption_types}}
+{% if domains %}
+[domain_realm]
+{%- for domain in domains.split(',') %}
+ {{domain|trim()}} = {{realm}}
+{%- endfor %}
+{% endif %}
+[logging]
+ default = FILE:/var/log/krb5kdc.log
+ admin_server = FILE:/var/log/kadmind.log
+ kdc = FILE:/var/log/krb5kdc.log
+
+[realms]
+ {{realm}} = {
+{%- if master_kdc %}
+ master_kdc = {{master_kdc|trim()}}
+{%- endif -%}
+{%- if kdc_hosts > 0 -%}
+{%- set kdc_host_list = kdc_hosts.split(',') -%}
+{%- if kdc_host_list and kdc_host_list|length > 0 %}
+ admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}
+{%- if kdc_host_list -%}
+{%- if master_kdc and (master_kdc not in kdc_host_list) %}
+ kdc = {{master_kdc|trim()}}
+{%- endif -%}
+{% for kdc_host in kdc_host_list %}
+ kdc = {{kdc_host|trim()}}
+{%- endfor -%}
+{% endif %}
+{%- endif %}
+{%- endif %}
+ }
+
+{# Append additional realm declarations below #}
+```
diff --git a/docs/ambari-design/metrics/ambari-metrics-whitelisting.md b/docs/ambari-design/metrics/ambari-metrics-whitelisting.md
new file mode 100644
index 0000000..6852652
--- /dev/null
+++ b/docs/ambari-design/metrics/ambari-metrics-whitelisting.md
@@ -0,0 +1,60 @@
+# Ambari Metrics - Whitelisting
+
+In large clusters (500+ nodes), there are sometimes performance issues seen in AMS aggregations. In the ambari-metrics-collector log file, we can see log lines like:
+
+```
+20:51:30,952 INFO 2080712366@qtp-974606690-381 AsyncProcess:1597 - #1, waiting for 13948 actions to finish
+20:51:31,601 INFO 1279097595@qtp-974606690-359 AsyncProcess:1597 - #1, waiting for 19376 actions to finish
+```
+
+In Ambari 3.0.0, we are tackling these performance issues through a complete schema and aggregation logic revamp. Until then, we can use AMS whitelisting to reduce the number of metrics tracked by AMS, thereby solving this scale problem.
+
+## How do we enable whitelisting in AMS?
+
+**Until Ambari 2.4.3**
+ A metric whitelist file can be used to track the set of metrics in AMS. All other metrics will be discarded.
+
+**STEPS**
+
+* The metric whitelist file is present in `/etc/ambari-metrics-collector/conf`. If not present (in older Ambari versions), it can be downloaded from https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-timelineservice/conf/unix/metrics_whitelist to the collector host.
+* Add the config `ams-site : timeline.metrics.whitelist.file = <path_to_whitelist_file>`
+* Restart the AMS collector
+* Verify the whitelisting config was used: in the ambari-metrics-collector log file, look for the line 'Whitelisting # metrics'.
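+
+For reference, the whitelist file is a plain text file listing one metric name per line; a short
+illustrative sketch (actual metric names will vary by cluster):
+
+```
+jvm.JvmMetrics.MemHeapUsedM
+regionserver.Server.Get_num_ops
+dfs.FSNamesystem.CapacityUsed
+```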
+
+**From Ambari 2.5.0 onwards**
+From Ambari 2.5.0, more refinements for whitelisting were included.
+
+* **App Blacklisting** - Blacklist metrics from one or more services. Other service metrics will be entirely allowed or controlled through a whitelist file.
+
+ ```
+ ams-site : timeline.metrics.apps.blacklist = hbase,namenode
+ ```
+
+* **App Whitelisting** - Whitelist metrics from one or more services.
+
+ ```
+ ams-site:timeline.metrics.apps.whitelist = nimbus,datanode
+ ```
+
+  NOTE: The app name can be found from the metadata URL:
+
+ ```
+  http://<metrics_collector_host>:6188/ws/v1/timeline/metrics/metadata
+ ```
+
+* **Metric Whitelisting** - Same as the whitelisting method in Ambari 2.4.3 (through a whitelist file).
+In addition to supplying metric names in the whitelist file, patterns can also be supplied using the `._p_` prefix. For example, patterns can be specified as follows:
+
+```
+._p_dfs.FSNamesystem.*
+._p_jvm.JvmMetrics*
+```
+
+An example of a metric whitelisting file that has both metrics and patterns - https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-timelineservice/src/test/resources/test_data/metric_whitelist.dat.
+
+These whitelisting/blacklisting techniques can be used together.
+
+* If you just have `timeline.metrics.whitelist.file = <some_file>`, only metrics in that file will be allowed (irrespective of whatever apps might be sending them).
+* If you just have `timeline.metrics.apps.blacklist = datanode`, all datanode metrics will be disallowed. Metrics from all other services will be allowed.
+* If you just have `timeline.metrics.apps.whitelist = namenode`, it is not useful since there is no blacklisting at all.
+* If you have metric whitelisting enabled (through a file) and have `timeline.metrics.apps.blacklist = datanode`, all datanode metrics will be disallowed. The whitelisted metrics from other services will be allowed.
+* If you have `timeline.metrics.apps.blacklist = datanode`, `timeline.metrics.apps.whitelist = namenode`, and metric whitelisting enabled (through a file), datanode metrics will be blacklisted, all namenode metrics will be allowed, and whitelisted metrics from other services will be allowed.
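+
+As a concrete sketch of the last combination above (the file path is illustrative), the relevant
+`ams-site` properties would be:
+
+```
+timeline.metrics.whitelist.file = /etc/ambari-metrics-collector/conf/metrics_whitelist
+timeline.metrics.apps.blacklist = datanode
+timeline.metrics.apps.whitelist = namenode
+```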
diff --git a/docs/ambari-design/metrics/ambari-server-metrics.md b/docs/ambari-design/metrics/ambari-server-metrics.md
new file mode 100644
index 0000000..380fa38
--- /dev/null
+++ b/docs/ambari-design/metrics/ambari-server-metrics.md
@@ -0,0 +1,109 @@
+# Ambari Server Metrics
+
+## Outline
+Ambari Server can be used to manage from a few tens of nodes to 1000+ nodes. In large clusters, or clusters with sub-optimal infrastructure, capturing Ambari Server performance can be useful for tuning the server as well as guiding future performance optimization efforts. Through this feature, a Metrics Source-Sink framework has been implemented within the Ambari Server, which facilitates fine-grained control of the various metric sources and eases the addition of future metric sources.
+
+Specifically, Ambari server JVM and database (EclipseLink) metric sources have been wired up to send metrics to AMS, and visualized through Grafana dashboards.
+
+## Configuration / Enabling
+* To enable Ambari Server metrics, make sure the following config file exists during Ambari Server start/restart: `/etc/ambari-server/conf/metrics.properties`.
+* Currently, only two metric sources have been implemented: a JVM metric source and a database metric source.
+* To add or remove a metric source to be tracked, modify the following config in the metrics.properties file:
+ ```
+ metric.sources=jvm,database
+ ```
+* Source specific configs are discussed in the metrics source section.
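+
+Putting these pieces together, a minimal `metrics.properties` sketch (the property names and values
+are the documented defaults from the source-specific tables below):
+
+```
+metric.sources=jvm,database
+source.jvm.class=org.apache.ambari.server.metrics.system.impl.JvmMetricsSource
+source.jvm.interval=10
+source.database.class=org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource
+source.database.performance.monitor.query.weight=HEAVY
+source.database.monitor.dumptime=60000
+```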
+
+## Metric Sources
+
+Name|Functionality|Interface|Implementation(s)
+----|-------------|---------|-----------------------
+Metrics Service |Serves as a starting point for the Metrics system.<br></br>Loads metrics configuration.<br></br>Initializes the sink. If the sink is not properly initialized (AMS is not yet deployed), it tries to re-initialize every 5 minutes asynchronously.<br></br>Initializes and starts configured sources. | org.apache.ambari.server.metrics.system.MetricsService | org.apache.ambari.server.metrics.system.impl.MetricsServiceImpl
+Metric Source | Any sub-component of Ambari Server that has metrics of interest.<br></br>Needs subset of metrics configuration corresponding to the source and the Sink to be initialized.<br></br>Periodically publishes metrics to the Sink.<br></br>Example - JVM, database etc. | org.apache.ambari.server.metrics.system.MetricsSource |org.apache.ambari.server.metrics.system.impl.JvmMetricsSource<br></br>org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource
+Metric Sink | Flushes the metrics to an external metrics collection system (Metrics Collector) | org.apache.ambari.server.metrics.system.MetricsSink | org.apache.ambari.server.metrics.system.impl.AmbariMetricSinkImpl
+
+### JVM Metrics
+
+**Working**
+
+* Collects and publishes Ambari Server JVM-related metrics using the Codahale library.
+* Metrics are collected for GC, buffers, threads, memory, and file descriptors.
+* To enable this source, add "jvm" to the metric.sources config in metrics.properties and restart Ambari Server.
+
+**Configs**
+
+Config Name|Default Value|Explanation
+-----------|-------------|---------------------
+source.jvm.class | org.apache.ambari.server.metrics.system.impl.JvmMetricsSource | Class used to collect JVM Metrics.
+source.jvm.interval | 10 | Interval, in seconds, used to denote how often metrics should be collected.
+
+**Grafana dashboard**
+
+* The 'Ambari Server - JVM' dashboard represents the metrics captured from the JvmMetricsSource.
+* Contains memory, GC, and thread-related graphs that might be of interest on a non-performing system.
+
+### Database Metrics
+
+**Working**
+
+The EclipseLink PerformanceMonitor has been extended to support a custom Ambari Database Metrics source. It provides us with monitoring data per entity and per operation on the entity.
+
+The Performance Monitor provides 2 kinds of metrics -
+
+* Counter - Number of occurrences of the operation / query. For this type of metric, the metric name starts with Counter.
+* Timer - Total cumulative time spent on the operation / query. For this type of metric, the metric name starts with Timer.
+
+For example, here are some of the metrics collected by the Database Metrics Source:
+
+* Counter.ReadObjectQuery.HostRoleCommandEntity.readHostRoleCommandEntity
+
+* Timer.ReadAllQuery.StackEntity.StackEntity.findByNameAndVersion.ObjectBuilding
+
+
+In addition to the Counter & Timer metrics collected from EclipseLink, a computed metric of Timer/Counter (Timer divided by Counter) is also sent. This metric provides the average time taken for an operation across time.
+
+For example, if
+
+```
+    Counter Metric : Counter.ReadAllQuery.HostRoleCommandEntity = 50
+    Timer Metric : Timer.ReadAllQuery.HostRoleCommandEntity = 10000
+    Computed Metric (Avg time for the operation) : ReadAllQuery.HostRoleCommandEntity = 200 (10000 div by 50)
+```
+
+As seen above, the computed metric name will be the same as the Timer & Counter metric except without the 'Timer.' / 'Counter.' prefix.
+
+To enable this source, add "**database**" to the **metric.sources** config in metrics.properties and restart Ambari Server.
+
+**Configs**
+
+Config Name|Default Value|Explanation
+-----------|-------------|---------------------
+source.database.class | org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource | Class used to collect Database Metrics from extended Performance Monitor class - org.apache.ambari.server.metrics.system.impl.AmbariPerformanceMonitor.
+source.database.performance.monitor.query.weight | HEAVY | EclipseLink Performance monitor granularity : NONE / NORMAL / HEAVY / ALL
+source.database.monitor.dumptime | 60000 | Collection interval in milliseconds
+source.database.monitor.entities | Cluster(.*)Entity,Host(.*)Entity,ExecutionCommandEntity, ServiceComponentDesiredStateEntity,Alert(.*)Entity,StackEntity,StageEntity | Only these entities' metrics will be collected and tracked. (org.apache.ambari.server.orm.entities).
+source.database.monitor.query.keywords.include | CacheMisses | Include some metrics which have the keyword even if they are not part of requested Entities.
+
+**Grafana dashboards**
+
+Ambari database metrics have been represented in 2 Grafana dashboards.
+
+* 'Ambari Server - Database' dashboard
+ * An aggregate dashboard that displays Total ReadAllQuery, Cache Hits, Cache Misses, Query Stages, Query Types across all entities.
+ * It also contains an example of how to visualize Timer, Counter and Avg Timing data for a specific entity - HostRoleCommandEntity.
+* 'Ambari Server - Top N Entities' dashboard
+  * Shows the Top N entities that have the maximum number of ReadAllQuery operations performed on them.
+  * Shows the Top N entities on which the database spent the most time in ReadAllQuery operations.
+  * Shows the Top N entities that have the maximum Cache Misses.
+
+These dashboard graphs are meant to provide an example of how to create graphs that query specific entities or operations in an ad hoc manner.
+
+## Disabling Ambari Server metrics globally
+
+* Add the following config to `/etc/ambari-server/conf/ambari.properties`:
+  * `ambariserver.metrics.disable=true`
+* Restart Ambari Server
+
+## Related JIRA
+
+[AMBARI-17589](https://issues.apache.org/jira/browse/AMBARI-17589)
\ No newline at end of file
diff --git a/docs/ambari-design/metrics/configuration.mdx b/docs/ambari-design/metrics/configuration.mdx
new file mode 100644
index 0000000..78d05c2
--- /dev/null
+++ b/docs/ambari-design/metrics/configuration.mdx
@@ -0,0 +1,190 @@
+# Configuration
+
+## Metrics Collector
+
+Configuration Type| File Path | Comment
+---------------|-------------------------------------------------|----------------------------------------
+ams-site | /etc/ambari-metrics-collector/conf/ams-site.xml |Settings that control the API daemon and the aggregator threads.
+ams-env | /etc/ambari-metrics-collector/conf/ams-env.sh |Memory / PATH settings for the API daemon
+ams-hbase-site | /etc/ams-hbase/conf/hbase-site.xml<br></br>/etc/ambari-metrics-collector/conf/hbase-site.xml |Settings for the HBase storage used for the metrics data.
+ams-hbase-env | /etc/ams-hbase/conf/hbase-env.sh |Memory / PATH settings for the HBase storage.<br></br>**Note**: In embedded mode, the heap memory setting for the master and regionserver is summed up as the total memory for the single HBase daemon.
+
+## Metrics Monitor
+
+Configuration Type| File Path | Comment
+---------------|-------------------------------------------------|----------------------------------------
+ams-env |/etc/ambari-metrics-monitor/conf/ams-env.sh |Used for log and pid dir modifications, this is the same configuration as above, common to both components.
+metric_groups |/etc/ambari-metrics-monitor/conf/metric_groups.conf |Not available in the UI. Used to control what **HOST/SYSTEM** metrics are reported.
+metric_monitor |/etc/ambari-metrics-monitor/conf/metric_monitor.ini |Not available in the UI. Settings for the monitor daemon.
+
+## Metric Collector - ams-site - Configuration details
+
+* Modifying the retention interval for time-aggregated data. Refer to the Aggregation section (API spec) for more information on aggregation.
+(Note: In Ambari 2.0 and 2.1, the Phoenix version does not support ALTER TTL queries, so these can be modified from the UI only at install time. Please refer to the Known Issues section for a workaround.)
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.ttl |86400 |1 minute resolution data purge interval. Default is 1 day.
+timeline.metrics.host.aggregator.minute.ttl |604800 |Host based X minutes resolution data purge interval. Default is 7 days.<br></br>(X = configurable interval, default interval is 2 minutes)
+timeline.metrics.host.aggregator.hourly.ttl |2592000 |Host based hourly resolution data purge interval. Default is 30 days.
+timeline.metrics.host.aggregator.daily.ttl |31536000 |Host based daily resolution data purge interval. Default is 1 year.
+timeline.metrics.cluster.aggregator.minute.ttl |2592000 |Cluster wide minute resolution data purge interval. Default is 30 days.
+timeline.metrics.cluster.aggregator.hourly.ttl |31536000 |Cluster wide hourly resolution data purge interval. Default is 1 year.
+timeline.metrics.cluster.aggregator.daily.ttl |63072000 |Cluster wide daily resolution data purge interval. Default is 2 years.
+**Note**: The precision table at 1 minute resolution stores raw precision data for 1 day. When a user queries for the past 1 hour of data, the AMS API returns raw precision data.
+
+* Modifying the aggregation intervals for the HOST and CLUSTER aggregators.
+On wake up, the aggregator threads resume from (last run time + interval) as long as the last run time is not too old.
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.minute.interval |120 |Time in seconds to sleep for the minute resolution host based aggregator. Default resolution is 2 minutes.
+timeline.metrics.host.aggregator.hourly.interval |3600 |Time in seconds to sleep for the hourly resolution host based aggregator. Default resolution is 1 hour.
+timeline.metrics.host.aggregator.daily.interval |86400 |Time in seconds to sleep for the day resolution host based aggregator. Default resolution is 24 hours.
+timeline.metrics.cluster.aggregator.minute.interval |120 |Time in seconds to sleep for the minute resolution cluster wide aggregator. Default resolution is 2 minutes.
+timeline.metrics.cluster.aggregator.hourly.interval |3600 |Time in seconds to sleep for the hourly resolution cluster wide aggregator. Default is 1 hour.
+timeline.metrics.cluster.aggregator.daily.interval |86400 |Time in seconds to sleep for the day resolution cluster wide aggregator. Default is 24 hours.
+
+* Modifying checkpoint information. The aggregators store the timestamp of the last run time on the local FS.
+After reading the last run time, the aggregator thread decides to aggregate as long as (currentTime - lastRunTime) < multiplier * aggregation_interval.
+The multiplier is configurable for each aggregator.
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.minute.checkpointCutOffMultiplier |2 |Multiplier value * interval = max allowed checkpoint lag. Effectively, if the aggregator checkpoint is older than the max allowed checkpoint delay, the checkpoint will be discarded by the aggregator.
+timeline.metrics.host.aggregator.hourly.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.host.aggregator.daily.checkpointCutOffMultiplier |1 |Same as above
+timeline.metrics.cluster.aggregator.minute.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.cluster.aggregator.hourly.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.cluster.aggregator.daily.checkpointCutOffMultiplier |1 |Same as above
+timeline.metrics.aggregator.checkpoint.dir |/var/lib/ambari-metrics-collector/checkpoint |Directory to store aggregator checkpoints. Change to a permanent location so that checkpoints are not lost.
+
+* Other important configuration properties
+
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.*.disabled |false |Disable host based * aggregations. ( * => minute/hourly/daily)
+timeline.metrics.cluster.aggregator.*.disabled |false |Disable cluster based * aggregations. ( * => minute/hourly/daily)
+timeline.metrics.cluster.aggregator.minute.timeslice.interval |30 |Lowest resolution of desired data for cluster level minute aggregates.
+timeline.metrics.hbase.data.block.encoding |FAST_DIFF| Codecs are enabled on a table by setting the DATA_BLOCK_ENCODING property. Default encoding is FAST_DIFF. This can be changed only before creating tables.
+timeline.metrics.hbase.compression.scheme |SNAPPY |Compression codecs need to be installed and available before setting the scheme. Default compression is SNAPPY. Disable by setting to None. This can be changed only before creating tables.
+timeline.metrics.service.default.result.limit |5760 |Max limit on the number of rows returned. Calculated as 4 aggregate data points/min * 60 min * 24 h, i.e. enough to retrieve aggregate data for 1 day.
+timeline.metrics.service.checkpointDelay |60 |Time in seconds to sleep on the first run or when the checkpoint is too old.
+timeline.metrics.service.resultset.fetchSize |2000 |JDBC resultset prefetch size for aggregator queries.
+timeline.metrics.service.cluster.aggregator.appIds |datanode,nodemanager,hbase |List of application ids to use for aggregating host level metrics for an application. Example: bytes_read across Yarn Nodemanagers.
+
+## Configuring Ambari Metrics service in distributed mode
+
+In distributed mode, Metric Collector writes go to the cluster's HDFS. Distributed mode does not currently support a multi-node Metric Collector; the plan is to allow the Metric Collector to scale horizontally on top of a multi-node HBase storage layer.
+
+**Note**: Make sure there is a local Datanode hosted with the Collector; it gives AMS HBase the distinct advantage of writes and reads sharded across the data volumes available to the DN.
+
+The following steps need to be performed either at install time or after deployment to configure the Metric Collector in distributed mode (a scripted sketch follows the list). Note: if configuring after install, existing data will not be automatically copied over to HDFS.
+
+1. Edit ams-site and set timeline.metrics.service.operation.mode = distributed
+2. Edit ams-hbase-site:
+ - Set hbase.rootdir = hdfs://<namenode-host>:8020/user/ams/hbase [If NN HA is enabled, use hdfs://<nameservice-id>/user/ams/hbase]
+ (Note: /user/ams/hbase here is the directory where metric data will be stored in HDFS)
+ - Set hbase.cluster.distributed = true
+ - Add dfs.client.read.shortcircuit = true (an optimization when a local DN is present)
+3. Restart Metrics Collector
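+
+If you prefer to script these edits, Ambari ships a configs.sh helper under /var/lib/ambari-server/resources/scripts; a hedged sketch, assuming that stock location, default admin credentials and example host/cluster names (adjust all of these to your environment):
+
+```bash
+CONFIGS=/var/lib/ambari-server/resources/scripts/configs.sh
+# Switch the Collector to distributed mode
+$CONFIGS -u admin -p admin set ambari-host.example.com MyCluster \
+  ams-site timeline.metrics.service.operation.mode distributed
+# Point AMS HBase at HDFS and mark it distributed
+$CONFIGS -u admin -p admin set ambari-host.example.com MyCluster \
+  ams-hbase-site hbase.rootdir hdfs://namenode-host.example.com:8020/user/ams/hbase
+$CONFIGS -u admin -p admin set ambari-host.example.com MyCluster \
+  ams-hbase-site hbase.cluster.distributed true
+```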
+
+**Note**: In Ambari 2.0.x, there is a bug in deploying AMS in distributed mode if Namenode HA is enabled. Please follow the instructions listed in this JIRA as workaround steps: ([AMBARI-10707](https://issues.apache.org/jira/browse/AMBARI-10707))
+
+**Note**: In Ambari 2.2.1, stack advisor changes the dependent configs for distributed mode automatically through recommendations. Ideally, the only config that needs to be changed is timeline.metrics.service.operation.mode = distributed. The other configs - hbase.rootdir, hbase.cluster.distributed and dfs.client.read.shortcircuit will be changed automatically.
+
+## Migrating data from embedded to distributed mode
+
+Steps to migrate existing metric data to HDFS and start AMS in distributed mode:
+
+* Stop AMS Metric Collector
+* Create an HDFS directory for the ams user. Example:
+
+ ```bash
+ su - hdfs -c 'hdfs dfs -mkdir /user/ams'
+ su - hdfs -c 'hdfs dfs -chown ams:hadoop /user/ams'
+ ```
+* Copy the metric data from the AMS local directory (the existing value of hbase.rootdir in ams-hbase-site) to the HDFS directory. Example:
+
+ ```bash
+ cd /var/lib/ambari-metrics-collector/
+  su - hdfs -c 'hdfs dfs -copyFromLocal hbase hdfs://<namenode-host>:8020/user/ams/'
+ su - hdfs -c 'hdfs dfs -chown -R ams:hadoop /user/ams/hbase'
+ ```
+
+* Start the Metric Collector after making the changes needed for distributed mode.
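+
+Before that final start, it can help to confirm the copy and the ownership; for example:
+
+```bash
+su - hdfs -c 'hdfs dfs -ls /user/ams/hbase'
+```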
+
+## Enabling HBase Region, User and Table Metrics
+
+Ambari disables HBase metrics (per region, per user and per table) by default. HBase metrics can be numerous and can cause performance issues. HBase RegionServer metrics are available by default.
+
+If you want HBase (per region, per user and per table) metrics to be collected by Ambari, you can do the following. It is **highly recommended** that you test turning on this option and confirm that your AMS performance is acceptable.
+
+### Step-by-step guide
+1. On the Ambari Server, browse to:
+
+ ```bash
+   /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates
+ ```
+
+2. If your Ambari version is older than 2.7.0, browse instead to:
+
+ ```bash
+   /var/lib/ambari-server/resources/common-services/HBASE/$VERSION/package/templates
+ ```
+
+3. Edit the following template files:
+
+ ```bash
+ hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
+ hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
+ ```
+
+4. Comment out (or remove) the following lines:
+
+ ```bash
+   *.source.filter.class=org.apache.hadoop.metrics2.filter.RegexFilter
+ hbase.*.source.filter.exclude=.*(Regions|Users|Tables).*
+ ```
+
+5. Save the template files and restart Ambari Server and then HBase for the changes to take effect.
+
+:::tip
+If you upgrade Ambari to a newer version, you will need to re-apply this change to the template file.
+:::
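+
+A scripted sketch of steps 3 and 4, assuming the Ambari 2.7.0+ template location from step 1; sed -i.bak keeps .bak backups of the originals:
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates
+for f in hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2 \
+         hadoop-metrics2-hbase.properties-GANGLIA-RS.j2; do
+  # Comment out the filter lines that exclude Region/User/Table metrics
+  sed -i.bak \
+    -e 's/^\(\*\.source\.filter\.class=.*\)$/# \1/' \
+    -e 's/^\(hbase\.\*\.source\.filter\.exclude=.*\)$/# \1/' "$f"
+done
+ambari-server restart   # then restart HBase from the Ambari UI
+```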
+
+## Enabling HDFS per-user Metrics
+
+HDFS per-user metrics aren't emitted by default. Exercise caution before enabling them, and make sure the client and service port numbers below match your cluster.
+
+To be able to use the HDFS - Users dashboard in your Grafana instance as well as to view metrics for HDFS per user, you will need to add these custom properties to your configuration.
+
+### Step-by-step guide
+In Ambari, go to HDFS > Configs > Advanced > Custom hdfs-site and add the following properties.
+
+```
+dfs.namenode.servicerpc-address=<namenodehost>:8021
+ipc.8020.callqueue.impl=org.apache.hadoop.ipc.FairCallQueue
+ipc.8020.backoff.enable=true
+ipc.8020.scheduler.impl=org.apache.hadoop.ipc.DecayRpcScheduler
+ipc.8020.scheduler.priority.levels=3
+ipc.8020.decay-scheduler.backoff.responsetime.enable=true
+ipc.8020.decay-scheduler.backoff.responsetime.thresholds=10,20,30
+```
+
+**Things to consider:**
+
+* client port: 8020 (if different, replace it with the appropriate port in all keys)
+* service port: 8021 (if different, replace it with the appropriate port in the first value)
+* namenodehost: needs to be an FQDN
+
+Once these properties are added, the configuration should look like this.
+
+![Enabling HDFS per-user metrics in Custom hdfs-site](./imgs/enabling-hdfs-user-metrics.png)
+
+**Restart HDFS and you should see the metrics being emitted. You should now also be able to use the HDFS - Users dashboard in Grafana.**
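+
+One way to confirm the new metrics are flowing is the Collector's metadata endpoint (described in the Metrics Collector API Specification); a sketch with a placeholder host name, since exact per-user metric names vary by Hadoop version:
+
+```bash
+# List the metric names registered for the namenode app id and look for RPC entries
+curl -s "http://<ams-host>:6188/ws/v1/timeline/metrics/metadata" | python -m json.tool | grep -i "ipc"
+```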
diff --git a/docs/ambari-design/metrics/imgs/ams-arch.jpg b/docs/ambari-design/metrics/imgs/ams-arch.jpg
new file mode 100644
index 0000000..df41aa6
--- /dev/null
+++ b/docs/ambari-design/metrics/imgs/ams-arch.jpg
Binary files differ
diff --git a/docs/ambari-design/metrics/imgs/connect-phoenix.png b/docs/ambari-design/metrics/imgs/connect-phoenix.png
new file mode 100644
index 0000000..dac73fd
--- /dev/null
+++ b/docs/ambari-design/metrics/imgs/connect-phoenix.png
Binary files differ
diff --git a/docs/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png b/docs/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png
new file mode 100644
index 0000000..51eeed6
--- /dev/null
+++ b/docs/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png
Binary files differ
diff --git a/docs/ambari-design/metrics/imgs/hosts-metadata.png b/docs/ambari-design/metrics/imgs/hosts-metadata.png
new file mode 100644
index 0000000..7bd26bd
--- /dev/null
+++ b/docs/ambari-design/metrics/imgs/hosts-metadata.png
Binary files differ
diff --git a/docs/ambari-design/metrics/imgs/metrics-datastructure.png b/docs/ambari-design/metrics/imgs/metrics-datastructure.png
new file mode 100644
index 0000000..fe56c81
--- /dev/null
+++ b/docs/ambari-design/metrics/imgs/metrics-datastructure.png
Binary files differ
diff --git a/docs/ambari-design/metrics/imgs/metrics-metadata.png b/docs/ambari-design/metrics/imgs/metrics-metadata.png
new file mode 100644
index 0000000..4d1f8c2
--- /dev/null
+++ b/docs/ambari-design/metrics/imgs/metrics-metadata.png
Binary files differ
diff --git a/docs/ambari-design/metrics/imgs/restart-datanode.png b/docs/ambari-design/metrics/imgs/restart-datanode.png
new file mode 100644
index 0000000..31b0f0d
--- /dev/null
+++ b/docs/ambari-design/metrics/imgs/restart-datanode.png
Binary files differ
diff --git a/docs/ambari-design/metrics/index.md b/docs/ambari-design/metrics/index.md
new file mode 100644
index 0000000..63c92e3
--- /dev/null
+++ b/docs/ambari-design/metrics/index.md
@@ -0,0 +1,22 @@
+# Metrics
+
+**Ambari Metrics System** ("AMS") is a system for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+## Terminology
+
+Term | Definition
+--------------------------------|-------------------------------------------------------------
+Ambari Metrics System (“AMS”) | The built-in metrics collection system for Ambari.
+Metrics Collector | The standalone server that collects, aggregates and serves metrics received from the Hadoop service sinks and the Metrics Monitors.
+Metrics Monitor | Installed on each host in the cluster to collect system-level metrics and forward to the Metrics Collector.
+Metrics Hadoop Sinks | Plug into the various Hadoop components to send Hadoop metrics to the Metrics Collector.
+
+## Architecture
+The following image depicts the high-level conceptual architecture of the new Ambari Metrics System:
+
+![Ambari Metrics System high-level architecture](./imgs/ams-arch.jpg)
+
+The **Metrics Collector** is a daemon that receives data from the registered publishers (the Monitors and Sinks). The Collector itself is built using Hadoop technologies such as HBase, Phoenix and ATS. The Collector can store data on the local filesystem (referred to as "embedded mode") or use an external HDFS (referred to as "distributed mode").
+
+## Learn More
+Browse the following to learn more about the [Ambari Metrics REST API](./metrics-api-specification.md) specification and about advanced [Configuration](./configuration.mdx) of AMS.
\ No newline at end of file
diff --git a/docs/ambari-design/metrics/metrics-api-specification.md b/docs/ambari-design/metrics/metrics-api-specification.md
new file mode 100644
index 0000000..72ae688
--- /dev/null
+++ b/docs/ambari-design/metrics/metrics-api-specification.md
@@ -0,0 +1,63 @@
+---
+title: Ambari Metrics API specification
+---
+
+The Ambari REST API supports metric queries at CLUSTER, HOST, COMPONENT and HOST COMPONENT levels.
+
+Broadly, the types of metrics queries supported are: **time range** or **point in time**.
+
+The following is an illustration of an API call that fetches metrics from the Metrics backend service using the Ambari API.
+
+## CLUSTER
+
+E.g.: Dashboard metrics: Fetch the load average across all nodes of a cluster
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?fields=metrics/load[1430844925,1430848525,15]&_=1430848532904
+```
+The above API call retrieves the load average, aggregated across all hosts in the cluster.
+
+The request part of the API call selects the cluster instance while the predicate includes the metric with the time range query, followed by current time in milliseconds.
+
+Time range query:
+
+Field | Value |Comment
+-----------|------------|----------------------------------------
+Start time | 1430844925 |Start time for the time range. (Epoch)
+End time | 1430848525 |End time of the time range. (Epoch)
+Step | 15 |Default step; used only for zero or null padding if the padding interval cannot be determined from the retrieved data.
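+
+As a concrete invocation of the call above (Ambari's API requires authentication; default admin credentials are assumed here):
+
+```bash
+curl -u admin:admin -H "X-Requested-By: ambari" \
+  "http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?fields=metrics/load[1430844925,1430848525,15]"
+```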
+
+## HOST
+
+E.g.: Host metrics: Get the cpu utilization on a particular host in the cluster
+
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>?fields=metrics/cpu/cpu_user[1430844610,1430848210,15],metrics/cpu/cpu_wio[1430844610,1430848210,15],metrics/cpu/cpu_nice[1430844610,1430848210,15],metrics/cpu/cpu_aidle[1430844610,1430848210,15],metrics/cpu/cpu_system[1430844610,1430848210,15],metrics/cpu/cpu_idle[1430844610,1430848210,15]&_=1430848217591
+```
+
+The above API call retrieves all cpu related metrics required to chart out cpu utilization on a host page.
+
+The request part of the above API call selects the host which is queried while the predicate part includes the metric names with time range query.
+
+## COMPONENT
+
+E.g.: Service metrics: Get the capacity utilization metrics aggregated across all datanodes but only the latest value (point in time)
+
+```
+ http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/HDFS/components/DATANODE?fields=metrics/dfs/datanode/DfsUsed,metrics/dfs/datanode/Capacity&_=1430849798630
+```
+
+The above API call retrieves two metric values, which represent the point-in-time values for the requested metrics, obtained from the Metrics Service backend (non-JMX).
+
+For a call to get JMX metrics directly from a Hadoop daemon, use the metrics name that corresponds to the JMX MBean metric, example: metrics/dfs/FSNamesystem/CapacityUsedGB (Refer to Stack Defined Metrics for more info)
+
+The request part of the above API call selects the service from the cluster while predicate part includes the metrics names.
+
+## HOST COMPONENT
+E.g.: Daemon metrics: Get the heap memory usage for active Namenode
+
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/NAMENODE?fields=metrics/jvm/memHeapCommittedM[1430847303,1430850903,15],metrics/jvm/memHeapUsedM[1430847303,1430850903,15]&_=1430850903846
+```
+
+The above API call retrieves JVM heap metrics for the Active Namenode in the cluster.
+
+The request part of the API selects the Namenode host component while the predicate part includes metrics with a time range query.
\ No newline at end of file
diff --git a/docs/ambari-design/metrics/metrics-collector-api-specification.md b/docs/ambari-design/metrics/metrics-collector-api-specification.md
new file mode 100644
index 0000000..a9e90fa
--- /dev/null
+++ b/docs/ambari-design/metrics/metrics-collector-api-specification.md
@@ -0,0 +1,232 @@
+# Metrics Collector API Specification
+
+## Sending Metrics to AMS (POST)
+
+Sending metrics to Ambari Metrics Service can be achieved through the following API call.
+
+The Sink implementations responsible for sending metrics to AMS buffer data for 1 minute before sending. TimelineMetricCache provides a simple cache implementation to achieve this behavior.
+
+Sample sink implementation used by Hadoop daemons: https://github.com/apache/ambari/tree/trunk/ambari-metrics/ambari-metrics-hadoop-sink
+
+```uri
+POST http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics
+```
+
+```json
+{
+ "metrics": [
+ {
+ "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
+ "appid": "amssmoketestfake",
+ "hostname": "ambari20-5.c.pramod-thangali.internal",
+ "timestamp": 1432075898000,
+ "starttime": 1432075898000,
+ "metrics": {
+ "1432075898000": 0.963781711428,
+ "1432075899000": 1432075898000
+ }
+ }
+ ]
+}
+```
+
+```
+Connecting (POST) to <ambari-metrics-collector>:6188/ws/v1/timeline/metrics/
+Http response: 200 OK
+```
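+
+The sample payload above can be posted with curl as a quick smoke test; a sketch, assuming a reachable Collector and GNU date for millisecond timestamps:
+
+```bash
+NOW=$(date +%s%3N)   # epoch milliseconds (GNU date)
+curl -s -i -H "Content-Type: application/json" -X POST \
+  "http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics" \
+  -d "{
+    \"metrics\": [{
+      \"metricname\": \"AMBARI_METRICS.SmokeTest.FakeMetric\",
+      \"appid\": \"amssmoketestfake\",
+      \"hostname\": \"$(hostname -f)\",
+      \"timestamp\": $NOW,
+      \"starttime\": $NOW,
+      \"metrics\": { \"$NOW\": 0.963781711428 }
+    }]
+  }"
+```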
+
+## Fetching Metrics from AMS (GET)
+
+**Sample call**
+```
+GET http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric&appId=amssmoketestfake&hostname=<hostname>&precision=seconds&startTime=1432075838000&endTime=1432075959000
+Http response: 200 OK
+Http data:
+{
+ "metrics": [
+ {
+ "timestamp": 1432075898089,
+ "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
+ "appid": "amssmoketestfake",
+ "hostname": "ambari20-5.c.pramod-thangali.internal",
+ "starttime": 1432075898000,
+ "metrics": {
+ "1432075898000": 0.963781711428,
+ "1432075899000": 1432075898000
+ }
+ }
+ ]
+}
+```
+
+**Generic GET call format**
+```uri
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=<>&hostname=<>&appId=<>&startTime=<>&endTime=<>&precision=<>
+```
+
+**Query Parameters Explanation**
+
+Parameter|Optional/Mandatory|Explanation|Values it can take
+---------|------------------|-----------|-------------------
+metricNames | Mandatory | Comma-separated list of metrics that are required. | disk_free,mem_free... etc
+appId | Mandatory | The AppId that corresponds to the metricNames that were requested. Currently, only 1 AppId is required and allowed. | HOST/namenode/datanode/nimbus/hbase/kafka_broker/FLUME_HANDLER etc
+hostname | Optional | Comma-separated list of hostnames. When not specified, cluster aggregates are returned. | h1,h2... etc
+startTime, endTime | Optional | Start and end time values. If not specified, the last data point of the metric is returned. | epoch times in seconds or milliseconds
+precision | Optional | The precision at which data needs to be returned. If not specified, the precision is calculated based on the time range requested (table below). | SECONDS/MINUTES/HOURS/DAYS
+
+**Precision query parameter (Default resolution)**
+
+Query time range | Resolution of returned metrics | Comments
+------------------|--------------------------------|----------------------------------------
+Up to 2 hours | SECONDS | 10 second data for host metrics; 30 second data for an aggregated query (no host specified)
+2 hours - 1 day | MINUTES | 5 minute data
+1 day - 30 days | HOURS | 1 hour data
+More than 30 days | DAYS | 1 day data
+
+**Specifying Aggregate Functions**
+
+The metricName can have a specific aggregate function qualifier after the metricName (as shown below) to request specific aggregates. Valid values are ._avg, ._max, ._min, ._sum. When an aggregate query is requested without an aggregate function in the metricName, the default is AVG.
+Examples
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._avg,regionserver.Server.writeRequestCount._max&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.readRequestCount,regionserver.Server.writeRequestCount._max&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+
+**Specifying Post processing Functions**
+
+Similar to aggregate functions, post processing functions can also be specified. Currently, we have 2 post processing functions - rate (Rate per second) and diff (difference between consecutive values). Post processing functions can also be applied after aggregate functions.
+Examples
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._rate,regionserver.Server.writeRequestCount._diff&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.readRequestCount._max._diff&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+**Specifying Wild Cards**
+
+Both metricNames and hostname take wildcard (%) values to match a group of metrics (or hosts). A query can also combine full metric names and names with wildcards.
+
+Examples
+
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=master.AssignmentManger.ritCount,regionserver.Server.%&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&hostname=abc.testdomain12%.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+
+**Downsampling**
+
+As discussed before, AMS downsamples data when higher time ranges are requested. The default "downsampled across time" data returned is AVG. Specific downsamples can be requested by adding the aggregate function qualifiers ( ._avg, ._max, ._min, ._sum ) to the metric names, the same way as requesting aggregates across the cluster.
+Example
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._max&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000&precision=MINUTES
+```
+The above query returns 5 minute data for the metric, where the data point value is the MAX of the values found in every 5 minute range.
+
+## AMS Metadata API
+
+AMS has 2 metadata endpoints that are useful for finding out the set of metrics it received, as well as the topology of the cluster.
+
+**METRICS METADATA**
+
+Endpoint:
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics/metadata
+```
+
+Data returned: a mapping from each APP_ID to the list of metrics received with that AppId.
+
+Sample data returned:
+
+![Sample metrics metadata response](./imgs/metrics-metadata.png)
+
+
+**HOSTS METADATA**
+
+Endpoint:
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics/hosts
+```
+Data returned: a mapping between the hosts in the cluster and the set of APP_IDs on each host.
+
+Sample data returned:
+
+![Sample hosts metadata response](./imgs/hosts-metadata.png)
+
+
+## Guide to writing your own Sink
+* Include the ambari-metrics-common artifacts from source or maven-central (when available) into your project
+* See below for helpful info regarding the common data structures to use from the ambari-metrics-common module
+* Extend the org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink class and implement the required methods
+* Use the org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache to store intermediate data until it is time to send (example: collection interval = 10 seconds, send interval = 1 minute). The cache implementation provides the logic needed for buffering and local aggregation.
+* Use org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink#emitMetrics to send metrics to AMS backend.
+
+**METRIC DATA STRUCTURE**
+
+Source location for common data structures module: https://github.com/apache/ambari/tree/trunk/ambari-metrics/ambari-metrics-common/
+
+Example sink implementation: https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-hadoop-sink/
+
+![Timeline metric data structure](./imgs/metrics-datastructure.png)
+
+**INTERNAL PHOENIX KEY STRUCTURE**
+
+ The Metric Record Key data structure is described below:
+
+Property|Type|Comment|Optional
+--------|----|--------|---------------
+Metric Name | String | First key part, important consideration while querying from HFile storage | N
+Hostname | String | Second key part | N
+Server time | Long | Timestamp on server when first metric write request was received | N
+Application Id | String | Uniquely identify service | N
+Instance Id | String | Additional key part to identify the instance/component | Y
+Start time | Long | Start of the timeseries data |
+
+
+**HOW AGGREGATION WORKS**
+
+* The granularity of aggregate data can be controlled by setting wake up interval for each of the aggregator threads.
+* Presently we support 2 types of aggregators, HOST and APPLICATION with 3 time dimensions, per minute, per hour and per day.
+ * The HOST aggregates are just aggregates on precision data across the supported time dimensions.
+ * The APP aggregates are across appId. Note: We ignore instanceId for APP level aggregates. Same time dimensions apply for APP level aggregates.
+ * We also support HOST level metrics for APP, meaning you can expect a system metric example: "cpu_user" to be aggregated across datanodes, effectively calculating system metric for hosted apps.
+* Each aggregator performs checkpointing by storing the last successful time of completion in a file. If the checkpoint is too old, the aggregators discard it and aggregate data only for the configured interval, meaning data between (now - interval) and now.
+* Refer to [Phoenix table schema](./operations.md) for details of tables and records.
+
\ No newline at end of file
diff --git a/docs/ambari-design/metrics/operations.md b/docs/ambari-design/metrics/operations.md
new file mode 100644
index 0000000..5bfd1f6
--- /dev/null
+++ b/docs/ambari-design/metrics/operations.md
@@ -0,0 +1,151 @@
+# Operations
+
+## Metrics Collector
+
+**Pid file locations**
+
+Daemon | Default User | Pid File Path
+---------------|-------------------------------------------------|----------------------------------------
+Metrics Collector API |ams |/var/run/ambari-metrics-collector/ambari-metrics-collector.pid
+Metrics Collector HBase |ams |/var/run/ambari-metrics-collector/hbase-ams-master.pid
+
+**Log file locations**
+
+Daemon | Log File Path
+---------------|------------------------------------------------
+Metrics Collector API |/var/log/ambari-metrics-collector/ambari-metrics-collector.log<br></br>/var/log/ambari-metrics-collector/ambari-metrics-collector.out
+Metrics Collector HBase |/var/log/ambari-metrics-collector/hbase-ams-master-<hostname>.log<br></br>/var/log/ambari-metrics-collector/hbase-ams-master-<hostname>.out
+
+**Manually restart Metrics Collector**
+
+Stop command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ stop'
+```
+
+Start command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ start'
+```
+
+## Metrics Monitor
+
+**Pid File location**
+
+```
+/var/run/ambari-metrics-monitor/ambari-metrics-monitor.pid
+```
+
+**Log File location**
+
+```
+/var/log/ambari-metrics-monitor/ambari-metrics-monitor.out
+```
+
+**Manually restart Metrics Monitor**
+Stop command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf stop'
+```
+
+Start command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf start'
+```
+
+## Build Instructions
+
+The ambari-metrics-assembly package builds the assemblies (rpm/deb/msi) for various platforms.
+
+The following binaries can be found in the ambari-metrics-assembly/target folder after the build is successful.
+
+```
+ambari-metrics-collector-<ambari-version>.<arch>
+ambari-metrics-monitor-<ambari-version>.<arch>
+ambari-hadoop-sink-<ambari-version>.<arch>
+```
+
+**Note**: Ambari Metrics needs to be built before Ambari Server
+
+### RPM packages
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-rpm
+```
+
+### Debian packages
+Same instructions as above; change the maven flag to -Dbuild-deb.
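+
+For example:
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-deb
+```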
+
+### Windows msi
+TBU
+
+### Command line parameters
+
+Parameter | Default Value | Comment
+---------------|------------------|------------------------------
+hbase.tar | http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.0.0/tars/hbase-1.1.1.2.3.0.0-2557.tar.gz | HBase tarball. This is default version for Ambari 2.1.2
+hbase.folder | hbase-1.1.1.2.3.0.0-2557 |-
+hadoop.tar | http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.0.0/tars/hadoop-2.7.1.2.3.0.0-2557.tar.gz | Hadoop tarball, used for native libs. This is default version for Ambari 2.1.2
+hadoop.folder | hadoop-2.7.1.2.3.0.0-2557 |-
+
+**Note**
+
+After the change introduced by [AMBARI-18915](https://issues.apache.org/jira/browse/AMBARI-18915) (Update AMS pom to use Apache hbase, hadoop, phoenix tarballs), AMS uses Hadoop tars downloaded from Apache by default. Since that version of libhadoop is not built with libsnappy, the following config change in ams-site is needed to make AMS start up correctly:
+
+**timeline.metrics.hbase.compression.scheme = None**
+
+## Disk space utilization guidance
+
+Num of Nodes | METRIC_RECORD (MB) | METRIC_RECORD_MINUTE (MB) | METRIC_RECORD_HOURLY (MB) | METRIC_RECORD_DAILY (MB) | METRIC_AGGREGATE (MB) | METRIC_AGGREGATE_MINUTE (MB) | METRIC_AGGREGATE_HOURLY (MB) | METRIC_AGGREGATE_DAILY (MB) | TOTAL (GB)
+-------|----------|-----------|----------|----------|---------|--------|----------|-----------------|-------------
+50 | 5120 | 2700 | 245 | 10 | 1500 |305 |28 |1 |10
+100 | 10240 | 5400 | 490 | 20 | 1500 |305 |28 |1 |18
+300 | 30720 | 16200 | 1470 | 60 | 1500 |305 |28 |1 |49
+500 | 51200 | 27000 | 2450 | 100 | 1500 |305 |28 |1 |81
+800 | 81920 | 43200 | 3920 | 160 | 1500 |305 |28 |1 |128
+
+**NOTE**:
+
+The above guidance has been derived from looking at AMS disk utilization in actual clusters.
+The ACTUAL numbers have been obtained by observing an actual cluster with the basic services (HDFS, YARN, HBase) installed along with Storm, Kafka and Flume.
+Kafka and Flume generate metrics only while a job is running. If those services are being used heavily, additional disk space is recommended. We ran sample jobs with STORM and KAFKA while deriving these numbers to make sure there is some contribution.
+
+**Actual disk utilization data**
+
+Num of Nodes | METRIC_RECORD (MB) | METRIC_RECORD_MINUTE (MB) | METRIC_RECORD_HOURLY (MB) | METRIC_RECORD_DAILY (MB) | METRIC_AGGREGATE (MB) | METRIC_AGGREGATE_MINUTE (MB) | METRIC_AGGREGATE_HOURLY (MB) | METRIC_AGGREGATE_DAILY (MB) | TOTAL (GB)
+-------|----------|-----------|----------|----------|---------|--------|----------|-----------------|-------------
+2 | 120 | 175 | 17 | 1 | 545 | 136 | 16 | 1 | 1
+3 | 294 | 51 | 3.4 | 1 | 104 | 26 | 1.8 | 1 | 0.5
+10 | 1024 | 540 | 49 | 2 | 1433.6 | 305 | 28 | 1 | 3.3
+
+## Phoenix Schema
+
+### Phoenix Tables
+
+Table Name | Description | Purge Interval(default)
+------------------------|---------------------------------------------------------------------------|-------------------------
+METRIC_RECORD | Data per metric per host at 10 seconds precision with 1 minute aggregates.| 1 day
+METRIC_RECORD_MINUTE | Data per metric per host at 5 minute precision | 1 week
+METRIC_RECORD_HOURLY | Data per metric per host at 1 hour precision | 30 days
+METRIC_RECORD_DAILY | Data per metric per host at 1 day precision | 1 year
+METRIC_AGGREGATE | Cluster wide aggregates per metric at 30 seconds precision | 1 week
+METRIC_AGGREGATE_MINUTE | Cluster wide aggregates per metric at 5 minute precision | 30 days
+METRIC_AGGREGATE_HOURLY | Cluster wide aggregates per metric at 1 hour precision | 1 year
+METRIC_AGGREGATE_DAILY | Cluster wide aggregates per metric at 1 day precision | 2 years
+
+### Connecting to Phoenix
+* Unpack the Phoenix (4.2.0+) tarball onto the Metrics Collector host
+* Change directory to phoenix-4.*/bin
+* Edit sqlline.py, search for "java" and replace it with the full path to the java executable, example: "/usr/jdk64/jdk1.8.0_40/bin/java"
+* Connect command:
+
+Ambari versions 2.2.0 and below: ./sqlline.py localhost:61181:/hbase
+
+Ambari versions above 2.2.0:
+```bash
+# Embedded mode
+./sqlline.py localhost:61181:/ams-hbase-unsecure
+# Distributed mode
+./sqlline.py <cluster-zookeeper-quorum-host>:<cluster_zookeeper_port>:/ams-hbase-unsecure
+```
+
\ No newline at end of file
diff --git a/docs/ambari-design/metrics/stack-defined-metrics.md b/docs/ambari-design/metrics/stack-defined-metrics.md
new file mode 100644
index 0000000..e535f4b
--- /dev/null
+++ b/docs/ambari-design/metrics/stack-defined-metrics.md
@@ -0,0 +1,79 @@
+# Stack Defined Metrics
+
+The Ambari Stack definition represents the complete declarative description of Services that are comprised in a cluster.
+
+The stack definition also contains a definition file for all metrics that are supported by the Service.
+
+Presently the metrics.json describes the mapping between the metrics name requested in the REST API and the metrics name to use for making a call to the Metrics Service.
+
+Location of the **metrics.json** in the stack:
+
+Level|Location|Comment
+-----|--------|-------
+Cluster & Host | ganglia_properties.json | Presently, this file defines metrics for Host Component and Service Components as well but these are only used for older versions of stack < 2.0 and unit tests.<br></br>The Cluster and Host sections of this json file drive the Dashboard graphs.
+Component & Host Component | common-services.<SERVICE_NAME> | This file contains definition of metrics mapping for Ambari Metrics (type = ganglia) and JMX.
+
+**Note**: The individual stacks that override behavior from common services can redefine the metrics.json file, the inheritance is all-or-nothing, meaning if metrics.json file is present in the child stack, it will override the metrics.json from common-services
+
+**Structure of metrics.json file**
+
+Key|Allowed Values|Comments
+-----|--------|-------------
+Type |"ganglia" / "jmx" |type = ganglia implies Metrics Service request fulfilled by either a Ganglia (up to version 2.0) or Ambari Metrics (2.0 and above) backend service, this decision is taken by Ambari server at runtime.
+Category | "default" / "performance" ... |This is to group metrics into subsets for better navigability
+Metrics |metricKey : { <br></br>"metricName":<br></br>"pointInTime":<br></br>temporal":<br></br>} | metricKey = Key to be used by REST API. This is unique for a service and identifies the requested metric as well as what endpoint to use for serving the data (AMS vs JMX)<br></br>metricName = Name to use for the Metrics Service backend<br></br>pointInTime = Get latest value, no time range query allowed<br></br>temporal = Time range query supported
+
+Example:
+
+```json
+{
+
+ "NAMENODE": {
+
+ "Component": [
+
+ {
+
+ "type": "ganglia",
+
+ "metrics": {
+
+ "default": {
+
+ "metrics/dfs/FSNamesystem/TotalLoad": {
+
+ "metric": "dfs.FSNamesystem.TotalLoad",
+
+ "pointInTime": false,
+
+ "temporal": true
+
+ }
+
+ } ]
+
+ },
+
+ "HostComponent" : [
+
+ { "type" : "ganglia", ... }
+
+ { "type" : "jmx", .... }
+
+ ]
+
+}
+```
+
+**Sample API calls to retrieve metric definitions**:
+
+Service metrics:
+```
+Template => http://<ambari-server>:<port>/api/v1/stacks/<stackName>/versions/<stackVersion>/services/<serviceName>/artifacts/metrics_descriptor
+Example => http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/HDFS/artifacts/metrics_descriptor
+```
+Cluster & Host metrics:
+```
+Template => http://<ambari-server>:<port>/api/v1/stacks/<stackName>/versions/<stackVersion>/artifacts/metrics_descriptor
+Example => http://localhost:8080/api/v1/stacks/HDP/versions/2.3/artifacts/metrics_descriptor
+```
\ No newline at end of file
diff --git a/docs/ambari-design/metrics/troubleshooting.md b/docs/ambari-design/metrics/troubleshooting.md
new file mode 100644
index 0000000..cab1ea2
--- /dev/null
+++ b/docs/ambari-design/metrics/troubleshooting.md
@@ -0,0 +1,145 @@
+# Troubleshooting
+
+## Cleaning up Ambari Metrics System Data
+
+The following steps help in cleaning up Ambari Metrics System data in a given cluster.
+
+Important Note:
+
+1. Cleaning up the AMS data removes all the historical AMS data available
+2. The hbase parameters mentioned below are specific to AMS and are different from the cluster HBase parameters
+
+### Step-by-step guide
+
+1. Using Ambari
+ * Set AMS to maintenance
+ * Stop AMS from Ambari. Identify the following from the AMS Configs screen
+ * 'Metrics Service operation mode' (embedded or distributed)
+ * hbase.rootdir
+ * hbase.zookeeper.property.dataDir
+2. AMS data is stored in the 'hbase.rootdir' identified above. Back up and remove the AMS data.
+ * If the Metrics Service operation mode
+ * is 'embedded', then the data is stored in OS files. Use regular OS commands to backup and remove the files in hbase.rootdir
+ * is 'distributed', then the data is stored in HDFS. Use 'hdfs dfs' commands to backup and remove the files in hbase.rootdir
+3. Remove the AMS zookeeper data by backing up and removing the contents of 'hbase.tmp.dir'/zookeeper
+4. Remove any Phoenix spool files from 'hbase.tmp.dir'/phoenix-spool folder
+5. Restart AMS using Ambari
+
+## Moving Metrics Collector to a new host
+
+1. Stop AMS Service
+
+2. Execute the following API call to delete the Metrics Collector. (Replace server-host, cluster-name and host-name with the Ambari Server host, the cluster name and the current Metrics Collector host.)
+
+```
+curl -u admin:admin -H "X-Requested-By:ambari" -i -X DELETE http://<server-host>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/METRICS_COLLECTOR
+```
+
+3. Execute the following API call to add the Metrics Collector to a new host. (Replace server-host, cluster-name and host-name.)
+
+```
+curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST http://<server-host>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/METRICS_COLLECTOR
+```
+
+4. Install the Metrics Collector component from the Host page of the new host.
+
+5. If AMS is in embedded mode, copy the AMS data from the old node to the new node.
+
+ * For embedded mode (ams-site: timeline.metrics.service.operation.mode), copy over the hbase.rootdir and tmpdir to new host from the old collector host.
+ * For distributed mode, since AMS HBase is writing to HDFS, no change will be necessary.
+ * Ensure that ams:hbase-site:hbase.rootdir and hbase.tmp.dir are pointing to the correct location in the new AMS node
+6. Start the Metrics Service.
+
+7. The service daemons will be pointing to the old metrics collector host. Perform a rolling restart of slave components and a normal restart of Master components for them to pick up the new collector host.
+
+Note: Restarting services is not needed after Ambari 2.5.0, since live collector information is maintained in the cluster ZooKeeper.
+
+
+
+## Troubleshooting Guide
+
+This page documents common problems discovered with the Ambari Metrics Service and provides a guide to things to look out for, as well as problems that have already been solved.
+
+**Important facts to collect from the system**:
+
+**Problems with Metrics Collector host**
+* Output of "rpm -qa | grep ambari" on the collector host.
+* Total available System memory, output of : "free -g"
+* Total available disk space and available partitions, output of : "df -h "
+* Total number of hosts in the cluster
+* Configs: /etc/ams-hbase/conf/hbase-env.sh, /etc/ams-hbase/conf/hbase-site.xml, /etc/ambari-metrics-collector/conf/ams-env.sh, /etc/ambari-metrics-collector/conf/ams-site.xml
+* Collector logs:
+
+```
+/var/log/ambari-metrics-collector/ambari-metrics-collector.log, /var/log/ambari-metrics-collector/hbase-ams-master-<host>.log, /var/log/ambari-metrics-collector/hbase-ams-master-<host>.out
+Note: Additionally, If distributed mode is enabled, /var/log/ambari-metrics-collector/hbase-ams-zookeeper-<host>.log, /var/log/ambari-metrics-collector/hbase-ams-regionserver-<host>.log
+```
+
+* Response to the following URLs -
+
+```
+http://<ams-host>:6188/ws/v1/timeline/metrics/metadata
+http://<ams-host>:6188/ws/v1/timeline/metrics/hosts
+```
+
+* The response will be JSON and can be attached as a file.
+* From the AMS HBase Master UI - http://<METRICS_COLLECTOR_HOST>:61310
+  * Region Count
+  * StoreFile Count
+  * JMX Snapshot - http://<METRICS_COLLECTOR_HOST>:61310/jmx
+
+
+**Problems with Metric Monitor host**
+
+```
+Monitor log file: /var/log/ambari-metrics-monitor/ambari-metrics-monitor.out
+```
+
+**Check out [Configurations - Tuning](https://cwiki.apache.org/confluence/display/AMBARI/Configurations+-+Tuning) for scale issue troubleshooting.**
+
+**Issue 1: AMS HBase process slow disk writes**
+
+The symptoms and resolutions below address the **embedded** mode of AMS only.
+
+_Symptoms_:
+
+Behavior|How to detect
+--------|--------------
+High CPU usage | HBase process on Collector host taking up close to 100% of every core
+HBase Log: Compaction times | Run grep "Finished memstore flush" on hbase-ams-master-<host>.log.<br></br>This yields MB written in X milliseconds; generally 128 MBps and above is average speed unless the disk is contended.<br></br>The same search also reveals how many times compaction ran per minute. A value greater than 6 or 8 is a warning that write volume is far greater than what HBase can hold in memory
+HBase Log: ZK timeout | HBase crashes saying zookeeper session timed out. This happens because in embedded mode the zk session timeout is limited to max of 30 seconds (HBase issue: fix planned for 2.1.3).<br></br>The cause is again slow disk reads.
+Collector Log : "waiting for some tasks to finish" | ambari-metric-collector log shows messages where AsyncProcess writes are queued up
+
+_Resolutions_:
+
+Configuration Change|Description
+--------|-----------------------
+ams-hbase-site :: hbase.rootdir | Change this path to a disk mount that is not heavily contended.
+ams-hbase-site :: hbase.tmp.dir | Change this path to a location different from hbase.rootdir
+ams-hbase-env :: hbase_master_heapsize<br></br>ams-hbase-site :: hbase.hregion.memstore.flush.size | Bump this value up so more data is held in memory to address I/O speeds.<br></br>If heap size is increased and resident memory usage does not go up, this parameter can be changed to address how much data can be stored in a memstore per Region. Default is set to 128 MB. The size is in bytes.<br></br>Be careful with modifying this value, generally limit the setting between 64 MB (small heap with fast disk write), to 512 MB (large heap > 8 GB, and average write speed), since more data held in memory means longer time to write it to disk during a Flush operation.
+
+**Issue 2: Ambari Metrics take a long time to load**
+
+_Symptoms_:
+
+Behavior|How to detect
+--------|--------------
+Graphs: Loading time too long<br></br>Graphs: No data available | Check out service pages / host pages for metric graphs
+Socket read timeouts | ambari-server.log shows: Error message saying socket timeout for metrics
+Ambari UI slowing down | Host page loading time is high, heatmaps do not show data<br></br>Dashboard loading time is too high<br></br>Multiple sessions result in slowness
+
+_Resolutions_:
+
+Upgrading to 2.1.2+ is highly recommended.
+
+The following is a list of fixes in the 2.1.2 release that should greatly help alleviate the slow loading and timeouts:
+
+https://issues.apache.org/jira/browse/AMBARI-12654
+
+https://issues.apache.org/jira/browse/AMBARI-12983
+
+https://issues.apache.org/jira/browse/AMBARI-13108
+
+## [Known Issues](https://cwiki.apache.org/confluence/display/AMBARI/Known+Issues)
\ No newline at end of file
diff --git a/docs/ambari-design/metrics/upgrading-ambari-metrics-system.md b/docs/ambari-design/metrics/upgrading-ambari-metrics-system.md
new file mode 100644
index 0000000..e61a8d7
--- /dev/null
+++ b/docs/ambari-design/metrics/upgrading-ambari-metrics-system.md
@@ -0,0 +1,21 @@
+# Upgrading Ambari Metrics System
+
+**Upgrading from Ambari 2.0 or 2.0.1 to 2.1**
+
+1. Upgrade the Ambari Server and perform the needed post-upgrade checks (make sure all services are up and running).
+2. Stop Ambari Metrics service
+3. Execute the following command on all hosts.
+
+ ```bash
+ yum upgrade -y ambari-metrics-monitor ambari-metrics-hadoop-sink
+ ```
+ (Use the appropriate package manager on Ubuntu and Windows.)
+
+4. Execute the following command on the host running the Metrics Collector
+
+ ```bash
+ yum upgrade -y ambari-metrics-collector
+ ```
+
+5. Start Ambari Metrics Service
+6. The Sink jars will be deployed on every host; the daemons will pick up the changes to the sink implementations when they are restarted (e.g., HDFS Namenode / Datanode).
\ No newline at end of file
diff --git a/docs/ambari-design/quick-links.md b/docs/ambari-design/quick-links.md
new file mode 100644
index 0000000..0fea1e4
--- /dev/null
+++ b/docs/ambari-design/quick-links.md
@@ -0,0 +1,225 @@
+# Quick Links
+
+## Introduction
+
+A service can add a list of quick links to the Ambari web UI by adding meta info to a file following a predefined JSON format. The Ambari server parses the quick link JSON file and provides its content to the UI, so that the Ambari web UI can calculate quick link URLs based on the information and populate the quick links drop-down list accordingly.
+
+## Design
+
+By default, the JSON file is called quicklinks.json and is located in the quicklinks directory under the service root directory. For example, for Oozie, the file is OOZIE/quicklinks/quicklinks.json. You can also name the file differently as well as put it in a custom directory under the service root directory.
+
+
+Using YARN as an example, the following is what the metainfo.xml looks like with the quick links configurations.
+
+```xml
+<services>
+ <service>
+ <name>YARN</name>
+ <version>2.7.1.2.3</version>
+ <quickLinksConfigurations>
+ <quickLinksConfiguration>
+ <fileName>quicklinks.json</fileName>
+ <default>true</default>
+ </quickLinksConfiguration>
+ </quickLinksConfigurations>
+ </service>
+</services>
+```
+
+The metainfo.xml can have a different quick links configuration, as shown here for MapReduce2.
+
+The _quickLinksConfigurations-dir_ is an optional field that tells Ambari Server where to load the quicklinks.json file from. We can skip it if we want the service to use the default _quicklinks_ directory.
+
+```xml
+<service>
+ <name>MAPREDUCE2</name>
+ <version>2.7.1.2.3</version>
+ <quickLinksConfigurations-dir>quicklinks-mapred</quickLinksConfigurations-dir>
+ <quickLinksConfigurations>
+ <quickLinksConfiguration>
+ <fileName>quicklinks.json</fileName>
+ <default>true</default>
+ </quickLinksConfiguration>
+ </quickLinksConfigurations>
+</service>
+```
+
+A quick link JSON file has two major sections: the "configuration" section for determining the protocol (HTTP vs HTTPS), and the "links" section for the meta information of each quick link to be displayed on the Ambari web UI. The JSON file also includes a "name" section at the top that defines the name of the quick links JSON file that the server uses for identification.
+
+Ambari web UI uses information provided in the "configuration" section to determine if the service is running against HTTP or HTTPS. The result is used to construct all quick link URLs defined in the "links" section.
+
+Using YARN as an example, the following is what the quicklinks.json looks like:
+
+```json
+{
+ "name": "default",
+ "description": "default quick links configuration",
+ "configuration": {
+ "protocol": {
+ # type tells the UI which protocol to use if all checks pass.
+
+ # Use https_only or http_only with empty checks section to explicitly specify the type
+ "type":"https",
+ "checks":[ # There can be more than one check needed.
+ {
+ "property":"yarn.http.policy",
+ # The desired value either is a specific value for the property specified
+ # or whether the property value should exist or not_exist, be blank or not_blank
+ "desired":"HTTPS_ONLY",
+ "site":"yarn-site"
+ }
+ ]
+ },
+ #configuration for individual links
+ "links": [
+ {
+ "name": "resourcemanager_ui",
+ "label": "ResourceManager UI",
+ "requires_user_name": "false", #set this to true if UI should attach log in user name to the end of the quick link url
+ "url": "%@://%@:%@",
+
+ #section to calculate the port number.
+ "port":{
+ #use a property for the whole url if the service does not have a property for the port.
+ #Specify the regex so the url can be parsed for the port value.
+ "http_property": "yarn.timeline-service.webapp.address",
+ "http_default_port": "8080",
+ "https_property": "yarn.timeline-service.webapp.https.address",
+ "https_default_port": "8090",
+ "regex": "\\w*:(\\d+)",
+ "site": "yarn-site"
+ }
+ },
+ {
+ "name": "resourcemanager_logs",
+ "label": "ResourceManager logs",
+ "requires_user_name": "false",
+ "url": "%@://%@:%@/logs",
+ "port":{
+ "http_property": "yarn.timeline-service.webapp.address",
+ "http_default_port": "8088",
+ "https_property": "yarn.timeline-service.webapp.https.address",
+ "https_default_port": "8090",
+ "regex": "\\w*:(\\d+)",
+ "site": "yarn-site"
+ }
+ }
+ ]
+ }
+}
+```
+
+## REST API
+
+You can examine the quick link information made available to the Ambari web UI by running the following REST API as an HTTP GET request.
+
+REST API
+
+```
+/api/v1/stacks/[stack_name]/versions/[stack_version]/services/[service_name]/quicklinks?QuickLinkInfo/default=true&fields=*
+```
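+
+For example, against a local server (default admin credentials assumed):
+
+```bash
+curl -u admin:admin "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks?QuickLinkInfo/default=true&fields=*"
+```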
+
+Response sent to the Ambari web UI.
+
+```json
+{
+ "href" : "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks?QuickLinkInfo/default=true&fields=*",
+ "items" : [
+ {
+ "href" : "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks/quicklinks.json",
+ "QuickLinkInfo" : {
+ "default" : true,
+ "file_name" : "quicklinks.json",
+ "service_name" : "YARN",
+ "stack_name" : "HDP",
+ "stack_version" : "2.3",
+ "quicklink_data" : {
+ "QuickLinksConfiguration" : {
+ "description" : "default quick links configuration",
+ "name" : "default",
+ "configuration" : {
+ "protocol" : {
+ "type" : "https",
+ "checks" : [
+ {
+ "property" : "yarn.http.policy",
+ "desired" : "HTTPS_ONLY",
+ "site" : "yarn-site"
+ }
+ ]
+ },
+ "links" : [
+ {
+ "name" : "resourcemanager_jmx",
+ "label" : "ResourceManager JMX",
+ "url" : "%@://%@:%@/jmx",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "resourcemanager_logs",
+ "label" : "ResourceManager logs",
+ "url" : "%@://%@:%@/logs",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "resourcemanager_ui",
+ "label" : "ResourceManager UI",
+ "url" : "%@://%@:%@",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.resourcemanager.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.resourcemanager.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "thread_stacks",
+ "label" : "Thread Stacks",
+ "url" : "%@://%@:%@/stacks",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ }
+ ]
+ }
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+## Ambari Web UI
+
+The changes for the stack-driven quick links are transparent to the UI presentation. The quick links drop-down list behavior remains unchanged.
diff --git a/docs/ambari-design/service-dashboard/imgs/create-widget.png b/docs/ambari-design/service-dashboard/imgs/create-widget.png
new file mode 100644
index 0000000..87f901e
--- /dev/null
+++ b/docs/ambari-design/service-dashboard/imgs/create-widget.png
Binary files differ
diff --git a/docs/ambari-design/service-dashboard/imgs/gauge.png b/docs/ambari-design/service-dashboard/imgs/gauge.png
new file mode 100644
index 0000000..13ae128
--- /dev/null
+++ b/docs/ambari-design/service-dashboard/imgs/gauge.png
Binary files differ
diff --git a/docs/ambari-design/service-dashboard/imgs/graphs.png b/docs/ambari-design/service-dashboard/imgs/graphs.png
new file mode 100644
index 0000000..a9bf723
--- /dev/null
+++ b/docs/ambari-design/service-dashboard/imgs/graphs.png
Binary files differ
diff --git a/docs/ambari-design/service-dashboard/imgs/number.png b/docs/ambari-design/service-dashboard/imgs/number.png
new file mode 100644
index 0000000..adae257
--- /dev/null
+++ b/docs/ambari-design/service-dashboard/imgs/number.png
Binary files differ
diff --git a/docs/ambari-design/service-dashboard/imgs/widget-browser.png b/docs/ambari-design/service-dashboard/imgs/widget-browser.png
new file mode 100644
index 0000000..c205590
--- /dev/null
+++ b/docs/ambari-design/service-dashboard/imgs/widget-browser.png
Binary files differ
diff --git a/docs/ambari-design/service-dashboard/index.mdx b/docs/ambari-design/service-dashboard/index.mdx
new file mode 100644
index 0000000..1c48434
--- /dev/null
+++ b/docs/ambari-design/service-dashboard/index.mdx
@@ -0,0 +1,188 @@
+# Enhanced Service Dashboard
+
+This feature was first introduced in the Ambari 2.1.0 release; earlier Ambari releases do not support it. The cluster must be upgraded to Ambari 2.1.0 or above to use this feature.
+
+:::caution
+This document assumes that the service metrics are being exposed via Ambari. If this is not the case, please refer to the [Metrics](https://cwiki.apache.org/confluence/display/AMBARI/Metrics) document for more information.
+:::
+
+## Introduction
+
+The term Enhanced Service Dashboard refers to the ability to seamlessly add new widgets to the service summary page and heatmap page. This feature enables a stack service to be packaged with widget definitions in JSON format. These widget definitions appear as default widgets on the service summary page and heatmap page when the service is installed. In addition, new widgets for the service can be created at any time on the deployed cluster.
+
+Displaying default service dashboard widgets on service installation is a three-step process:
+
+1. Push service metrics to Ambari Metric Collector.
+
+2. Declare the service metrics in the service's metrics.json file in Ambari. This step is required to expose the metrics via the Ambari REST API.
+
+3. Define the service widgets in the widgets.json file.
+
+:::tip
+A widget gets the data to be charted from the service metrics. It is important to validate that the required service metrics are exposed from the Ambari metrics endpoint before defining a widget.
+:::
+
+## Service Dashboard Widgets
+
+Ambari supports four widget types:
+
+1. Graph
+2. Gauge
+3. Number
+4. Template
+
+### Graph
+
+A widget that displays line or area graphs derived from the values of one or more service metrics over a range of time.
+
+
+### Graph Widget Definition
+
+```json
+{
+ "widget_name": "Memory Utilization",
+ "description": "Percentage of total memory allocated to containers running in the cluster.",
+ "widget_type": "GRAPH",
+ "is_visible": true,
+ "metrics": [
+ {
+ "name": "yarn.QueueMetrics.Queue=root.AllocatedMB",
+ "metric_path": "metrics/yarn/Queue/root/AllocatedMB",
+ "service_name": "YARN",
+ "component_name": "RESOURCEMANAGER",
+ "host_component_criteria": "host_components/HostRoles/ha_state=ACTIVE"
+ },
+ {
+ "name": "yarn.QueueMetrics.Queue=root.AvailableMB",
+ "metric_path": "metrics/yarn/Queue/root/AvailableMB",
+ "service_name": "YARN",
+ "component_name": "RESOURCEMANAGER",
+ "host_component_criteria": "host_components/HostRoles/ha_state=ACTIVE"
+ }
+ ],
+ "values": [
+ {
+ "name": "Memory Utilization",
+ "value": "${(yarn.QueueMetrics.Queue=root.AllocatedMB / (yarn.QueueMetrics.Queue=root.AllocatedMB + yarn.QueueMetrics.Queue=root.AvailableMB)) * 100}"
+ }
+ ],
+ "properties": {
+ "display_unit": "%",
+ "graph_type": "LINE",
+ "time_range": "1"
+ }
+}
+```
+
+1. **widget_name:** The name that will be displayed in the UI for the widget.
+
+2. **description:** Description of the widget that will be displayed in the UI.
+
+3. **widget_type:** This information is used to create the widget from the metric data.
+
+4. **is_visible:** This boolean decides whether the widget is shown on the service summary page by default.
+
+5. **metrics:** An array that includes all metric definitions comprising the widget.
+
+6. **metrics/name:** Actual name of the metric as pushed to the sink or emitted as a JMX property by the service.
+
+7. **metrics/metric_path:** The path to which the above metrics/name is mapped in the service's metrics.json file. The metric value will be exposed in the metrics attribute of the service component or host component endpoint of the Ambari API at the same path.
+
+8. **metrics/service_name:** Name of the service containing the component emitting the metric.
+
+9. **metrics/component_name:** Name of the component emitting the metric.
+
+10. **metrics/host_component_criteria:** This is an optional field. Its presence means that the metric is a host component metric and not a service component metric. If a metric is intended to be queried on the host component endpoint then the criteria for choosing the host component need to be specified here. If this is left as a single space string then the first host component found will be queried for the metric.
+
+11. **values:** An array of datasets for the Graph widget. For other widget types this array always has one element.
+
+12. **values/name:** This field is used only for the Graph widget type. It shows up as the label name in the legend for the dataset shown in a Graph widget.
+
+13. **values/value:** The expression from which the value for the dataset is calculated. The expression contains references to declared metric names and constant numbers, which act as valid operands, along with a valid set of operators {+,-,*,/}. Parentheses are also permitted in the expression.
+
+14. **properties:** A set of properties specific to the widget type. For the Graph widget type it contains display_unit, graph_type and time_range. The time_range field is currently not honored in the UI.
+
+### Gauge
+
+A widget that displays a percentage calculated from the current value of one or more metrics.
+
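+For illustration, here is a minimal Gauge definition sketch modeled on the default HDFS capacity gauge; the metric names and threshold values are illustrative, not prescriptive. For a Gauge, `values/value` should evaluate to a fraction between 0 and 1, and the `warning_threshold` and `error_threshold` properties control when the gauge changes color:
+
+```json
+{
+  "widget_name": "HDFS Capacity Utilization",
+  "description": "Percentage of available space used in the DFS.",
+  "widget_type": "GAUGE",
+  "is_visible": true,
+  "metrics": [
+    {
+      "name": "dfs.FSNamesystem.CapacityUsedGB",
+      "metric_path": "metrics/dfs/FSNamesystem/CapacityUsedGB",
+      "service_name": "HDFS",
+      "component_name": "NAMENODE"
+    },
+    {
+      "name": "dfs.FSNamesystem.CapacityRemainingGB",
+      "metric_path": "metrics/dfs/FSNamesystem/CapacityRemainingGB",
+      "service_name": "HDFS",
+      "component_name": "NAMENODE"
+    }
+  ],
+  "values": [
+    {
+      "name": "HDFS Capacity Utilization",
+      "value": "${dfs.FSNamesystem.CapacityUsedGB / (dfs.FSNamesystem.CapacityUsedGB + dfs.FSNamesystem.CapacityRemainingGB)}"
+    }
+  ],
+  "properties": {
+    "warning_threshold": 0.75,
+    "error_threshold": 0.9
+  }
+}
+```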
+
+### Number
+
+A widget that displays a number, optionally with a unit, calculated from the current value of one or more metrics.
+
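+A hypothetical Number definition could look like the following sketch (the metric and threshold values are illustrative):
+
+```json
+{
+  "widget_name": "Corrupted Blocks",
+  "description": "Number of corrupted blocks reported by the NameNode.",
+  "widget_type": "NUMBER",
+  "is_visible": true,
+  "metrics": [
+    {
+      "name": "dfs.FSNamesystem.CorruptBlocks",
+      "metric_path": "metrics/dfs/FSNamesystem/CorruptBlocks",
+      "service_name": "HDFS",
+      "component_name": "NAMENODE"
+    }
+  ],
+  "values": [
+    {
+      "name": "Corrupted Blocks",
+      "value": "${dfs.FSNamesystem.CorruptBlocks}"
+    }
+  ],
+  "properties": {
+    "warning_threshold": 0,
+    "error_threshold": 50
+  }
+}
+```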
+
+
+### Template
+
+A widget that displays one or more numbers, calculated from the current values of one or more metrics, along with embedded strings.
+
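+As a purely illustrative sketch (the metric here is an assumption), a Template widget embeds the computed values inside a text template in `values/value`:
+
+```json
+{
+  "widget_name": "Storage Summary",
+  "description": "Summary of DFS storage usage.",
+  "widget_type": "TEMPLATE",
+  "is_visible": true,
+  "metrics": [
+    {
+      "name": "dfs.FSNamesystem.CapacityUsedGB",
+      "metric_path": "metrics/dfs/FSNamesystem/CapacityUsedGB",
+      "service_name": "HDFS",
+      "component_name": "NAMENODE"
+    }
+  ],
+  "values": [
+    {
+      "name": "Storage Summary",
+      "value": "${dfs.FSNamesystem.CapacityUsedGB} GB of DFS capacity is in use"
+    }
+  ]
+}
+```
+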
+**Aggregator Function for the Metric**
+
+Aggregator functions apply only to service component level metrics and are not supported for host component level metrics.
+
+The Ambari Metrics System supports four types of aggregation:
+
+1. **max**: Maximum value of the metric across all host components
+2. **min**: Minimum value of the metric across all host components
+3. **avg**: Average value of the metric across all host components
+4. **sum**: Sum of the metric values recorded for each host component
+
+By default the Ambari Metrics System uses the average aggregator function while computing the value for a service component metric, but this behavior can be overridden by suffixing the metric name with the aggregator function name (`._max`, `._min`, `._avg` or `._sum`). For example, the following widget definition uses the `._sum` suffix to sum updatesBlockedTime across all RegionServers:
+
+```json
+{
+ "widget_name": "Blocked Updates",
+ "description": "Number of milliseconds updates have been blocked so the memstore can be flushed.",
+ "default_section_name": "HBASE_SUMMARY",
+ "widget_type": "GRAPH",
+ "is_visible": true,
+ "metrics": [
+ {
+ "name": "regionserver.Server.updatesBlockedTime._sum",
+ "metric_path": "metrics/hbase/regionserver/Server/updatesBlockedTime._sum",
+ "service_name": "HBASE",
+ "component_name": "HBASE_REGIONSERVER"
+ }
+ ],
+ "values": [
+ {
+ "name": "Updates Blocked Time",
+ "value": "${regionserver.Server.updatesBlockedTime._sum}"
+ }
+ ],
+ "properties": {
+ "display_unit": "ms",
+ "graph_type": "LINE",
+ "time_range": "1"
+ }
+}
+```
+
+## Widget Operations
+
+A service that provides widgets.json and metrics.json files also gains the following abilities:
+
+1. **Widget Browser** for performing widget related operations.
+
+2. **Create widget wizard** for creating new desired service widgets in a cluster.
+
+3. **Edit widget wizard** for editing service widgets post creation.
+
+## Widget Browser
+
+The Widget Browser is the place from which widget actions can be performed, like adding/removing a widget from the dashboard, sharing a widget with other users, and deleting a widget. While creating a new widget, the user can choose whether or not to share the widget with other users. All widgets that are created by the user, shared with the user, or defined in the stack as default service widgets will be available in the user's widget browser.
+
+
+### Create Widget Wizard
+
+A custom widget can be created from the exposed service metrics using the three-step Create Widget wizard.
+
+
+## Using Enhanced Service Dashboard feature
+
+If an existing service in Ambari is already pushing its metrics to the Ambari Metrics Collector then minimal work is needed. This includes adding metrics.json and widgets.json files for the service, and might include making changes to the metainfo.xml file.
+
+:::tip
+[AMBARI-9910](https://issues.apache.org/jira/browse/AMBARI-9910) added widgets to the Accumulo service page. This work can be referred to as an example of using the Enhanced Service Dashboard feature.
+:::
\ No newline at end of file
diff --git a/docs/ambari-design/stack-and-services/custom-services.md b/docs/ambari-design/stack-and-services/custom-services.md
new file mode 100644
index 0000000..561da65
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/custom-services.md
@@ -0,0 +1,1059 @@
+# Custom Services
+
+There are many aspects to creating custom services. At its most basic, a service must include its metainfo.xml and command script. It must also be packaged so that it can be added to a cluster. The sub-sections that follow describe optional elements of the service definition which can be included.
+
+## Defining a Custom Service
+
+
+
+### Service Metainfo and Component Category
+
+#### metainfo.xml
+
+The `metainfo.xml` file in a Service describes the service, the components of the service and the management scripts to use for executing commands. A component of a service must be either a **MASTER**, **SLAVE** or **CLIENT** category. The category determines the default lifecycle commands the component must support.
+
+For each Component you must specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support, depending on the component's category.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The script type is used to know how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands that your component needs to support.
+
+For example, the YARN service describes the ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and its command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. The command script is **PYTHON**, and it implements the default lifecycle commands as Python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command, **DECOMMISSION**, is defined, which means there is also a **decommission** method in that Python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+### Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV". This service includes MASTER, SLAVE and CLIENT components.
+
+#### Create a Custom Service
+
+1. Create a directory named `SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```bash
+mkdir SAMPLESRV
+cd SAMPLESRV
+```
+2. Within the `SAMPLESRV` directory, create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily>
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+3. In the above, the service name is **SAMPLESRV**, and it contains:
+
+   - one **MASTER** component **SAMPLESRV_MASTER**
+   - one **SLAVE** component **SAMPLESRV_SLAVE**
+   - one **CLIENT** component **SAMPLESRV_CLIENT**
+4. Next, let's create that command script. Create the directory `SAMPLESRV/package/scripts` that we designated in the service metainfo:
+
+```bash
+mkdir -p package/scripts
+cd package/scripts
+```
+5. Within the scripts directory, create the `.py` command script files mentioned in the metainfo. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+  def install(self, env):
+    print('Install the Sample Srv Master')
+  def configure(self, env):
+    print('Configure the Sample Srv Master')
+  def stop(self, env):
+    print('Stop the Sample Srv Master')
+  def start(self, env):
+    print('Start the Sample Srv Master')
+  def status(self, env):
+    print('Status of the Sample Srv Master')
+
+if __name__ == "__main__":
+  Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+  def install(self, env):
+    print('Install the Sample Srv Slave')
+  def configure(self, env):
+    print('Configure the Sample Srv Slave')
+  def stop(self, env):
+    print('Stop the Sample Srv Slave')
+  def start(self, env):
+    print('Start the Sample Srv Slave')
+  def status(self, env):
+    print('Status of the Sample Srv Slave')
+
+if __name__ == "__main__":
+  Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+  def install(self, env):
+    print('Install the Sample Srv Client')
+  def configure(self, env):
+    print('Configure the Sample Srv Client')
+
+if __name__ == "__main__":
+  SampleClient().execute()
+```
+
+#### Implementing a Custom Command
+
+1. Browse to the `SAMPLESRV` directory, and edit the `metainfo.xml` file that describes the service. For example, adding a custom command to the SAMPLESRV_CLIENT:
+
+```xml
+
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+```
+2. Next, let's implement that custom command by editing the `package/scripts/sample_client.py` file that we designated in the service metainfo.
+
+
+```python
+import sys
+from resource_management import *
+
+class SampleClient(Script):
+  def install(self, env):
+    print('Install the Sample Srv Client')
+  def configure(self, env):
+    print('Configure the Sample Srv Client')
+  def somethingcustom(self, env):
+    print('Something custom')
+
+if __name__ == "__main__":
+  SampleClient().execute()
+```
+
+#### Adding Configs to the Custom Service
+
+In this example, we will add a configuration type "test-config" to our SAMPLESRV.
+
+1. Modify the metainfo.xml. Adding the configuration files to the CLIENT component makes them available in the client tarball downloaded from Ambari.
+
+```xml
+<component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>test-config.xml</fileName>
+ <dictionaryName>test-config</dictionaryName>
+ </configFile>
+ </configFiles>
+</component>
+```
+2. Create a directory for the configuration dictionary file, `SAMPLESRV/configuration`:
+
+```bash
+mkdir -p configuration
+cd configuration
+```
+3. Create the `test-config.xml` file. For example:
+
+```xml
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<configuration>
+ <property>
+ <name>some.test.property</name>
+ <value>this.is.the.default.value</value>
+ <description>This is a test description.</description>
+ </property>
+ <property>
+ <name>another.test.property</name>
+ <value>5</value>
+ <description>This is a second test description.</description>
+ </property>
+</configuration>
+
+```
+4. There is an optional setting "configuration-dir". Custom services should either not include the setting or should leave it as the default value "configuration".
+
+```xml
+<configuration-dir>configuration</configuration-dir>
+```
+5. Configuration dependencies can be included in the metainfo.xml in a `configuration-dependencies` section. This section can be added to the service as a whole or to a particular component. One implication of this dependency is that whenever the config-type is updated, Ambari automatically marks the component or service as requiring restart.
+
+For example, HIVE defines component-level configuration dependencies for the HIVE_METASTORE component:
+
+```xml
+ <component>
+ <name>HIVE_METASTORE</name>
+ <displayName>Hive Metastore</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <reassignAllowed>true</reassignAllowed>
+ <clientsToUpdateConfigs></clientsToUpdateConfigs>
+... ...
+ <configuration-dependencies>
+ <config-type>hive-site</config-type>
+ </configuration-dependencies>
+ </component>
+```
+
+HIVE also defines service-level configuration dependencies:
+
+```xml
+<configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hive-log4j</config-type>
+ <config-type>hive-exec-log4j</config-type>
+ <config-type>hive-env</config-type>
+ <config-type>hivemetastore-site.xml</config-type>
+ <config-type>webhcat-site</config-type>
+ <config-type>webhcat-env</config-type>
+ <config-type>parquet-logging</config-type>
+ <config-type>ranger-hive-plugin-properties</config-type>
+ <config-type>ranger-hive-audit</config-type>
+ <config-type>ranger-hive-policymgr-ssl</config-type>
+ <config-type>ranger-hive-security</config-type>
+ <config-type>mapred-site</config-type>
+ <config-type>application.properties</config-type>
+ <config-type>druid-common</config-type>
+ </configuration-dependencies>
+```
+
+## Packaging and Installing Custom Services
+
+### Introduction
+
+Custom services in Apache Ambari can be packaged and installed in many ways. Ideally, they should all be packaged and installed in the same manner. This document describes how to package and install custom services using Extensions and Management Packs. Using this approach, the custom service definitions do not get inserted under the stack versions services directory. This keeps the stack clean and allows users to easily see which services were installed by which package (stack or extension).
+
+### Management Packs
+
+A [management pack](./management-packs.md) is a mechanism for installing stacks, extensions and custom services. A management pack is packaged as a tar.gz file which expands into a directory that includes an mpack.json file and the stack, extension and custom service definitions that it contains.
+
+#### Example Structure
+
+```
+myext-mpack1.0.0.0
+├── mpack.json
+└── extensions
+```
+
+#### mpack.json Format
+
+The mpack.json file allows you to specify the name, version and description of the management pack, along with the prerequisites for installing it. For extension management packs, the only important prerequisite is the min_ambari_version. The most important part is the artifacts section. For the purpose here, the artifact type will always be "extension-definitions". You can provide any name for the artifact, and you can potentially change the source_dir if you wish to package your extensions under a different directory than "extensions". For consistency, it is recommended that you use the default source_dir "extensions".
+
+```json
+{
+  "type": "full-release",
+  "name": "myextension-mpack",
+  "version": "1.0.0.0",
+  "description": "MyExtension Management Pack",
+  "prerequisites": {
+    "min_ambari_version": "2.4.0.0"
+  },
+  "artifacts": [
+    {
+      "name": "myextension-extension-definitions",
+      "type": "extension-definitions",
+      "source_dir": "extensions"
+    }
+  ]
+}
+```
+
+### Extensions
+
+An [extension](./extensions.md) is a collection of one or more custom services which are packaged together. Much like stacks, each extension has a name, which needs to be unique in the cluster. It also has a version folder to distinguish different releases of the extension; these go in the resources/extensions folder with `<name>/<version>` sub-directories.
+
+An extension version is similar to a stack version but it only includes the metainfo.xml and the services directory. This means that the alerts, kerberos, metrics, role command order and widgets files are not supported and should be included at the service level. In addition, the repositories, hooks, configurations, and upgrades directories are not supported although upgrade support can be added at the service level.
+
+#### Extension Structure
+
+```
+MY_EXT
+└── 1.0
+    ├── metainfo.xml
+    └── services
+        ├── SERVICEA
+        └── ...
+```
+
+#### Extension metainfo.xml Format
+
+The extension metainfo.xml is very simple: it just specifies the minimum stack versions which are supported.
+
+```xml
+<metainfo>
+  <prerequisites>
+    <min-stack-versions>
+      <stack>
+        <name>BIGTOP</name>
+        <version>1.0.*</version>
+      </stack>
+    </min-stack-versions>
+  </prerequisites>
+</metainfo>
+```
+
+#### Extension Inheritance
+
+Extension versions can _extend_ other Extension versions in order to share command scripts and configurations. This reduces duplication of code across Extensions with the following:
+
+* add new Services in the child Extension version (not in the parent Extension version)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, **MyExtension 2.0** could extend **MyExtension 1.0** so that only the changes applicable to the **MyExtension 2.0** extension are present in that Extension definition. This extension is declared in the metainfo.xml for **MyExtension 2.0**:
+
+```xml
+<metainfo>
+ <extends>1.0</extends>
+```
+
+### Extension Management Packs Structure
+
+```
+myext-mpack1.0.0.0
+├── mpack.json
+└── extensions
+    └── MY_EXT
+        ├── 1.0
+        │   ├── metainfo.xml
+        │   └── services
+        │       └── SERVICEA
+        └── 2.0
+            ├── metainfo.xml
+            └── services
+                ├── SERVICEA
+                └── …
+```
+
+### Installing Management Packs
+
+In order to install an extension management pack, you run the following command, with or without the "-v" option:
+
+```bash
+ambari-server install-mpack --mpack=/dir/to/myext-mpack-1.0.0.0.tar.gz -v
+```
+
+This will check whether the management pack's prerequisites are met (min_ambari_version). In addition it will check for errors in the management pack format. Assuming everything is correct, the management pack will be extracted in `/var/lib/ambari-server/resources/mpacks`.
+
+It will then create symlinks from /var/lib/ambari-server/resources/extensions for each extension version under /var/lib/ambari-server/resources/mpacks/:
+
+Extension Directory | Target Management Pack Symlink
+--------------------|------------------------------------------------------------------
+resources/extensions/MY_EXT/1.0 | resources/mpacks/myext-mpack1.0.0.0/extensions/MY_EXT/1.0
+resources/extensions/MY_EXT/2.0 | resources/mpacks/myext-mpack1.0.0.0/extensions/MY_EXT/2.0
+
+### Verifying the Extension Installation
+
+Once you have installed the extension management pack, you can restart ambari-server.
+
+```bash
+ambari-server restart
+```
+
+After ambari-server has been restarted, you will see your extension listed in the extension table of the Ambari DB:
+
+```
+ambari=> select * from extension;
+ extension_id | extension_name | extension_version
+--------------+----------------+-------------------
+            1 | EXT            | 1.0
+(1 row)
+```
+
+You can also query for extensions by calling REST APIs (the server address is shown below as a `<server>:<port>` placeholder).
+
+```
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions'
+
+{
+  "href" : "http://<server>:<port>/api/v1/extensions",
+  "items" : [
+    {
+      "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+      "Extensions" : {
+        "extension_name" : "EXT"
+      }
+    }
+  ]
+}
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT'
+
+{
+  "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+  "Extensions" : {
+    "extension_name" : "EXT"
+  },
+  "versions" : [
+    {
+      "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0",
+      "Versions" : {
+        "extension_name" : "EXT",
+        "extension_version" : "1.0"
+      }
+    }
+  ]
+}
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT/versions/1.0'
+
+{
+  "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0",
+  "Versions" : {
+    "extension-errors" : [ ],
+    "extension_name" : "EXT",
+    "extension_version" : "1.0",
+    "parent_extension_version" : null,
+    "valid" : true
+  }
+}
+```
+
+### Linking Extensions to the Stack
+
+Once you have verified that Ambari knows about your extension, the next step is linking the extension version to the current stack version. Linking adds the extension version's services to the list of stack version services. This allows you to install the extension services on the cluster. Linking an extension version to a stack version will first verify whether the extension supports the given stack version. This is determined by the stack versions listed in the extension version's metainfo.xml.
+
+The following REST API call will link an extension version to a stack version. In this example it links EXT/1.0 to the BIGTOP/1.0 stack version.
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"ExtensionLink": {"stack_name": "BIGTOP", "stack_version": "1.0", "extension_name": "EXT", "extension_version": "1.0"}}' http://<server>:<port>/api/v1/links
+```
+
+You can examine links (or extension links) either in the Ambari DB or with REST API calls.
+
+```
+ambari=> select * from extensionlink;
+ link_id | stack_id | extension_id
+---------+----------+--------------
+       1 |        2 |            1
+(1 row)
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/links'
+
+{
+  "href" : "http://<server>:<port>/api/v1/links",
+  "items" : [
+    {
+      "href" : "http://<server>:<port>/api/v1/links/1",
+      "ExtensionLink" : {
+        "extension_name" : "EXT",
+        "extension_version" : "1.0",
+        "link_id" : 1,
+        "stack_name" : "BIGTOP",
+        "stack_version" : "1.0"
+      }
+    }
+  ]
+}
+```
+
+## Role Command Order
+
+Each service can define its own role command order by including a role_command_order.json file in its service folder. The service should only specify the relationships of its own components to other components. In other words, if a service only includes COMP_X, it should only list dependencies related to COMP_X. For example, if COMP_X must start after the NameNode starts, and the NameNode must wait for COMP_X to stop before stopping itself, the following would be included in the role command order:
+
+```json
+{
+ "_comment" : "Record format:",
+ "_comment" : "blockedRole-blockedCommand: [blockerRole1-blockerCommand1, blockerRole2-blockerCommand2, ...]",
+ "general_deps" : {
+ "_comment" : "dependencies for all cases"
+ },
+ "_comment" : "Dependencies that are used when GLUSTERFS is not present in cluster",
+ "optional_no_glusterfs": {
+ "COMP_X-START": ["NAMENODE-START"],
+ "NAMENODE-STOP": ["COMP_X-STOP"]
+ }
+}
+```
+
+The entries in the service's role command order will be merged with the role command order defined in the stack. For example, since the stack already has a dependency for NAMENODE-STOP, in the example above COMP_X-STOP would be added to the rest of the NAMENODE-STOP dependencies and the COMP_X-START dependency on NAMENODE-START would be added as a new dependency.
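+
+To make the merge concrete, suppose (hypothetically) that the stack's role_command_order.json already lists `"NAMENODE-STOP": ["RESOURCEMANAGER-STOP"]`. After merging with the service's file above, the effective ordering would be:
+
+```json
+{
+  "optional_no_glusterfs": {
+    "COMP_X-START": ["NAMENODE-START"],
+    "NAMENODE-STOP": ["RESOURCEMANAGER-STOP", "COMP_X-STOP"]
+  }
+}
+```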
+
+**Sections**
+Ambari uses only the following sections:
+
+Section Name | When Used
+-------------|------------
+general_deps | Command orders are applied in all situations
+optional_glusterfs | Command orders are applied when the cluster has an instance of the GLUSTERFS service
+optional_no_glusterfs | Command orders are applied when the cluster does not have an instance of the GLUSTERFS service
+namenode_optional_ha | Command orders are applied when the HDFS service is installed and a JOURNALNODE component exists (HDFS HA is enabled)
+resourcemanager_optional_ha | Command orders are applied when the YARN service is installed and multiple RESOURCEMANAGER host components exist (YARN HA is enabled)
+
+**Commands**
+The commands currently supported by Ambari are:
+
+* INSTALL
+* UNINSTALL
+* START
+* RESTART
+* STOP
+* EXECUTE
+* ABORT
+* UPGRADE
+* SERVICE_CHECK
+* CUSTOM_COMMAND
+* ACTIONEXECUTE
+
+## Service Advisor
+
+Each custom service can provide a service advisor as a Python script named service-advisor.py in its service folder. A Service Advisor allows custom services to integrate into the stack advisor behavior, which otherwise applies only to the services defined within the stack.
+
+### Service Advisor Inheritance
+
+Unlike the stack-advisor scripts, the service-advisor scripts do not automatically extend the parent service's service-advisor scripts. The service-advisor script needs to explicitly extend its parent service's service-advisor script. The following code sample shows how you would refer to a parent's service_advisor.py. In this case it is extending the root service_advisor.py file in the resources/stacks directory.
+
+**Sample service-advisor.py file inheritance**
+
+```python
+import imp
+import os
+import traceback
+
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+STACKS_DIR = os.path.join(SCRIPT_DIR, '../../../stacks/')
+PARENT_FILE = os.path.join(STACKS_DIR, 'service_advisor.py')
+
+try:
+  with open(PARENT_FILE, 'rb') as fp:
+    service_advisor = imp.load_module('service_advisor', fp, PARENT_FILE, ('.py', 'rb', imp.PY_SOURCE))
+except Exception as e:
+  traceback.print_exc()
+  print("Failed to load parent")
+
+class HAWQ200ServiceAdvisor(service_advisor.ServiceAdvisor):
+```
+
+### Service Advisor Behavior
+
+Like the stack advisors, service advisors provide information on four important aspects of the service:
+
+1. Recommend the layout of the service on the cluster
+2. Recommend service configurations
+3. Validate the layout of the service on the cluster
+4. Validate service configurations
+
+By providing a service-advisor.py file, one can dynamically control each of the above for the service.
+
+The [main interface for the service-advisor](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51) scripts contains documentation on how each of the above are called, and what data is provided.
+
+**Base service_advisor.py from resources/stacks**
+
+```python
+
+class ServiceAdvisor(DefaultStackAdvisor):
+
+ def colocateService(self, hostsComponentsMap, serviceComponents):
+ pass
+
+ def getServiceConfigurationRecommendations(self, configurations, clusterSummary, services, hosts):
+ pass
+
+ def getServiceComponentLayoutValidations(self, services, hosts):
+ return []
+
+ def getServiceConfigurationsValidationItems(self, configurations, recommendedDefaults, services, hosts):
+ return []
+```
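+
+As an illustrative sketch (not taken from any shipped service), a custom service's advisor could override these hooks; the `putProperty` helper shown follows the pattern used by the stack advisors, and the config-type and property names are assumptions:
+
+```python
+class SAMPLESRVServiceAdvisor(service_advisor.ServiceAdvisor):
+
+  def getServiceConfigurationRecommendations(self, configurations, clusterSummary, services, hosts):
+    # Hypothetical example: recommend a default value for a property
+    # in this service's test-config config-type.
+    putTestConfigProperty = self.putProperty(configurations, "test-config", services)
+    putTestConfigProperty("some.test.property", "recommended.value")
+
+  def getServiceComponentLayoutValidations(self, services, hosts):
+    # Return a list of validation items; an empty list means no issues found.
+    return []
+```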
+
+**Examples**
+[Service Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51)
+[HAWQ 2.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0/service_advisor.py)
+[PXF 3.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/PXF/3.0.0/service_advisor.py)
+
+## Service Inheritance
+
+A service can inherit through the stack but may also inherit directly from common-services. This is declared in the metainfo.xml:
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <extends>common-services/HDFS/2.1.0.2.0</extends>
+```
+
+When a service inherits from another service version, how its defining files and directories are inherited follows a number of different patterns.
+
+The following files if defined in the current service version replace the definitions from the parent service version:
+
+* alerts.json
+* kerberos.json
+* metrics.json
+* role_command_order.json
+* service_advisor.py
+* widgets.json
+
+Note: All the services' role command orders will be merged with the stack's role command order to provide a master list.
+
+The following files if defined in the current service version are merged with the parent service version (supports removing/deleting parent entries):
+
+* quicklinks/quicklinks.json
+* themes/theme.json
+
+The following directories if defined in the current service version replace those from the parent service version:
+
+* packages
+* upgrades
+
+This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The configurations directory in the current service version merges its configuration files with those from the parent service version. Configuration files defined at any level can be omitted from the service's configurations by specifying the config-type in the excluded-config-types list:
+
+```xml
+ <excluded-config-types>
+ <config-type>storm-site</config-type>
+ </excluded-config-types>
+```
+
+For an individual configuration file (or configuration type) like core-site.xml, it will by default merge with the configuration type from the parent. If the `supports_do_not_extend` attribute is specified as `true`, the configuration type will **not** be merged.
+
+```xml
+<configuration supports_do_not_extend="true">
+```
+
+### Inheritance and the Service MetaInfo
+
+By default, all attributes of the service and its components defined in the metainfo.xml of the current service version replace those of the parent service version, except as specified in the sections that follow.
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <displayName>HDFS</displayName>
+ <comment>Apache Hadoop Distributed File System</comment>
+ <version>2.1.0.2.0</version>
+ <components>
+ <component>
+ <name>NAMENODE</name>
+ <displayName>NameNode</displayName>
+ <category>MASTER</category>
+ <cardinality>1-2</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <reassignAllowed>true</reassignAllowed>
+ <commandScript>
+ <script>scripts/namenode.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>1800</timeout>
+ </commandScript>
+ ...
+```
+
+The custom commands defined in the metainfo.xml of the current service version are merged with those of the parent service version.
+
+```xml
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/namenode.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+```
+
+The configuration dependencies defined in the metainfo.xml of the current service version are merged with those of the parent service version.
+
+```xml
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ ...
+ </configuration-dependencies>
+
+```
+
+The components defined in the metainfo.xml of the current service are merged with those of the parent (supports delete).
+
+```xml
+ <component>
+ <name>ZKFC</name>
+ <displayName>ZKFailoverController</displayName>
+ <category>SLAVE</category>
+```
+
+## Service Upgrade
+
+Each custom service can define its upgrade within its service definition. This allows the custom service to be integrated within the [stack's upgrade](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-StackUpgrades).
+
+### Service Upgrade Packs
+
+Each service can define _upgrade-packs_, which are XML files describing the upgrade process of that particular service and how the upgrade pack relates to the overall stack upgrade-packs. These _upgrade-pack_ XML files are placed in the service's _upgrades/_ folder in separate sub-folders specific to the stack-version they are meant to extend. Some examples of this can be seen in the testing code.
+
+**Examples**
+
+- [Upgrades folder](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/)
+- [Upgrade-pack XML](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml)
+
+### Matching Upgrade Packs
+
+Each upgrade-pack that the service defines should match the file name of an upgrade-pack defined by a particular stack version. For example, in the testing code, HDP 2.2.0 has an [upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.2.0/upgrades/upgrade_test_15388.xml) upgrade-pack. The HDFS service defines an extension to that upgrade-pack, [HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml). In this case the service's extension is defined in the HDP/2.0.5 stack. It extends the HDP/2.2.0 upgrade-pack because it is defined in the upgrades/HDP/2.2.0 directory. Finally, the file name of the service's extension, upgrade_test_15388.xml, matches the name of the upgrade-pack in HDP/2.2.0/upgrades.
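+
+The resulting layout, using the paths from the example above, is:
+
+```
+stacks/HDP/2.2.0/upgrades/upgrade_test_15388.xml                          <- stack upgrade-pack
+stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml  <- service extension
+```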
+
+**Upgrade XML Format**
+
+The file format for the service is much the same as that of the stack. The target, target-stack and type attributes should all be the same as the stack's upgrade-pack.
+
+**Prerequisite Checks**
+
+The service is able to add its own prerequisite checks.
+
+**General Attributes and Prerequisite Checks**
+```xml
+<upgrade xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <target>2.4.*</target>
+ <target-stack>HDP-2.4.0</target-stack>
+ <type>ROLLING</type>
+ <prerequisite-checks>
+ <check>org.apache.ambari.server.checks.FooCheck</check>
+ </prerequisite-checks>
+```
+
+**Order Section**
+
+The order section of the upgrade-pack consists of group elements, just like the stack's upgrade-pack. The key difference is defining how these groups relate to groups in the stack's upgrade-pack or other service upgrade-packs. In the first example we are referencing the PRE_CLUSTER group and adding a new execute-stage for the service FOO. The entry is added after the execute-stage for HDFS, based on the add-after-group-entry tag.
+
+**Order Section - Add After Group Entry**
+```xml
+
+<order>
+ <group xsi:type="cluster" name="PRE_CLUSTER" title="Pre {{direction.text.proper}}">
+ <add-after-group-entry>HDFS</add-after-group-entry>
+ <execute-stage service="FOO" component="BAR" title="Backup FOO">
+ <task xsi:type="manual">
+ <message>Back FOO up.</message>
+ </task>
+ </execute-stage>
+ </group>
+
+```
+
+The same syntax can be used to order other sections like service check priorities and group services.
+
+**Order Section - Further Add After Group Entry Examples**
+```xml
+<group name="SERVICE_CHECK1" title="All Service Checks" xsi:type="service-check">
+ <add-after-group-entry>ZOOKEEPER</add-after-group-entry>
+ <priority>
+ <service>HBASE</service>
+ </priority>
+</group>
+
+<group name="CORE_MASTER" title="Core Masters">
+ <add-after-group-entry>YARN</add-after-group-entry>
+ <service name="HBASE">
+ <component>HBASE_MASTER</component>
+ </service>
+</group>
+```
+
+It is also possible to add new groups and order them after other groups in the stack's upgrade-packs. In the following example, we are adding the FOO group after the HIVE group using the add-after-group tag.
+
+**Order Section - Add After Group**
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO">
+ <component>BAR</component>
+ </service>
+</group>
+```
+
+You could also include both the add-after-group and the add-after-group-entry tags in the same group. This will create a new group if it doesn't already exist and will order it after the add-after-group's group name. The add-after-group-entry will determine the internal ordering of that group's services, priorities or execute stages.
+
+**Order Section - Add After Group with Group Entry**
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <add-after-group-entry>FOO</add-after-group-entry>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO2">
+ <component>BAR2</component>
+ </service>
+</group>
+```
+
+**Processing Section**
+
+The processing section of the upgrade-pack remains the same as what it would be in the stack's upgrade-pack.
+
+**Processing Section**
+```xml
+ <processing>
+ <service name="FOO">
+ <component name="BAR">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ <component name="BAR2">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ </service>
+ </processing>
+```
+
+## Custom Service Repo
+
+Each service can define its own repo info by adding repos/repoinfo.xml in its service folder. The service-specific repo will be included in the list of repos defined for the stack.
+
+**Example**: https://github.com/apache/ambari/blob/trunk/contrib/management-packs/microsoft-r_mpack/src/main/resources/custom-services/MICROSOFT_R_SERVER/8.0.5/repos/repoinfo.xml
+
+```xml
+
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://cust.service.lab.com/Services/centos6/1.1/myservices</baseurl>
+ <repoid>CUSTOM-1.1</repoid>
+ <reponame>CUSTOM</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+## Custom Services - Additional Configuration
+
+### Alerts
+
+Each service is capable of defining which alerts Ambari should track by providing an [alerts.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/alerts.json) file.
+
+### Kerberos
+
+Ambari is capable of enabling and disabling Kerberos for a cluster. To inform Ambari of the identities and configurations to be used for the service and its components, each service can provide a _kerberos.json_ file.
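+
+The general shape of a kerberos.json file is sketched below; the service, identity and principal names here are purely illustrative. See the HDFS [kerberos.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/kerberos.json) for a complete, real example.
+
+```json
+{
+  "services": [
+    {
+      "name": "SAMPLESRV",
+      "identities": [
+        {
+          "name": "samplesrv",
+          "principal": {
+            "value": "samplesrv/_HOST@${realm}",
+            "type": "service"
+          },
+          "keytab": {
+            "file": "${keytab_dir}/samplesrv.service.keytab"
+          }
+        }
+      ]
+    }
+  ]
+}
+```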
+
+### Metrics
+
+Ambari provides the [Ambari Metrics System ("AMS")](../metrics/index.md) service for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+Each service can define which metrics AMS should collect and provide by defining a [metrics.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metrics.json) file.
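+
+As a rough sketch (assuming a hypothetical SAMPLESRV_MASTER component that pushes a single metric), a metrics.json file maps Ambari metric paths to the metric names emitted by the service:
+
+```json
+{
+  "SAMPLESRV_MASTER": {
+    "Component": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/samplesrv/requests_total": {
+              "metric": "samplesrv.requests_total",
+              "pointInTime": true,
+              "temporal": true
+            }
+          }
+        }
+      }
+    ]
+  }
+}
+```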
+
+Read more about the metrics.json file format in the [Stack Defined Metrics](../metrics/stack-defined-metrics.md) page.
+
+### Quick Links
+
+A service can add a list of quick links to the Ambari web UI by adding a quick links JSON file. Ambari server parses the quick links JSON file and provides its content to the Ambari web UI. The UI can calculate quick link URLs based on that information and populate the quick links drop-down list accordingly.
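+
+A minimal quick links JSON sketch is shown below; the property names mirror the YARN quick links shown earlier in this document, while the service, site and port values are illustrative:
+
+```json
+{
+  "name": "default",
+  "description": "default quick links configuration",
+  "configuration": {
+    "protocol": {
+      "type": "HTTP_ONLY"
+    },
+    "links": [
+      {
+        "name": "samplesrv_ui",
+        "label": "Sample Srv UI",
+        "url": "%@://%@:%@",
+        "port": {
+          "http_property": "samplesrv.webapp.address",
+          "http_default_port": "8080",
+          "regex": "\\w*:(\\d+)",
+          "site": "samplesrv-site"
+        }
+      }
+    ]
+  }
+}
+```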
+
+### Widgets
+
+Each service can define which widgets and heat maps show up by default on the service summary page by defining a [widgets.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/widgets.json) file.
diff --git a/docs/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md b/docs/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md
new file mode 100644
index 0000000..3b76f47
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md
@@ -0,0 +1,644 @@
+# Defining a Custom Stack and Services
+
+## Background
+
+The Stack definitions can be found in the source tree at [/ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks). After you install the Ambari Server, the Stack definitions can be found at `/var/lib/ambari-server/resources/stacks`.
+
+## Stack Properties
+
+The stack must contain or inherit a properties directory which contains two files: [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) and [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json). The properties directory must be at the root stack version level and must not be included in the other stack versions. This [directory](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) is new in Ambari 2.4.
+
+The stack_features.json contains a list of features that are included in Ambari and allows the stack to specify which versions of the stack include those features. The list of features is determined by the particular Ambari release. The reference list for a particular Ambari version should be found in the [HDP/2.0.6/properties/stack_features.json](https://github.com/apache/ambari/blob/branch-2.4/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) in the branch for that Ambari release. Each feature has a name and description, and the stack can provide the minimum and maximum version where that feature is supported.
+
+```json
+{
+  "stack_features": [
+    {
+      "name": "snappy",
+      "description": "Snappy compressor/decompressor support",
+      "min_version": "2.0.0.0",
+      "max_version": "2.2.0.0"
+    },
+    ...
+  ]
+}
+```
+
+The stack_tools.json includes the name and location where the stack_selector and conf_selector tools are installed.
+
+```json
+{
+  "stack_selector": ["hdp-select", "/usr/bin/hdp-select", "hdp-select"],
+  "conf_selector": ["conf-select", "/usr/bin/conf-select", "conf-select"]
+}
+```
+
+For more information see the [Stack Properties](./stack-properties.md) wiki page.
+
+## Structure
+
+The structure of a Stack definition is as follows:
+
+```
+|_ stacks
+   |_ <stack_name>
+      |_ <stack_version>
+         metainfo.xml
+         |_ hooks
+         |_ repos
+            repoinfo.xml
+         |_ services
+            |_ <service_name>
+               metainfo.xml
+               metrics.json
+               |_ configuration
+                  {configuration files}
+               |_ package
+                  {files, scripts, templates}
+```
+
+## Defining a Service and Components
+
+**metainfo.xml**
+
+The `metainfo.xml` file in a Service describes the service, the components of the service and the management scripts to use for executing commands. A component of a service can be either a **MASTER**, **SLAVE** or **CLIENT** category. The category determines the default lifecycle commands the component must support.
+
+For each Component you specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The script type is used to know how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands that your component needs to support.
+
+For example, the YARN service describes the ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and its command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. The command script is **PYTHON**, and it implements the default lifecycle commands as Python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command, **DECOMMISSION**, is defined, which means there is also a **decommission** method in that Python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+## Using Stack Inheritance
+
+Stacks can _extend_ other Stacks in order to share command scripts and configurations. This reduces duplication of code across Stacks with the following:
+
+* define repositories for the child Stack
+* add new Services in the child Stack (not in the parent Stack)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, the **HDP 2.1 Stack _extends_ HDP 2.0.6 Stack** so only the changes applicable to **HDP 2.1 Stack** are present in that Stack definition. This extension is defined in the [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.1/metainfo.xml) for HDP 2.1 Stack:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.0.6</extends>
+</metainfo>
+```
+
+## Example: Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV", add it to an existing Stack definition. This service includes MASTER, SLAVE and CLIENT components.
+
+### Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+```
+3. Browse to the newly created `SAMPLESRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is **SAMPLESRV**, and it contains:
+
+   - one **MASTER** component **SAMPLESRV_MASTER**
+   - one **SLAVE** component **SAMPLESRV_SLAVE**
+   - one **CLIENT** component **SAMPLESRV_CLIENT**
+5. Next, let's create that command script. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts` that we designated in the service metainfo.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+```
+6. Browse to the scripts directory and create the `.py` command script files. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+  def install(self, env):
+    print('Install the Sample Srv Master')
+  def stop(self, env):
+    print('Stop the Sample Srv Master')
+  def start(self, env):
+    print('Start the Sample Srv Master')
+  def status(self, env):
+    print('Status of the Sample Srv Master')
+  def configure(self, env):
+    print('Configure the Sample Srv Master')
+
+if __name__ == "__main__":
+  Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+  def install(self, env):
+    print('Install the Sample Srv Slave')
+  def stop(self, env):
+    print('Stop the Sample Srv Slave')
+  def start(self, env):
+    print('Start the Sample Srv Slave')
+  def status(self, env):
+    print('Status of the Sample Srv Slave')
+  def configure(self, env):
+    print('Configure the Sample Srv Slave')
+
+if __name__ == "__main__":
+  Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+  def install(self, env):
+    print('Install the Sample Srv Client')
+  def configure(self, env):
+    print('Configure the Sample Srv Client')
+
+if __name__ == "__main__":
+  SampleClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+### Install the Service (via Ambari Web "Add Services")
+
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Sample Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Sample Service" and click Next.
+
+4. Assign the "Sample Srv Master" and click Next.
+
+5. Select the hosts to install the "Sample Srv Client" and click Next.
+
+6. Once complete, the "My Sample Service" will be available Service navigation area.
+
+7. If you want to add the "Sample Srv Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+## Example: Implementing a Custom Client-only Service
+
+In this example, we will create a custom service called "TESTSRV", add it to an existing Stack definition and use the Ambari APIs to install/configure the service. This service is a CLIENT so it has two commands: install and configure.
+
+### Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV` that will contain the service definition for **TESTSRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+```
+3. Browse to the newly created `TESTSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTSRV</name>
+ <displayName>New Test Service</displayName>
+ <comment>A New Test Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TEST_CLIENT</name>
+ <displayName>New Test Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, my service name is " **TESTSRV**", and it contains one component " **TEST_CLIENT**" that is of component category " **CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts` that we designated in the service metainfo.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+ def install(self, env):
+ print 'Install the client';
+ def configure(self, env):
+ print 'Configure the client';
+ def somethingcustom(self, env):
+ print 'Something custom';
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+### Adding Repository details in repoinfo.xml
+
+When adding a custom service, you may need to add additional repository details for the stack, especially if the service binaries are available through a separate repository. Additional repositories can be declared in the stack's `repoinfo.xml`. For example:
+
+```xml
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://cust.service.lab.com/Services/centos6/1.1/myservices</baseurl>
+ <repoid>CUSTOM-1.1</repoid>
+ <reponame>CUSTOM</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.1</baseurl>
+ <repoid>HDP-2.0.6</repoid>
+ <reponame>HDP</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.17/repos/centos6</baseurl>
+ <repoid>HDP-UTILS-1.1.0.17</repoid>
+ <reponame>HDP-UTILS</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+### Install the Service (via the Ambari REST API)
+
+1. Add the Service to the Cluster.
+
+
+```
+POST
+/api/v1/clusters/MyCluster/services
+
+{
+  "ServiceInfo": {
+    "service_name" : "TESTSRV"
+  }
+}
+```
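+
+For instance, the call above can be issued with curl. This sketch assumes the default admin:admin credentials and the same `<server>:<port>` placeholders used elsewhere in this document:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
+  -d '{"ServiceInfo": {"service_name": "TESTSRV"}}' \
+  'http://<server>:<port>/api/v1/clusters/MyCluster/services'
+```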
+2. Add the Components to the Service. In this case, add TEST_CLIENT to TESTSRV.
+
+```
+POST
+/api/v1/clusters/MyCluster/services/TESTSRV/components/TEST_CLIENT
+```
+3. Install the component on all target hosts. For example, to install on `c6402.ambari.apache.org` and `c6403.ambari.apache.org`, first create the host_component resource on the hosts using POST.
+
+```
+POST
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+POST
+/api/v1/clusters/MyCluster/hosts/c6403.ambari.apache.org/host_components/TEST_CLIENT
+```
+4. Now have Ambari install the components on all hosts. In this single command, you are instructing Ambari to install all components related to the service. This calls the `install()` method in the command script on each host.
+
+
+```
+PUT
+/api/v1/clusters/MyCluster/services/TESTSRV
+
+{
+ "RequestInfo": {
+ "context": "Install Test Srv Client"
+ },
+ "Body": {
+ "ServiceInfo": {
+ "state": "INSTALLED"
+ }
+ }
+}
+```
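+
+Because this installs asynchronously, Ambari responds with the href of a request resource. As a sketch (the request id 25 is illustrative, not fixed), progress can be polled like so:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X GET \
+  'http://<server>:<port>/api/v1/clusters/MyCluster/requests/25'
+```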
+5. Alternatively, instead of installing all components at the same time, you can explicitly install each host component. In this example, we will explicitly install the TEST_CLIENT on `c6402.ambari.apache.org`:
+
+```
+PUT
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+{
+ "RequestInfo": {
+ "context":"Install Test Srv Client"
+ },
+ "Body": {
+ "HostRoles": {
+ "state":"INSTALLED"
+ }
+ }
+}
+```
+6. Use the following to configure the client on the host. This will end up calling the `configure()` method in the command script.
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+ "RequestInfo" : {
+ "command" : "CONFIGURE",
+ "context" : "Config Test Srv Client"
+ },
+ "Requests/resource_filters": [{
+ "service_name" : "TESTSRV",
+ "component_name" : "TEST_CLIENT",
+ "hosts" : "c6403.ambari.apache.org"
+ }]
+}
+```
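+
+The SOMETHINGCUSTOM command declared in the metainfo.xml can be triggered through the same requests endpoint. The sketch below simply mirrors the CONFIGURE call above, swapping in the custom command name and an illustrative context string:
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+  "RequestInfo" : {
+    "command" : "SOMETHINGCUSTOM",
+    "context" : "Run Something Custom"
+  },
+  "Requests/resource_filters": [{
+    "service_name" : "TESTSRV",
+    "component_name" : "TEST_CLIENT",
+    "hosts" : "c6403.ambari.apache.org"
+  }]
+}
+```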
+7. If you want to see on which hosts the component is installed, use the following:
+
+```
+GET
+/api/v1/clusters/MyCluster/components/TEST_CLIENT
+```
+
+### Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Test Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Test Service" and click Next.
+
+4. Select the hosts to install the "New Test Client" and click Next.
+
+5. Once complete, the "My Test Service" will be available Service navigation area.
+
+6. If you want to add the "New Test Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+
+## Example: Implementing a Custom Client-only Service (with Configs)
+
+In this example, we will create a custom service called "TESTCONFIGSRV" and add it to an existing Stack definition. This service is a CLIENT so it has two commands: install and configure. And the service also includes a configuration type "test-config".
+
+### Create and Add the Service to the Stack
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV` that will contain the service definition for **TESTCONFIGSRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+```
+3. Browse to the newly created `TESTCONFIGSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTCONFIGSRV</name>
+ <displayName>New Test Config Service</displayName>
+ <comment>A New Test Config Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TESTCONFIG_CLIENT</name>
+ <displayName>New Test Config Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, my service name is " **TESTCONFIGSRV**", and it contains one component " **TESTCONFIG_CLIENT**" that is of component category " **CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts` that we designated in the service metainfo `<commandScript>` element.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+ def install(self, env):
+ print 'Install the config client';
+ def configure(self, env):
+ print 'Configure the config client';
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now let's define a config type for this service. Create a directory for the configuration dictionary file `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration`.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+```
+8. Browse to the configuration directory and create the `test-config.xml` file. For example:
+
+```xml
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<configuration>
+  <property>
+    <name>some.test.property</name>
+    <value>this.is.the.default.value</value>
+    <description>This is a cool description.</description>
+  </property>
+</configuration>
+```
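+
+Once the config type exists, the command script can read those values from the command JSON that Ambari sends to each agent. The following is a minimal sketch, assuming the `test-config` dictionary name above; `Script.get_config()` is the standard resource_management accessor:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+  def install(self, env):
+    self.configure(env)
+  def configure(self, env):
+    # Saved config-types appear under 'configurations', keyed by dictionaryName
+    config = Script.get_config()
+    value = config['configurations']['test-config']['some.test.property']
+    print 'some.test.property = %s' % value
+
+if __name__ == "__main__":
+  TestClient().execute()
+```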
+9. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
diff --git a/docs/ambari-design/stack-and-services/extensions.md b/docs/ambari-design/stack-and-services/extensions.md
new file mode 100644
index 0000000..b89bc96
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/extensions.md
@@ -0,0 +1,208 @@
+# Extensions
+
+## Background
+Added in Ambari 2.4.
+
+An Extension is a collection of one or more custom services which are packaged together. Much like stacks, each extension has a name which needs to be unique in the cluster. It also has a version directory to distinguish different releases of the extension. Much like stack versions, which go in `/var/lib/ambari-server/resources/stacks` with `<stack_name>/<stack_version>` sub-directories, extension versions go in `/var/lib/ambari-server/resources/extensions` with `<extension_name>/<extension_version>` sub-directories.
+
+An extension can be linked to supported stack versions. Once an extension version has been linked to the currently installed stack version, the custom services contained in the extension version may be added to the cluster in the same manner as if they were actually contained in the stack version.
+
+Third party developers can release Extensions which can be added to a cluster.
+
+## Structure
+
+The structure of an Extension definition is as follows:
+
+```
+|_ extensions
+ |_ <extension_name>
+ |_ <extension_version>
+ |_ metainfo.xml
+ |_ services
+ |_ <service_name>
+ |_ metainfo.xml
+ |_ metrics.json
+ |_ configuration
+ |_ {configuration files}
+ |_ package
+ |_ {files, scripts, templates}
+```
+
+An extension version is similar to a stack version but it only includes the metainfo.xml and the services directory. This means that the alerts, kerberos, metrics, role command order, widgets files are not supported and should be included at the service level. In addition, the repositories, hooks, configurations, and upgrades directories are not supported although upgrade support can be added at the service level.
+
+## Extension Inheritance
+
+Extension versions can extend other Extension versions in order to share command scripts and configurations. This reduces duplication of code across Extensions with the following:
+
+- add new Services in the child Extension version (not in the parent Extension version)
+- override command scripts of the parent Services
+- override configurations of the parent Services
+
+For example, **MyExtension 2.0** could extend **MyExtension 1.0** so only the changes applicable to the **MyExtension 2.0** extension are present in that Extension definition. This extension is defined in the metainfo.xml for **MyExtension 2.0**:
+
+```xml
+<metainfo>
+  <extends>1.0</extends>
+</metainfo>
+```
+
+## Supported Stack Versions
+
+**Each Extension Version must support one or more Stack Versions.** The Extension Version specifies the minimum Stack Version which it supports. This is included in the extension's metainfo.xml in the prerequisites section like so:
+
+```xml
+<metainfo>
+ <prerequisites>
+ <min-stack-versions>
+ <stack>
+ <name>HDP</name>
+ <version>2.4</version>
+ </stack>
+ <stack>
+ <name>OTHER</name>
+ <version>1.0</version>
+ </stack>
+ </min-stack-versions>
+ </prerequisites>
+</metainfo>
+```
+
+## Installing Extensions
+
+It is recommended to install extensions using [management packs](./management-packs.md). For more details see the [instructions on packaging custom services using extensions and management packs](custom-services.md).
+
+Once the extension version directory has been created under the resources/extensions directory with the required metainfo.xml file, you can restart ambari-server.
+
+## Extension REST APIs
+
+You can query for extensions by calling REST APIs.
+
+### Get all extensions
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions'
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/",
+ "items" : [
+ {
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+ "Extensions" : {
+ "extension_name" : "EXT"
+ }
+ }
+ ]
+}
+```
+
+### Get extension
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT'
+
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+ "Extensions" : {
+ "extension_name" : "EXT"
+ },
+ "versions" : [
+ {
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0",
+ "Versions" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0"
+ }
+ }
+ ]
+}
+```
+
+### Get extension version
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT/versions/1.0'
+
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0/",
+ "Versions" : {
+ "extension-errors" : [],
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "parent_extension_version" : null,
+ "valid" : true
+ }
+}
+```
+
+## Extension Links
+
+An Extension Link is a link between a stack version and an extension version. Once an extension version has been linked to the currently installed stack version, the custom services contained in the extension version may be added to the cluster in the same manner as if they were actually contained in the stack version.
+
+It is only possible to link an extension version to a stack version if the stack version is supported by the extension version. The stack name must be specified in the prerequisites section of the extension's metainfo.xml and the stack version must be greater than or equal to the minimum version number specified.
+
+## Extension Link REST APIs
+
+You can retrieve, create, update and delete extension links by calling REST APIs.
+
+### Create Extension Link
+
+The following curl command will link an extension EXT/1.0 to the stack HDP/2.4:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"ExtensionLink": {"stack_name": "HDP", "stack_version": "2.4", "extension_name": "EXT", "extension_version": "1.0"}}' http://<server>:<port>/api/v1/links/
+```
+
+### Get All Extension Links
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/links'
+
+{
+ "href" : "http://<server>:<port>/api/v1/links/",
+ "items" : [
+ {
+      "href" : "http://<server>:<port>/api/v1/links/1",
+ "ExtensionLink" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "link_id" : 1,
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+ }
+ ]
+}
+```
+
+### Get Extension Link
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/links/1'
+{
+ "href" : "http://<server>:<port>/api/v1/links/1",
+ "ExtensionLink" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "link_id" : 1,
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Delete Extension Link
+
+You must specify the ID of the Extension Link to be deleted.
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://<server>:<port>/api/v1/links/<link_id>
+```
+
+### Update All Extension Links
+
+This will reread the stacks, extensions and services in order to make sure the state of the stack is up to date in memory.
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT http://<server>:<port>/api/v1/links/
+```
\ No newline at end of file
diff --git a/docs/ambari-design/stack-and-services/faq.md b/docs/ambari-design/stack-and-services/faq.md
new file mode 100644
index 0000000..d19f838
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/faq.md
@@ -0,0 +1,29 @@
+# FAQ
+
+## **[STACK]/[SERVICE]/metainfo.xml**
+
+**If a component exists in the parent stack and is defined again in the child stack with just a few attributes, are these values just overrides of the parent's values, or is the whole component definition replaced? What about other elements in metainfo.xml -- which rules apply?**
+
+Ambari goes property by property and merges them from parent to child. So if, for example, you remove a category from the child, it will be inherited from the parent; that goes for pretty much all properties.
+
+So the question is how we tackle the existence of a property in both parent and child. Here, most decisions follow the same paradigm: take the child value instead of the parent's, and inherit every property of the parent that is not explicitly deleted from the child using a deletion marker. A few cases are handled specially:
+
+
+* For config-dependencies, we take an all-or-nothing approach: if the property exists in the child, use it and all of its children; otherwise take it from the parent.
+
+* The custom commands are merged based on names, such that the merged definition is a union of commands, with child commands of the same name overriding those from the parent.
+
+* Cardinality is overwritten by the child, or taken from the parent if the child has not provided one.
+
+You could look at this method for more details: `org.apache.ambari.server.api.util.StackExtensionHelper#mergeServices`
+
+For more information see the [Service Inheritance](./custom-services.md#service-inheritance) page.
+
+**If a component is missing in the new definition but is present in the parent, does it get inherited?**
+
+Generally yes.
+
+**Configuration dependencies for the service -- are they overwritten or merged?**
+
+Overwritten.
+
diff --git a/docs/ambari-design/stack-and-services/hooks.md b/docs/ambari-design/stack-and-services/hooks.md
new file mode 100644
index 0000000..aa83f2a
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/hooks.md
@@ -0,0 +1,48 @@
+# Hooks
+
+A stack can add hooks that run at the following points during Ambari actions:
+
+* before ANY
+* before and after INSTALL
+* before RESTART
+* before START
+
+As mentioned in [Stack Inheritance](./stack-inheritance.md), the hooks directory, if defined in the current stack version, replaces the one from the parent stack version. This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The hooks directory should have 5 sub-directories:
+
+* after-INSTALL
+* before-ANY
+* before-INSTALL
+* before-RESTART
+* before-START
+
+Each hook directory can have 3 sub-directories:
+
+* files
+* scripts
+* templates
+
+The scripts directory is the main directory and must be supplied. It must contain a hook.py file. It may contain other Python scripts which initialize variables (like a params.py file) or other utility files containing functions used in the hook.py file.
+
+The files directory can contain any non-Python files such as scripts, jar or properties files.
+
+The templates folder can contain any required j2 files that are used to set up properties files.
+
+The hook.py file should extend the Hook class defined in resource_management/libraries/script/hook.py. The naming convention is to name the hook class after the hook folder such as AfterInstallHook if the class is in the after-INSTALL folder. The hook.py file must define the hook(self, env) function. Here is an example hook:
+
+```python
+from resource_management.libraries.script.hook import Hook
+
+class AfterInstallHook(Hook):
+
+ def hook(self, env):
+ import params
+ env.set_params(params)
+ # Call any functions to set up the stack after install
+
+if __name__ == "__main__":
+ AfterInstallHook().execute()
+```
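+
+If the hook imports a params module as above, a minimal companion params.py could look like the sketch below. The java_home key is one of the standard entries in the command JSON, but treat the exact keys as assumptions to verify against your Ambari version:
+
+```python
+from resource_management.libraries.script.script import Script
+
+# Values are read from the command JSON the server sends to the agent
+config = Script.get_config()
+java_home = config['hostLevelParams']['java_home']
+```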
diff --git a/docs/ambari-design/stack-and-services/how-to-define-stacks-and-services.md b/docs/ambari-design/stack-and-services/how-to-define-stacks-and-services.md
new file mode 100644
index 0000000..cf07b79
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/how-to-define-stacks-and-services.md
@@ -0,0 +1,1216 @@
+# How-To Define Stacks and Services
+
+Services managed by Ambari are defined in its _stacks_ folder.
+
+To define your own services and stacks to be managed by Ambari, follow the steps below.
+
+There is also an example you can follow on how to [create your custom stack and service](./defining-a-custom-stack-and-services.md).
+
+A stack is a collection of services. Multiple versions of a stack can be defined, each with its own set of services. Stacks in Ambari are defined in the [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folder, which can be found at **/var/lib/ambari-server/resources/stacks** after install.
+
+Services managed by a stack can be defined either in the [ambari-server/src/main/resources/common-services](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services) or [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folder. After install, these folders can be found at _/var/lib/ambari-server/resources/common-services_ and _/var/lib/ambari-server/resources/stacks_ respectively.
+
+> **Question: When do I define service in _common-services_ vs. _stacks_ folders?**
+One would define services in the [common-services](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services) folder if there is a possibility of the service being used in multiple stacks. For example, almost all stacks need the HDFS service, so instead of redefining HDFS in each stack, the definition in common-services is referenced. Likewise, if a service is never going to be shared, it can be defined in the [stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folder. Basically, services defined in the stacks folder are used by containment, whereas the ones defined in common-services are used by reference.
+
+## Define Service
+
+Shown below is how to define a service in the _common-services_ folder. The same approach can be taken when defining services in the _stacks_ folder, which will be discussed in the _Define Stack_ section.
+
+
+
+Services **MUST** provide the main _metainfo.xml_ file which provides important metadata about the service.
+
+Apart from that, other files can be provided to give more information about the service. More details about these files are provided below.
+
+A service may also inherit from either a previous stack version or common services. For more information see the [Service Inheritance](./stack-inheritance.md) page.
+
+### _metainfo.xml_
+
+In the _metainfo.xml_ service descriptor, one can first define the service and its components.
+
+Complete reference can be found in the [Writing metainfo.xml](./writing-metainfo.md) page.
+
+A good reference implementation is the [HDFS metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L27).
+
+> **Question: Is it possible to define multiple services in the same metainfo.xml?**
+Yes. Though it is possible, it is discouraged to define multiple services in the same service folder.
+
+YARN and MapReduce2 are services that are defined together in the [YARN folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0). Its [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml) defines both services.
+
+#### Scripts
+
+With the components defined, we need to provide scripts which can handle the various stages of the service and component's lifecycle.
+
+The scripts necessary to manage service and components are specified in the _metainfo.xml_ ([HDFS](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L35))
+Each of these scripts should extend the [Script](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-common/src/main/python/resource_management/libraries/script/script.py) class which provides useful methods. Example: [NameNode script](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L68)
+
+
+
+These scripts should be provided in the _package_ folder.
+
+Folder | Purpose
+-------|--------
+**package/scripts** | Contains scripts invoked by Ambari. These scripts are loaded into the execution path with the correct environment.<br></br>Example: [HDFS](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts)
+**package/files** | Contains files used by above scripts. Generally these are other scripts (bash, python, etc.) invoked as a separate process.<br></br>Example: [checkWebUI.py](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/files/checkWebUI.py) is run in HDFS service-check to determine if Journal Nodes are available
+**package/templates** | Template files used by above scripts to generate files on managed hosts. These are generally configuration files required by the service to operate.<br></br>Example: [exclude_hosts_list.j2](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/templates/exclude_hosts_list.j2) which is used by scripts to generate _/etc/hadoop/conf/dfs.exclude_
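+
+Put together, a service's _package_ folder typically has the following layout (mirroring the table above):
+
+```
+package
+ |_ scripts
+ |_ files
+ |_ templates
+```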
+
+#### Python
+
+Ambari by default supports Python scripts for management of service and components.
+
+Component scripts should extend `resource_management.Script` class and provide methods required for that component's lifecycle.
+
+Taken from the page on [how to create custom stack](./defining-a-custom-stack-and-services.md), the following methods are needed for MASTER, SLAVE and CLIENT components to go through their lifecycle.
+
+```python
+import sys
+from resource_management import Script
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+```python
+import sys
+from resource_management import Script
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+```python
+import sys
+from resource_management import Script
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+
+Ambari provides helpful Python libraries below which are useful in writing service scripts. For complete reference on these libraries visit the [Ambari Python Libraries](https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Python+Libraries) page.
+
+* resource_management
+* ambari_commons
+* ambari_simplejson
+
+##### OS Variant Scripts
+
+If the service is supported on multiple OSes which requires separate scripts, the base _resource_management.Script_ class can be extended with different _@OsFamilyImpl()_ annotations.
+
+This allows for the separation of only OS specific methods of the component.
+
+Example: [NameNode default script](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L126), [NameNode Windows script](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L346).
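+
+Condensed, the pattern looks like the sketch below, assuming the `OsFamilyImpl`/`OSConst` helpers from ambari_commons that the stock HDFS scripts use; the component and method bodies are illustrative:
+
+```python
+from ambari_commons import OSConst
+from ambari_commons.os_family_impl import OsFamilyImpl
+from resource_management.libraries.script.script import Script
+
+class MyMaster(Script):
+  # OS-agnostic methods live on the base class
+  def install(self, env):
+    self.install_packages(env)
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class MyMasterDefault(MyMaster):
+  def start(self, env):
+    print 'Start on the default (Linux) OS families'
+
+@OsFamilyImpl(os_family=OSConst.WINSRV_FAMILY)
+class MyMasterWindows(MyMaster):
+  def start(self, env):
+    print 'Start on Windows'
+
+if __name__ == "__main__":
+  MyMaster().execute()
+```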
+
+> **Examples**
+NameNode [Start](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py#L93), [Stop](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py#L208).
+
+DataNode [Start and Stop](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py#L68).
+
+HDFS [configurations persistence](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py#L31)
+
+#### Custom Actions
+
+Sometimes services need to perform actions unique to that service which go beyond the default actions provided by Ambari (like _install_ , _start, stop, configure,_ etc.).
+
+Services can define such actions and expose them to the user in UI so that they can be easily invoked.
+
+As an example, we show the _Rebalance HDFS_ custom action implemented by HDFS.
+
+##### Stack Changes
+
+1. [Define custom command inside the _customCommands_ section](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L49) of the component in _metainfo.xml_.
+
+2. [Implement method with same name as custom command](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L273) in script referenced from _metainfo.xml_.
+
+ 1. If custom command does not have OS variants, it can be implemented in the same class that extends _resource_management.Script_
+ 2. If there are OS variants, different methods can be implemented in each class annotated by _@OsFamilyImpl(os_family=...)_. [Default rebalancehdfs](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L273),[Windows rebalancehdfs](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L354).
+
+This enables the backend to run the script on all managed hosts where the service is installed.
+
+##### UI Changes
+
+No UI changes are necessary to see the custom action on the host page.
+
+The action should show up in the host-component's list of actions. Any master-component actions will automatically show up on the service's action menu.
+
+When the action is clicked in UI, the POST call is made automatically to trigger the script defined above.
+
+> **Question: How do I provide my own label and icon for the custom action in UI?**
+In Ambari UI, add your component action to the _App.HostComponentActionMap_ object with custom icon and name. Ex: [REBALANCEHDFS](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-web/app/models/host_component.js#L351).
+
+### Configuration
+
+Configuration files for a service should be placed by default in the _[configuration](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration)_ folder.
+
+If a different named folder has to be used, the [`<configuration-dir>`](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml#L249) element can be used in _metainfo.xml_ to point to that folder.
+
+The important sections of the metainfo.xml with regards to configurations are:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <displayName>HDFS</displayName>
+ <comment>Apache Hadoop Distributed File System</comment>
+ <version>2.1.0.2.0</version>
+ <components>
+ ...
+ <component>
+ <name>HDFS_CLIENT</name>
+ ...
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>hdfs-site.xml</fileName>
+ <dictionaryName>hdfs-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>xml</type>
+ <fileName>core-site.xml</fileName>
+ <dictionaryName>core-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>log4j.properties</fileName>
+ <dictionaryName>hdfs-log4j,yarn-log4j</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>hadoop-env.sh</fileName>
+ <dictionaryName>hadoop-env</dictionaryName>
+ </configFile>
+ </configFiles>
+ ...
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ </configuration-dependencies>
+ </component>
+ ...
+ </components>
+
+ <configuration-dir>configuration</configuration-dir>
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ <config-type>hadoop-env</config-type>
+ <config-type>hadoop-policy</config-type>
+ <config-type>hdfs-log4j</config-type>
+ <config-type>ranger-hdfs-plugin-properties</config-type>
+ <config-type>ssl-client</config-type>
+ <config-type>ssl-server</config-type>
+ <config-type>ranger-hdfs-audit</config-type>
+ <config-type>ranger-hdfs-policymgr-ssl</config-type>
+ <config-type>ranger-hdfs-security</config-type>
+ <config-type>ams-ssl-client</config-type>
+ </configuration-dependencies>
+ </service>
+ </services>
+</metainfo>
+```
+
+* **config-type** - String representing a group of configurations. Example: _core-site, hdfs-site, yarn-site_, etc. When configurations are saved in Ambari, they are persisted within a version of config-type which is immutable. If you change and save HDFS core-site configs 4 times, you will have 4 versions of config-type core-site. Also, when a service's configs are saved, only the changed config-types are updated.
+
+* **configFiles** - lists the config-files handled by the enclosing component
+* **configFile** - represents one config-file of a certain type
+
+ - **type** - type of file based on which contents are generated differently
+
+    + **xml** - XML file generated in Hadoop friendly format. Ex: [hdfs-site.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml)
+    + **env** - Generally used for scripts where the content value is used as a template. The template has config-tags whose values are populated at runtime during file generation. Ex: [hadoop-env.sh](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml)
+    + **properties** - Generates property files where entries are in key=value format. Ex: [falcon-runtime.properties](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-runtime.properties.xml)
+ - **dictionaryName** - Name of the config-type as which key/values of this config file will be stored
+* **configuration-dependencies** - Lists the config-types on which this component or service depends on. One of the implications of this dependency is that whenever the config-type is updated, Ambari automatically marks the component or service as requiring restart. From the code section above, whenever _core-site_ is updated, both HDFS service as well as HDFS_CLIENT component will be marked as requiring restart.
+
+* **configuration-dir** - Directory where files listed in _configFiles_ will be. Optional. Default value is _configuration_.
+
+#### Adding new configs in a config-type
+
+There are a number of different parameters that can be specified to a config item when it is added to a config-type. These have been covered [here](https://cwiki.apache.org/confluence/display/AMBARI/Configuration+support+in+Ambari).
+
+#### UI - Categories
+
+Configurations defined above show up in the service's _Configs_ page.
+
+To customize categories and ordering of configurations in UI, the following files have to be updated.
+
+**Create Category** - Update the _ [ambari-web/app/models/stack_service.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/models/stack_service.js#L226)_ file to add your own service, along with your new categories.
+
+**Use Category** - To place configs inside a defined category, and specify an order in which configs are placed, add configs to [ambari-web/app/data/HDP2/site_properties.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2/site_properties.js) file. In this file one can specify the category to use, and the index where a config should be placed. The stack folders in [ambari-web/app/data](https://github.com/apache/ambari/tree/trunk/ambari-web/app/data) are hierarchical and inherit from previous versions. The mapping of configurations into sections is defined here. Example [Hive Categories](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2.2/hive_properties.js), [Tez Categories](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2.2/tez_properties.js).
+
+#### UI - Enhanced Configs
+
+The _Enhanced Configs_ feature makes it possible for service providers to customize their service's configs to a great extent and determine which configs are prominently shown to users without making any UI code changes. Customization includes providing a service friendly layout, better controls (sliders, combos, lists, toggles, spinners, etc.), better validation (minimum, maximum, enums), automatic unit conversion (MB, GB, seconds, milliseconds, etc.), configuration dependencies and improved dynamic recommendations of default values.
+
+A service provider can accomplish all of the above just by changing their service definition in the _stacks_ folder.
+
+Read more in the _[Enhanced Configs](https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Configs)_ page.
+
+### Alerts
+
+Each service is capable of defining which alerts Ambari should track by providing an [alerts.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/alerts.json) file.
+
+Read more about Ambari Alerts framework [in the Alerts wiki page](https://cwiki.apache.org/confluence/display/AMBARI/Alerts) and the alerts.json format in the [Alerts definition documentation](https://github.com/apache/ambari/blob/branch-2.1/ambari-server/docs/api/v1/alert-definitions.md).
+
+### Kerberos
+
+Ambari is capable of enabling and disabling Kerberos for a cluster. To inform Ambari of the identities and configurations to be used for the service and its components, each service can provide a _kerberos.json_ file.
+
+Read more about Kerberos support in the _ [Automated Kerberization](https://cwiki.apache.org/confluence/display/AMBARI/Automated+Kerberizaton)_ wiki page and the Kerberos descriptor in the [Kerberos Descriptor documentation](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/security/kerberos/kerberos_descriptor.md).
+
+### Metrics
+
+Ambari provides the [Ambari Metrics System ("AMS")](https://cwiki.apache.org/confluence/display/AMBARI/Metrics) service for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+Each service can define which metrics AMS should collect and provide by defining a [metrics.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metrics.json) file.
+
+You can read about the metrics.json file format in the [Stack Defined Metrics](https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics) page.
+
+### Quick Links
+
+A service can add a list of quick links to the Ambari web UI by adding metainfo to a text file following a predefined JSON format. Ambari server parses the quick link JSON file and provides its content to the UI, so that the Ambari web UI can calculate quick link URLs based on that information and populate the quick links drop-down list accordingly.
+
+Read more about quick links JSON file design in the [Quick Links](../quick-links.md) page.
+
+### Widgets
+
+Each service can define which widgets and heatmaps show up by default on the service summary page by defining a [widgets.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/widgets.json) file.
+
+You can read more about the widgets descriptor in the [Enhanced Service Dashboard](https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard) page.
+
+### Role Command Order
+
+From Ambari 2.2, each service can define its own role command order by including the [role_command_order.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json) file in its service folder. The service should only specify the relationship of its components to other components. In other words, if a service only includes COMP_X, it should only list dependencies related to COMP_X. If COMP_X must start after the NameNode has started, and the NameNode must wait for COMP_X to stop before stopping, the following would be included in the role command order:
+
+**Example service role_command_order.json**
+```json
+"COMP_X-START": ["NAMENODE-START"],
+ "NAMENODE-STOP": ["COMP_X-STOP"]
+```
+
+The entries in the service's role command order will be merged with the role command order defined in the stack. For example, since the stack already has a dependency for NAMENODE-STOP, in the example above COMP_X-STOP would be added to the rest of the NAMENODE-STOP dependencies and in addition the COMP_X-START dependency on NAMENODE-START would just be added as a new dependency.
+
+For more details on role command order, see the [Role Command Order](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-RoleCommandOrder) section below.
+
+### Service Advisor
+
+From Ambari 2.4, each service can choose to define its own service advisor rather than define the details of its configuration and layout in the stack advisor. This is particularly useful for custom services which are not defined in the stack. Ambari provides the _Service Advisor_ capability, where a service can write a Python script named _service_advisor.py_ in its service folder. This folder can be in the stack's services directory where the service is defined, or can be inherited from the service definition in common-services or elsewhere. Example: [common-services/HAWQ/2.0.0](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0).
+
+Unlike the stack-advisor scripts, the service-advisor scripts do not automatically extend the parent service's service-advisor scripts. The service-advisor script needs to explicitly extend its parent service's service-advisor script. The following code sample shows how you would refer to a parent's service_advisor.py. In this case it is extending the root service_advisor.py file in the resources/stacks directory.
+
+**Sample service-advisor.py file inheritance**
+```python
+import imp
+import os
+import traceback
+
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+STACKS_DIR = os.path.join(SCRIPT_DIR, '../../../stacks/')
+PARENT_FILE = os.path.join(STACKS_DIR, 'service_advisor.py')
+
+try:
+ with open(PARENT_FILE, 'rb') as fp:
+ service_advisor = imp.load_module('service_advisor', fp, PARENT_FILE, ('.py', 'rb', imp.PY_SOURCE))
+except Exception as e:
+ traceback.print_exc()
+ print "Failed to load parent"
+
+class HAWQ200ServiceAdvisor(service_advisor.ServiceAdvisor):
+```
+
+Like the stack advisors, service advisors provide information on 4 important aspects:
+
+1. Recommend layout of the service on cluster
+2. Recommend service configurations
+3. Validate layout of the service on cluster
+4. Validate service configurations
+
+By providing the service-advisor.py file, one can control dynamically each of the above for the service.
+
+The main interface for the service-advisor scripts contains documentation on how each of the above are called, and what data is provided.
+
+```python
+class ServiceAdvisor(DefaultStackAdvisor):
+  """
+  Abstract class implemented by all service advisors.
+  """
+
+  def colocateService(self, hostsComponentsMap, serviceComponents):
+    """
+    If any components of the service should be colocated with other services,
+    this is where you should set up that layout. Example:
+
+    # colocate HAWQSEGMENT with DATANODE, if no hosts have been allocated for HAWQSEGMENT
+    hawqSegment = [component for component in serviceComponents if component["StackServiceComponents"]["component_name"] == "HAWQSEGMENT"][0]
+    if not self.isComponentHostsPopulated(hawqSegment):
+      for hostName in hostsComponentsMap.keys():
+        hostComponents = hostsComponentsMap[hostName]
+        if {"name": "DATANODE"} in hostComponents and {"name": "HAWQSEGMENT"} not in hostComponents:
+          hostsComponentsMap[hostName].append( { "name": "HAWQSEGMENT" } )
+        if {"name": "DATANODE"} not in hostComponents and {"name": "HAWQSEGMENT"} in hostComponents:
+          hostComponents.remove({"name": "HAWQSEGMENT"})
+    """
+    pass
+
+  def getServiceConfigurationRecommendations(self, configurations, clusterSummary, services, hosts):
+    """
+    Any configuration recommendations for the service should be defined in this function.
+
+    This should be similar to any of the recommendXXXXConfigurations functions in the stack_advisor.py
+    such as recommendYARNConfigurations().
+    """
+    pass
+
+  def getServiceComponentLayoutValidations(self, services, hosts):
+    """
+    Returns an array of Validation objects about issues with the hostnames to which components are assigned.
+
+    This should detect validation issues which are different than those the stack_advisor.py detects.
+
+    The default validations are in stack_advisor.py getComponentLayoutValidations function.
+    """
+    return []
+
+  def getServiceConfigurationsValidationItems(self, configurations, recommendedDefaults, services, hosts):
+    """
+    Any configuration validations for the service should be defined in this function.
+
+    This should be similar to any of the validateXXXXConfigurations functions in the stack_advisor.py
+    such as validateHDFSConfigurations.
+    """
+    return []
+```
+
+#### **Examples**
+
+* [Service Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51)
+* [HAWQ 2.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0/service_advisor.py)
+* [PXF 3.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/PXF/3.0.0/service_advisor.py)
+
+### Service Upgrade
+
+From Ambari 2.4, each service can now define its upgrade within its service definition. This is particularly useful for custom services which no longer need to modify the stack's upgrade-packs in order to integrate themselves into the [cluster upgrade](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-StackUpgrades).
+
+
+Each service can define _upgrade-packs_, which are XML files describing the upgrade process of that particular service and how the upgrade pack relates to the overall stack upgrade-packs. These _upgrade-pack_ XML files are placed in the service's _upgrades/_ folder in separate sub-folders specific to the stack-version they are meant to extend. Some examples of this can be seen in the testing code.
+
+#### Examples
+
+* [Upgrades folder](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/)
+* [Upgrade-pack XML](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml)
+
+Each upgrade-pack that the service defines should match the file name of the service defined by a particular stack version. For example in the testing code, HDP 2.2.0 had an [upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.2.0/upgrades/upgrade_test_15388.xml) upgrade-pack. The HDFS service defined an extension to that upgrade pack [HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml). In this case the upgrade-pack was defined in the HDP/2.0.5 stack. The upgrade-pack is an extension to HDP/2.2.0 because it is defined in upgrade/HDP/2.2.0 directory. Finally the name of the service's extension to the upgrade-pack upgrade_test_15388.xml matches the name of the upgrade-pack in HDP/2.2.0/upgrades.
+
+The file format for the service is much the same as that of the stack. The target, target-stack and type attributes should all be the same as the stack's upgrade-pack. The service is able to add its own prerequisite checks.
+
+**General Attributes and Prerequisite Checks**
+```xml
+<upgrade xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <target>2.4.*</target>
+ <target-stack>HDP-2.4.0</target-stack>
+ <type>ROLLING</type>
+ <prerequisite-checks>
+ <check>org.apache.ambari.server.checks.FooCheck</check>
+ </prerequisite-checks>
+```
+
+The order section of the upgrade-pack consists of group elements just like the stack's upgrade-pack. The key difference is defining how these groups relate to groups in the stack's upgrade-pack or other service upgrade-packs. In the first example, we are referencing the PRE_CLUSTER group and adding a new execute-stage for the service FOO. The entry is added after the execute-stage for HDFS, based on the `<add-after-group-entry>` tag.
+
+```xml
+<order>
+ <group xsi:type="cluster" name="PRE_CLUSTER" title="Pre {{direction.text.proper}}">
+ <add-after-group-entry>HDFS</add-after-group-entry>
+ <execute-stage service="FOO" component="BAR" title="Backup FOO">
+ <task xsi:type="manual">
+ <message>Back FOO up.</message>
+ </task>
+ </execute-stage>
+ </group>
+
+```
+
+The same syntax can be used to order other sections like service check priorities and group services.
+
+```xml
+<group name="SERVICE_CHECK1" title="All Service Checks" xsi:type="service-check">
+ <add-after-group-entry>ZOOKEEPER</add-after-group-entry>
+ <priority>
+ <service>HBASE</service>
+ </priority>
+</group>
+
+<group name="CORE_MASTER" title="Core Masters">
+ <add-after-group-entry>YARN</add-after-group-entry>
+ <service name="HBASE">
+ <component>HBASE_MASTER</component>
+ </service>
+</group>
+```
+
+It is also possible to add new groups and order them after other groups in the stack's upgrade-packs. In the following example, we are adding the FOO group after the HIVE group using the add-after-group tag.
+
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO">
+ <component>BAR</component>
+ </service>
+</group>
+```
+
+You could also include both the add-after-group and the add-after-group-entry tags in the same group. This will create a new group if it doesn't already exist and will order it after the add-after-group's group name. The add-after-group-entry will determine the internal ordering of that group's services, priorities or execute stages.
+
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <add-after-group-entry>FOO</add-after-group-entry>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO2">
+ <component>BAR2</component>
+ </service>
+</group>
+```
+
+The processing section of the upgrade-pack remains the same as what it would be in the stack's upgrade-pack.
+
+```xml
+ <processing>
+ <service name="FOO">
+ <component name="BAR">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ <component name="BAR2">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ </service>
+ </processing>
+```
+
+## Define Stack
+
+A stack is a versioned collection of services. Each stack is a folder defined in the [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) source tree. Once installed, these stack definitions are available on the ambari-server machine at _/var/lib/ambari-server/resources/stacks_.
+
+Each stack folder contains one sub-folder per version of the stack. Some of these stack-versions are active while some are not. Each stack-version includes services which are either referenced from _common-services_, or defined inside the stack-version's _services_ folder.
+
+
+
+Example: [HDP stack](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP). [HDP-2.4 stack version](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.4).
+
+### Stack-Version Descriptor
+
+Each stack-version should provide a _metainfo.xml_ (Example: [HDP-2.3](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/metainfo.xml), [HDP-2.4](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.4/metainfo.xml)) descriptor file which describes the following about this stack-version:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.3</extends>
+ <minJdk>1.7</minJdk>
+ <maxJdk>1.8</maxJdk>
+</metainfo>
+```
+
+* **versions/active** - Whether this stack-version is still available for install. If not available, this version will not show up in UI during install.
+
+* **extends** - The stack-version in this stack that is being extended. Extended stack-versions inherit services along with almost all aspects of the parent stack-version.
+
+* **minJdk** - Minimum JDK with which this stack-version is supported. Users are warned during installer wizard if the JDK used by Ambari is lower than this version.
+
+* **maxJdk** - Maximum JDK with which this stack-version is supported. Users are warned during installer wizard if the JDK used by Ambari is greater than this version.
+
+### Stack Properties
+
+The stack must contain or inherit a properties directory which contains two files: [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) and [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json). This [directory](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) is new in Ambari 2.4.
+
+The stack_features.json contains a list of features that are included in Ambari and allows the stack to specify which versions of the stack include those features. The list of features are determined by the particular Ambari release. The reference list for a particular Ambari version should be found in the [HDP/2.0.6/properties/stack_features.json](https://github.com/apache/ambari/blob/branch-2.4/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) in the branch for that Ambari release. Each feature has a name and description and the stack can provide the minimum and maximum version where that feature is supported.
+
+```json
+{
+  "stack_features": [
+    {
+      "name": "snappy",
+      "description": "Snappy compressor/decompressor support",
+      "min_version": "2.0.0.0",
+      "max_version": "2.2.0.0"
+    },
+    ...
+  ]
+}
+```
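+
+As an illustration of how these bounds are consumed, here is a minimal, self-contained sketch of a version check against stack_features.json. This is not Ambari's actual implementation (Ambari exposes this through its resource_management library), and the exclusive treatment of max_version is an assumption of this sketch:
+
+```python
+import json
+
+def feature_supported(stack_features_path, feature_name, stack_version):
+    """Check whether a named feature applies to a stack version,
+    using the min/max bounds from stack_features.json."""
+    with open(stack_features_path) as f:
+        features = json.load(f)["stack_features"]
+
+    def as_tuple(version):
+        return tuple(int(part) for part in version.split("."))
+
+    for feature in features:
+        if feature["name"] != feature_name:
+            continue
+        version = as_tuple(stack_version)
+        if "min_version" in feature and version < as_tuple(feature["min_version"]):
+            return False
+        # max_version is treated as exclusive here (an assumption of this sketch)
+        if "max_version" in feature and version >= as_tuple(feature["max_version"]):
+            return False
+        return True
+    return False
+
+# With the snappy entry above: supported in 2.1.0.0, not in 2.2.0.0
+print(feature_supported("stack_features.json", "snappy", "2.1.0.0"))
+```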
+
+The stack_tools.json includes the name and location where the stack_selector and conf_selector tools are installed.
+
+```json
+{
+  "stack_selector": ["hdp-select", "/usr/bin/hdp-select", "hdp-select"],
+  "conf_selector": ["conf-select", "/usr/bin/conf-select", "conf-select"]
+}
+```
+
+
+Any custom stack must include these two JSON files. For further information see the [Stack Properties](./stack-properties.md) wiki page.
+
+### Services
+
+Each stack-version includes services which are either referenced from _common-services_, or defined inside the stack-version's _services_ folder.
+
+Services are defined in _common-services_ if they will be shared across multiple stacks. If they will never be shared, then they can be defined inside the stack-version.
+
+#### Reference _common-services_
+
+To reference a service from common-services, the service descriptor file should use the `<extends>` element. (Example: [HDFS in HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/metainfo.xml))
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <extends>common-services/HDFS/2.1.0.2.0</extends>
+ </service>
+ </services>
+</metainfo>
+```
+
+#### Define Service
+
+In exactly the same format as services defined in _common-services_, a new service can be defined inside the _services_ folder.
+
+Examples:
+
+* [HDFS in BIGTOP-0.8](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/BIGTOP/0.8/services/HDFS)
+* [GlusterFS in HDP-2.3.GlusterFS](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS)
+
+#### Extend Service
+
+When a stack-version extends another stack-version, it inherits all details of the parent service. It is also free to override and remove any portion of the inherited service definition.
+
+Examples:
+
+* HDP-2.3 / HDFS - [Adding NFS_GATEWAY component, updating service version and OS specific packages](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/HDFS/metainfo.xml)
+* HDP-2.2 / Storm - [Deleting STORM_REST_API component, updating service version and OS specific packages](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.2/services/STORM/metainfo.xml)
+* HDP-2.3 / YARN - [Deleting YARN node-label configuration from capacity-scheduler.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/YARN/configuration/capacity-scheduler.xml)
+* HDP-2.3 / Kafka - [Add Kafka Broker Process alert](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/KAFKA/alerts.json)
+
+### Role Command Order
+
+_**Role**_ is another name for **Component** (Ex: NAMENODE, DATANODE, RESOURCEMANAGER, HBASE_MASTER, etc.)
+
+As the name implies, it is possible to tell Ambari about the order in which commands should be run for the components defined in your stack.
+
+For example: "_ZooKeeper Server_ should be started before starting _NameNode_". Or "_HBase Master_ should be started only after _NameNode_ and _DataNodes_ are started".
+
+This can be specified by including the [role_command_order.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json) file in the stack-version folder.
+
+#### Format
+
+Specified in JSON format, the file contains a JSON object with top-level keys that are either section names or comments. (Ex: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json))
+
+Inside each section object, the key describes the dependent component-action, and the value lists the component-actions which should be done before it.
+
+```json
+{
+ "_comment": "Section 1 comment",
+ "section_name_1": {
+ "_comment": "Section containing role command orders",
+ "-": ["-", "-"],
+ "-": ["-"],
+ ...
+
+ },
+ "_comment": "Next section comment",
+ ...
+
+}
+```
+
+#### Sections
+
+Ambari uses only the following sections:
+
+Section Name | When Used
+-------------|---------------
+general_deps |Command orders are applied in all situations
+optional_glusterfs | Command orders are applied when cluster has instance of GLUSTERFS service
+optional_no_glusterfs | Command orders are applied when cluster does not have instance of GLUSTERFS service
+namenode_optional_ha | Command orders are applied when HDFS service is installed and JOURNALNODE component exists (HDFS HA is enabled)
+resourcemanager_optional_ha | Command orders are applied when YARN service is installed and multiple RESOURCEMANAGER host-components exist (YARN HA is enabled)
+
+#### Commands
+
+The commands currently supported by Ambari are:
+
+* INSTALL
+* UNINSTALL
+* START
+* RESTART
+* STOP
+* EXECUTE
+* ABORT
+* UPGRADE
+* SERVICE_CHECK
+* CUSTOM_COMMAND
+* ACTIONEXECUTE
+
+#### Examples
+
+Role Command Order | Explanation
+-------------------|---------------------
+"HIVE_METASTORE-START": ["MYSQL_SERVER-START", "NAMENODE-START"] | Start MySQL and NameNode components before starting Hive Metastore
+"MAPREDUCE_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START"], | MapReduce service check needs ResourceManager and NodeManagers started
+"ZOOKEEPER_SERVER-STOP" : ["HBASE_MASTER-STOP", "HBASE_REGIONSERVER-STOP", "METRICS_COLLECTOR-STOP"], | Before stopping ZooKeeper servers, make sure HBase Masters, HBase RegionServers and AMS Metrics Collector are stopped.
+
+### Repositories
+
+Each stack-version can provide the location of package repositories to use, by providing a _repos/repoinfo.xml_ (Ex: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/repos/repoinfo.xml))
+The _repoinfo.xml_ file contains repositories grouped by operating systems. Each OS specifies a list of repositories that are shown to the user when the stack-version is selected for install.
+
+These repositories are used in conjunction with the [_packages_ defined in a service's metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L161) to install appropriate bits on the system.
+
+```xml
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.1</baseurl>
+ <repoid>HDP-2.0.6</repoid>
+ <reponame>HDP</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.17/repos/centos6</baseurl>
+ <repoid>HDP-UTILS-1.1.0.17</repoid>
+ <reponame>HDP-UTILS</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+**baseurl** - URL of the RPM repository where the provided _repoid_ can be found
+**repoid** - ID of the repo hosted at _baseurl_
+**reponame** - Display name for the repo being used
+
+#### Latest Builds
+
+Though the repository base URL can serve updates to a particular repo, it has to be defined at build time. This becomes an issue when the repository later changes location, or when update builds are hosted at a different site.
+
+For such scenarios, a stack-version can provide the location of a JSON file which can provide details of other repo URLs to use.
+
+Example: [HDP-2.3's repoinfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/repos/repoinfo.xml) references such a JSON file, which then points to alternate repository URLs where the latest builds can be found:
+
+```json
+{
+ ...
+
+ "HDP-2.3":{
+ "latest":{
+ "centos6":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos6/2.x/BUILDS/2.3.6.0-3586/",
+ "centos7":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/2.x/BUILDS/2.3.6.0-3586/",
+ "debian6":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/debian6/2.x/BUILDS/2.3.6.0-3586/",
+ "debian7":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/debian7/2.x/BUILDS/2.3.6.0-3586/",
+ "suse11":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/suse11sp3/2.x/BUILDS/2.3.6.0-3586/",
+ "ubuntu12":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu12/2.x/BUILDS/2.3.6.0-3586/",
+ "ubuntu14":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu14/2.x/BUILDS/2.3.6.0-3586/"
+ }
+ },
+ ...
+
+}
+```
+
+### Hooks
+
+A stack-version could have very basic and common instructions that need to be run before or after certain Ambari commands, across all services.
+
+Instead of duplicating this code across all service scripts and requiring users to maintain it, Ambari provides the _Hooks_ ability, where common before and after code can be pulled into the _hooks_ folder. (Ex: [HDP-2.0.6](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks))
+
+
+
+#### Command Sub-Folders
+
+The general naming pattern for hooks sub-folders is `"<before|after>-<ANY|<CommandName>>"`.
+What this means is that the scripts/hook.py file under the sub-folder is run either before or after the command.
+
+**Examples:**
+
+Sub-Folder | Purpose | Example
+-----------|---------|------------
+before-START | Hook script called before START command is run on any component of the stack-version. | [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py#L30)<br></br>sets up Hadoop log and pid directories<br></br>creates Java Home symlink<br></br>Creates /etc/hadoop/conf/topology_script.py<br></br>etc.
+before-INSTALL | Hook script called before installing of any component of the stack-version | [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py#L33)<br></br>Creates repo files in /etc/yum.repos.d<br></br>Installs basic packages like curl, unzip, etc.
+
+Based on the commands currently supported by Ambari, the following sub-folders can be created as needed:
+
+Token | Values
+------|---------------------
+Prefix | before, after
+Command | ANY, INSTALL, UNINSTALL, START, RESTART, STOP, EXECUTE, ABORT, UPGRADE, SERVICE_CHECK, `<custom_command>` - custom commands specified by the user, like the DECOMMISSION or REBALANCEHDFS commands specified by HDFS
+
+The _scripts/hook.py_ script should import the [resource_management.libraries.script.hook](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/script/hook.py) module and extend the Hook class:
+
+```python
+from resource_management.libraries.script.hook import Hook
+
+class CustomHook(Hook):
+  def hook(self, env):
+    # Do custom work
+    pass
+
+if __name__ == "__main__":
+  CustomHook().execute()
+```
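+
+As a slightly fuller sketch, a hook typically uses resources from the resource_management library to prepare host-level state. The directory path below is illustrative only, and the `create_parents` argument name has varied across Ambari releases (older releases used `recursive`):
+
+```python
+from resource_management.core.resources.system import Directory
+from resource_management.libraries.script.hook import Hook
+
+class CustomHook(Hook):
+  def hook(self, env):
+    # Hooks commonly prepare host-level state that every component
+    # relies on, such as log/pid directories or OS users.
+    Directory('/var/log/sample',      # illustrative path only
+              create_parents=True)    # 'recursive' in older Ambari releases
+
+if __name__ == "__main__":
+  CustomHook().execute()
+```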
+
+### Configurations
+
+Though most configurations are set at the service level, there can be configurations which apply across all services to indicate the state of the cluster installed with this stack.
+
+For example, things like ["is security enabled?"](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml#L25), ["what user runs smoke tests?"](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml#L46) etc.
+
+Such configurations can be defined in the [configuration folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration) of the stack. They are available for access just like the service-level configs.
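+
+For instance, a service's params.py script typically reads these cluster-level values with the same pattern used for its own config types; a hedged sketch of that common pattern (the property names come from the cluster-env.xml linked above):
+
+```python
+from resource_management.libraries.script.script import Script
+
+# The command JSON sent to the agent carries all config types,
+# including the stack-level cluster-env.
+config = Script.get_config()
+
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+smokeuser = config['configurations']['cluster-env']['smokeuser']
+```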
+
+#### Stack Advisor
+
+With each stack containing multiple complex services, it becomes necessary to dynamically determine how the services are laid out on the cluster, and for determining values of certain configurations.
+
+Ambari provides the _Stack Advisor_ capability where stacks can write a Python script named _stack-advisor.py_ in the _services/_ folder. Example: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py).
+
+Stack-advisor scripts automatically extend the parent stack-version's stack-advisor scripts. This allows newer stack-versions to change behavior without affecting earlier behavior.
+
+Stack advisors provide information on 4 important aspects:
+
+1. Recommend layout of services on cluster
+2. Recommend service configurations
+3. Validate layout of services on cluster
+4. Validate service configurations
+
+By providing the stack-advisor.py file, one can dynamically control each of the above.
+
+The main interface for the stack-advisor scripts contains documentation on how each of the above is called, and what data is provided:
+
+```python
+class StackAdvisor(object):
+  """
+ Abstract class implemented by all stack advisors. Stack advisors advise on stack specific questions.
+
+ Currently stack advisors provide following abilities:
+ - Recommend where services should be installed in cluster
+ - Recommend configurations based on host hardware
+ - Validate user selection of where services are installed on cluster
+ - Validate user configuration values
+
+ Each of the above methods is passed in parameters about services and hosts involved as described below.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services selected by the user.
+
+ Example: {
+ "services": [
+ {
+ "StackServices": {
+ "service_name" : "HDFS",
+ "service_version" : "2.6.0.2.2",
+ },
+ "components" : [
+ {
+ "StackServiceComponents" : {
+ "cardinality" : "1+",
+ "component_category" : "SLAVE",
+ "component_name" : "DATANODE",
+ "display_name" : "DataNode",
+ "service_name" : "HDFS",
+ "hostnames" : []
+ },
+ "dependencies" : []
+ }, {
+ "StackServiceComponents" : {
+ "cardinality" : "1-2",
+ "component_category" : "MASTER",
+ "component_name" : "NAMENODE",
+ "display_name" : "NameNode",
+ "service_name" : "HDFS",
+ "hostnames" : []
+ },
+ "dependencies" : []
+ },
+ ...
+
+ ]
+ },
+ ...
+
+ ]
+ }
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ Example: {
+ "items": [
+ {
+ "Hosts": {
+ "host_name": "c6401.ambari.apache.org",
+ "public_host_name" : "c6401.ambari.apache.org",
+ "ip": "192.168.1.101",
+ "cpu_count" : 1,
+ "disk_info" : [
+ {
+ "available" : "4564632",
+ "used" : "5230344",
+ "percent" : "54%",
+ "size" : "10319160",
+ "type" : "ext4",
+ "mountpoint" : "/"
+ },
+ {
+ "available" : "1832436",
+ "used" : "0",
+ "percent" : "0%",
+ "size" : "1832436",
+ "type" : "tmpfs",
+ "mountpoint" : "/dev/shm"
+ }
+ ],
+ "host_state" : "HEALTHY",
+ "os_arch" : "x86_64",
+ "os_type" : "centos6",
+ "total_mem" : 3664872
+ }
+ },
+ ...
+
+ ]
+ }
+
+ Each of the methods can either return recommendations or validations.
+
+ Recommendations are made in a Ambari Blueprints friendly format.
+
+ Validations are an array of validation objects.
+
+"""
+
+ def recommendComponentLayout(self, services, hosts):
+"""
+ Returns recommendation of which hosts various service components should be installed on.
+
+ This function takes as input all details about services being installed, and hosts
+ they are being installed into, to generate hostname assignments to various components
+ of each service.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Layout recommendation of service components on cluster hosts in Ambari Blueprints friendly format.
+
+ Example: {
+ "resources" : [
+ {
+ "hosts" : [
+ "c6402.ambari.apache.org",
+ "c6401.ambari.apache.org"
+ ],
+ "services" : [
+ "HDFS"
+ ],
+ "recommendations" : {
+ "blueprint" : {
+ "host_groups" : [
+ {
+ "name" : "host-group-2",
+ "components" : [
+ { "name" : "JOURNALNODE" },
+ { "name" : "ZKFC" },
+ { "name" : "DATANODE" },
+ { "name" : "SECONDARY_NAMENODE" }
+ ]
+ },
+ {
+ "name" : "host-group-1",
+ "components" :
+ { "name" : "HDFS_CLIENT" },
+ { "name" : "NAMENODE" },
+ { "name" : "JOURNALNODE" },
+ { "name" : "ZKFC" },
+ { "name" : "DATANODE" }
+ ]
+ }
+ ]
+ },
+ "blueprint_cluster_binding" : {
+ "host_groups" : [
+ {
+ "name" : "host-group-1",
+ "hosts" : [ { "fqdn" : "c6401.ambari.apache.org" } ]
+ },
+ {
+ "name" : "host-group-2",
+ "hosts" : [ { "fqdn" : "c6402.ambari.apache.org" } ]
+ }
+ ]
+ }
+ }
+ }
+ ]
+ }
+"""
+ pass
+
+ def validateComponentLayout(self, services, hosts):
+"""
+ Returns array of Validation issues with service component layout on hosts
+
+ This function takes as input all details about services being installed along with
+ hosts the components are being installed on (hostnames property is populated for
+ each component).
+
+ @type services: dictionary
+ @param services: Dictionary containing information about services and host layout selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Dictionary containing array of validation items
+ Example: {
+ "items": [
+ {
+ "type" : "host-group",
+ "level" : "ERROR",
+ "message" : "NameNode and Secondary NameNode should not be hosted on the same machine",
+ "component-name" : "NAMENODE",
+ "host" : "c6401.ambari.apache.org"
+ },
+ ...
+
+ ]
+ }
+"""
+ pass
+
+ def recommendConfigurations(self, services, hosts):
+"""
+ Returns recommendation of service configurations based on host-specific layout of components.
+
+ This function takes as input all details about services being installed, and hosts
+ they are being installed into, to recommend host-specific configurations.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services and component layout selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Layout recommendation of service components on cluster hosts in Ambari Blueprints friendly format.
+
+ Example: {
+ "services": [
+ "HIVE",
+ "TEZ",
+ "YARN"
+ ],
+ "recommendations": {
+ "blueprint": {
+ "host_groups": [],
+ "configurations": {
+ "yarn-site": {
+ "properties": {
+ "yarn.scheduler.minimum-allocation-mb": "682",
+ "yarn.scheduler.maximum-allocation-mb": "2048",
+ "yarn.nodemanager.resource.memory-mb": "2048"
+ }
+ },
+ "tez-site": {
+ "properties": {
+ "tez.am.java.opts": "-server -Xmx546m -Djava.net.preferIPv4Stack=true -XX:+UseNUMA -XX:+UseParallelGC",
+ "tez.am.resource.memory.mb": "682"
+ }
+ },
+ "hive-site": {
+ "properties": {
+ "hive.tez.container.size": "682",
+ "hive.tez.java.opts": "-server -Xmx546m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC",
+ "hive.auto.convert.join.noconditionaltask.size": "238026752"
+ }
+ }
+ }
+ },
+ "blueprint_cluster_binding": {
+ "host_groups": []
+ }
+ },
+ "hosts": [
+ "c6401.ambari.apache.org",
+ "c6402.ambari.apache.org",
+ "c6403.ambari.apache.org"
+ ]
+ }
+"""
+ pass
+
+ def validateConfigurations(self, services, hosts):
+""""
+ Returns array of Validation issues with configurations provided by user
+ This function takes as input all details about services being installed along with
+ configuration values entered by the user. These configurations can be validated against
+ service requirements, or host hardware to generate validation issues.
+
+ @type services: dictionary
+ @param services: Dictionary containing information about services and user configurations.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Dictionary containing array of validation items
+ Example: {
+ "items": [
+ {
+ "config-type": "yarn-site",
+ "message": "Value is less than the recommended default of 682",
+ "type": "configuration",
+ "config-name": "yarn.scheduler.minimum-allocation-mb",
+ "level": "WARN"
+ }
+ ]
+ }
+"""
+ pass
+```
+
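+To make the contract concrete, below is a minimal hedged sketch of an implementation against the interface above. The validation rule itself is invented for illustration, and the sketch assumes user-entered configs ride along in services["configurations"]; real stack advisors are considerably richer (see the examples that follow):
+
+```python
+class SampleStackAdvisor(StackAdvisor):
+
+  def validateConfigurations(self, services, hosts):
+    """Warn when yarn.nodemanager.resource.memory-mb exceeds a host's RAM."""
+    items = []
+    # total_mem in the hosts dictionary is in kilobytes (see the example above)
+    min_host_mem_mb = min(
+        host["Hosts"]["total_mem"] for host in hosts["items"]) / 1024
+    # user-entered configs arrive alongside the service list (assumed here)
+    yarn_site = services.get("configurations", {}).get("yarn-site", {})
+    nm_mem_mb = int(yarn_site.get("properties", {})
+                             .get("yarn.nodemanager.resource.memory-mb", 0))
+    if nm_mem_mb > min_host_mem_mb:
+      items.append({
+        "type": "configuration",
+        "config-type": "yarn-site",
+        "config-name": "yarn.nodemanager.resource.memory-mb",
+        "level": "WARN",
+        "message": "Value exceeds the physical memory of at least one host"
+      })
+    return {"items": items}
+```
+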
+#### Examples
+
+* [Stack Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py#L23)
+* [Default Stack Advisor implementation - for all stacks](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py#L303)
+* [HDP (2.0.6) Default Stack Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L28)
+* [YARN container size calculated](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L807)
+* Recommended configurations - [HDFS](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L222), [YARN](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L133), [MapReduce2](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L148), [HBase](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L245) (HDP-2.0.6), [HBase](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L148) (HDP-2.3)
+* [Delete HBase Bucket Cache configs on smaller machines](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py#L272)
+* [Specify maximum value for Tez config](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py#L184)
+
+### Properties
+
+As with stack configurations, most properties are defined at the service level; however, global properties can also be defined at the stack-version level, affecting all services.
+
+Some examples are: [stack-selector and conf-selector](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json#L2) specific names, or which [stack versions support certain stack features](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json#L5). Most of these properties were introduced in Ambari 2.4 as part of an effort to parameterize stack information and facilitate the reuse of common-services code by other distributions.
+
+Such properties can be defined in .json format in the [properties folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) of the stack.
+
+More details about stack properties can be found in the [Stack Properties section](https://cwiki.apache.org/confluence/x/pgPiAw).
+
+### Widgets
+
+At the stack-version level one can contribute heatmap entries to the main dashboard of the cluster.
+
+Generally these are heatmaps that apply across all services, such as host-level heatmaps.
+
+Example: [HDP-2.0.6 contributes host level heatmaps](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/widgets.json)
+
+### Kerberos
+
+We have previously seen the Kerberos descriptor at the service level.
+
+One can also be defined at the stack-version level to describe identities across all services.
+
+Read more about Kerberos support and the Kerberos Descriptor on the [Automated Kerberization](https://cwiki.apache.org/confluence/display/AMBARI/Automated+Kerberizaton) page.
+
+Example: [Smoke tests user and SPNEGO user defined in HDP-2.0.6](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/kerberos.json)
+
+### Stack Upgrades
+
+Ambari provides the ability to upgrade your cluster from a lower stack-version to a higher stack-version.
+
+Each stack-version can define _upgrade-packs_, which are XML files describing the upgrade process. These _upgrade-pack_ XML files are placed in the stack-version's _upgrades/_ folder.
+
+Example: [HDP-2.3](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades), [HDP-2.4](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.4/upgrades)
+
+Each stack-version should have an upgrade-pack for the next stack-version a cluster can **upgrade to**.
+
+Ex: [Upgrade-pack from HDP-2.3 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.4.xml)
+
+There are two types of upgrades:
+
+Upgrade Type | Pros | Cons
+-------------|------|----------
+Express Upgrade (EU) | Much faster - clusters can be upgraded in a couple of hours | Cluster unavailable - services are stopped during upgrade process
+Rolling Upgrade (RU) | Minimal cluster downtime - services available throughout upgrade process | Takes time (sometimes days depending on cluster size) due to incremental upgrade approach
+
+Each component which has to be upgraded by Ambari should specify the **versionAdvertised** flag in the metainfo.xml.
+
+This tells Ambari to use the component's advertised version and perform the upgrade. Not specifying this flag will result in Ambari not upgrading the component.
+
+Example: [HDFS NameNode](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L33) (versionAdvertised=true), [AMS Metrics Collector](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml#L33) (versionAdvertised=false).
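+
+For reference, the flag appears as a child element of the component in metainfo.xml; a trimmed sketch (structure as in the linked HDFS example, with other elements elided):
+
+```xml
+<component>
+  <name>NAMENODE</name>
+  <category>MASTER</category>
+  <versionAdvertised>true</versionAdvertised>
+  ...
+</component>
+```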
+
+#### Rolling Upgrades
+
+In Rolling Upgrade each service is upgraded with minimal downtime in mind. The general approach is to quickly upgrade the master components, followed by upgrading of workers in batches.
+
+The service will not be available while masters are restarting. However, when master components are configured for High Availability (HA), the service remains available as each master is restarted.
+
+You can read more about the Rolling Upgrade process via this [blog post](http://hortonworks.com/blog/introducing-automated-rolling-upgrades-with-apache-ambari-2-0/) and [documentation](https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_upgrading_Ambari/content/_upgrading_HDP_perform_rolling_upgrade.html).
+
+Examples:
+
+* [HDP-2.2.x to HDP-2.2.y](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.2.xml)
+* [HDP-2.2 to HDP-2.3](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.3.xml)
+* [HDP-2.2 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.4.xml)
+* [HDP-2.3 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.4.xml)
+
+#### Express Upgrades
+
+In Express Upgrade the goal is to upgrade the entire cluster as fast as possible - even if it means cluster downtime. It is generally much faster than Rolling Upgrade.
+
+For each service the components are first stopped, upgraded and then started.
+
+You can read about Express Upgrade steps in this [documentation](https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_upgrading_Ambari/content/_upgrading_HDP_perform_express_upgrade.html).
+
+Examples:
+
+* [HDP-2.1 to HDP-2.3](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.1/upgrades/nonrolling-upgrade-2.3.xml)
+* [HDP-2.2 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/nonrolling-upgrade-2.4.xml)
+
+## Configuration support in Ambari
+[Configuration support in Ambari](https://cwiki.apache.org/confluence/display/AMBARI/Configuration+support+in+Ambari)
\ No newline at end of file
diff --git a/docs/ambari-design/stack-and-services/imgs/define-service.png b/docs/ambari-design/stack-and-services/imgs/define-service.png
new file mode 100644
index 0000000..40868dd
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/imgs/define-service.png
Binary files differ
diff --git a/docs/ambari-design/stack-and-services/imgs/define-stack.png b/docs/ambari-design/stack-and-services/imgs/define-stack.png
new file mode 100644
index 0000000..7ebfdb9
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/imgs/define-stack.png
Binary files differ
diff --git a/docs/ambari-design/stack-and-services/imgs/hooks.png b/docs/ambari-design/stack-and-services/imgs/hooks.png
new file mode 100644
index 0000000..7ebfdb9
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/imgs/hooks.png
Binary files differ
diff --git a/docs/ambari-design/stack-and-services/imgs/scripts-folder.png b/docs/ambari-design/stack-and-services/imgs/scripts-folder.png
new file mode 100644
index 0000000..f072a4c
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/imgs/scripts-folder.png
Binary files differ
diff --git a/docs/ambari-design/stack-and-services/imgs/scripts.png b/docs/ambari-design/stack-and-services/imgs/scripts.png
new file mode 100644
index 0000000..30d1e3a
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/imgs/scripts.png
Binary files differ
diff --git a/docs/ambari-design/stack-and-services/imgs/stacks-properties.png b/docs/ambari-design/stack-and-services/imgs/stacks-properties.png
new file mode 100644
index 0000000..d526a8c
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/imgs/stacks-properties.png
Binary files differ
diff --git a/docs/ambari-design/stack-and-services/index.md b/docs/ambari-design/stack-and-services/index.md
new file mode 100644
index 0000000..6ed5dfd
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/index.md
@@ -0,0 +1,16 @@
+# Stacks and Services
+
+**Introduction**
+
+Ambari supports the concept of Stacks and associated Services in a Stack Definition. By leveraging the Stack Definition, Ambari has a consistent and defined interface to install, manage and monitor a set of Services, and it provides an extensibility model for new Stacks and Services to be introduced.
+
+Starting with Ambari 2.4, there is also support for the concept of Extensions and their associated custom Services in an Extension Definition.
+
+**Terminology**
+
+Term | Description
+-----|------------
+Stack | Defines a set of Services and where to obtain the software packages for those Services. A Stack can have one or more versions, and each version can be active/inactive. For example, Stack = "HDP-1.3.3".
+Extension | Defines a set of custom Services which can be added to a stack version. An Extension can have one or more versions.
+Service | Defines the Components (MASTER, SLAVE, CLIENT) that make up the Service. For example, Service = "HDFS"
+Component | The individual Components that adhere to a certain defined lifecycle (start, stop, install, etc). For example, Service = "HDFS" has Components = "NameNode (MASTER)", "Secondary NameNode (MASTER)", "DataNode (SLAVE)" and "HDFS Client (CLIENT)"
\ No newline at end of file
diff --git a/docs/ambari-design/stack-and-services/management-packs.md b/docs/ambari-design/stack-and-services/management-packs.md
new file mode 100644
index 0000000..aaf9118
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/management-packs.md
@@ -0,0 +1,327 @@
+# Management Packs
+
+## **Background**
+
+At present, stack definitions are bundled with Ambari core and are part of Apache Ambari releases. This forces an Ambari release whenever a stack's definitions are updated for a new stack version. Also, to add an "add-on" service (custom service) to a stack definition, one has to manually add it to the stack definition on the Ambari Server. There is no release vehicle that can be used to ship add-on services.
+
+Apache Ambari Management Packs address this issue by decoupling Ambari's core functionality (cluster management and monitoring) from stack management and definition. An Apache Ambari Management Pack (Mpack) can bundle multiple service definitions, stack definitions, stack add-on service definitions and view definitions, so that releasing these artifacts doesn't require an Apache Ambari release. Apache Ambari Management Packs will be released as separate release artifacts and will follow their own release cadence instead of being tightly coupled with Apache Ambari releases.
+
+Management packs are released as tarballs; each contains a metadata file (mpack.json) that specifies the contents of the management pack and the actions to perform when installing it.
+
+## **Apache JIRA**
+
+[AMBARI-14854](https://issues.apache.org/jira/browse/AMBARI-14854)
+
+## **Release Timelines**
+
+* Short Term Goals (Apache Ambari 2.4.0.0 release)
+ 1. Provide a release vehicle for stack definitions (example: HDP management pack, IOP management pack).
+
+ 2. Provide a release vehicle for add-on/custom services (example: Microsoft-R management pack)
+ 3. Retrofit into the existing stack processing infrastructure
+ 4. Command-line support for updating stack definitions and service definitions
+
+* Long Term Goals (Ambari 2.4+)
+ 1. Release HDP stacks as mpacks
+ 2. Build management pack processing infrastructure that will replace the stack processing infrastructure.
+
+ 3. Dynamic creation of stack definitions by processing management packs
+ 4. Provide a REST API for adding/removing/upgrading management packs.
+
+## **Management Pack Metadata (Mpack.json)**
+
+A management pack should contain the following metadata in mpack.json:
+
+* **Name**: Unique management pack name
+* **Version**: Management pack version
+* **Description**: Friendly description of the management pack
+* **Prerequisites**:
+ - Minimum Ambari Version on which the management pack is installable.
+
+ + **Example**: To install the stackXYZ-ambari-mpack-1.0.0.0 management pack, Ambari should be at least version 2.4.0.0.
+
+ - Minimum management pack version that should be installed before upgrading to this management pack.
+
+ + **Example**: To upgrade to stackXYZ-ambari-mpack-2.0.0.0 management pack, stackXYZ-ambari-mpack-1.8.0.0 management pack or higher should be installed.
+
+ - Minimum stack version that should already be present in the stack definitions for this management pack to be applicable.
+
+ + **Example**: To install the add-on service management pack myservice-ambari-mpack-1.0.0.0, the stackXYZ-2.1 stack definition should be present.
+
+* **Artifacts**:
+ - List of release artifacts (service definitions, stack definitions, stack-addon-service-definitions, view-definitions) bundled in the management pack.
+
+ - Metadata for each artifact, such as its source directory and any additional applicability for that artifact.
+
+ - Supported Artifact Types
+ + **service-definitions**: Contains service definitions similar to common-services/serviceA/1.0.0
+ + **stack-definitions**: Contains stack definitions similar to stacks/stackXYZ/1.0
+ + **extension-definitions**: Contains dynamic stack extensions (Refer:[Extensions](./extensions.md))
+ + **stack-addon-service-definitions**: Defines add-on service applicability for stacks and how to merge the add-on service into the stack definition.
+
+ + **view-definitions** (Not supported in Apache Ambari 2.4.0.0)
+ - A management pack can have more than one release artifacts.
+
+ + **Example**: It should be possible to create a management pack that bundles together
+ * **stack-definitions**: stackXYZ-1.0, stackXYZ-1.1, stackXYZ-2.0
+ * **service-definitions**: HAWQ, HDFS, ZOOKEEPER
+ * **stack-addon-service-definitions**: HAWQ/2.0.0 is applicable to stackXYZ-2.0, stackABC-1.0
+ * **view-definitions**: Hive, Jobs, Slider (not supported in Apache Ambari 2.4.0.0)
+
+## **Management Pack Structure**
+
+### StackXYZ Management Pack Structure
+
+_stackXYZ-ambari-mpack-1.0.0.0_
+
+```
+├── mpack.json
+├── common-services
+│   └── HDFS
+│       └── 2.1.0.2.0
+│           └── configuration
+└── stacks
+    └── stackXYZ
+        └── 1.0
+            ├── metainfo.xml
+            ├── repos
+            │   └── repoinfo.xml
+            ├── role_command_order.json
+            └── services
+                ├── HDFS
+                │   ├── configuration
+                │   │   └── hdfs-site.xml
+                │   └── metainfo.xml
+                ├── stack_advisor.py
+                └── ZOOKEEPER
+                    └── metainfo.xml
+```
+
+### StackXYZ Management Pack Mpack.json
+
+_stackXYZ-ambari-mpack-1.0.0.0/mpack.json_
+
+```json
+{
+  "type": "full-release",
+  "name": "stackXYZ-ambari-mpack",
+  "version": "1.0.0.0",
+  "description": "StackXYZ Management Pack",
+  "prerequisites": {
+    "min_ambari_version": "2.4.0.0"
+  },
+  "artifacts": [
+    {
+      "name": "stackXYZ-service-definitions",
+      "type": "service-definitions",
+      "source_dir": "common-services"
+    },
+    {
+      "name": "stackXYZ-stack-definitions",
+      "type": "stack-definitions",
+      "source_dir": "stacks"
+    }
+  ]
+}
+```
+
+### Add-On Service Management Pack Structure
+
+_myservice-ambari-mpack-1.0.0.0_
+
+```
+├── common-services
+│   └── MYSERVICE
+│       └── 1.0.0
+│           ├── configuration
+│           │   └── myserviceconfig.xml
+│           ├── metainfo.xml
+│           ├── package
+│           │   └── scripts
+│           │       ├── client.py
+│           │       ├── master.py
+│           │       └── slave.py
+│           └── role_command_order.json
+├── custom-services
+│   └── MYSERVICE
+│       ├── 1.0.0
+│       │   └── metainfo.xml
+│       └── 2.0.0
+│           └── metainfo.xml
+└── mpack.json
+```
+
+### Add-On Service Management Pack Mpack.json
+
+_myservice-ambari-mpack-1.0.0.0/mpack.json_
+
+```json
+{
+  "type": "full-release",
+  "name": "myservice-ambari-mpack",
+  "version": "1.0.0.0",
+  "description": "MyService Management Pack",
+  "prerequisites": {
+    "min-ambari-version": "2.4.0.0",
+    "min-stack-versions": [
+      {
+        "stack_name": "stackXYZ",
+        "stack_version": "2.2"
+      }
+    ]
+  },
+  "artifacts": [
+    {
+      "name": "MYSERVICE-service-definition",
+      "type": "service-definition",
+      "source_dir": "common-services/MYSERVICE/1.0.0",
+      "service_name": "MYSERVICE",
+      "service_version": "1.0.0"
+    },
+    {
+      "name": "MYSERVICE-1.0.0",
+      "type": "stack-addon-service-definition",
+      "source_dir": "addon-services/MYSERVICE/1.0.0",
+      "service_name": "MYSERVICE",
+      "service_version": "1.0.0",
+      "applicable_stacks": [
+        {
+          "stack_name": "stackXYZ",
+          "stack_version": "2.2"
+        }
+      ]
+    },
+    {
+      "name": "MYSERVICE-2.0.0",
+      "type": "stack-addon-service-definition",
+      "source_dir": "custom-services/MYSERVICE/2.0.0",
+      "service_name": "MYSERVICE",
+      "service_version": "2.0.0",
+      "applicable_stacks": [
+        {
+          "stack_name": "stackXYZ",
+          "stack_version": "2.4"
+        }
+      ]
+    }
+  ]
+}
+```
+
+## **Installing Management Pack**
+
+```bash
+ambari-server install-mpack --mpack=/path/to/mpack.tar.gz --purge --verbose
+```
+
+**Note**: Do not pass the "--purge" command line parameter when installing an add-on service management pack. The "--purge" flag purges any existing stack definitions (the HDP stack definition is included in the Ambari release) and should be included only when installing a Stack Management Pack.
+
+## **Upgrading Management Pack**
+
+```bash
+ambari-server upgrade-mpack --mpack=/path/to/mpack.tar.gz --verbose
+```
diff --git a/docs/ambari-design/stack-and-services/overview.mdx b/docs/ambari-design/stack-and-services/overview.mdx
new file mode 100644
index 0000000..b47e75e
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/overview.mdx
@@ -0,0 +1,554 @@
+# Overview
+
+## Background
+
+The Stack definitions can be found in the source tree at [/ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks). After you install the Ambari Server, the Stack definitions can be found at `/var/lib/ambari-server/resources/stacks`.
+
+## Structure
+
+The structure of a Stack definition is as follows:
+
+```
+|_ stacks
+   |_ <stack_name>
+      |_ <stack_version>
+         metainfo.xml
+         |_ hooks
+         |_ repos
+            repoinfo.xml
+         |_ services
+            |_ <service_name>
+               metainfo.xml
+               metrics.json
+               |_ configuration
+                  {configuration files}
+               |_ package
+                  {files, scripts, templates}
+```
+
+## Defining a Service and Components
+
+The `metainfo.xml` file in a Service describes the service, the components of the service and the management scripts to use for executing commands. A component of a service must be either of the **MASTER**, **SLAVE** or **CLIENT** category.
+
+For each Component you specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**, and the script type is used to know how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands that your component needs to support.
+
+For example, the YARN Service describes the ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and the command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. That command script is written in **PYTHON**, and it implements the default lifecycle commands as python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command **DECOMMISSION** is defined, which means there is also a **decommission** method in that python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+#### Using Stack Inheritance
+
+Stacks can _extend_ other Stacks in order to share command scripts and configurations. This reduces duplication of code across Stacks by allowing you to do the following:
+
+* define repositories for the child Stack
+* add new Services in the child Stack (not in the parent Stack)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, the **HDP 2.1 Stack _extends_ HDP 2.0.6 Stack** so only the changes applicable to **HDP 2.1 Stack** are present in that Stack definition. This extension is defined in the [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.1/metainfo.xml) for HDP 2.1 Stack:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.0.6</extends>
+</metainfo>
+```
+
+## Example: Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV", add it to an existing Stack definition. This service includes MASTER, SLAVE and CLIENT components.
+
+## Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+```
+3. Browse to the newly created `SAMPLESRV` directory, create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**SAMPLESRV**", and it contains:
+
+ - one **MASTER** component "**SAMPLESRV_MASTER**"
+ - one **SLAVE** component "**SAMPLESRV_SLAVE**"
+ - one **CLIENT** component "**SAMPLESRV_CLIENT**"
+5. Next, let's create that command script. Create a directory for the command script, `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts`, that we designated in the service metainfo.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+```
+6. Browse to the scripts directory and create the `.py` command script files. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+## Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Sample Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Sample Service" and click Next.
+
+4. Assign the "Sample Srv Master" and click Next.
+
+5. Select the hosts to install the "Sample Srv Client" and click Next.
+
+6. Once complete, the "My Sample Service" will be available Service navigation area.
+
+7. If you want to add the "Sample Srv Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+#### Example: Implementing a Custom Client-only Service
+
+In this example, we will create a custom service called "TESTSRV", add it to an existing Stack definition and use the Ambari APIs to install/configure the service. This service is a CLIENT so it has two commands: install and configure.
+
+## Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV` that will contain the service definition for **TESTSRV**.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+```
+3. Browse to the newly created `TESTSRV` directory, create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTSRV</name>
+ <displayName>New Test Service</displayName>
+ <comment>A New Test Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TEST_CLIENT</name>
+ <displayName>New Test Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTSRV**", and it contains one component "**TEST_CLIENT**" that is of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script, `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts`, that we designated in the service metainfo.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+ def install(self, env):
+ print 'Install the client';
+ def configure(self, env):
+ print 'Configure the client';
+ def somethingcustom(self, env):
+ print 'Something custom';
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+## Install the Service (via the Ambari REST API)
+
+1. Add the Service to the Cluster.
+
+
+```
+POST
+/api/v1/clusters/MyCluster/services
+
+{
+"ServiceInfo": {
+ "service_name":"TESTSRV"
+ }
+}
+```
+2. Add the Components to the Service. In this case, add TEST_CLIENT to TESTSRV.
+
+```
+POST
+/api/v1/clusters/MyCluster/services/TESTSRV/components/TEST_CLIENT
+```
+3. Install the component on all target hosts. For example, to install on `c6402.ambari.apache.org` and `c6403.ambari.apache.org`, first create the host_component resource on the hosts using POST.
+
+```
+POST
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+POST
+/api/v1/clusters/MyCluster/hosts/c6403.ambari.apache.org/host_components/TEST_CLIENT
+```
+4. Now have Ambari install the components on all hosts. In this single command, you are instructing Ambari to install all components related to the service. This calls the `install()` method in the command script on each host.
+
+
+```
+PUT
+/api/v1/clusters/MyCluster/services/TESTSRV
+
+{
+ "RequestInfo": {
+ "context": "Install Test Srv Client"
+ },
+ "Body": {
+ "ServiceInfo": {
+ "state": "INSTALLED"
+ }
+ }
+}
+```
+5. Alternatively, instead of installing all components at the same time, you can explicitly install each host component. In this example, we will explicitly install the TEST_CLIENT on `c6402.ambari.apache.org`:
+
+```
+PUT
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+{
+ "RequestInfo": {
+ "context":"Install Test Srv Client"
+ },
+ "Body": {
+ "HostRoles": {
+ "state":"INSTALLED"
+ }
+ }
+}
+```
+6. Use the following to configure the client on the host. This will end up calling the `configure()` method in the command script.
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+ "RequestInfo" : {
+ "command" : "CONFIGURE",
+ "context" : "Config Test Srv Client"
+ },
+ "Requests/resource_filters": [{
+ "service_name" : "TESTSRV",
+ "component_name" : "TEST_CLIENT",
+ "hosts" : "c6403.ambari.apache.org"
+ }]
+}
+```
+7. If you want to see on which hosts the component is installed, use the following:
+
+```
+GET
+/api/v1/clusters/MyCluster/components/TEST_CLIENT
+```
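+
+All of the calls above can be scripted with any HTTP client. Below is a minimal hedged sketch using Python's third-party `requests` library, assuming the default `admin` credentials and a server on port 8080; note that Ambari requires the `X-Requested-By` header on modifying requests:
+
+```python
+import requests
+
+BASE = "http://ambari.server:8080/api/v1/clusters/MyCluster"  # hostname is an assumption
+AUTH = ("admin", "admin")                                     # assumed default credentials
+HEADERS = {"X-Requested-By": "ambari"}                        # required for POST/PUT/DELETE
+
+# Step 1: add the service to the cluster
+requests.post(BASE + "/services",
+              json={"ServiceInfo": {"service_name": "TESTSRV"}},
+              auth=AUTH, headers=HEADERS)
+
+# Step 2: add the component to the service
+requests.post(BASE + "/services/TESTSRV/components/TEST_CLIENT",
+              auth=AUTH, headers=HEADERS)
+
+# Step 4: move the whole service to the INSTALLED state
+requests.put(BASE + "/services/TESTSRV",
+             json={"RequestInfo": {"context": "Install Test Srv Client"},
+                   "Body": {"ServiceInfo": {"state": "INSTALLED"}}},
+             auth=AUTH, headers=HEADERS)
+```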
+
+## Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Test Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Test Service" and click Next.
+
+4. Select the hosts to install the "New Test Client" and click Next.
+
+5. Once complete, the "My Test Service" will be available Service navigation area.
+
+6. If you want to add the "New Test Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+
+#### Example: Implementing a Custom Client-only Service (with Configs)
+
+In this example, we will create a custom service called "TESTCONFIGSRV" and add it to an existing Stack definition. This service is a CLIENT so it has two commands: install and configure. And the service also includes a configuration type "test-config".
+
+## Create and Add the Service to the Stack
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV` that will contain the service definition for TESTCONFIGSRV.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+```
+3. Browse to the newly created `TESTCONFIGSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+    <schemaVersion>2.0</schemaVersion>
+    <services>
+        <service>
+            <name>TESTCONFIGSRV</name>
+            <displayName>New Test Config Service</displayName>
+            <comment>A New Test Config Service</comment>
+            <version>0.1.0</version>
+            <components>
+                <component>
+                    <name>TESTCONFIG_CLIENT</name>
+                    <displayName>New Test Config Client</displayName>
+                    <category>CLIENT</category>
+                    <cardinality>1+</cardinality>
+                    <commandScript>
+                        <script>scripts/test_client.py</script>
+                        <scriptType>PYTHON</scriptType>
+                        <timeout>600</timeout>
+                    </commandScript>
+                </component>
+            </components>
+            <osSpecifics>
+                <osSpecific>
+                    <osFamily>any</osFamily>
+                </osSpecific>
+            </osSpecifics>
+        </service>
+    </services>
+</metainfo>
+```
+4. In the above, my service name is "**TESTCONFIGSRV**", and it contains one component "**TESTCONFIG_CLIENT**" that is of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts` that we designated in the service metainfo `<commandScript>`.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+    def install(self, env):
+        print('Install the config client')
+
+    def configure(self, env):
+        print('Configure the config client')
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now let's define a config type for this service. Create a directory for the configuration dictionary file `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/servi` `ces/` `TESTCONFIGSRV` `/` **`configuration`**.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+```
+8. Browse to the configuration directory and create the `test-config.xml` file. For example:
+
+```xml
+<?xml version="1.0"?>
+<configuration>
+    <property>
+        <name>some.test.property</name>
+        <value>this.is.the.default.value</value>
+        <description>This is a kool description.</description>
+    </property>
+</configuration>
+```
+9. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
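+
+After the restart, the new config type is delivered to command scripts as part of the command context. As a rough sketch (assuming the standard `configurations` layout that Ambari sends to agents), the client script could read the new property like this:
+
+```python
+from resource_management import *
+
+class TestClient(Script):
+    def configure(self, env):
+        # All config types, including the new test-config, arrive in the
+        # command context available through Script.get_config().
+        config = Script.get_config()
+        value = config['configurations']['test-config']['some.test.property']
+        print('some.test.property = ' + value)
+
+if __name__ == "__main__":
+    TestClient().execute()
+```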
diff --git a/docs/ambari-design/stack-and-services/stack-inheritance.md b/docs/ambari-design/stack-and-services/stack-inheritance.md
new file mode 100644
index 0000000..8d5184d
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/stack-inheritance.md
@@ -0,0 +1,68 @@
+
+# Stack Inheritance
+
+Each stack version must provide a metainfo.xml descriptor file which can declare whether the stack inherits from another stack version:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.3</extends>
+ ...
+</metainfo>
+```
+
+When a stack inherits from another stack version, its defining files and directories are inherited according to a number of different patterns.
+
+The following files should not be redefined at the child stack version level:
+
+* properties/stack_features.json
+* properties/stack_tools.json
+
+Note: These files should only exist at the base stack level.
+
+The following files, if defined in the current stack version, replace the definitions from the parent stack version:
+
+* kerberos.json
+* widgets.json
+
+The following files, if defined in the current stack version, are merged with the parent stack version:
+
+* configuration/cluster-env.xml
+
+* role_command_order.json
+
+Note: All the services' role command orders will be merged with the stack's role command order to provide a master list.
+
+All attributes of the current stack version's metainfo.xml will replace those defined in the parent stack version.
+
+The following directories, if defined in the current stack version, replace those from the parent stack version:
+
+* hooks
+
+This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The following directories are not inherited:
+
+* repos
+* upgrades
+
+The repos/repoinfo.xml file should be defined in every stack version. The upgrades directory and its corresponding XML files should be defined in all stack versions that support upgrade.
+
+## Services Folder
+
+The services folder is a special case. There are two inheritance mechanisms at work here. First, the stack_advisor.py will automatically import the parent stack version's stack_advisor.py script, but the rest of the inheritance behavior is up to the script's author. There are several examples of [stack_advisor.py](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py) files in the Ambari server source.
+
+```python
+class HDP23StackAdvisor(HDP22StackAdvisor):
+    def __init__(self):
+        super(HDP23StackAdvisor, self).__init__()
+        Logger.initialize_logger()
+
+    def getComponentLayoutValidations(self, services, hosts):
+        parentItems = super(HDP23StackAdvisor, self).getComponentLayoutValidations(services, hosts)
+        ...
+```
+
+Services defined within the services folder follow the rules for [service inheritance](./custom-services.md#service-inheritance). By default, if a service does not declare an explicit inheritance (via the **extends** tag), the service will inherit from the service defined at the parent stack version.
diff --git a/docs/ambari-design/stack-and-services/stack-properties.md b/docs/ambari-design/stack-and-services/stack-properties.md
new file mode 100644
index 0000000..af33f0c
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/stack-properties.md
@@ -0,0 +1,119 @@
+# Stack Properties
+
+Similar to stack configurations, most properties are defined at the service level; however, there are global properties that can be defined at the stack-version level and affect all services.
+
+Some examples are the [stack-selector and conf-selector](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json#L2) specific names, or which [stack versions support certain stack features](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json#L5). Most of these properties were introduced in Ambari 2.4 in an effort to parameterize stack information and facilitate the reuse of common-services code by other distributions.
+
+
+Such properties can be defined in .json format in the [properties folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) of the stack.
+
+
+
+# Stack Features
+
+Stacks can support different features depending on their version, for example: upgrade support, NFS support, or support for specific new components (such as Ranger and Phoenix).
+
+
+Stack featurization was added as part of the HDP stack configurations on [HDP/2.0.6/configuration/cluster-env.xml](http://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml), introducing a new stack_features property whose value is processed in the stack engine from an external property file.
+
+```xml
+<!-- Define stack_features property in the base stack. DO NOT override this property for each stack version -->
+<property>
+ <name>stack_features</name>
+ <value/>
+ <description>List of features supported by the stack</description>
+ <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+ <value-attributes>
+ <property-file-name>stack_features.json</property-file-name>
+ <property-file-type>json</property-file-type>
+ <read-only>true</read-only>
+ <overridable>false</overridable>
+ <visible>false</visible>
+ </value-attributes>
+ <on-ambari-upgrade add="true"/>
+</property>
+```
+
+Stack Features properties are defined in the [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) file under /HDP/2.0.6/properties. Support for these features is now available to service-level code, which can use it to change certain service behaviors or configurations. This is an example of features described in the stack_features.json file:
+
+
+```json
+"stack_features": [
+ {
+ "name": "snappy",
+ "description": "Snappy compressor/decompressor support",
+ "min_version": "2.0.0.0",
+ "max_version": "2.2.0.0"
+ },
+ {
+ "name": "lzo",
+ "description": "LZO libraries support",
+ "min_version": "2.2.1.0"
+ },
+ {
+ "name": "express_upgrade",
+ "description": "Express upgrade support",
+ "min_version": "2.1.0.0"
+ },
+ {
+ "name": "rolling_upgrade",
+ "description": "Rolling upgrade support",
+ "min_version": "2.2.0.0"
+ }
+ ]
+}
+```
+
+where min_version/max_version are optional constraints.
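+
+Conceptually, a feature check is a simple version-range test. The following is a simplified sketch of that logic, not the library code (the real implementation lives in resource_management/libraries/functions/stack_features.py); treating max_version as an exclusive bound is an assumption here:
+
+```python
+def _v(version):
+    """Parse a dotted version string into a comparable tuple of ints."""
+    return tuple(int(part) for part in version.split("."))
+
+def supports(feature_name, version, stack_features):
+    # Walk the feature entries and apply the optional min/max constraints.
+    for feature in stack_features:
+        if feature["name"] == feature_name:
+            if "min_version" in feature and _v(version) < _v(feature["min_version"]):
+                return False
+            # max_version treated as exclusive (an assumption)
+            if "max_version" in feature and _v(version) >= _v(feature["max_version"]):
+                return False
+            return True
+    return False
+
+# e.g. supports("snappy", "2.1.0.0", features) -> True
+#      supports("snappy", "2.2.0.0", features) -> False
+```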
+
+Feature constants, matching the feature names, such as ROLLING_UPGRADE = "rolling_upgrade", have been added to a new StackFeature class in [resource_management/libraries/functions/constants.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/constants.py#L38):
+
+
+```python
+class StackFeature:
+    """
+    Stack Feature supported
+    """
+    SNAPPY = "snappy"
+    LZO = "lzo"
+    EXPRESS_UPGRADE = "express_upgrade"
+    ROLLING_UPGRADE = "rolling_upgrade"
+```
+
+Additionally, corresponding helper functions have been introduced in [resource_management/libraries/functions/stack_features.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/stack_features.py) to parse the .json file content; they are called from service code to check whether the stack supports a specific feature.
+
+This is an example where the new stack featurization design is used in service code:
+
+```python
+if params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version):
+ conf_select.select(params.stack_name, "hive", params.version)
+ stack_select.select("hive-server2", params.version)
+```
+
+# Stack Tools
+
+
+Similar to stack features, the stack-selector and conf-selector tools are now stack-driven instead of hardcoded as hdp-select and conf-select. They are defined in the [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json) file under /HDP/2.0.6/properties and declared as part of the HDP stack configurations as a new property in [/HDP/2.0.6/configuration/cluster-env.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml):
+
+
+```xml
+<!-- Define stack_tools property in the base stack. DO NOT override this property for each stack version -->
+<property>
+ <name>stack_tools</name>
+ <value/>
+ <description>Stack specific tools</description>
+ <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+ <value-attributes>
+ <property-file-name>stack_tools.json</property-file-name>
+ <property-file-type>json</property-file-type>
+ <read-only>true</read-only>
+ <overridable>false</overridable>
+ <visible>false</visible>
+ </value-attributes>
+ <on-ambari-upgrade add="true"/>
+</property>
+```
+
+Corresponding helper functions have been added in [resource_management/libraries/functions/stack_tools.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py). These helper functions are used to remove hardcoded values from the resource_management library.
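+
+As a rough illustration of what those helpers abstract away, service code could reach the raw property from the command context like this; the `stack_selector` key name and the list layout are assumptions based on the HDP defaults:
+
+```python
+import json
+from resource_management.libraries.script.script import Script
+
+config = Script.get_config()
+# cluster-env carries the stack_tools JSON as a string value
+tools = json.loads(config['configurations']['cluster-env']['stack_tools'])
+# e.g. "stack_selector": ["hdp-select", "/usr/bin/hdp-select", "hdp-select"]
+stack_selector = tools.get('stack_selector')
+```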
diff --git a/docs/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md b/docs/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md
new file mode 100644
index 0000000..95023bf
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md
@@ -0,0 +1,67 @@
+# Version functions: conf-select and stack-select
+
+Especially during upgrade, it is important to be able to set the current stack and configuration versions. For non-custom services, this is implemented by the conf_select and stack_select functions, which can be imported into a service's scripts with the following imports:
+
+```py
+from resource_management.libraries.functions import conf_select
+
+from resource_management.libraries.functions import stack_select
+```
+
+Typically the select functions, which are used to set the stack and configuration versions, are called in the pre_upgrade_restart function during a rolling upgrade:
+
+```py
+ def pre_upgrade_restart(self, env, upgrade_type=None):
+ import params
+ env.set_params(params)
+
+ # this function should not execute if the version can't be determined or
+ # the stack does not support rolling upgrade
+ if not (params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version)):
+ return
+
+ Logger.info("Executing <My Service> Stack Upgrade pre-restart")
+ conf_select.select(params.stack_name, "<my_service>", params.version)
+ stack_select.select("<my_service>", params.version)
+```
+
+The select functions will set up symlinks for the current stack or configuration version. For the stack, this will set up the links from the stack root current directory to the particular stack version. For example:
+
+```
+/usr/hdp/current/hadoop-client -> /usr/hdp/2.5.0.0/hadoop
+```
+
+For the configuration version, this will set up the links for all the configuration directories, as follows:
+
+```
+/etc/hadoop/conf -> /usr/hdp/current/hadoop-client/conf
+
+/usr/hdp/current/hadoop-client/conf -> /etc/hadoop/2.5.0.0/0
+```
+
+The stack_select and conf_select functions can also be used to return the hadoop directories:
+
+```py
+hadoop_prefix = stack_select.get_hadoop_dir("home")
+
+hadoop_bin_dir = stack_select.get_hadoop_dir("bin")
+
+hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+```
+
+The conf_select API is as follows:
+
+```py
+def select(stack_name, package, version, try_create=True, ignore_errors=False)
+
+def get_hadoop_conf_dir(force_latest_on_upgrade=False)
+```
+
+The stack_select API is as follows:
+```py
+def select(component, version)
+
+def get_hadoop_dir(target, force_latest_on_upgrade=False)
+```
+
+Unfortunately, these functions are not available to custom services for setting up their configuration or stack versions. A custom service could implement its own functions to set up the proper links.
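+
+As a minimal sketch of that idea (the function name and paths here are illustrative, not an Ambari API for this purpose), a custom service's command script could create the equivalent symlinks itself with the resource_management `Link` resource:
+
+```python
+from resource_management.core.resources.system import Link
+
+def link_current_version(stack_root, version, component):
+    # /usr/hdp/current/<component> -> /usr/hdp/<version>/<component>
+    Link("{0}/current/{1}".format(stack_root, component),
+         to="{0}/{1}/{2}".format(stack_root, version, component))
+
+# e.g. called inside install() or pre_upgrade_restart(), where a
+# resource_management Environment is active:
+# link_current_version("/usr/hdp", "2.5.0.0", "myservice")
+```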
diff --git a/docs/ambari-design/stack-and-services/writing-metainfo.md b/docs/ambari-design/stack-and-services/writing-metainfo.md
new file mode 100644
index 0000000..2645fe9
--- /dev/null
+++ b/docs/ambari-design/stack-and-services/writing-metainfo.md
@@ -0,0 +1,247 @@
+# Writing metainfo.xml
+
+metainfo.xml is a declarative definition of an Ambari managed service describing its content. It is the most critical file for any service definition. This section describes various key sub-sections within a metainfo.xml file.
+
+_Non-mandatory fields are described in italics._
+
+The top level fields to describe a service are as follows:
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+name | the name of the service. A name has to be unique among all the services that are included in the stack definition containing the service. | HDFS
+displayName | the display name of the service | HDFS
+version | the version of the service. name and version together uniquely identify a service. Usually, the version is the version of the service binary itself. | 2.1.0.2.0
+components | the list of components that the service is comprised of | `<check out HDFS metainfo>`
+osSpecifics | OS specific package information for the service | `<check out HDFS metainfo>`
+commandScript | service level commands may also be defined. The command is executed on a component instance that is a client | `<check out HDFS metainfo>`
+comment | a short description describing the service | Apache Hadoop Distributed File System
+requiredServices | other services that should be present on the cluster | `<check out HDFS metainfo>`
+configuration-dependencies | configuration files that are expected by the service (config files owned by other services are specified in this list) | `<check out HDFS metainfo>`
+restartRequiredAfterRackChange | Restart Required After Rack Change | true / false
+configuration-dir | Use this to specify a different config directory if not 'configuration' | -
+
+**service/components - A service contains several components. The fields associated with a component are**:
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+name | name of the component | HDFS
+displayName | display name of the component. | HDFS
+category | type of the component - MASTER, SLAVE, and CLIENT | MASTER
+commandScript | application wide commands may also be defined. The command is executed on a component instance that is a client | `<check out HDFS metainfo>`
+cardinality | allowed/expected number of instances | For example, 1-2 for MASTER, 1+ for Slave
+reassignAllowed | whether the component can be reassigned / moved to a different host. | true / false
+versionAdvertised | does the component advertise its version - used during rolling/express upgrade | true / false
+timelineAppid | This will be the component name under which the metrics from this component will be collected. | `<check out HDFS metainfo>`
+dependencies | the list of components that this component depends on | `<check out HDFS metainfo>`
+customCommands | a set of custom commands associated with the component in addition to standard commands. | RESTART_LLAP (Check out HIVE metainfo)
+
+**service/osSpecifics - OS specific package names (rpm or deb packages)**
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+osFamily | the os family for which the package is applicable | any => all<br></br>amazon2015,redhat6,debian7,ubuntu12,ubuntu14,ubuntu16
+packages | list of packages that are needed to deploy the service | `<check out HDFS metainfo>`
+package/name | name of the package (will be used by the yum/zypper/apt commands) | e.g., hadoop-lzo
+
+**service/commandScript - the script that implements service check**
+
+Field | What is it used for
+------|---------------------
+script | the relative path to the script
+scriptType | the type of the script; currently the only supported type is PYTHON
+timeout | custom timeout for the command - this supersedes ambari default
+
+sample values:
+
+```xml
+<commandScript>
+ <script>scripts/service_check.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>300</timeout>
+</commandScript>
+```
+**service/component/dependencies/dependency**
+
+Field | What is it used for
+------|---------------------
+name | name of the component it depends on
+scope | cluster / host. specifies whether the dependent component<br></br>should be present in the same cluster or the same host.
+auto-deploy | specifies whether the dependent component should be automatically deployed, and how (see the `<auto-deploy>` element in the samples below)
+conditions | Conditions in which this dependency exists. For example, the presence of a property in a config.
+
+sample values:
+
+```xml
+<dependency>
+ <name>HDFS/ZKFC</name>
+ <scope>cluster</scope>
+ <auto-deploy>
+ <enabled>false</enabled>
+ </auto-deploy>
+ <conditions>
+ <condition xsi:type="propertyExists">
+ <configType>hdfs-site</configType>
+ <property>dfs.nameservices</property>
+ </condition>
+ </conditions>
+</dependency>
+```
+
+**service/component/commandScript - the script that implements component-specific default commands (similar to service/commandScript)**
+
+**service/component/logs - provides log search integration.**
+
+Field | What is it used for
+------|---------------------
+logId | log id of the component
+primary | whether this is the primary log id
+
+sample values:
+
+```xml
+<log>
+ <logId>hdfs_namenode</logId>
+ <primary>true</primary>
+</log>
+```
+
+**service/component/customCommand - custom commands can be added to components.**
+
+- **name**: name of the custom command
+- **commandScript**: the details of the script that implements the custom command
+- commandScript/script: the relative path to the script
+- commandScript/scriptType: the type of the script; currently the only supported type is PYTHON
+- commandScript/timeout: custom timeout for the command - this supersedes ambari default
+
+**service/component/configFiles - list of config files to be available when client config is to be downloaded (used to configure service clients that are not managed by Ambari)**
+
+- **type**: the type of file to be generated: xml, env (sh), yaml, etc.
+- **fileName**: name of the generated file
+- **dictionary**: data dictionary that contains the config properties (relevant to how ambari manages config bags internally)
+
+## Sample metainfo.xml
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HBASE</name>
+ <displayName>HBase</displayName>
+ <comment>Non-relational distributed database and centralized service for configuration management &amp;
+ synchronization
+ </comment>
+ <version>0.96.0.2.0</version>
+ <components>
+ <component>
+ <name>HBASE_MASTER</name>
+ <displayName>HBase Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <timelineAppid>HBASE</timelineAppid>
+ <dependencies>
+ <dependency>
+ <name>HDFS/HDFS_CLIENT</name>
+ <scope>host</scope>
+ <auto-deploy>
+ <enabled>true</enabled>
+ </auto-deploy>
+ </dependency>
+ <dependency>
+ <name>ZOOKEEPER/ZOOKEEPER_SERVER</name>
+ <scope>cluster</scope>
+ <auto-deploy>
+ <enabled>true</enabled>
+ <co-locate>HBASE/HBASE_MASTER</co-locate>
+ </auto-deploy>
+ </dependency>
+ </dependencies>
+ <commandScript>
+ <script>scripts/hbase_master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>1200</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/hbase_master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+
+ <component>
+ <name>HBASE_REGIONSERVER</name>
+ <displayName>RegionServer</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <timelineAppid>HBASE</timelineAppid>
+ <commandScript>
+ <script>scripts/hbase_regionserver.py</script>
+ <scriptType>PYTHON</scriptType>
+ </commandScript>
+ </component>
+
+ <component>
+ <name>HBASE_CLIENT</name>
+ <displayName>HBase Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <commandScript>
+ <script>scripts/hbase_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ </commandScript>
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>hbase-site.xml</fileName>
+ <dictionaryName>hbase-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>hbase-env.sh</fileName>
+ <dictionaryName>hbase-env</dictionaryName>
+ </configFile>
+ </configFiles>
+ </component>
+ </components>
+
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily>
+ <packages>
+ <package>
+ <name>hbase</name>
+ </package>
+ </packages>
+ </osSpecific>
+ </osSpecifics>
+
+ <commandScript>
+ <script>scripts/service_check.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>300</timeout>
+ </commandScript>
+
+ <requiredServices>
+ <service>ZOOKEEPER</service>
+ <service>HDFS</service>
+ </requiredServices>
+
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hbase-site</config-type>
+ <config-type>ranger-hbase-policymgr-ssl</config-type>
+ <config-type>ranger-hbase-security</config-type>
+ </configuration-dependencies>
+
+ </service>
+ </services>
+</metainfo>
+```
\ No newline at end of file
diff --git a/docs/ambari-design/technology-stack.md b/docs/ambari-design/technology-stack.md
new file mode 100644
index 0000000..92de946
--- /dev/null
+++ b/docs/ambari-design/technology-stack.md
@@ -0,0 +1,30 @@
+---
+sidebar_position: 1
+---
+
+# Technology Stack
+
+## Ambari Server
+
+- Server code: Java 1.7 / 1.8
+- Agent scripts: Python
+- Database: Postgres, Oracle, MySQL
+- ORM: EclipseLink
+- Security: Spring Security with remote LDAP integration and local database
+- REST server: Jersey (JAX-RS)
+- Dependency Injection: Guice
+- Unit Testing: JUnit
+- Mocks: EasyMock
+- Configuration management: Python
+
+## Ambari Web
+
+- Frontend code: JavaScript
+- Client-side MVC framework: Ember.js / AngularJS
+- Templating: Handlebars.js (integrated with Ember.js)
+- DOM manipulation: jQuery
+- Look and feel: Bootstrap 2
+- CSS preprocessor: LESS
+- Unit Testing: Mocha
+- Mocks: Sinon.js
+- Application assembler/tester: Brunch / Grunt / Gulp
diff --git a/docs/ambari-design/views/framework-services.md b/docs/ambari-design/views/framework-services.md
new file mode 100644
index 0000000..47a639e
--- /dev/null
+++ b/docs/ambari-design/views/framework-services.md
@@ -0,0 +1,110 @@
+# Framework Services
+
+This section describes the framework services that are available for views.
+
+
+## ViewContext
+
+The view server-side resources have access to a [ViewContext](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewContext.java) object. The view context provides information about the current authenticated user, the view definition, the instance configuration properties, instance data and the view controller.
+
+```java
+/**
+ * The view context.
+ */
+@Inject
+ViewContext context;
+```
+
+## Instance Data
+
+The view framework exposes a way to store key/value pair "instance data". This data is scoped to a given view instance and user. Instance data is meant to be used for information such as "user prefs" or other lightweight information that supports the experience of your view application. You can access the instance data get and put methods from the [ViewContext](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewContext.java) object.
+
+Check out the **Favorite View** for example usage of the instance data API.
+
+[https://github.com/apache/ambari/tree/trunk/ambari-views/examples/favorite-view](https://github.com/apache/ambari/tree/trunk/ambari-views/examples/favorite-view)
+
+```java
+/**
+ * Context object available to the view components to provide access to
+ * the view and instance attributes as well as run time information about
+ * the current execution context.
+ */
+public interface ViewContext {
+
+ /**
+ * Save an instance data value for the given key.
+ *
+ * @param key the key
+ * @param value the value
+ *
+ * @throws IllegalStateException if no instance is associated
+ */
+ public void putInstanceData(String key, String value);
+ /**
+ * Get the instance data value for the given key.
+ *
+ * @param key the key
+ *
+ * @return the instance data value; null if no instance is associated
+ */
+ public String getInstanceData(String key);
+
+}
+```
+
+## Instance Configuration Properties
+
+The instance configuration properties (set when you created your view instance) are accessible from the view context:
+
+```java
+viewContext.getProperties();
+```
+
+Configuration properties also support a set of pre-defined **variables** that are replaced when you read the property from the view context. For example, if your view requires a configuration parameter "hdfs.file.path" and that path is going to be set based on the username, when you configure the view instance, set the configuration property like so:
+
+```
+"hdfs.file.path" : "/this/is/some/path/${username}"
+```
+
+When you get this property from the view context, the `${username}` variable will be replaced automatically.
+
+```java
+viewContext.getProperties().get("hdfs.file.path"); // returns "/this/is/some/path/pramod"
+```
+
+Instance parameters support the following pre-defined variables: `${username}`, `${viewName}` and `${instanceName}`.
+
+## Events
+
+Events are an important component of the views framework. Events allow the view to interact with the framework on lifecycle changes (i.e. "Framework Events") such as deploy, create and destroy. In addition, once a user has a collection of views available, eventing allows the views to communicate with other views (i.e. "View Events").
+
+### Framework Events
+
+To register to receive framework events, in the `view.xml`, specify a `<view-class>this.is.my.view-clazz</view-class>` which is a class that implements the [View](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/View.java) interface.
+
+
+
+Event | Description
+---------|-------
+onDeploy() | Called when a view is deployed.
+onCreate() | Called when a view instance is created.
+onDestroy() | Called when a view instance is destroyed.
+
+### View Events
+
+Views can pass events between views. Obtain the [ViewController](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewController.java) object that allows you to **register listeners** for view events and to **fire events** for other listeners. A view can register an event [Listener](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/events/Listener.java) (via the [ViewController](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewController.java)) for other views by **view name**, or by **view name + version**. When an [Event](http://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/events/Event.java) is fired from the source view, all registered listeners will receive the event.
+
+
+
+1. Obtain the view controller and register a listener.
+
+```java
+viewContext.getViewController().registerListener(...);
+```
+2. Fire the event. `viewContext.getViewController().fireEvent(...);`
+
+3. The framework will notify all registered listeners. The listener implementation can process the event as appropriate. `listener.notify(...)`
diff --git a/docs/ambari-design/views/imgs/fmwk-events.jpg b/docs/ambari-design/views/imgs/fmwk-events.jpg
new file mode 100644
index 0000000..a1b4881
--- /dev/null
+++ b/docs/ambari-design/views/imgs/fmwk-events.jpg
Binary files differ
diff --git a/docs/ambari-design/views/imgs/view-components.jpg b/docs/ambari-design/views/imgs/view-components.jpg
new file mode 100644
index 0000000..1f95d12
--- /dev/null
+++ b/docs/ambari-design/views/imgs/view-components.jpg
Binary files differ
diff --git a/docs/ambari-design/views/imgs/view-events.jpg b/docs/ambari-design/views/imgs/view-events.jpg
new file mode 100644
index 0000000..62f0532
--- /dev/null
+++ b/docs/ambari-design/views/imgs/view-events.jpg
Binary files differ
diff --git a/docs/ambari-design/views/imgs/view-lifecycle.png b/docs/ambari-design/views/imgs/view-lifecycle.png
new file mode 100644
index 0000000..1cd7cc1
--- /dev/null
+++ b/docs/ambari-design/views/imgs/view-lifecycle.png
Binary files differ
diff --git a/docs/ambari-design/views/imgs/view-versions.jpg b/docs/ambari-design/views/imgs/view-versions.jpg
new file mode 100644
index 0000000..4dcd45d
--- /dev/null
+++ b/docs/ambari-design/views/imgs/view-versions.jpg
Binary files differ
diff --git a/docs/ambari-design/views/index.md b/docs/ambari-design/views/index.md
new file mode 100644
index 0000000..6e8a74a
--- /dev/null
+++ b/docs/ambari-design/views/index.md
@@ -0,0 +1,90 @@
+# Views
+
+:::info
+This capability is currently under development.
+:::
+
+**Ambari Views** offer a systematic way to plug in UI capabilities to surface custom visualization, management and monitoring features in Ambari Web. A "**view**" is a way of extending Ambari that allows 3rd parties to plug in new resource types along with the APIs, providers and UI to support them. In other words, a view is an application that is deployed into the Ambari container.
+
+
+## Useful Resources
+
+Resource | Link
+---------|-------
+Views Overview | http://www.slideshare.net/hortonworks/ambari-views-overview
+Views Framework API Docs | https://github.com/apache/ambari/blob/trunk/ambari-views/docs/index.md
+Views Framework Examples | https://github.com/apache/ambari/tree/trunk/ambari-views/examples
+
+## Terminology
+
+The following section describes the basic terminology associated with views.
+
+Term | Description
+---------|-------
+View Name | The name of the view. The view name identifies the view to Ambari.
+View Version | The version of the view. A unique view name can have multiple versions deployed in Ambari.
+View Package | This is the JAR package that contains the **view definition** and all view resources (server-side resources and client-side assets) for a given view version. See [View Package](#view-package) for more information on the contents and structure of the package.
+View Definition | This defines the view name, version, resources and required/optional configuration parameters for a view. The view definition file is included in the view package. See View Definition for more information on the view definition file syntax and features.
+View Instance | A unique, configured instance of a view, based on a view definition and a specific version. See Versions and Instances for more information.
+View API | The REST API for viewing the list of deployed views and creating view instances. See View API for more information.
+Framework Services | The server-side of the view framework exposes certain services for use with your views. This includes persistence of view instance data and view eventing. See Framework Services for more information.
+
+## Components of a View
+
+A view can consist of **client-side assets** (i.e. the UI that is exposed in Ambari Web) and **server-side resources** (i.e. the classes that expose REST end points). When the view loads into Ambari Web, the view UI can use the view server-side resources as necessary to deliver the view functionality.
+
+
+
+### Client-side Assets
+
+The framework does not limit or restrict which client-side technologies a view uses. You can package client-side dependencies (such as JavaScript and CSS frameworks) with your view.
+
+### Server-side Resources
+
+A view can expose resources as REST end points to be used in conjunction with the client-side to deliver the functionality of your view application. These resources are written in Java and can be anything from a servlet to a regular REST service to an Ambari ResourceProvider (i.e. a special type of REST service that handles some REST capabilities such as partial response and pagination, provided you adhere to the Ambari ResourceProvider interface). See [Framework Services](./framework-services.md) for more information on capabilities that the framework exposes on the server-side for views.
+
+:::info
+Checkout the **Weather View** as an example of a view that exposes servlet and REST endpoints.
+
+[https://github.com/apache/ambari/tree/trunk/ambari-views/examples/weather-view](https://github.com/apache/ambari/tree/trunk/ambari-views/examples/weather-view)
+:::
+
+## View Package
+
+The assets associated with a view are delivered as a JAR package. The **view definition file** must be at the root of the package. UI assets and server-side classes are served from the root. Dependent Java libraries are placed in the `WEB-INF/lib` directory.
+
+```
+view.jar
+|
+|- view.xml
+|
+|- index.html (and other client-side assets)
+|
+|_ WEB-INF
+   |
+   |_ lib/*.jar
+```
+
+## Versions and Instances
+
+Multiple versions of a given view can be deployed into Ambari and multiple instances of each view can be created for each version. For example, I can have a view named FILES and deploy versions 0.1.0 and 0.2.0. I can then create instances of each version FILES{0.1.0} and FILES{0.2.0} allowing some Ambari users to have an older version of FILES (0.1.0), and other users to have the newer FILES version (0.2.0). I can also create multiple instances for each version, configuring each differently.
+
+
+
+### Instance Configuration Parameters
+
+As part of a view definition, the instance configuration parameters are specified (i.e. "these parameters are needed to configure an instance of this view"). When you create a view instance, you specify the configuration parameters specific to that instance. Since parameters are scoped to a particular view instance, you can have multiple instances of a view, each instance configured differently.
+
+Using the example above, I can create two instances of the FILES{0.2.0} version, one instance that is configured a certain way and the second that is configured differently. This allows some Ambari users to use FILES one way, and other users a different way.
+
+See [Framework Services](./framework-services.md) for more information on instance configuration properties.
+
+## View Lifecycle
+
+The lifecycle of a view is shown below. As you deploy a view and create instances of a view, server-side framework events are invoked. See [Framework Services](./framework-services.md) for more information on capabilities that the framework exposes on the server-side for views.
+
+
diff --git a/docs/ambari-design/views/view-api.md b/docs/ambari-design/views/view-api.md
new file mode 100644
index 0000000..023db84
--- /dev/null
+++ b/docs/ambari-design/views/view-api.md
@@ -0,0 +1,195 @@
+# View API
+
+This section describes basic usage of the View REST API. Browse https://github.com/apache/ambari/blob/trunk/ambari-views/docs/index.md for detailed usage information and examples.
+
+## Get List of Deployed Views
+
+1. Get the list of all deployed views.
+
+```
+GET /api/v1/views
+
+200 - OK
+```
+
+2. Once you have the list of views, you can drill into a view and see the available versions.
+
+```
+GET /api/v1/views/FILES
+
+200 - OK
+```
+
+3. You can go a level deeper and see more information about that specific version for the view, such as the parameters and the archive name, and a list of all instances of the view for that specific view version.
+
+```
+GET /api/v1/views/FILES/versions/0.1.0
+
+200 - OK
+```
+
+## Creating a View Instance: Files View
+
+The following example shows creating an instance of the [Files View](https://github.com/apache/ambari/tree/trunk/contrib/views/files) (view name FILES, version 0.1.0) called "MyFiles".
+
+1. Create the view instance.
+
+```
+POST /api/v1/views/FILES/versions/0.1.0/instances/MyFiles
+
+[ {
+"ViewInstanceInfo" : {
+ "properties" : {
+ "dataworker.defaultFs" : "webhdfs://your.namenode.host:50070"
+ }
+ }
+} ]
+
+201 - CREATED
+```
+
+:::info
+When creating your view instance, be sure to provide all required view instance properties, otherwise you will receive a 500 with a message explaining the properties that are required.
+:::
+
+2. Restart Ambari Server to pick up the view instance and UI resources.
+
+```bash
+ambari-server restart
+```
+
+3. Confirm the newly created view instance is available.
+
+```
+GET /api/v1/views/FILES/versions/0.1.0
+
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/FILES/versions/0.1.0/",
+ "ViewVersionInfo" : {
+ "archive" : "/var/lib/ambari-server/resources/views/work/FILES{0.1.0}",
+ "label" : "Files",
+ "masker_class" : null,
+ "parameters" : [
+ {
+ "name" : "dataworker.defaultFs",
+ "description" : "FileSystem URI",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "dataworker.username",
+ "description" : "The username (defaults to ViewContext username)",
+ "required" : false,
+ "masked" : false
+ }
+ ],
+ "version" : "0.1.0",
+ "view_name" : "FILES"
+ },
+ "permissions" : [ ],
+ "instances" : [
+ {
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/FILES/versions/0.1.0/instances/MyFiles",
+ "ViewInstanceInfo" : {
+ "instance_name" : "MyFiles",
+ "version" : "0.1.0",
+ "view_name" : "FILES"
+ }
+ }
+ ]
+}
+```
+
+Browse to the view instance directly.
+
+```
+http://c6401.ambari.apache.org:8080/views/FILES/0.1.0/MyFiles/
+
+or
+
+http://c6401.ambari.apache.org:8080/#/main/views/FILES/0.1.0/MyFiles
+```
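+
+The same flow can also be scripted end to end. Below is a minimal sketch using Python's `requests` library; the admin/admin credentials and the `X-Requested-By` header (which Ambari requires on write operations) are assumptions based on a default installation:
+
+```python
+import json
+import requests
+
+AMBARI = "http://c6401.ambari.apache.org:8080/api/v1"
+AUTH = ("admin", "admin")               # assumed default credentials
+HEADERS = {"X-Requested-By": "ambari"}  # required by Ambari on POST/PUT
+
+# Create the MyFiles instance with its required property
+requests.post(AMBARI + "/views/FILES/versions/0.1.0/instances/MyFiles",
+              auth=AUTH, headers=HEADERS,
+              data=json.dumps([{
+                  "ViewInstanceInfo": {
+                      "properties": {
+                          "dataworker.defaultFs": "webhdfs://your.namenode.host:50070"
+                      }
+                  }
+              }]))
+
+# After an ambari-server restart, confirm the instance is available
+print(requests.get(AMBARI + "/views/FILES/versions/0.1.0", auth=AUTH).json())
+```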
+
+## Creating a View Instance: Capacity Scheduler View
+
+The following example shows creating an instance of the [Capacity Scheduler View](https://github.com/apache/ambari/tree/trunk/contrib/views/capacity-scheduler) (view name CAPACITY-SCHEDULER, version 0.1.0) called "CS_1", using the label "Capacity Scheduler".
+
+* Create the view instance.
+
+```
+POST /api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/instances/CS_1
+
+[ {
+"ViewInstanceInfo" : {
+ "label" : "Capacity Scheduler",
+ "properties" : {
+ "ambari.server.url" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster",
+ "ambari.server.username" : "admin",
+ "ambari.server.password" : "admin"
+ }
+ }
+} ]
+
+201 - CREATED
+```
+
+:::info
+When creating your view instance, be sure to provide all **required** view instance properties, otherwise you will receive a 500 with a message explaining the properties that are required.
+:::
+
+* Confirm the newly created view instance is available.
+
+```
+GET /api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0
+
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/",
+ "ViewVersionInfo" : {
+ "archive" : "/var/lib/ambari-server/resources/views/work/CAPACITY-SCHEDULER{0.1.0}",
+ "label" : "Capacity Scheduler",
+ "masker_class" : null,
+ "parameters" : [
+ {
+ "name" : "ambari.server.url",
+ "description" : "Target Ambari Server REST API cluster URL (for example: http://ambari.server:8080/api/v1/clusters/c1)",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "ambari.server.username",
+ "description" : "Target Ambari administrator username (for example: admin)",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "ambari.server.password",
+ "description" : "Target Ambari administrator password (for example: admin)",
+ "required" : true,
+ "masked" : false
+ }
+ ],
+ "version" : "0.1.0",
+ "view_name" : "CAPACITY-SCHEDULER"
+ },
+ "permissions" : [ ],
+ "instances" : [
+ {
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/instances/CS_1",
+ "ViewInstanceInfo" : {
+ "instance_name" : "CS_1",
+ "version" : "0.1.0",
+ "view_name" : "CAPACITY-SCHEDULER"
+ }
+ }
+ ]
+}
+```
+* Browse to the view instance directly.
+
+```
+http://c6401.ambari.apache.org:8080/views/CAPACITY-SCHEDULER/0.1.0/CS_1/
+
+or
+
+http://c6401.ambari.apache.org:8080/#/main/views/CAPACITY-SCHEDULER/0.1.0/CS_1/
+```
diff --git a/docs/ambari-design/views/view-definition.md b/docs/ambari-design/views/view-definition.md
new file mode 100644
index 0000000..b32cc8f
--- /dev/null
+++ b/docs/ambari-design/views/view-definition.md
@@ -0,0 +1,210 @@
+# View Definition
+
+The following describes the syntax of the View Definition File (`view.xml`) as part of the View Package.
+
+An XML Schema Definition is available [here](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/resources/view.xsd).
+
+## `<view>`
+
+The `<view>` element is the enclosing element in the Definition File. The following table describes the elements you can include in the `<view>` element:
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes | The unique name of the view. See `<name>` for more information.
+label | Yes | The display label of the view. See `<label>` for more information.
+version | Yes | The version of the view. See `<version>` for more information.
+min-ambari-version<br></br>max-ambari-version | No | The minimum and maximum Ambari version this view can be deployed with. See `<min-ambari-version>` for more information.
+description | No | The description of the view. See `<description>` for more information.
+icon | No | The 32x32 icon to display for this view. Suggested size is 32x32 and will be displayed as 8x8 and 16x16 as necessary. If this property is not set, a default view framework icon is used.
+icon64 | No | The 64x64 icon to display for this view. If this property is not set, the 32x32 sized icon will be used.
+permission | No | Defines a custom permission for this view. See `<permission>` for more information.
+parameter | No | Defines a configuration parameter that is used when creating a view instance. See `<parameter>` for more information.
+resource | No | Defines a resource that is exposed by the view. See `<resource>` for more information.
+instance | No | Defines a static instance of the view. See `<instance>` for more information.
+view-class | No| Registers a view class to receive framework events. See `<view-class>` for more information.
+validator-class | No | Registers a validator class to receive framework events. See `<validator-class>` for more information.
+
+## `<name>`
+
+The unique name of the view. Example:
+
+```xml
+<name>MY_COOL_VIEW</name>
+```
+
+## `<label>`
+
+The label of the view. Example:
+
+```xml
+<label>My Cool View</label>
+```
+
+## `<version>`
+
+The version of the view. Example:
+
+```xml
+<version>0.1.0</version>
+```
+
+## `<min-ambari-version> <max-ambari-version>`
+
+The minimum and maximum version of Ambari server that can run this view. Example:
+
+```xml
+<min-ambari-version>1.7.0</min-ambari-version>
+<min-ambari-version>1.7.*</min-ambari-version>
+<max-ambari-version>2.0</max-ambari-version>
+```
+
+## `<description>`
+
+The description of the view. Example:
+
+```xml
+<description>This view is used to display information.</description>
+```
+
+## `<parameter>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes | The name of the configuration parameter.
+description | Yes | The description of the configuration parameter.
+label | No | The user friendly name of the configuration parameter (used in the Ambari Administration Interface UI).
+placeholder| No | The placeholder value for the configuration parameter (used in the Ambari Administration Interface UI).
+default-value | No| The default value for the configuration parameter (used in the Ambari Administration Interface UI).
+required | Yes |If true, the configuration parameter is required in order to create a view instance.
+masked | No | Indicates that this parameter value is to be "masked" in the Ambari Web UI (i.e. not shown in the clear). Omitting this element defaults to not masked; if true, the parameter value will be "masked" in the Web UI.
+
+```xml
+<parameter>
+ <name>someParameter</name>
+ <description>Some parameter this is used to configure an instance of this view</description>
+ <required>false</required>
+</parameter>
+```
+
+```xml
+<parameter>
+ <name>name.label.descr.default.place</name>
+ <description>Name, label, description, default and placeholder</description>
+ <label>NameLabelDescDefaultPlace</label>
+ <placeholder>this is placeholder text but you should see default</placeholder>
+ <default-value>youshouldseethisdefault</default-value>
+ <required>true</required>
+</parameter>
+```
+
+See the [Property View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/property-view/docs/index.md) to see the different parameter options in use.
+
+## `<permission>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes| The unique name of the permission.
+description| Yes| The description of the permission.
+
+```xml
+<permission>
+ <name>SOME_CUSTOM_PERM</name>
+ <description>A custom permission for this view</description>
+</permission>
+<permission>
+ <name>SOME_OTHER_PERM</name>
+ <description>Another custom permission for this view</description>
+</permission>
+```
+
+## `<resource>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes| The name of the resource. This will be the resource endpoint name of the view instance.
+plural-name | No | The plural name of the resource.
+service-class | No | The JAX-RS annotated resource service class.
+id-property | No | The resource identifier.
+provider-class | No | The Ambari ResourceProvider resource class.
+resource-class | No | The JavaBean resource class.
+
+```xml
+<resource>
+ <name>calculator</name>
+ <service-class>org.apache.ambari.view.proxy.CalculatorResource</service-class>
+</resource>
+```
+
+See the [Calculator View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/calculator-view/docs/index.md) to see a REST service endpoint view implementation.
+
+```xml
+<resource>
+ <name>city</name>
+ <plural-name>cities</plural-name>
+ <id-property>id</id-property>
+ <resource-class>org.apache.ambari.view.weather.CityResource</resource-class>
+ <provider-class>org.apache.ambari.view.weather.CityResourceProvider</provider-class>
+ <service-class>org.apache.ambari.view.weather.CityService</service-class>
+</resource>
+```
+
+See the [Weather View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/weather-view/docs/index.md) to see an Ambari ResourceProvider view implementation.
+
+## `<instance>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes |The unique name of the view instance.
+label | No |The display label of the view instance. If not set, the view definition `<label>` is used.
+description| No |The description of the view instance. If not set, the view definition `<description>` is used.
+visible | No |If true, the view instance will show up in the user's view instance list.
+icon | No |Overrides the view icon for this specific view instance.
+icon64 | No |Overrides the view icon64 for this specific view instance.
+property | No |Specifies any necessary configuration parameters for the view instance. See `<property>` for more information.
+
+```xml
+<instance>
+ <name>US_WEST</name>
+ <property>
+ <key>cities</key>
+ <value>Palo Alto, US;Los Angeles, US;Portland, US;Seattle, US</value>
+ </property>
+ <property>
+ <key>units</key>
+ <value>imperial</value>
+ </property>
+</instance>
+```
+
+## `<property>`
+
+Element | Required | Description
+--------|---------|---------------
+key |Yes |The property key (for the configuration parameter to set).
+value |Yes |The property value (for the configuration parameter to set).
+
+```xml
+<property>
+ <key>units</key>
+ <value>imperial</value>
+</property>
+```
+
+## `<view-class>`
+
+Registers a view class to receive framework events. The view class must implement the [View](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/View.java) interface.
+
+```xml
+<view-class>this.is.my.viewclazz</view-class>
+```
+
+## `<validator-class>`
+
+Registers a validator class to receive property and instance validation requests. The validator class must implement the [Validator](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/validation/Validator.java) interface.
+
+```xml
+<validator-class>org.apache.ambari.view.property.MyValidator</validator-class>
+```
+
+See [Property Validator View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/property-validator-view/docs/index.md) to see view property and instance validation in use.
diff --git a/docs/ambari-dev/admin-view-ambari-admin-development.md b/docs/ambari-dev/admin-view-ambari-admin-development.md
new file mode 100644
index 0000000..e8b230c
--- /dev/null
+++ b/docs/ambari-dev/admin-view-ambari-admin-development.md
@@ -0,0 +1,39 @@
+# Admin View (ambari-admin) Development
+
+## Frontend Development
+
+Follow the instructions here to ease frontend development for Admin View (ambari-admin module):
+
+1. Follow the Quick Start Guide to install and start Ambari Server (cluster need not be deployed).
+2. Follow the "Frontend Development" section in Quick Start Guide to check out the Ambari source using git. This makes the entire Ambari source available via /vagrant/ambari from the Vagrant VM.
+3. From the Ambari Server host:
+
+ ```bash
+ # If this directory does not exist, you have not started ambari-server;
+ # run "ambari-server start" to start it.
+ cd /var/lib/ambari-server/resources/views/work
+ mv ADMIN_VIEW\{2.5.0.0\} /tmp
+ ln -s /vagrant/ambari/ambari-admin/src/main/resources/ui/admin-web/dist ADMIN_VIEW\{2.5.0.0\}
+ cp /tmp/ADMIN_VIEW\{2.5.0.0\}/view.xml ADMIN_VIEW\{2.5.0.0\}/
+ ambari-server restart
+ ```
+
+4. Now you can change the source code for Admin View and run gulp locally, and the changes are automatically reflected on the server.
+
+
+## Functional Tests
+
+To run end-to-end functional tests in the browser, execute `npm run update-webdriver` and then `npm start` (this starts an HTTP server on port 8000).
+
+Open another terminal at the same path and execute `npm run protractor` (runs the e2e tests in the browser; this library works on top of the Selenium jar).
+
+## Unit Tests
+
+To run unit tests:
+
+Go to `/ambari/ambari-admin/src/main/resources/ui/admin-web` and execute `npm run test-single-run` (this uses the PhantomJS headless browser; it is the same one used by the ambari-web unit tests).
+
+Note:
+The "npm test" command starts a karma server at [http://localhost:9876/](http://localhost:9876/) and runs the unit tests. This server remains up, automatically reloads any changes in the test code, and reruns the tests. This is useful while developing unit tests.
diff --git a/docs/ambari-dev/ambari-code-layout.md b/docs/ambari-dev/ambari-code-layout.md
new file mode 100644
index 0000000..6adf4ed
--- /dev/null
+++ b/docs/ambari-dev/ambari-code-layout.md
@@ -0,0 +1,102 @@
+# Ambari Code Layout
+
+_Ambari code checkout and build instructions are available in Ambari Development page._
+_Ambari design and architecture is detailed in [Ambari Design](../ambari-design/index.md) page._
+_Understanding the architecture of Ambari is helpful in navigating code easily._
+
+Ambari's source has the following layout:
+
+```
+ambari/
+ ambari-agent/
+ ambari-common/
+ ambari-project/
+ ambari-server/
+ ambari-views/
+ ambari-web/
+ contrib/
+ docs/
+```
+
+Major components of Ambari reside in their own sub-folders under the root folder, to maintain clean separation of code.
+
+Folder | Components or Purpose
+------|---------------------
+ambari-server | Code for the main Ambari server which manages Hadoop through the agents installed on each node.
+ambari-agent | Code for the Ambari agents which run on each node that the server above manages.
+ambari-web | Code for Ambari Web UI which interacts with the Ambari server above.
+ambari-views | Code for Ambari Views, the framework for extending the Ambari Web UI.
+ambari-common | Any common code between Ambari Server and Agents.
+contrib | Code for any custom contributions Ambari makes to other third party software or libraries.
+docs | Basic Ambari documentation, including the Ambari REST API.
+
+Ambari Server and Agents interact with each other via an internal JSON protocol.
+Ambari Web UI interacts with Ambari Server through the documented Ambari REST APIs.
+
+## Ambari-Server
+
+## Ambari-Agent
+
+## Ambari-Views
+
+## Ambari-Web
+
+The Ambari Web UI is a purely browser side JavaScript application based on the [Ember](http://emberjs.com/) JavaScript framework. A good understanding of [Ember](http://emberjs.com/) is necessary to easily understand the code and its layout.
+
+Being a pure JavaScript application, all UI is rendered locally in the browser, with data coming from the Ambari REST APIs provided by the Ambari Server.
+
+```
+ambari-web/
+ app/
+ config.coffee
+ package.json
+ pom.xml
+ test/
+ vendor/
+```
+
+Folder | Description
+------|---------------------
+app/ |The main application code. This contains Ember's views, templates, controllers, models, routes, etc. for rendering Ambari UI
+config.coffee |[Brunch](http://brunch.io/) application builder configuration file
+package.json |[npm](https://npmjs.org/) package manager configuration file
+test/ |Javascript test files testing functionality written in app/ folder
+vendor/ |Third party javascript libraries and stylesheets used. Full list of third party libraries is documented in /ambari/ambari-web/app/assets/licenses/NOTICE.txt
+
+Developers mainly work on JavaScript and other files in the app/ folder. Once that is done, the final JavaScript is built using Brunch (an HTML5 application assembler based on node.js) into the /ambari/ambari-web/public/ folder. This folder contains the index.html which bootstraps the Ambari web application.
+
+While working, developers should use the
+
+```bash
+brunch w
+```
+
+command to launch Brunch in watch mode, where it re-generates the final application on any change. Similarly,
+
+```bash
+brunch watch --server (or use the shorthand: brunch w -s)
+```
+
+launches a HTTP server at http://localhost:3333 serving the final application. This is helpful in seeing UI with mock data, without the entire Ambari server deployed.
+
+Note: see "[Coding Guidelines for Ambari](./coding-guidelines-for-ambari.md)" for more details on building and running Ambari Web locally.
+
+**ambari-web/app**
+
+Since the ambari-web/app/ folder is where developers spend the majority of their time, the major files and folders and their purposes are listed below.
+
+Folder or File | Description
+------|---------------------
+assets/ | Mock data under assets/data. Static files served to the browser under assets/font and assets/img.
+controllers/ | The C in MVC. Ember controllers for the main application controllers/main, installer controllers/wizard, and common controllers controllers/global
+data/ | Meta data for the application (UI metadata, server data metadata, etc.)
+mappers/ | Classes which map server side JSON data structures into client side Ember models.
+models/ | The M in MVC. [Ember Data](http://emberjs.com/guides/models/) models used. Clusters, Services, Hosts, Alerts, etc. models are defined here
+routes/ | [Ember routes](http://emberjs.com/guides/routing/) defining the various page redirections in the application. main.js contains the main application routes. installer.js contains installer routes. Others are routings in various wizards etc.
+styles/ | CSS stylesheets represented in the [less](http://lesscss.org/) format. This is compiled by Brunch into the ambari-web/public/stylesheets/app.css
+views/ | The V in MVC. Contains all the Ember views of the application. Main application views under views/main, installer views under views/installer, and common views under views/commons
+templates/ | The HTML templates used by the views above. Generally a view will have a template file. Sometimes views define the template content in themselves as strings
+app.js | The main Ember application
+config.js | Main configuration file for the JavaScript application. Developers can keep the application in test mode using the App.testMode property, etc.
+
+If a developer adds, removes, or renames a model, view, controller, template, or route, they should update the corresponding entry in the models.js, views.js, controllers.js, templates.js, or routes.js file.
\ No newline at end of file
diff --git a/docs/ambari-dev/apache-ambari-jira.md b/docs/ambari-dev/apache-ambari-jira.md
new file mode 100644
index 0000000..8097868
--- /dev/null
+++ b/docs/ambari-dev/apache-ambari-jira.md
@@ -0,0 +1,36 @@
+# Apache Ambari JIRA
+
+The following page describes the [Apache Ambari JIRA](https://issues.apache.org/jira/browse/AMBARI) components for tasks, bugs, and improvements across the core project and contributions.
+
+## Components
+
+Proposed Rename | Description
+------|---------------------
+alerts | JIRAs related to Ambari Alerts system.
+ambari-admin | New component specifically for Ambari Admin.
+ambari-agent | JIRAs related to the Ambari Agent.
+ambari-client | JIRAs related to the Ambari Client.
+ambari-metrics| JIRAs related to Ambari Metrics system.
+ambari-server | JIRAs related to the Ambari Server.
+ambari-shell | New component specifically for Ambari Shell.
+ambari-views | JIRAs related to the [Ambari Views framework](../ambari-design/views/index.md). Specific Views that are built on the framework will be handled with labels.
+ambari-web | New component specifically for Ambari Web.
+blueprints | JIRAs related to [Ambari Blueprints](../ambari-design/blueprints/index.md).
+contrib | JIRAs related to contributions under "contrib", such as Ambari SCOM
+documentation | JIRAs related to project documentation including the wiki.
+infra | JIRAs related to project infrastructure, including builds, release mechanics, and automation
+security | JIRAs related to Ambari security features, including Kerberos.
+site | JIRAs related to the project site http://ambari.apache.org/
+stacks | JIRAs related to Ambari Stacks.
+test | JIRAs related to unit tests and test automation
+
+## Use of Labels
+
+In certain cases, the component listed above might be "too broad" and you want to designate JIRAs to a specific area of that component. To handle these scenarios, use a combination of component + labels. Some examples:
+
+Feature Area | Description|Component|Label
+-------------|------------|----------|---------
+HDP Stack | These are specific Stack implementations for HDP. |stacks | HDP
+BigTop | This is a specific Stack implementation for BigTop. | stacks | BigTop
+Files View | This is a specific view implementation for Files. | ambari-views | Files
+Ambari SCOM | This is a specific contribution of a Management Pack for Microsoft System Center. | contrib |Ambari-SCOM
diff --git a/docs/ambari-dev/code-review-guidelines.md b/docs/ambari-dev/code-review-guidelines.md
new file mode 100644
index 0000000..8f079a6
--- /dev/null
+++ b/docs/ambari-dev/code-review-guidelines.md
@@ -0,0 +1,20 @@
+# Code Review Guidelines
+
+Please refer to [How to Contribute](./how-to-contribute.md) for instructions on how to submit a code review to Github.
+
+**What makes a good code review?**
+
+- Authors should annotate source code before the review. This makes it easier for devs reviewing your code and may even help you spot bugs before they do.
+- Send small code-reviews if possible. Reviewing more than 400 lines per hour diminishes our ability to find defects.
+- Reviewing code for more than one hour also reduces our ability to find bugs.
+- If possible, try to break up large reviews into separate but functional stages. If you need to temporarily comment out unit tests, do so. Sending gigantic patches means your review will take longer since reviewers need to block out more time to go through it, and you may spend more time revving iterations and rebasing.
+
+We have a global community of committers, so please be mindful that you should **wait at least 24 hours** before merging your pull request even though you may already have the necessary +1.
+
+This encourages others to take an interest in your pull request and helps us find more bugs (it's ok to slow down in order to speed up).
+
+**Always include at least two committers who are familiar with that code area.**
+
+If you want to subscribe to code reviews for a particular area, [feel free to edit this section](https://cwiki.apache.org/confluence/display/AMBARI/Code+Review+Guidelines).
+
+
diff --git a/docs/ambari-dev/coding-guidelines-for-ambari.md b/docs/ambari-dev/coding-guidelines-for-ambari.md
new file mode 100644
index 0000000..48798ec
--- /dev/null
+++ b/docs/ambari-dev/coding-guidelines-for-ambari.md
@@ -0,0 +1,189 @@
+# Coding Guidelines for Ambari
+
+## Ambari Web Frontend Development Environment
+
+### Application Assembler: Brunch
+
+* Brunch was used to create the application skeleton for Ambari Web.
+
+* Brunch builds and deploys code automatically in the background as you modify the source files. This lets you break up the application into a number of JS files for code organization and reuse without worrying about development turnaround or runtime load performance.
+
+* Brunch lets you run a Node.js-based web server with a single command so that you can easily run Ambari Web without setting up Ambari Server (you still need to run Ambari Server for true end-to-end testing).
+
+To check out Ambari Web from the Github repository and run it:
+
+* Install Node.js from [http://nodejs.org](http://nodejs.org)
+* Execute the following:
+
+```bash
+git clone https://git-wip-us.apache.org/repos/asf/ambari.git
+cd ambari/ambari-web
+sudo npm install -g brunch@1.7.20
+rm -rf node_modules public
+npm install
+brunch build
+```
+
+_Note: if you receive a "gyp + xcodebuild" error when running "npm install", confirm you have Xcode CLI tools installed (Xcode > Preferences > Downloads)_
+_Note: if you receive "Error: EMFILE, open" errors when running "brunch build", increase the ulimit for file descriptors (for example, "ulimit -n 10000")_
+
+To run the web server in isolation, without Ambari Server:
+
+```
+brunch watch --server  # or use the shorthand: brunch w -s
+```
+
+The above runs Ambari Web with a local test server at localhost:3333. The login/password is admin/admin.
+
+We highly recommend that all Ambari front-end developers use PhpStorm by JetBrains. JetBrains has kindly granted Apache Ambari an open-source license for PhpStorm and IntelliJ. These products are available to Ambari committers (if you are an Ambari committer, email [private@ambari.apache.org](mailto:private@ambari.apache.org) to request license keys). You can also use Eclipse if that is your preference.
+
+* IDE Plugins
+
+Go to Preferences -> Plugins -> Browse repositories and install the "Node.js" and "Handlebars" plugins.
+
+### Coding Conventions
+
+JavaScript, Handlebars, and LESS files should be formatted with the IDE to maintain consistency.
+
+Also, the IDE will give warnings in the editor about implicit globals, etc. Fix these warnings before submitting patches.
+
+We will use all default settings for Code Style in the IDE, except for the following:
+
+```
+Go to Preferences
+Code Style->General
+Line separator (for new files): Unix
+Make sure "Use tab character" is NOT checked
+Tab size: 2
+Indent: 2
+Continuation indent: 2
+Code Style->JavaScript:
+Tabs and Indents
+Make sure "use tab character" is NOT checked
+Set Tab size, Indent, and Continuation indent to "2".
+
+Spaces->Other
+Turn on "After name-value property separator ':'
+```
+
+In general, the following conventions should be followed for all JavaScript code: http://javascript.crockford.com/code.html
+
+Exceptions to the rule from the above:
+
+* We use 2 spaces instead of 4.
+
+* Variable Declarations:
+"It is preferred that each variable be given its own line and comment. They should be listed in alphabetical order."
+Comment only where it makes sense. - No need to do alphabetical sorting.
+
+* "JavaScript does not have block scope, so defining variables in blocks can confuse programmers who are experienced with other C family languages. Define all variables at the top of the function." - This does not need to be followed.
+
+### Java Import Order
+
+Some IDEs define their default import order differently and this can cause a lot of problems when creating patches and merging commits to different branches. The following are the checkstyle rules which are applied while executing the test phase of the build. Your IDE of choice should be updated to match these settings:
+
+* The use of the wild card character, '*', should be avoided and all imports should be explicitly stated.
+
+* The following order should be used for all import statements:
+ - java
+ - javax
+ - org
+ - com
+ - other
+
+### UI Unit Tests
+
+All patches must be accompanied by unit tests ensuring good coverage. When unit tests are not applicable (e.g., stylistic or layout changes, etc.), you must explicitly state in the JIRA that unit tests are not applicable.
+
+Unit tests are written using Mocha and run with the PhantomJS headless browser.
+
+To run unit tests for ambari-web, run:
+
+```bash
+cd ambari-web
+mvn test
+```
+
+## Ambari Backend Development
+
+**The following points are borrowed from the Hadoop wiki:**
+
+* All public classes and methods should have informative Javadoc comments.
+
+* Do not use @author tags.
+
+* Code must be formatted according to Sun's conventions, with one exception: indent two spaces per level, not four.
+
+* Contributions must pass existing unit tests.
+
+* The code changes must be accompanied by unit tests. In cases where unit tests are not possible or don't make sense an explanation should be provided on the jira.
+
+* New unit tests should be provided to demonstrate bugs and fixes. JUnit (junit4) is our test framework: you must implement a class that uses @Test annotations for all test methods.
+
+* Define methods within your class whose names begin with test, and call JUnit's many assert methods to verify conditions. Please add meaningful messages to the assert statement to facilitate diagnostics.
+
+* By default, do not let tests write any temporary files to /tmp. Instead, the tests should write to the location specified by the test.build.data system property.
+
+* Logging levels should conform to Log4j levels
+* Use slf4j instead of commons logging as the logging facade.
+
+* Logger name should be the class name as far as possible.
+
+
+**Unit tests**
+
+* Developers should always run full unit tests before submitting their patch for code review and before committing to Apache. From the top-level directory,
+
+```bash
+mvn clean test
+```
+
+Sometimes it is useful to run unit tests just for the feature you are working on (e.g., Kerberos, Rolling/Express Upgrade, Stack Inheritance, Alerts, etc.). For this purpose, you can run unit tests with a given profile.
+
+The profiles run all test classes/cases annotated with a given Category, E.g.,
+
+```java
+@Category({ category.AlertTest.class})
+```
+
+To run one of the profiles, look at the available names in the top-level pom.xml . E.g.,
+
+```bash
+mvn clean test -P AlertTests # Other options are AmbariUpgradeTests, BlueprintTests, KerberosTests, MetricsTests, StackUpgradeTests
+```
+
+
+After you're done testing just that suite, **you should run a full unit test using "mvn clean test".**
+* [http://wiki.apache.org/hadoop/HowToDevelopUnitTests](http://wiki.apache.org/hadoop/HowToDevelopUnitTests)
+* The tests should be named *Test.java
+* **Unit testing with databases**
+ - We should use JavaDB as the in-memory database for unit tests. The database layer/ORM should be configurable to use the in-memory database. Two things are important for the DB in testing:
+
+ - Ability to bootstrap the db with any initial data dynamically.
+
+ - Ability to modify the db state out of band to simulate certain test cases. One way to achieve the above could be to implement a database access layer only for testing purposes, but it might cause inconsistency in ORM objects, which needs to be figured out.
+
+* **Stub Heartbeat handler**
+ - For testing purposes it may be a good idea to implement a stub heartbeat handler that only simulates interaction with the agents but doesn't interact with any real agent. It will expose an action queue similar to the real heartbeat handler, but will not send anything anywhere; it will just periodically remove actions from the queue. It will expose an interface to inject artificial responses for each of the actions, which can be used in tests to simulate agent responses. It will also expose an interface to inject node state, to simulate failure of nodes or lost heartbeats. The Guice framework can be used to inject the stub heartbeat handler in testing.
+
+* **EasyMock**
+ - EasyMock is our mocking framework of choice. It has been successfully used in Hadoop. A few important points: an example of a scenario where EasyMock is apt is testing deployment of a service while bypassing a service dependency or injecting an artificial component dependency; the dependency tracker object can be mocked to simulate the desired dependency scenario. Ambari Server is by and large a state-driven system. EasyMock can be used to bypass the state changes and test components narrowly. However, it is usually better to use the in-memory database to simulate state changes and use EasyMock only when certain behavior cannot be easily simulated. For example, consider testing the API implementation for getting the status of a transaction: it can be tested by mocking the action manager object, or alternatively by setting the state in the in-memory database. In this case, the latter is the more comprehensive test.
+ Avoid static methods and objects; EasyMock cannot mock these. Use configuration or dependency injection to initialize static objects if they are likely to be mocked.
+ EasyMock cannot mock final classes, so those should be avoided for classes likely to be mocked. Take a look at [http://www.easymock.org/EasyMock3_1_Documentation.html](http://www.easymock.org/EasyMock3_1_Documentation.html) for docs.
+
+**Guice**
+
+Guice is a dependency injection framework and can be used to dynamically inject pluggable components.
+Please refer to http://code.google.com/p/google-guice/wiki/Motivation. We can use Guice in the following scenarios:
+
+* Pluggable manifest generator: It may be desirable to have different implementation of manifest generator for non-puppet setup or for testing.
+
+* Injecting in-memory database (if possible) instead of a real persistent database for testing. It needs to be investigated how Guice fits with the ORM tools.
+
+* Injecting a stub implementation of heartbeat handler.
+
+* It may be a good idea to bind API implementations for management or monitoring via Guice. This will allow testing of APIs and the server independent of the implementation via mock implementations. For example, the management api implementation in coordinator can be mocked so that API definitions and URIs can be independently tested.
+
+* Injecting mock objects for dependency tracker, or stage planner for testing.
diff --git a/docs/ambari-dev/developer-tools.md b/docs/ambari-dev/developer-tools.md
new file mode 100644
index 0000000..e97b47a
--- /dev/null
+++ b/docs/ambari-dev/developer-tools.md
@@ -0,0 +1,79 @@
+# Developer Tools
+
+## Diff and Merge tools
+
+Araxis has been kind enough to give us free licenses for Araxis Merge if you work on open source. Just submit a request at http://www.araxis.com/buy/open-source
+
+Download from http://www.araxis.com/url/merge/download.uri
+
+You will be prompted for your serial number when you run the application for the first time. To enter a new serial number into an existing installation, click the Re-Register... button in the About window.
+
+### Integrating Araxis to Git as your Diff and Merge tool
+
+After installing Araxis Merge,
+
+On Mac OS X,
+
+- Drag Araxis across to your ~/Applications folder as normal
+- Copy the contents of the Utilities folder to (e.g.) /usr/local/araxis/bin
+- Add the path to your startup script: export PATH="$PATH:/usr/local/araxis/bin"
+
+In your .gitconfig file (tested on Mac OS X),
+
+```
+[diff]
+ tool = araxis
+[difftool]
+ prompt = false
+[merge]
+ tool = araxis_merge
+[mergetool "araxis_merge"]
+ cmd = araxisgitmerge "$PWD/$REMOTE" "$PWD/$BASE" "$PWD/$LOCAL" "$PWD/$MERGED"
+```
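+
+Once configured, you can exercise the setup with standard git commands that use the tools defined above:
+
+```bash
+# Diff your working tree against HEAD in Araxis
+git difftool HEAD
+
+# During a conflicted merge, open the 3-way Araxis merge
+git mergetool
+```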
+
+## Git Best Practices
+
+This is just a personal preference, but it may be easier to create one Git branch per Jira/feature. E.g.,
+
+```bash
+git checkout trunk
+git checkout -b AMBARI12345 # create the branch and switch to it
+git branch --set-upstream-to=origin/trunk AMBARI12345 # set the upstream so that git pull --rebase will get the HEAD from trunk
+# Do work,
+git commit -m "AMBARI-12345. Foo (username)"
+# Do more work
+git commit --amend # edit the last commit
+git pull --rebase
+
+# If conflicts are detected, then run
+git mergetool # should be easy if you have Araxis Merge setup to do a 3-way merge
+git rebase --continue
+git push origin HEAD:trunk
+```
+
+## Useful Git Commands
+
+In your .gitconfig file,
+
+```
+[alias]
+ st = status
+ ci = commit
+ br = branch
+ co = checkout
+ dc = diff --cached
+ dtc = difftool --cached
+ lg = log -p
+ lsd = log --graph --decorate --pretty=oneline --abbrev-commit --all
+ slast = show --stat --oneline HEAD
+ pshow = show --no-prefix --format=format:%H --full-index
+ pconfig = config --list
+```
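+
+With the aliases above in place, for example:
+
+```bash
+git st    # = git status
+git lsd   # decorated, one-line commit graph of all branches
+```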
+
+Also, in your ~/.bashrc or ~/.profile file,
+
+```bash
+alias branchshow='for k in `git branch|perl -pe s/^..//`;do echo -e `git show --pretty=format:"%Cgreen%ci %Cblue%cr%Creset" $k|head -n 1`\\t$k;done|sort'
+```
+
+This command will show all of your branches sorted by the last commit times, which is useful if you develop one feature per branch.
\ No newline at end of file
diff --git a/docs/ambari-dev/development-in-docker.md b/docs/ambari-dev/development-in-docker.md
new file mode 100644
index 0000000..1c8c14e
--- /dev/null
+++ b/docs/ambari-dev/development-in-docker.md
@@ -0,0 +1,92 @@
+# Development in Docker
+
+## Overview
+
+This page describes how to develop, build and test Ambari on Docker.
+
+In order to build Ambari, there are quite a few steps to execute, which can be cumbersome. You can instead build an environment in Docker and be good to go!
+
+This is NOT meant for running production-level Ambari in Docker (though you can run Ambari and deploy Hadoop in a single Docker container for testing purposes).
+
+
+
+(This is not only about Jenkins slaves; think of it as your laptop as well.)
+
+First, we will make a Docker image that has all third party libraries Ambari requires.
+
+Second, prepare your code on the Docker host machine. It can be trunk or a branch, your code in development, or code with a patch applied. Note that your code does not reside inside the Docker container, but on the Docker host; we link it into the container with a Docker volume (like a mount).
+
+And you are ready to go!
+
+### Source code
+
+This code has been migrated to Ambari trunk.
+
+https://github.com/apache/ambari/tree/trunk/dev-support/docker
+
+## Requirements
+
+There are a few system requirements if you want to play with this document.
+
+- Docker https://docs.docker.com/#installation-guides
+
+## Create Docker Image
+
+First things first, we have to build a Docker image for this solution. This will set up libraries, including ones from yum as well as Maven dependencies. In my environment (a CentOS 6.5 VM with 8GB RAM and 4 CPUs) this takes about 30 minutes. The good news is that this is a one-time step.
+
+```bash
+git clone https://github.com/apache/ambari.git
+cd ambari
+docker build -t ambari/build ./dev-support/docker/docker
+```
+
+This is going to build an image named "ambari/build" from the configuration files under ./dev-support/docker/docker. You can verify the result as shown below.
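+
+To confirm the image exists, a quick check:
+
+```bash
+# List the locally built image
+docker images ambari/build
+```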
+
+## Unit Test
+
+For example, our unit test Jenkins job on trunk runs on Docker. If you want to replicate the environment, read this section.
+
+The basic command:
+
+```bash
+cd {ambari_root}
+docker run --privileged -h node1.mydomain.com -v $(pwd):/tmp/ambari ambari/build /tmp/ambari/dev-support/docker/docker/bin/ambaribuild.py test -b
+```
+
+- 'docker run' is the command to run a container from an image. Which image is run here? 'ambari/build'.
+- -h sets a host name in the container.
+- -v mounts your Ambari code on the host into the container's /tmp. Make sure you are at the Ambari root directory.
+- ambaribuild.py runs a few scripts to eventually run 'mvn test' for Ambari.
+- The -b option rebuilds the entire source tree; if omitted, the tests run as-is against the source on your host (see the example after this list).
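+
+For instance, to run the tests without the rebuild step, drop the -b flag:
+
+```bash
+cd {ambari_root}
+docker run --privileged -h node1.mydomain.com -v $(pwd):/tmp/ambari ambari/build /tmp/ambari/dev-support/docker/docker/bin/ambaribuild.py test
+```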
+
+## Deploy Hadoop
+
+You want to run Ambari and Hadoop to test the improvements that you have just coded on your host. Here is the way!
+
+
+```bash
+cd {ambari_root}
+docker run --privileged -t -p 80:80 -p 5005:5005 -p 8080:8080 -h node1.mydomain.com --name ambari1 -v $(pwd):/tmp/ambari ambari/build /tmp/ambari-build-docker/bin/ambaribuild.py deploy -b
+
+# once you are done
+docker kill ambari1 && docker rm ambari1
+```
+
+- --privileged is important, as ambari-server accesses /proc/??/exe
+- -p 80:80 ensures you can access the web UI from your host.
+- -p 5005 is the Java debug port
+- 'deploy' builds, installs the rpms, starts ambari-server and ambari-agent, and deploys Hadoop through a blueprint.
+
+You can take a look at https://github.com/apache/ambari/tree/trunk/dev-support/docker/docker/blueprints to see what is actually deployed.
+
+There are a few other parameters you can play with.
+
+```bash
+cd {ambari_root}
+docker run --privileged -t -p 80:80 -p 5005:5005 -p 8080:8080 -h node1.mydomain.com --name ambari1 -v ${AMBARI_SRC:-$(pwd)}:/tmp/ambari ambari/build /tmp/ambari-build-docker/bin/ambaribuild.py [test|server|agent|deploy] [-b] [-s [HDP|BIGTOP|PHD]]
+```
+
+- test: mvn test
+- server: install and run ambari-server
+- agent: install and run ambari-server and ambari-agent
+- deploy: install and run ambari-server and ambari-agent, and deploy Hadoop (see the example below)
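+
+For example, a run that builds and deploys the BIGTOP stack instead of the default (adapted from the usage line above):
+
+```bash
+cd {ambari_root}
+docker run --privileged -t -p 80:80 -p 5005:5005 -p 8080:8080 -h node1.mydomain.com --name ambari1 \
+  -v $(pwd):/tmp/ambari ambari/build \
+  /tmp/ambari-build-docker/bin/ambaribuild.py deploy -b -s BIGTOP
+```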
\ No newline at end of file
diff --git a/docs/ambari-dev/development-process-for-new-major-features.md b/docs/ambari-dev/development-process-for-new-major-features.md
new file mode 100644
index 0000000..1bf5e5c
--- /dev/null
+++ b/docs/ambari-dev/development-process-for-new-major-features.md
@@ -0,0 +1,163 @@
+# Development Process for New Major Features
+
+
+## Goals
+
+* Make it clear to the community what new feature development is happening, at a high level
+* Make it easier to correlate features with JIRAs
+* Make it easier to track progress for features in development
+* Make it easier to understand estimated release schedule for features in development
+
+## Process
+
+* Create a JIRA of type "Epic" for the new feature in [Apache Ambari JIRA](https://issues.apache.org/jira/browse/AMBARI)
+* Add the feature to the [Features + Roadmap](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30755705) wiki and link it to the Epic created
+* The Epic should contain a high-level description that is easy to understand
+* The Epic should also contain the initial, detailed design (this can be in the form of a shared Google Doc for ease of collaboration, a Word doc, a PDF, etc.)
+* Once the initial design is posted, announce it to the dev mailing list to elicit feedback (Subject: [DISCUSS] _Epic Name_. Be sure to include a link to the Epic JIRA in the body). It is recommended to ask for review feedback to be given by a certain date so that the review process does not drag on.
+
+* Iterate on the design based on community feedback. Incorporate multiple review cycles as needed.
+
+* Once the design is finalized, break it down into Tasks that are linked to the Epic
+* (Nice to have) Once the Tasks are defined, schedule them into sprints using the Agile Board so that it's easy to see who is working on what/when, what tasks remain but unassigned so the community can pick up work from the backlog, etc.
+
+## Feature Branches
+
+The use of feature branches allows large, potentially destabilizing changes to be made without affecting the stability of trunk.
+
+## Feature Flags
+
+* Sometimes, we want to give users the ability to experiment with a new feature, but not expose it as a general feature since it has not gone through rigorous testing. In other cases, we want to provide an escape hatch for certain edge-case scenarios that we may not want to expose in general, because using the escape hatch is potentially dangerous and should be reserved for special occasions. For these purposes, Ambari has a notion of **feature flags**. Make use of feature flags when adding new features that fall under these categories. [Feature Flags](./feature-flags.md) has more details on this.
+
+## Contribution Flow
+
+[https://docs.google.com/document/d/1hz7qjGKkNeckMibEs67ZmAa2kxjie0zkG6H_IiC2RgA/edit?pli=1](https://docs.google.com/document/d/1hz7qjGKkNeckMibEs67ZmAa2kxjie0zkG6H_IiC2RgA/edit?pli=1)
+
+## Git Feature Branches
+
+
+The Git feature branch workflow is a simple, yet powerful way to develop new features in an encapsulated environment while at the same time fostering collaboration within the community. The idea is to create short-lived branches where new development will take place and eventually merge the completed feature branch back into `trunk`. A short-lived branch could mean anywhere from several days to several months, depending on the extent of the feature and how often the branch is merged back into `trunk`.
+
+Feature branches are also useful for changes which are not necessarily considered to be new features. They can be for proof-of-concept changes or architectural changes which have the likelihood of destabilizing `trunk`.
+
+### Benefits
+
+* Allows incremental work to proceed without destabilizing the main trunk of source control.
+
+* Smaller commits means smaller and clearer code reviews.
+
+* Each code review is not required to be fully functional allowing a more agile approach to gathering feedback on the progress of the feature.
+
+* Maintains Git history and allows for code to be backed out easily after merging.
+
+### Drawbacks
+
+* Requires frequent merges from `trunk` into your feature branch to keep merge conflicts to a minimum.
+
+* May require periodic merges of the feature branch back into trunk during development to help mitigate frequent merge conflicts.
+
+* No continuous integration coverage on feature branches. Although this is not really a drawback since most feature branches will break some aspects of CI in the early stages of the feature.
+
+### Guidelines to Follow
+
+The following simple rules can help in keeping Ambari's approach to feature branch development simple and consistent.
+
+* When creating a feature branch, it should be given a meaningful name. Acceptable names include either the name of the feature or the name of the Ambari JIRA. The branch should also always start with the text `branch-feature-`. Some examples of properly named feature branches include:
+ - `branch-feature-patch-upgrades`
+ - `branch-feature-AMBARI-12345`
+
+* Every commit in your feature branch should have an associated `AMBARI-XXXXX` JIRA. This way, when your branch is merged back into trunk, the commit history follows Ambari's conventions.
+
+* Merge frequently from trunk into your branch to keep your branch up-to-date and lessen the number of potential merge conflicts.
+
+* Do **NOT** squash commits. Every commit in your feature branch must have an `AMBARI-XXXXX` association with it.
+
+
+* Once a feature has been completed and the branch has been merged into trunk, the branch can be safely removed. Feature branches should only exist while the work is still in progress.
+
+### Approach
+
+The following steps outline the lifecycle of a feature branch. You'll notice that once the feature has been completed and merged back into trunk, the feature branch is deleted. This is an important step to keep the git branch listing as clean as possible.
+
+```
+$ git checkout -b branch-feature-AMBARI-12345 trunk
+Switched to a new branch 'branch-feature-AMBARI-12345'
+
+$ git push -u origin branch-feature-AMBARI-12345
+Total 0 (delta 0), reused 0 (delta 0)
+To https://git-wip-us.apache.org/repos/asf/ambari.git
+ * [new branch] branch-feature-AMBARI-12345 -> branch-feature-AMBARI-12345
+Branch branch-feature-AMBARI-12345 set up to track remote branch branch-feature-AMBARI-12345 from origin by rebasing.
+
+```
+
+* Branch is correctly named
+* Branch is pushed to Apache so it can be visible to other developers
+
+```bash
+$ git checkout branch-feature-AMBARI-12345
+Switched to branch 'branch-feature-AMBARI-12345'
+
+$ git add <changed-files>
+$ git commit -m 'AMBARI-28375 - Some Change (me)'
+
+$ git add <changed-files>
+$ git commit -m 'AMBARI-28499 - Another Change (me)'
+
+$ git push
+```
+
+* Each commit to the feature branch has its own AMBARI-XXXXX JIRA
+* Multiple commits are allowed before pushing the changes to the feature branch
+
+```bash
+$ git checkout branch-feature-AMBARI-12345
+Switched to branch 'branch-feature-AMBARI-12345'
+
+$ git merge trunk
+Updating ed28ff4..3ab2a7c
+Fast-forward
+ ambari-server/include.xml | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 ambari-server/include.xml
+```
+
+* Merging trunk into the feature branch often (daily, hourly) allows for faster and easier merge conflict resolution
+* Fast-forwards are OK here since trunk is always the source of truth and we don't need extra "merge" commits in the feature branch
+
+```bash
+$ git checkout trunk
+Switched to branch 'trunk'
+
+$ git merge --no-ff branch-feature-AMBARI-12345
+Merge made by the 'recursive' strategy.
+ ambari-server/include.xml | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 ambari-server/include.xml
+```
+
+Notice that the `--no-ff` option was provided when merging back into `trunk`. This is to ensure that an additional "merge" commit is created which references all feature branch commits. With this single merge commit, the entire merge can be easily backed out if a problem was discovered which destabilized trunk.
+
+* The feature is merged successfully with a "merge" commit back to trunk
+* This can be done multiple times during the course of the feature development as long as the code merged back to trunk is stable
+
+```bash
+$ git checkout trunk
+Switched to branch 'trunk'
+
+$ git branch -d branch-feature-AMBARI-12345
+Deleted branch branch-feature-AMBARI-12345 (was ed28ff4).
+
+$ git push origin --delete branch-feature-AMBARI-12345
+To https://git-wip-us.apache.org/repos/asf/ambari.git
+ - [deleted] branch-feature-AMBARI-12345
+
+$ git remote update origin --prune
+Fetching origin
+From https://git-wip-us.apache.org/repos/asf/ambari
+ x [deleted] (none) -> branch-feature-AMBARI-56789
+```
+
+* Cleanup the branch when done, both locally and remotely
+* Prune your local branches which no longer track remote branches
+
diff --git a/docs/ambari-dev/feature-flags.md b/docs/ambari-dev/feature-flags.md
new file mode 100644
index 0000000..ed13903
--- /dev/null
+++ b/docs/ambari-dev/feature-flags.md
@@ -0,0 +1,13 @@
+# Feature Flags
+
+* Sometimes, we want to give end users the ability to experiment with a new feature, but not expose it as a general feature since it has not gone through rigorous testing and use of the feature could result in problems. In other cases, we want to provide an escape hatch for certain edge-case scenarios that we may not want to expose in general, because using the escape hatch is potentially dangerous and should be reserved for special occasions. For these purposes, Ambari has a notion of **feature flags**.
+
+* Feature flags can be created as an attribute of App.supports map under [https://github.com/apache/ambari/blob/trunk/ambari-web/app/config.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/config.js)
+* Those boolean flags are exposed in the Ambari Web UI via `<ambari-server-protocol>://<ambari-server-host>:<ambari-server-port>/#/experimental`
+ * The end user can go to the above URL to turn certain experimental features on.
+
+ 
+
+* In Ambari Web code, we should toggle experimental features on/off via the App.supports object.
+
+* You will see sample usage if you recursively grep for "App.supports" under the ambari-web project.
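+
+For example, from the ambari-web directory:
+
+```bash
+# List usages of feature flags in the application code
+grep -rn "App.supports" app/ | head
+```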
diff --git a/docs/ambari-dev/how-to-commit.md b/docs/ambari-dev/how-to-commit.md
new file mode 100644
index 0000000..1c6bb2f
--- /dev/null
+++ b/docs/ambari-dev/how-to-commit.md
@@ -0,0 +1,14 @@
+# How to Commit
+
+This document describes how to commit changes to Ambari. It assumes a knowledge of Git. While it is for committers to use as a guide, it also provides contributors an idea of how the commit process actually works.
+
+In general we are very conservative about changing the Apache Ambari code base. It is ground truth for systems that use it, so we need to make sure that it is reliable. For this reason we use the Review Then Commit (RTC) change policy (http://www.apache.org/foundation/glossary.html#ReviewThenCommit).
+
+Except for some very rare cases any change to the Apache Ambari code base will start off as a Jira. (In some cases a change may relate to more than one Jira. Also, there are cases when a Jira results in multiple commits.) Generally, the process of getting ready to commit begins when the Jira has a patch associated with it and the contributor decides that it is ready for review and marks it Patch Available.
+
+A committer must sign off on a patch. It is very helpful if the community also reviews the patch, but in the end a committer must take responsibility for the correctness of the patch. If the patch is simple enough and the committer feels confident in the review, a single +1 from a committer is sufficient to commit the patch. (Remember committers cannot review their own patch. If a committer submits a patch, they should make sure that another committer reviews it.)
+
+Follow the instructions in [How to Contribute](./how-to-contribute.md) guide to commit changes to Ambari.
+
+If the Jira is a bug fix you may also need to commit the patch to the latest branch in git (trunk).
+
diff --git a/docs/ambari-dev/how-to-contribute.md b/docs/ambari-dev/how-to-contribute.md
new file mode 100644
index 0000000..1e44e8c
--- /dev/null
+++ b/docs/ambari-dev/how-to-contribute.md
@@ -0,0 +1,127 @@
+# How to Contribute
+
+## Contributing code changes
+
+### Checkout source code
+
+* Fork the project from Github at https://github.com/apache/ambari if you haven't already
+* Clone this fork:
+
+```bash
+# Replace [forked-repository-url] with your git clone url
+git clone [forked-repository-url] ambari
+```
+
+* Set upstream remote:
+
+```bash
+cd ambari
+git remote add upstream https://github.com/apache/ambari.git
+```
+
+### Keep your Fork Up to Date
+
+```bash
+# Fetch from upstream remote
+git fetch upstream
+# Checkout the branch that needs to sync
+git checkout trunk
+# Merge with remote
+git merge upstream/trunk
+```
+
+Repeat these steps for all the branches that need to be synced with the remote.
+
+### JIRA
+
+Apache Ambari uses JIRA to track issues including bugs and improvements, and uses Github pull requests to manage code reviews and code merges. Major design changes are discussed in JIRA and implementation changes are discussed in pull requests after a pull request is created.
+
+* Find an existing Apache JIRA that the change pertains to
+ * Do not create a new JIRA if the change is minor and relates to an existing JIRA; add to the existing discussion and work instead
+ * Look for existing pull requests that are linked from the JIRA, to understand if someone is already working on the JIRA
+
+* If the change is new, then create a new JIRA:
+ * Provide a descriptive Title
+ * Write a detailed Description. For bug reports, this should ideally include a short reproduction of the problem. For new features, it may include a design document.
+ * Fill the required fields:
+ * Issue Type. Bug and Task are the most frequently used issue types in Ambari.
+ * Priority. Their meaning is roughly:
+ * Blocker: pointless to release without this change as the release would be unusable to a large minority of users
+ * Critical: a large minority of users are missing important functionality without this, and/or a workaround is difficult
+ * Major: a small minority of users are missing important functionality without this, and there is a workaround
+ * Minor: a niche use case is missing some support, but it does not affect usage or is easily worked around
+ * Trivial: a nice-to-have change but unlikely to be any problem in practice otherwise
+ * Component. Choose the components that are affected by this change. Choose from Ambari Components
+ * Affects Version. For Bugs, assign at least one version that is known to exhibit the problem or need the change
+ * Do not include a patch file; pull requests are used to propose the actual change.
+
+### Pull Request
+
+Apache Ambari uses [Github pull requests](https://github.com/apache/ambari/pulls) to review and merge changes to the source code. Before creating a pull request, one must have a fork of apache/ambari checked out. Follow instructions in step 1 to create a fork if you haven't already.
+
+#### Commit and Push changes
+
+- Create a branch AMBARI-XXXXX-branchName before starting to make any code changes. Ex: If the Fix Version of the JIRA you are working on is 2.6.2, then create a branch based on branch-2.6
+
+ ```bash
+ git checkout branch-2.6
+ git pull upstream branch-2.6
+ git checkout -b AMBARI-XXXXX-branch-2.6
+ ```
+
+- Mark the status of the related JIRA as "In Progress" to let others know that you have started working on the JIRA.
+- Make changes to the code and commit them to the newly created branch.
+- Run all the tests that are applicable and make sure that all unit tests pass
+- Push your changes. Provide your Github user id and [personal access token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/) when asked for user name and password
+
+ ```bash
+ git push origin AMBARI-XXXXX-branch-2.6
+ ```
+
+#### Create Pull Request
+
+Navigate to your fork in Github and [create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/). The pull request needs to be opened against the branch you want the patch to land.
+
+The pull request title should be of the form **[AMBARI-xxxx] Title**, where AMBARI-xxxx is the relevant JIRA number
+
+- If the pull request is still a work in progress, and so is not ready to be merged, but needs to be pushed to Github to facilitate review, then add **[WIP]** after the **AMBARI-XXXX**
+- Consider identifying committers or other contributors who have worked on the code being changed. Find the file(s) in Github and click “Blame” to see a line-by-line annotation of who changed the code last. You can add @username in the PR description or as a comment to request review from a developer.
+- Note: Contributors do not have access to edit or add reviewers in the "Reviewers" widget. Contributors can only @mention to get the attention of committers.
+- The related JIRA will automatically have a link to the PR as shown below. Mark the status of JIRA as "Patch Available" manually.
+
+
+
+#### Jenkins Job
+
+* A Jenkins job is configured to be triggered every time a new pull request is created. The job is configured to perform the following tasks:
+ * Validate the merge
+ * Build Ambari
+ * Run unit tests
+* It reports the outcome of the build as an integrated check in the pull request as shown below.
+
+
+
+* It is the responsibility of the contributor of the pull request to make sure that the build passes. Pull requests should not be merged if the Jenkins job fails to validate the merge.
+* To re-trigger the build job, just comment "retest this please" in the PR. Visit this page to check the latest build jobs.
+
+#### Repeat
+
+Repeat the above steps for patches that need to land in multiple branches. Ex: If a patch needs to be committed to branches branch-2.6 and trunk, then you need to create two branches and open two pull requests by following the above steps.
+
+## Review Process
+
+Ambari uses Github for code reviews. All committers are required to follow the instructions in this [page](https://gitbox.apache.org/setup/) and link their github accounts with gitbox to gain Merge access to [apache/ambari](https://github.com/apache/ambari) in github.
+
+To try out the changes locally, you can checkout the pull request locally by following the instructions in this [guide](https://help.github.com/articles/checking-out-pull-requests-locally/).
+
+* Other reviewers, including committers can try out the changes locally and either approve or give their comments as suggestions on the pull request by submitting a review on the pull request. More help can be found here.
+* If more changes are required, reviewers are encouraged to leave their comments on the lines of code that require changes. The author of the pull request can then update the code and push another commit to the same branch to update the pull request and notify the committers.
+* The pull request can be merged if at least one committer has approved it or commented "LGTM" (which means "Looks Good To Me") and the Jenkins job validated the merge successfully. If you comment LGTM you will be expected to help with bugs or follow-up issues on the patch. (Remember committers cannot review their own patch. If a committer opens a PR, they should make sure that another committer reviews it.)
+* Sometimes, other changes might be merged which conflict with the pull request's changes. The PR can't be merged until the conflict is resolved. This can be resolved by running `git fetch upstream` followed by `git rebase upstream/[branch-name]`, resolving the conflicts by hand, and then pushing the result to your branch (see the sketch after this list).
+* If a PR is merged, promptly close the PR and resolve the JIRA as "Fixed".
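+
+A sketch of that conflict-resolution flow (branch names are placeholders; note that a rebase rewrites the branch history, so the branch must be force-pushed):
+
+```bash
+git fetch upstream
+git rebase upstream/branch-2.6   # resolve any conflicts by hand, then: git rebase --continue
+git push --force-with-lease origin AMBARI-XXXXX-branch-2.6
+```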
+
+## Apache Ambari Committers
+
+Please read more on Apache Committers at: http://www.apache.org/dev/committers.html
+
+In general, a contributor who makes sustained, welcome contributions to the project may be invited to become a committer, though the exact timing of such invitations depends on many factors. Sustained contributions over 6 months are a welcome sign of a contributor showing interest in the project. A contributor who is receptive to feedback and follows the development guidelines stated above is a good candidate for becoming a committer. We have seen contributors become committers after 20-30 patches, but again this is very subjective and can vary based on the patches submitted to the project. Ultimately it is the Ambari PMC that suggests and votes for committers in the project.
diff --git a/docs/ambari-dev/imgs/experimental-features .png b/docs/ambari-dev/imgs/experimental-features .png
new file mode 100644
index 0000000..211a596
--- /dev/null
+++ b/docs/ambari-dev/imgs/experimental-features .png
Binary files differ
diff --git a/docs/ambari-dev/imgs/jenkins-job.png b/docs/ambari-dev/imgs/jenkins-job.png
new file mode 100644
index 0000000..0e6476c
--- /dev/null
+++ b/docs/ambari-dev/imgs/jenkins-job.png
Binary files differ
diff --git a/docs/ambari-dev/imgs/pull-request.png b/docs/ambari-dev/imgs/pull-request.png
new file mode 100644
index 0000000..2a8b408
--- /dev/null
+++ b/docs/ambari-dev/imgs/pull-request.png
Binary files differ
diff --git a/docs/ambari-dev/imgs/reviewers.png b/docs/ambari-dev/imgs/reviewers.png
new file mode 100644
index 0000000..c19354d
--- /dev/null
+++ b/docs/ambari-dev/imgs/reviewers.png
Binary files differ
diff --git a/docs/ambari-dev/imgs/with-without-docker.png b/docs/ambari-dev/imgs/with-without-docker.png
new file mode 100644
index 0000000..2d6e38c
--- /dev/null
+++ b/docs/ambari-dev/imgs/with-without-docker.png
Binary files differ
diff --git a/docs/ambari-dev/index.md b/docs/ambari-dev/index.md
new file mode 100644
index 0000000..3dba30d
--- /dev/null
+++ b/docs/ambari-dev/index.md
@@ -0,0 +1,311 @@
+# Ambari Development
+
+## Checking out Ambari source
+
+Follow the instructions under [Checkout source code](./how-to-contribute.md) section of "How to contribute" guide.
+
+We'll refer to the top-level "ambari" directory as `AMBARI_DIR` in this document.
+
+## Tools needed to build Ambari
+
+The following tools are needed to build Ambari from source.
+
+Alternatively, you can easily launch a VM that is preconfigured with all the tools that you need. See the **Pre-Configured Development Environment** section in the [Quick Start Guide](../quick-start/quick-start-guide.md).
+
+* Xcode (if using a Mac; free download from the App Store)
+* JDK 8 (Ambari 2.6 and below can be compiled with JDK 7; from Ambari 2.7 onward, at least JDK 8 is required)
+* [Apache Maven](http://maven.apache.org/download.html) 3.3.9 or later
+Tip: In order to persist your changes to the JAVA_HOME environment variable and add Maven to your path, create the following files:
+File: ~/.profile
+
+```bash
+source ~/.bashrc
+```
+
+File: ~/.bashrc
+
+```bash
+export PATH=/usr/local/apache-maven-3.3.9/bin:$PATH
+export JAVA_HOME=$(/usr/libexec/java_home)
+export _JAVA_OPTIONS="-Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true"
+```
+
+
+* Python 2.6 (Ambari 2.7 or later requires Python 2.7 as the minimum supported version)
+* Python setuptools:
+for Python 2.6: [Download](http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c11-py2.6.egg#md5=bfa92100bd772d5a213eedd356d64086) setuptools and run:
+
+```bash
+sh setuptools-0.6c11-py2.6.egg
+```
+
+for Python 2.7: [Download](https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg#md5=fe1f997bc722265116870bc7919059ea) setuptools and run:
+
+```bash
+sh setuptools-0.6c11-py2.7.egg
+```
+
+
+* rpmbuild (rpm-build package)
+* g++ (gcc-c++ package)
+
+## Running Unit Tests
+
+* `mvn clean test`
+* Run unit tests in a single module:
+
+```bash
+mvn -pl ambari-server test
+```
+
+
+* Run only Java tests:
+
+```bash
+mvn -pl ambari-server -DskipPythonTests test
+```
+
+
+* Run only specific Java tests:
+
+```bash
+mvn -pl ambari-server -DskipPythonTests -Dtest=AgentHostInfoTest test
+```
+
+
+* Run only Python tests:
+
+```bash
+mvn -pl ambari-server -DskipSurefireTests test
+```
+
+
+* Run only specific Python tests:
+
+```bash
+mvn -pl ambari-server -DskipSurefireTests -Dpython.test.mask=TestUtils.py test
+```
+
+
+* Run only Checkstyle and RAT checks:
+
+```bash
+mvn -pl ambari-server -DskipTests test
+```
+
+
+
+NOTE: Please make sure you have npm in the path before running the unit tests.
+
+## Generating Findbugs Report
+
+* Run `mvn clean install`
+
+This will generate XML and HTML reports under target/findbugs. You can also add flags to skip unit tests to generate the report faster (see below).
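+
+For example, to skip the unit tests while still producing the report:
+
+```bash
+mvn clean install -DskipTests
+```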
+
+## Building Ambari
+
+Note: if you get an error that too many files are open while building, then run: `ulimit -n 10000` (for example)
+
+To build Ambari RPMs, run the following.
+
+Note: Replace ${AMBARI_VERSION} with the 4-digit version you want the artifacts to have (e.g., -DnewVersion=1.6.1.1)
+
+**Note**: If running into errors while compiling the ambari-metrics package due to missing the artifacts of jms, jmxri, jmxtools:
+
+```
+[ERROR] Failed to execute goal on project ambari-metrics-kafka-sink: Could not resolve dependencies for project org.apache.ambari:ambari-metrics-kafka-sink:jar:2.0.0-0: The following artifacts could not be resolved: javax.jms:jms:jar:1.1, com.sun.jdmk:jmxtools:jar:1.2.1, com.sun.jmx:jmxri:jar:1.2.1: Could not transfer artifact javax.jms:jms:jar:1.1 from/to java.net (https://maven-repository.dev.java.net/nonav/repository): No connector available to access repository java.net (https://maven-repository.dev.java.net/nonav/repository) of type legacy using the available factories WagonRepositoryConnectorFactory
+```
+
+The work around is to manually install the three missing artifacts:
+
+```
+mvn install:install-file -Dfile=jms-1.1.pom -DgroupId=javax.jms -DartifactId=jms -Dversion=1.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxtools-1.2.1.pom -DgroupId=com.sun.jdmk -DartifactId=jmxtools -Dversion=1.2.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxri-1.2.1.pom -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2.1 -Dpackaging=jar
+```
+
+If compilation seems stuck, and you've already increased the Java and Maven heap sizes, it could be that Ambari Views has a lot of artifacts and the RAT check is choking on them. In this case, try running:
+
+```
+git clean -df  # this will remove untracked files and directories
+mvn clean package -DskipTests -Drat.ignoreErrors=true
+# or
+mvn clean package -DskipTests -Drat.skip
+```
+
+## Setting the Version Using Maven
+
+Ambari 2.8+ uses a newer method to update the version when building Ambari.
+
+**RHEL/CentOS 6**:
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package rpm:rpm -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+**SUSE/SLES 11**
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package rpm:rpm -DskipTests -Psuse11 -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+**Ubuntu 12**:
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package jdeb:jdeb -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+Ambari Server will create the following packages:
+
+* RPM will be created under `AMBARI_DIR`/ambari-server/target/rpm/ambari-server/RPMS/noarch.
+
+* DEB will be created under `AMBARI_DIR`/ambari-server/target/
+
+Ambari Agent will create the following packages:
+
+* RPM will be created under `AMBARI_DIR`/ambari-agent/target/rpm/ambari-agent/RPMS/x86_64.
+
+* DEB will be created under `AMBARI_DIR`/ambari-agent/target
+
+Optional parameters:
+
+* -X -e: add these options for more verbose output by Maven. Useful when debugging Maven issues.
+* -DdefaultStackVersion=STACK-VERSION: sets the default stack and version to be used for installation (e.g., -DdefaultStackVersion=HDP-1.3.0)
+* -DenableExperimental=true: enables experimental features to be available via Ambari Web (default is false)
+* All views can be packaged in the RPM by adding the _-Dviews_ parameter
+ - _mvn -B clean install package rpm:rpm -Dviews -DskipTests_
+* Specific views can be built by adding the `--projects` parameter together with _-Dviews_
+ - _mvn -B clean install package rpm:rpm --projects ambari-web,ambari-project,ambari-views,ambari-admin,contrib/views/files,contrib/views/pig,ambari-server,ambari-agent,ambari-client,ambari-shell -Dviews -DskipTests_
+
+
+_NOTE: Run everything as `root` below._
+
+## Building Ambari Metrics
+
+If you plan on installing the Ambari Metrics service, you will also need to build the Ambari Metrics project.
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-rpm -DskipTests
+
+# For Ubuntu:
+cd ambari-metrics
+mvn clean package -Dbuild-deb -DskipTests
+```
+
+**Note:**
+
+The metrics rpms will be found under ambari-metrics-assembly/target/. These are needed for installing the Ambari Metrics service.
+
+## Running the Ambari Server
+
+First, install the Ambari Server RPM.
+
+**On RHEL/CentOS:**
+
+```bash
+yum install ambari-server/target/rpm/ambari-server/RPMS/noarch/ambari-server-*.noarch.rpm
+```
+
+On SUSE/SLES:
+
+```bash
+zypper install ambari-server/target/rpm/ambari-server/RPMS/noarch/ambari-server-*.noarch.rpm
+```
+
+**On Ubuntu 12:**
+
+```bash
+dpkg --install ambari-server/target/ambari-server-*.deb # Will fail with missing dependencies errors
+apt-get update # Update locations of dependencies
+apt-get install -f # Install all failed dependencies
+dpkg --install ambari-server/target/ambari-server-*.deb # Will succeed
+```
+
+Initialize Ambari Server:
+
+```bash
+ambari-server setup
+```
+
+Start up Ambari Server:
+
+```
+ambari-server start
+```
+
+See Ambari Server log:
+
+```bash
+tail -f /var/log/ambari-server/ambari-server.log
+```
+
+To access Ambari, go to
+
+```
+http://{ambari-server-hostname}:8080
+```
+
+from your web browser and log in with username _admin_ and password _admin_.
+
+## Install and Start the Ambari Agent Manually on Each Host in the Cluster
+
+Install the Ambari Agent RPM.
+
+On RHEL/CentOS:
+
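+```bash
+# (assumed by analogy with the server install above; the agent RPM path
+# matches the build output location listed earlier)
+yum install ambari-agent/target/rpm/ambari-agent/RPMS/x86_64/ambari-agent-*.rpm
+```
+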
+SUSE/SLES:
+
+```bash
+zypper install ambari-agent/target/rpm/ambari-agent/RPMS/x86_64/ambari-agent-*.rpm
+```
+
+Ubuntu 12:
+
+```bash
+dpkg --install ambari-agent/target/ambari-agent-*.deb
+```
+
+Edit the location of the Ambari Server in /etc/ambari-agent/conf/ambari-agent.ini by editing the _hostname_ line, as sketched below.
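+
+For example (ambari-server.mydomain.com is a hypothetical hostname; substitute your own):
+
+```bash
+# Point the agent at your Ambari Server host
+sed -i 's/^hostname=.*/hostname=ambari-server.mydomain.com/' /etc/ambari-agent/conf/ambari-agent.ini
+```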
+
+Start Ambari Agent:
+
+```
+ambari-agent start
+```
+
+See Ambari Agent log:
+
+```bash
+tail -f /var/log/ambari-agent/ambari-agent.log
+```
+
+## Setting up Ambari in Eclipse
+
+```
+$ mvn clean eclipse:eclipse
+```
+
+After doing the above you should be able to import the project via Eclipse "Import > Maven > Existing Maven Project". Choose the root directory where you cloned the git repository. You should be able to see the following projects on eclipse:
+
+```
+ambari
+|
+|- ambari-project
+|- ambari-server
+|- ambari-agent
+|- ambari-web
+```
+
+Select the top-level "ambari pom.xml" and click Finish.
diff --git a/docs/ambari-dev/releasing-ambari.md b/docs/ambari-dev/releasing-ambari.md
new file mode 100644
index 0000000..d041946
--- /dev/null
+++ b/docs/ambari-dev/releasing-ambari.md
@@ -0,0 +1,401 @@
+# Releasing Ambari
+
+## Useful Links
+
+### [Publishing Maven Artifacts](http://apache.org/dev/publishing-maven-artifacts.html)
+
+* Setting up release signing keys
+* Uploading artifacts to staging and release repositories
+
+### [Apache Release Guidelines](http://www.apache.org/legal/release-policy.html)
+
+* Release requirements
+* Process for staging
+
+## Preparing for release
+
+Setup for first-time release managers
+
+If you are acting as release manager for the first time, you will need to run the following additional steps so that you are not blocked during the actual release process.
+
+**Configure SSH/SFTP to home.apache.org**
+
+SFTP to home.apache.org supports only Key-Based SSH Logins
+
+```bash
+# Generate RSA Keys
+ mkdir ~/.ssh
+ chmod 700 ~/.ssh
+ ssh-keygen -t rsa -b 4096
+
+# Note: ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub files will be generated
+
+# Upload Public RSA Key
+# Log in at http://id.apache.org
+# Add the public SSH key from ~/.ssh/id_rsa.pub to your profile
+#   under "SSH Key (authorized_keys line)"
+# Submit changes
+
+# Verify SSH to minotaur.apache.org works
+ssh -i ~/.ssh/id_rsa {username}@minotaur.apache.org
+
+# SFTP to home.apache.org
+sftp {username}@home.apache.org
+mkdir public_html
+cd public_html
+put test #This test file is a sample empty file present in current working directory from which you sftp.
+
+# Verify URL http://home.apache.org/{username}/test
+```
+
+**Generate an OpenPGP Key**
+
+You should get a signing key, keep it in a safe place, upload the public key to apache, and build a web of trust.
+
+Ref: http://zacharyvoase.com/2009/08/20/openpgp/
+
+```bash
+gpg2 --gen-key
+gpg2 --keyserver pgp.mit.edu --send-key {key}
+gpg2 --armor --export {username}@apache.org > {username}.asc
+
+# Copy {username}.asc to {username}@home.apache.org:public_html/{username}.asc
+# Verify the URL http://home.apache.org/~{username}/{username}.asc
+# Query the PGP keyserver: http://pgp.mit.edu:11371/pks/lookup?search=0x{key}&op=vindex
+
+# Build a web of trust: ask other committers to sign your PGP key
+
+# Log in at https://id.apache.org:
+# - Add your OpenPGP fingerprint to your profile
+#   (OpenPGP Public Key Primary Fingerprint: XXXX YYYY ZZZZ ...)
+# - Submit changes
+# Verify that the public PGP key is exported to http://home.apache.org/keys/committer/{username}.asc
+```
+
+**Email the dev@ambari.apache.org mailing list at least one week in advance to announce that you will be creating the release branch**
+
+```
+Subject: Preparing Ambari X.Y.Z branch
+
+Hi developers and PMCs,
+
+I am proposing cutting a new branch branch-X.Y for Ambari X.Y.Z on __________ as per the outlined tasks in the Ambari Feature + Roadmap page (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30755705).
+
+After making the branch, we (i.e., development community) should only accept blocker or critical bug fixes into the branch and harden it until it meets a high enough quality bar.
+
+If you have a bug fix, it should first be committed to trunk, and after ensuring that it does not break any tests (including smoke tests), it should then be integrated into the Ambari branch-X.Y.
+If you have any doubts about whether a fix should be committed into branch-X.Y, please email me for input at ____________
+Stay tuned for updates on the release process.
+
+Thanks
+```
+
+**Create the release branch**
+
+Create a branch for a release using branch-X.Y (ex: branch-2.1) as the name of the branch.
+
+Note: Going forward, we should be creating branch-{majorVersion}.{minorVersion}, so that the same branch can be used for maintenance releases.
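+
+A minimal sketch of the branching commands (assuming the release branches from trunk; replace X.Y with the actual version):
+
+```bash
+git checkout trunk
+git checkout -b branch-X.Y
+git push origin branch-X.Y
+```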
+
+**Checkout the release branch**
+
+```bash
+git checkout branch-X.Y
+```
+
+**Update Ambari REST API docs**
+
+Starting with Ambari's `trunk` branch as of Ambari 2.8, the release manager should generate documentation from the existing source code. The documentation should be checked back into the branch before performing the release.
+
+```bash
+# Generate the following artifacts:
+# - Configuration markdown at docs/configuration/index.md
+# - swagger.json and index.html at docs/api/generated/
+cd ambari-server/
+mvn clean compile exec:java@configuration-markdown test -Drat.skip -Dcheckstyle.skip -DskipTests -Dgenerate.swagger.resources
+
+# Review and Commit the changes to branch-X.Y
+git commit
+```
+
+**Update release version**
+
+Once the branch has been created, the release version must be set and committed. The changes should be committed to the release branch.
+
+**Ambari 2.8+**
+
+Starting with Ambari 2.8, the build process relies on [Maven 3.5+, which allows the use of the `${revision}` property](https://maven.apache.org/maven-ci-friendly.html). This means that the release version can be defined once in the root `pom.xml` and then inherited by all submodules. In order to build Ambari with a specific build number, there are two methods. The first is to pass the revision on the command line:
+
+```bash
+mvn -Drevision=2.8.0.0.0 ...
+```
+
+The second is to edit the root `pom.xml` to include the new build number:
+
+```xml
+<revision>2.8.0.0-SNAPSHOT</revision>
+```
+
+To be consistent with prior releases, the `pom.xml` should be updated in order to contain the new version.
+
+**Steps followed for 2.8.0 release as a reference**
+
+```bash
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+# Remove .versionsBackup files
+git clean -f -x
+
+# Review and commit the changes to branch-X.Y
+git commit
+```
+:::danger
+Ambari 2.7 and Earlier Releases (Deprecated)
+:::
+
+Older Ambari branches still required you to update every `pom.xml` manually, using the process below:
+
+**Steps followed for 2.2.0 release as a reference**
+
+```bash
+# Update the release version
+mvn versions:set -DnewVersion=2.2.0.0.0
+pushd ambari-metrics
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd contrib/ambari-log4j
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd contrib/ambari-scom
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd docs
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+
+# Update the ambari.version properties in all pom.xml
+$ find . -name "pom.xml" | xargs grep "ambari\.version"
+
+./contrib/ambari-scom/ambari-scom-server/pom.xml: 2.1.0-SNAPSHOT
+./contrib/ambari-scom/ambari-scom-server/pom.xml: ${ambari.version}
+./contrib/views/hive/pom.xml: 2.1.0.0.0
+./contrib/views/jobs/pom.xml: ${ambari.version}
+./contrib/views/pig/pom.xml: 2.1.0.0.0
+./contrib/views/pom.xml: 2.1.0.0.0
+./contrib/views/storm/pom.xml: ${ambari.version}
+./contrib/views/tez/pom.xml: ${ambari.version}
+./docs/pom.xml: 2.1.0
+./docs/pom.xml: ${project.artifactId}-${ambari.version}
+
+# Update any 2.1.0-SNAPSHOT references in pom.xml
+$ grep -r --include "pom.xml" "2.1.0-SNAPSHOT" .
+
+# Remove .versionsBackup files
+git clean -f -x -d
+
+# Review and commit the changes to branch-X.Y
+git commit
+```
+
+**Update KEYS**
+
+If this is the first time you have taken on release management responsibilities, make sure to update the KEYS file and commit the updated KEYS in both the Ambari trunk branch and the release branch. In addition to updating the KEYS file in the tree, you also need to push the KEYS file to [https://dist.apache.org/repos/dist/release/ambari/](https://dist.apache.org/repos/dist/release/ambari/)
+
+```bash
+gpg2 --list-keys jluniya@apache.org >> KEYS
+gpg2 --armor --export jluniya@apache.org >> KEYS
+
+# commit the changes to both trunk and new release branch
+git commit
+
+# push the updated KEYS file to https://dist.apache.org/repos/dist/release/ambari/.
+
+# Only PMC members can do this 'svn' step.
+
+svn co https://dist.apache.org/repos/dist/release/ambari ambari_svn
+cp {path_to_keys_file}/KEYS ambari_svn/KEYS
+svn update KEYS
+svn commit -m "Updating KEYS for Ambari"
+```
+
+**Setup Build**
+
+Setup Jenkins Job for the new branch on http://builds.apache.org
+
+## Creating Release Candidate
+
+:::info
+The first release candidate is rc0. The following documented process assumes rc0; replace it with the appropriate rc number as required.
+:::
+
+**Checkout the release branch**
+
+```bash
+git checkout branch-X.Y
+```
+
+**Create a Release Tag from the release branch**
+
+```bash
+git tag -a release-X.Y.Z-rc0 -m 'Ambari X.Y.Z RC0'
+git push origin release-X.Y.Z-rc0
+```
+
+**Create a tarball**
+
+```bash
+# create a clean copy of the source
+cd ambari-git-X.Y.Z
+git clean -f -x -d
+cd ..
+
+cp -R ambari-git-X.Y.Z apache-ambari-X.Y.Z-src
+
+# create ambari-web/public by running the build instructions per https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Development
+# once ambari-web/public is created, copy it as ambari-web/public-static
+cp -R ambari-git-X.Y.Z/ambari-web/public apache-ambari-X.Y.Z-src/ambari-web/public-static
+
+# make sure the Apache RAT tool runs successfully
+cp -R apache-ambari-X.Y.Z-src apache-ambari-X.Y.Z-ratcheck
+cd apache-ambari-X.Y.Z-ratcheck
+mvn clean apache-rat:check
+cd ..
+
+# if the RAT check fails, file JIRAs and fix them before proceeding
+
+# tar it up, but exclude git artifacts
+tar --exclude=.git --exclude=.gitignore --exclude=.gitattributes -zcvf apache-ambari-X.Y.Z-src.tar.gz apache-ambari-X.Y.Z-src
+```
+
+**Sign the tarball**
+
+```bash
+gpg2 --armor --output apache-ambari-X.Y.Z-src.tar.gz.asc --detach-sig apache-ambari-X.Y.Z-src.tar.gz
+```
+
+**Generate SHA512 checksums:**
+
+```bash
+sha512sum apache-ambari-X.Y.Z-src.tar.gz > apache-ambari-X.Y.Z-src.tar.gz.sha512
+```
+
+or
+
+```bash
+openssl sha512 apache-ambari-X.Y.Z-src.tar.gz > apache-ambari-X.Y.Z-src.tar.gz.sha512
+```
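+
+If the checksum file was generated with `sha512sum` (which writes `<hash>  <filename>` lines), it can be verified later with:
+
+```bash
+sha512sum -c apache-ambari-X.Y.Z-src.tar.gz.sha512
+```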
+
+**Upload the artifacts to your apache home:**
+
+The artifacts then need to be copied (via SFTP) to:
+
+```
+public_html/apache-ambari-X.Y.Z-rc0
+```
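+
+A minimal sketch of the SFTP upload (assuming the artifacts are in the current directory; replace {username} and X.Y.Z):
+
+```bash
+sftp {username}@home.apache.org <<'EOF'
+mkdir public_html/apache-ambari-X.Y.Z-rc0
+cd public_html/apache-ambari-X.Y.Z-rc0
+put apache-ambari-X.Y.Z-src.tar.gz
+put apache-ambari-X.Y.Z-src.tar.gz.asc
+put apache-ambari-X.Y.Z-src.tar.gz.sha512
+EOF
+```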
+
+## Voting on Release Candidate
+
+**Call for a vote on the dev@ambari.apache.org mailing list with something like this:**
+
+I have created an ambari-** release candidate.
+
+GIT source tag (r***)
+
+```
+https://git-wip-us.apache.org/repos/asf/ambari/repo?p=ambari.git;a=log;h=refs/tags/release-x.y.z-rc0
+```
+
+Staging site: http://home.apache.org/user_name/apache-ambari-X.Y.Z-rc0
+
+Vote will be open for 72 hours.
+
+```
+[ ] +1 approve
+[ ] +0 no opinion
+[ ] -1 disapprove (and reason why)
+```
+
+Once the vote passes or fails, send out an email with a subject like "[RESULT] [VOTE] Apache Ambari x.y.z rc0" to dev@ambari.apache.org. For the vote to pass, 3 +1 votes are required. If the vote does not pass, another release candidate will need to be created after addressing the feedback from the community.
+
+## Publishing and Announcement
+
+* Login to [https://id.apache.org](https://id.apache.org) and verify that the fingerprint of the PGP key used to sign the release is provided (`gpg --fingerprint`).
+* Upload your PGP public key only to _/home/_
+
+Publish the release as below:
+
+```bash
+svn co https://dist.apache.org/repos/dist/release/ambari ambari
+
+# Note: Only PMC members can do this 'svn' step.
+
+cd ambari
+mkdir ambari-X.Y.Z
+scp ~/public_html/apache-ambari-X.Y.Z-rc0/* ambari-X.Y.Z
+svn add ambari-X.Y.Z
+svn rm ambari-A.B.C # Remove the older release from the mirror. Only the latest version should appear in dist.
+
+svn commit -m "Committing Release X.Y.Z"
+```
+
+Create the release tag:
+
+```bash
+git tag -a release-X.Y.Z -m 'Ambari X.Y.Z'
+git push origin release-X.Y.Z
+```
+
+Note that it takes up to 24 hours for the changes to propagate to the mirrors. Wait 24 hours and verify that the bits are available on the mirrors before sending an announcement.
+
+**Update Ambari Website and Wiki**
+
+The http://ambari.apache.org site is checked into Git under the `/ambari/docs/src/site` folder.
+
+```bash
+cd docs
+mvn versions:set -DnewVersion=X.Y.Z
+
+# Make necessary changes, typically to pom.xml, site.xml, index.apt, and whats-new.apt
+mvn clean site
+```
+
+Examine the changes under _/ambari/docs/target_ folder.
+
+Update the wiki to add pages for installation of the new version. _Usually you can copy the pages for the last release and make the URL changes to point to new repo/tarball location._
+
+**Send out Announcement to dev@ambari.apache.org and user@ambari.apache.org.**
+
+Subject: [ANNOUNCE] Apache Ambari X.Y.Z.
+
+The Apache Ambari team is proud to announce Apache Ambari version X.Y.Z
+
+Apache Ambari is a tool for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari consists of a set of RESTful APIs and a browser-based management console UI.
+
+The release bits are at: http://www.apache.org/dyn/closer.cgi/ambari/ambari-X.Y.Z.
+
+To use the released bits please use the following documentation:
+
+https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+X.Y.Z
+
+We would like to thank all the contributors that made the release possible.
+
+Regards,
+
+The Ambari Team
+
+**Submit release data to Apache reporter database.**
+
+This step can be done only by a project PMC member. If the release manager is not an Ambari PMC member, please reach out to an existing Ambari PMC member or contact the Ambari PMC chair to complete this step.
+
+- Login to https://reporter.apache.org/addrelease.html?ambari with your Apache credentials.
+- Fill out the fields:
+ - Committee: ambari
+ - Full version name: 2.2.0
+ - Date of release (YYYY-MM-DD): 2015-12-19
+- Submit the data
+- Verify that the submitted data is reflected at https://reporter.apache.org/?ambari
+
+Performing this step keeps [https://reporter.apache.org/?ambari](https://reporter.apache.org/?ambari) site updated and people using the Apache Reporter Service will be able to see the latest release data for Ambari.
+
+## Publish Ambari artifacts to Maven central
+
+Please use the following [document](https://docs.google.com/document/d/1RjWQOaTUne6t8DPJorPhOMWAfOb6Xou6sAdHk96CHDw/edit) to publish Ambari artifacts to Maven central.
diff --git a/docs/ambari-dev/unit-test-reports.md b/docs/ambari-dev/unit-test-reports.md
new file mode 100644
index 0000000..9fadf3f
--- /dev/null
+++ b/docs/ambari-dev/unit-test-reports.md
@@ -0,0 +1,6 @@
+# Unit Test Reports
+
+Branch | Unit Test Report URL
+-------|-------------
+trunk | https://builds.apache.org/job/Ambari-trunk-Commit/
+branch-2.2 | https://builds.apache.org/job/Ambari-branch-2.2/
\ No newline at end of file
diff --git a/docs/ambari-dev/verifying-release-candidate.md b/docs/ambari-dev/verifying-release-candidate.md
new file mode 100644
index 0000000..707db13
--- /dev/null
+++ b/docs/ambari-dev/verifying-release-candidate.md
@@ -0,0 +1,107 @@
+# Verifying Release Candidate
+
+[Apache Release Process](http://www.apache.org/dev/release-publishing)
+
+The steps below are based on what is needed on a fresh CentOS 6 VM created per the [Quick Start Guide](../quick-start/quick-start-guide.md)
+
+## Verify hashes and signature
+
+```bash
+mkdir -p /usr/work/ambari
+pushd /usr/work/ambari
+```
+
+_Download the src tarball, asc signature, and md5/sha1 hashes._
+
+Verify the hashes
+
+```bash
+openssl md5 apache-ambari-2.4.1-src.tar.gz | diff apache-ambari-2.4.1-src.tar.gz.md5 -
+openssl sha1 apache-ambari-2.4.1-src.tar.gz | diff apache-ambari-2.4.1-src.tar.gz.sha1 -
+```
+
+Verify the signature
+
+```bash
+gpg --keyserver pgpkeys.mit.edu --recv-key <key ID>
+gpg apache-ambari-2.4.1-src.tar.gz.asc
+```
+
+## Compiling the code
+
+If you are verifying the release on a clean machine (e.g., a freshly installed VM), you need to run several preparatory steps.
+
+### Install mvn
+
+```bash
+mkdir /usr/local/apache-maven
+cd /usr/local/apache-maven
+wget http://mirror.olnevhost.net/pub/apache/maven/binaries/apache-maven-3.2.1-bin.tar.gz
+tar -xvf apache-maven-3.2.1-bin.tar.gz
+export M2_HOME=/usr/local/apache-maven/apache-maven-3.2.1
+export M2=$M2_HOME/bin
+export PATH=$M2:$PATH
+```
+
+### Install java
+
+```bash
+mkdir /usr/jdk
+cd /usr/jdk
+cp "FROM SOURCE"/jdk-7u67-linux-x64.tar.gz . (or download the latest)
+tar -xvf jdk-7u67-linux-x64.tar.gz
+export PATH=$PATH:/usr/jdk/jdk1.7.0_67/bin
+export JAVA_HOME=/usr/jdk/jdk1.7.0_67
+export _JAVA_OPTIONS="-Xmx2048m -XX:MaxPermSize=1024m -Djava.awt.headless=true"
+```
+
+### Install packages
+
+```bash
+yum install -y git
+curl --silent --location https://rpm.nodesource.com/setup | bash -
+yum install -y nodejs
+yum install -y gcc-c++ make
+npm install -g brunch@1.7.20
+yum install -y libfreetype.so.6
+yum install -y freetype
+yum install -y fontconfig
+yum install -y python-devel
+yum install -y rpm-build
+```
+
+### Install python tools
+
+```bash
+wget http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c11-py2.6.egg --no-check-certificate
+
+sh setuptools-0.6c11-py2.6.egg
+```
+
+### Additional steps
+
+These steps may not be needed in every environment. You can perform them before the build, or afterwards if you encounter specific errors.
+
+_Install pom files needed by ambari-metrics-kafka-sink_
+
+
+```bash
+mkdir /tmp/pom-files
+pushd /tmp/pom-files
+cp "FROM SOURCE"/jms-1.1.pom .
+cp "FROM SOURCE"/jmxri-1.2.1.pom .
+cp "FROM SOURCE"/jmxtools-1.2.1.pom .
+mvn install:install-file -Dfile=jmxri-1.2.1.pom -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jms-1.1.pom -DgroupId=javax.jms -DartifactId=jms -Dversion=1.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxtools-1.2.1.pom -DgroupId=com.sun.jdmk -DartifactId=jmxtools -Dversion=1.2.1 -Dpackaging=jar
+popd
+```
+
+### Compile the code
+
+```bash
+pushd /usr/work/ambari
+tar -xvf apache-ambari-2.4.1-src.tar.gz
+cd apache-ambari-2.4.1-src
+mvn clean install -DskipTests
+```
\ No newline at end of file
diff --git a/docs/ambari-plugin-contribution/imgs/step1.png b/docs/ambari-plugin-contribution/imgs/step1.png
new file mode 100644
index 0000000..814b402
--- /dev/null
+++ b/docs/ambari-plugin-contribution/imgs/step1.png
Binary files differ
diff --git a/docs/ambari-plugin-contribution/imgs/step2.png b/docs/ambari-plugin-contribution/imgs/step2.png
new file mode 100644
index 0000000..c903003
--- /dev/null
+++ b/docs/ambari-plugin-contribution/imgs/step2.png
Binary files differ
diff --git a/docs/ambari-plugin-contribution/imgs/step3.png b/docs/ambari-plugin-contribution/imgs/step3.png
new file mode 100644
index 0000000..a0548b5
--- /dev/null
+++ b/docs/ambari-plugin-contribution/imgs/step3.png
Binary files differ
diff --git a/docs/ambari-plugin-contribution/imgs/step4.png b/docs/ambari-plugin-contribution/imgs/step4.png
new file mode 100644
index 0000000..5f1eb47
--- /dev/null
+++ b/docs/ambari-plugin-contribution/imgs/step4.png
Binary files differ
diff --git a/docs/ambari-plugin-contribution/imgs/step5.png b/docs/ambari-plugin-contribution/imgs/step5.png
new file mode 100644
index 0000000..68e4ca6
--- /dev/null
+++ b/docs/ambari-plugin-contribution/imgs/step5.png
Binary files differ
diff --git a/docs/ambari-plugin-contribution/imgs/step6.png b/docs/ambari-plugin-contribution/imgs/step6.png
new file mode 100644
index 0000000..65a83b3
--- /dev/null
+++ b/docs/ambari-plugin-contribution/imgs/step6.png
Binary files differ
diff --git a/docs/ambari-plugin-contribution/index.md b/docs/ambari-plugin-contribution/index.md
new file mode 100644
index 0000000..fe9d0b3
--- /dev/null
+++ b/docs/ambari-plugin-contribution/index.md
@@ -0,0 +1,9 @@
+# Ambari Plugin Contributions
+
+These are independent extensions that are contributed to the Ambari codebase.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+
+<DocCardList />
+```
\ No newline at end of file
diff --git a/docs/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg b/docs/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg
new file mode 100644
index 0000000..0414dd4
--- /dev/null
+++ b/docs/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg
Binary files differ
diff --git a/docs/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png b/docs/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png
new file mode 100644
index 0000000..365b9f3
--- /dev/null
+++ b/docs/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png
Binary files differ
diff --git a/docs/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg b/docs/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg
new file mode 100644
index 0000000..5cd861d
--- /dev/null
+++ b/docs/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg
Binary files differ
diff --git a/docs/ambari-plugin-contribution/scom/index.md b/docs/ambari-plugin-contribution/scom/index.md
new file mode 100644
index 0000000..efe224f
--- /dev/null
+++ b/docs/ambari-plugin-contribution/scom/index.md
@@ -0,0 +1,63 @@
+# Ambari SCOM Management Pack
+
+This information is intended for **Apache Hadoop** and **Microsoft System Center Operations Manager** users who install the **Ambari SCOM Management Pack**.
+
+## Introduction
+
+### Versions
+
+Ambari SCOM Version | Ambari Server Version | Full Build Version
+--------------------|------------------------|---------
+2.0.0 | 1.5.1 | 1.5.1.2.0.0.0-673
+1.0.0 | 1.4.4 | 1.4.4.1.0.0.1-472
+0.9.0 | 1.2.5 | 1.2.5.0.9.0.0-60
+
+The Ambari SCOM contribution can be found in the Apache Ambari project:
+
+- https://github.com/apache/ambari/tree/trunk/contrib/ambari-scom
+
+### Useful Resources
+
+The following links connect you to information about common tasks that are associated with System Center Management Packs:
+
+- [Administering the Management Pack Life Cycle](http://go.microsoft.com/fwlink/?LinkId=211463)
+- [How to Import a Management Pack in Operations Manager 2007](http://go.microsoft.com/fwlink/?LinkID=142351)
+- [How to Monitor Using Overrides](http://go.microsoft.com/fwlink/?LinkID=117777)
+- [How to Create a Run As Account in Operations Manager 2007](http://technet.microsoft.com/en-us/library/hh321655.aspx)
+- [How to Modify an Existing Run As Profile](http://go.microsoft.com/fwlink/?LinkID=165412)
+- [How to Export Management Pack Customizations](http://go.microsoft.com/fwlink/?LinkId=209940)
+- [How to Remove a Management Pack](http://go.microsoft.com/fwlink/?LinkId=209941)
+
+For questions about Operations Manager and monitoring packs, see the [System Center Operations Manager community forum](http://social.technet.microsoft.com/Forums/systemcenter/en-US/home?category=systemcenteroperationsmanager).
+
+A useful resource is the [System Center Operations Manager Unleashed blog](http://opsmgrunleashed.wordpress.com/), which contains "By Example" posts for specific monitoring packs.
+
+## Get Started
+
+### Overview
+
+**Ambari SCOM** extends the functionality of **Microsoft System Center Operations Manager** to monitor Apache Hadoop clusters, and leverages Ambari (and the Ambari REST API) to obtain Hadoop metrics. The Ambari SCOM Management Pack will:
+
+- Automatically discover all nodes within a Hadoop cluster(s).
+- Proactively monitor the availability and capacity of the cluster.
+- Proactively notify when the cluster's health is critical.
+- Intuitively and efficiently visualize the health of the Hadoop cluster via dashboards.
+
+
+
+### Architecture
+
+Ambari SCOM is made up of the following components:
+
+Component | Description
+----------|------------
+Ambari SCOM Management Pack | The Ambari SCOM Management Pack extends the functionality of Microsoft System Center Operations Manager to monitor Hadoop clusters.
+Ambari SCOM Server | The Ambari SCOM Server component connects to the Hadoop cluster components and exposes a REST API for the Ambari SCOM Management Pack.
+ResourceProvider | An Ambari ResourceProvider is a pluggable interface in Ambari that enables the customization of the Ambari SCOM Server.
+ClusterLayout ResourceProvider | An Ambari ResourceProvider implementation that supplies information on cluster layout (i.e. where Hadoop master and slave components are installed) to the Ambari SCOM Server. This allows Ambari to know how and where to access components of the Hadoop cluster.
+Property ResourceProvider | An Ambari ResourceProvider implementation that integrates with the SQL Server database instance for retrieving stored Hadoop metrics.
+SQL Server | A SQL Server instance that stores the metrics emitted from Hadoop via the SqlServerSink and the Hadoop Metrics2 interface.
+SqlServerSink | This is a Hadoop Metrics2 sink designed to consume metrics emitted from the Hadoop cluster. Ambari SCOM provides a SQL Server implementation.
+
+
+
diff --git a/docs/ambari-plugin-contribution/scom/installation.md b/docs/ambari-plugin-contribution/scom/installation.md
new file mode 100644
index 0000000..6032c1d
--- /dev/null
+++ b/docs/ambari-plugin-contribution/scom/installation.md
@@ -0,0 +1,340 @@
+# Installation
+
+## Prerequisite Software
+
+Setting up Ambari SCOM assumes the following prerequisite software:
+
+* Ambari SCOM 1.0
+ - Apache Hadoop 1.x cluster (HDFS and MapReduce) 1
+* Ambari SCOM 2.0
+ - Apache Hadoop 2.x cluster (HDFS and YARN/MapReduce) 2
+* JDK 1.7
+* Microsoft SQL Server 2012
+* Microsoft JDBC Driver 4.0 for SQL Server 3
+* Microsoft System Center Operations Manager (SCOM) 2012 SP1 or later
+* System Center Monitoring Agent installed on **Watcher Node** 4
+
+1 _Ambari SCOM_ 1.0 has been tested with a Hadoop cluster based on **Hortonworks Data Platform 1.3 for Windows** ("[HDP 1.3 for Windows](http://hortonworks.com/products/releases/hdp-1-3-for-windows/)")
+
+2 _Ambari SCOM_ 2.0 has been tested with a Hadoop cluster based on **Hortonworks Data Platform 2.1 for Windows** ("[HDP 2.1 for Windows](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-Win-latest/bk_installing_hdp_for_windows/content/win-getting-ready.html)")
+
+3 Obtain the _Microsoft JDBC Driver 4.0 for SQL Server_ JAR file (`sqljdbc4.jar`) at [http://technet.microsoft.com/en-us/library/ms378749.aspx](http://technet.microsoft.com/en-us/library/ms378749.aspx)
+
+4 See Microsoft TechNet topic for [Managing Discovery and Agents](http://technet.microsoft.com/en-us/library/hh212772.aspx). Minimum Agent requirements _.NET 4_ and _PowerShell 2.0 + 3.0_
+
+## Package Contents
+
+```
+ambari-scom-version.zip
+├── README.md
+├── server.zip
+├── metrics-sink.zip
+├── mp.zip
+└── ambari-scom.msi
+```
+
+File | Name | Description
+-----|------|-------------
+server.zip | Server Package | Contains the required software for configuring the Ambari SCOM Server software.
+metrics-sink.zip | Metrics Sink Package | Contains the required software for manually configuring SQL Server and the Hadoop Metrics Sink.
+ambari-scom.msi | MSI Installer | The Ambari SCOM MSI Installer for configuring the Ambari SCOM Server and Hadoop Metrics Sink
+mp.zip | Management Pack Package | Contains the Ambari SCOM Management Pack software.
+
+## Ambari SCOM Server Installation
+
+:::caution
+The **Ambari SCOM Management Pack** must connect to an Ambari SCOM Server to retrieve cluster metrics. Therefore, you need to have an Ambari SCOM Server running in your cluster. If you have already installed your Hadoop cluster (including the Ganglia Service) with Ambari (minimum **Ambari 1.5.1 for SCOM 2.0.0**) and have an Ambari Server already running and managing your Hadoop 1.x cluster, you can use that Ambari Server and point the **Management Pack** at that host. In that case, proceed directly to [Installing Ambari SCOM Management Pack](#installing-ambari-scom-management-pack) and skip the steps for installing an Ambari SCOM Server. If you do not have an Ambari Server running and managing your cluster, you **must** install an Ambari SCOM Server using one of the methods described below.
+:::
+
+The following methods are available for installing Ambari SCOM Server:
+
+* **Manual Installation** - This installation method requires you to configure the SQL Server database, setup the Ambari SCOM Server and configure the Hadoop Metrics Sink. This provides the most flexible install option based on your environment.
+* **MSI Installation** - This installation method installs the Ambari SCOM Server and configures the Hadoop Metrics Sink on all hosts in the cluster automatically using an MSI Installer. After launching the MSI, you provide information about your SQL Server database and the cluster for the installer to handle configuration.
+
+## Manual Installation
+
+### Configuring SQL Server
+
+1. Configure an existing SQL Server instance for "mixed mode" authentication.
+
+2. Confirm SQL Server is installed with TCP/IP active and enabled. (default port: 1433)
+3. Create a user and password. Remember this user and password as this will be the account used by the Hadoop metrics interface for capturing metrics. (default user: sa)
+4. Extract the contents of the `metrics-sink.zip` package to obtain the DDL script.
+
+5. Create the Ambari SCOM database schema by running the `Hadoop-Metrics-SQLServer-CREATE.ddl` script; see the example after the note below.
+
+:::info
+The Hadoop Metrics DDL script will create a database called "HadoopMetrics".
+:::
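+
+For example, the schema can be created with the `sqlcmd` utility (a sketch; substitute your own server, credentials, and script path):
+
+```
+sqlcmd -S [server] -U [user] -P [password] -i Hadoop-Metrics-SQLServer-CREATE.ddl
+```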
+
+### Configuring Hadoop Metrics Sink
+
+#### Preparing the Metrics Sink
+
+1. Extract the contents of the `metrics-sink.zip` package to obtain the `metrics-sink-version.jar` file.
+
+2. Obtain the _Microsoft JDBC Driver 4.0 for SQL Server_ `sqljdbc4.jar` file.
+
+3. Copy `sqljdbc4.jar` and `metrics-sink-version.jar` to each host in the cluster. For example, copy to `C:\Ambari\metrics-sink-version.jar` and `C:\Ambari\sqljdbc4.jar`
+on each host.
+
+#### Setup Hadoop Metrics2 Interface
+
+1. On each host in the cluster, set up the Hadoop metrics2 interface to use the `SqlServerSink`.
+
+Edit the `hadoop-metrics2.properties` file (located in the `{C:\hadoop\install\dir}\bin` folder of each host in the cluster):
+
+```
+*.sink.sql.class=org.apache.hadoop.metrics2.sink.SqlServerSink
+
+namenode.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+datanode.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+jobtracker.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+tasktracker.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+maptask.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+reducetask.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+```
+
+:::info
+_Where:_
+
+* _server = the SQL Server hostname_
+* _port = the SQL Server port (for example, 1433)_
+* _user = the SQL Server user (for example, sa)_
+* _password = the SQL Server password (for example, BigData1)_
+:::
+
+2. Update the Java classpath for each Hadoop service to include the `metrics-sink-version.jar` and `sqljdbc4.jar` files.
+
+
+ - Example: Updating the Java classpath for _HDP for Windows_ clusters
+
+ The `service.xml` files will be located in the `C:\hadoop\install\dir\bin` folder of each host in the cluster. The Java classpath is specified for each service in the `<arguments>` element of the `service.xml` file. For example, to update the Java classpath for the `NameNode` component, edit the `C:\hadoop\bin\namenode.xml` file.
+
+ ```
+ ...
+
+ ... -classpath ...;C:\Ambari\metrics-sink-1.5.1.2.0.0.0-673.jar;C:\Ambari\sqljdbc4.jar ...
+
+ ...
+
+ ```
+
+3. Restart Hadoop for these changes to take effect.
+
+#### Verify Metrics Collection
+
+1. Confirm metrics are being captured in the SQL Server database by querying the `MetricRecord` table:
+
+```sql
+select * from HadoopMetrics.dbo.MetricRecord
+```
+:::info
+In the above SQL statement, `HadoopMetrics` is the database name.
+:::
+
+### Installing and Configuring Ambari SCOM Server
+
+#### Running the Server
+
+1. Designate a machine in the cluster to run the Ambari SCOM Server.
+
+2. Extract the contents of the `server.zip` package to obtain the Ambari SCOM Server packages.
+
+```
+├── ambari-scom-server-version-conf.zip
+├── ambari-scom-server-version-lib.zip
+└── ambari-scom-server-version.jar
+```
+
+3. Extract the contents of the `ambari-scom-server-version-lib.zip` package to obtain the Ambari SCOM dependencies.
+
+4. Extract the contents of the `ambari-scom-server-version-conf.zip` package to obtain the Ambari SCOM configuration files.
+
+5. From the configuration files, edit the `ambari.properties` file:
+
+```
+scom.sink.db.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
+scom.sink.db.url=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+```
+
+:::info
+_Where:_
+ - _server = the SQL Server hostname_
+ - _port = the SQL Server port (for example, 1433)_
+ - _user = the SQL Server user (for example, sa)_
+ - _password = the SQL Server password (for example, BigData1)_
+:::
+
+6. Run the `org.apache.ambari.scom.AmbariServer` class from the Java command line to start the Ambari SCOM Server.
+
+:::info
+Be sure to include the following in the classpath:
+ - `ambari-scom-server-version.jar` file
+ - configuration folder containing the Ambari SCOM configuration files
+ - lib folder containing the Ambari SCOM dependencies
+ - folder containing the `clusterproperties.txt` file from the Hadoop install. For example, `c:\hadoop\install\dir`
+ - `sqljdbc4.jar` SQLServer JDBC Driver file
+:::
+
+For example:
+
+```bash
+java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Xms512m -Xmx2048m -cp "c:\ambari-scom\server\conf;c:\ambari-scom\server\lib\*;c:\jdbc\sqljdbc4.jar;c:\hadoop\install\dir;c:\ambari-scom\server\ambari-scom-server-1.5.1.2.0.0.0-673.jar" org.apache.ambari.scom.AmbariServer
+```
+
+:::info
+In the above command, be sure to replace the Ambari SCOM version in the `ambari-scom-server-version.jar` and replace `c:\hadoop\install\dir` with the folder containing the `clusterproperties.txt` file.
+:::
+
+#### Verify the Server API
+
+1. From a browser, access the API:
+
+```
+http://[ambari-scom-server]:8080/api/v1/clusters
+```
+2. Verify that metrics are being reported.
+
+```
+http://[ambari-scom-server]:8080/api/v1/clusters/ambari/services/HDFS/components/NAMENODE
+```
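+
+For example, either URL can also be queried from the command line with `curl` (a sketch, assuming the default `admin`/`admin` Ambari credentials):
+
+```bash
+curl -u admin:admin http://[ambari-scom-server]:8080/api/v1/clusters
+```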
+
+## MSI Installation
+
+### Configuring SQL Server
+
+1. Configure an existing SQL Server instance for "mixed mode" authentication.
+
+2. Confirm SQL Server is installed with TCP/IP active and enabled. (default port: 1433)
+3. Create a user and password. (default user: sa)
+
+### Running the MSI Installer
+
+1. Designate a machine in the cluster to run the Ambari SCOM Server.
+
+2. Extract the contents of the `server.zip` package to obtain the Ambari SCOM Server packages.
+
+3. Run the `ambari-scom.msi` installer. The "Ambari SCOM Setup" dialog appears:
+
+ 
+
+4. Provide the following information:
+
+Field | Description
+------|------------
+Ambari SCOM package directory | The directory where the installer will place the Ambari SCOM Server packages. For example: C:\Ambari
+SQL Server hostname | The hostname of the SQL Server instance for Ambari SCOM Server to use to store Hadoop metrics.
+SQL Server port | The port of the SQL Server instance.
+SQL Server login | The login username.
+SQL Server password | The login password
+Path to SQL Server JDBC Driver (sqljdbc4.jar) | The path to the JDBC Driver JAR file.
+Path to the cluster layout file (clusterproperties.txt) | The path to the cluster layout properties file.
+
+5. Optionally, select Start Services.
+6. Click Install.
+7. After completion, links are created on the desktop to "Start Ambari SCOM Server", "Browse Ambari API", and "Browse Ambari API Metrics". After starting the Ambari SCOM Server, browse the API and Metrics to confirm the server is working properly.
+
+:::info
+The MSI installer installation log can be found at `C:\AmbariInstallFiles\AmbariSetupTools\ambari.winpkg.install.log`
+:::
+
+### Installing Ambari SCOM Management Pack
+
+:::info
+Before installing the Management pack, be sure to install the Ambari SCOM Server using the Ambari SCOM Server Installation instructions.
+:::
+
+#### Import the Management Pack
+
+Perform the following to import the Ambari SCOM Management Pack into System Center Operations Manager.
+
+1. Extract the contents of the `mp.zip` package to obtain the Ambari SCOM management pack (`.mpb`) files.
+
+2. Ensure that Windows Server 2012 is running SCOM with SQL Server (full-text search).
+
+3. Open System Center Operations Manager.
+
+4. Go to Administration -> Management Packs.
+
+5. From the Tasks panel, select Import Management Packs...
+
+6. In the Import Management Packs dialog, select Add -> Add from disk...
+
+7. You are prompted to search the Online Catalog. Click "No".
+
+8. Browse for the Ambari SCOM management pack files.
+
+9. Select the following files:
+
+```
+Ambari.SCOM.Monitoring.mpb
+Ambari.SCOM.Management.mpb
+Ambari.SCOM.Presentation.mpb
+```
+10. Click "Open"
+11. Review the Import list and click "Install".
+
+12. The Ambari SCOM Management Pack installation will start.
+
+:::info
+The Ambari SCOM package also includes `AmbariSCOMManagementPack.msi`, which is an alternative packaging of `mp.zip`. This MSI is provided in **beta** form in this release.
+:::
+
+#### Create Run As Account
+
+Perform the following to configure an account for the Ambari SCOM Management Pack to use when talking to the Ambari SCOM Server.
+
+1. After Importing the Management Pack is complete, go to Administration -> Run As Configuration -> Accounts.
+
+2. In the Tasks panel, select "Create Run as Account..."
+3. You are presented with the Create Run As Account Wizard.
+
+4. Go through the wizard, selecting the Run As account type "Basic Authentication".
+
+5. Give the account a Display name and click "Next".
+
+6. Enter the account name and password for the Ambari SCOM Server. This account will be used to connect to the Ambari SCOM Server to access the Ambari REST API. The default account name is "admin" and the default password is "admin".
+
+7. Click "Next"
+8. Select the "Less secure" distribution security option.
+
+9. Click "Next" and complete the wizard.
+
+#### Configure the Management Pack
+
+Perform the following to configure the Ambari SCOM Management Pack to talk to the Ambari SCOM Server.
+
+1. Go to Authoring -> Management Pack Templates -> Ambari SCOM
+2. In the Tasks panel, select "Add Monitoring Wizard".
+
+3. Select monitoring type "Ambari SCOM"
+4. Provide a name and select the destination management pack.
+
+5. Provide the Ambari URI, which is the address of the Ambari SCOM Server, in the format:
+
+```
+http://[ambari-scom-server]:8080/api/
+```
+
+:::info
+In the above Ambari URI, `ambari-scom-server` is the Ambari SCOM Server.
+:::
+
+6. Select the Run As Account that you created in Create Run As Account.
+
+7. Select "Watcher Node". If the node is not listed, click "Add" and browse for the node. Click "Next".
+
+8. Complete the Add Monitoring Wizard and proceed to the Monitoring Scenarios for information on using the management pack.
+
+#### Best Practice: Create Management Pack for Customizations
+
+By default, Operations Manager saves all customizations such as overrides to the **Default Management Pack**. As a best practice, you should instead create a separate management pack for each sealed management pack you want to customize.
+
+When you create a management pack for the purpose of storing customized settings for a sealed management pack, it is helpful to base the name of the new management pack on the name of the management pack that it is customizing, such as **Ambari SCOM Customizations**.
+
+Creating a new management pack for storing customizations of each sealed management pack makes it easier to export the customizations from a test environment to a production environment. It also makes it easier to delete a management pack, because you must delete any dependencies before you can delete a management pack. If customizations for all management packs are saved in the **Default Management Pack** and you need to delete a single management pack, you must first delete the **Default Management Pack**, which also deletes customizations to other management packs.
+
+## Monitoring Scenarios
+
+[Monitoring Scenarios](https://cwiki.apache.org/confluence/display/AMBARI/3.+Monitoring+Scenarios)
\ No newline at end of file
diff --git a/docs/ambari-plugin-contribution/step-by-step.md b/docs/ambari-plugin-contribution/step-by-step.md
new file mode 100644
index 0000000..41d96d2
--- /dev/null
+++ b/docs/ambari-plugin-contribution/step-by-step.md
@@ -0,0 +1,52 @@
+# Step-by-step guide on adding a dashboard widget for a host
+
+## Create your own dashboard widget for hosts
+
+Requirements:
+
+- Jmxtrans
+ - Jmxtrans is the application chosen to feed the metrics that Ganglia compiles into rrd files in order to produce graphing data.
+https://github.com/jmxtrans/jmxtrans
+- .rrd files
+ - All the Ganglia rrd files are stored in the /var/lib/rrds directory on the host machine where the Ganglia server is installed.
+ - In this example I’ll be using the “**Nimbus_JVM_Memory_Heap_used.rrd**” file for the data of my custom widget.
+
+**Step 1**:
+
+First, we need to add the rrd file to the “**ganglia_properties.json**” file, which is located in the `ambari\ambari-server\src\main\resources` directory of your Ambari source code. This is necessary so that the Ambari Server can retrieve your rrd file's data from Ganglia via the Ambari API.
+
+
+
+Line 108: Create the path for the metrics to be included in the API.
+
+Line 109: Specify the rrd file.
+
+**Step 2**:
+
+Now we are going to add the API path created in step 1 at line 108, to the “**update_controller.js**” file located in the `ambari\ambari-web\app\controllers\global` directory, so that our graph data can be updated frequently.
+
+
+
+**Step 3**:
+
+Create a JavaScript file for the view template of our custom widget and save it in the `ambari\ambari-web\app\views\main\host\metrics` directory of your Ambari source code. In this case I saved my file as “**nimbus.js**”.
+
+
+
+**Step 4**:
+
+Add the JavaScript file you created in the previous step into the “**views.js**” file located in the `ambari\ambari-web\app` directory.
+
+
+
+**Step 5**:
+
+Add the view .js file created in step 3 to the “**metrics.hbs**” template file located in the `ambari\ambari-web\app\templates\main\host` directory.
+
+
+
+**Step 6**:
+
+Add the API call to the “**ajax.js**” file located in the `ambari\ambari-web\app\utils` directory.
+
+
\ No newline at end of file
diff --git a/docs/introduction.md b/docs/introduction.md
new file mode 100644
index 0000000..58a8611
--- /dev/null
+++ b/docs/introduction.md
@@ -0,0 +1,48 @@
+---
+sidebar_position: 1
+---
+
+# Introduction
+
+The Apache Ambari project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs.
+
+Ambari enables System Administrators to:
+
+* Provision a Hadoop Cluster
+ - Ambari provides a step-by-step wizard for installing Hadoop services across any number of hosts.
+
+ - Ambari handles configuration of Hadoop services for the cluster.
+
+* Manage a Hadoop Cluster
+ - Ambari provides central management for starting, stopping, and reconfiguring Hadoop services across the entire cluster.
+
+* Monitor a Hadoop Cluster
+ - Ambari provides a dashboard for monitoring health and status of the Hadoop cluster.
+
+ - Ambari leverages [Ambari Metrics System](https://issues.apache.org/jira/browse/AMBARI-5707) for metrics collection.
+
+ - Ambari leverages [Ambari Alert Framework](https://issues.apache.org/jira/browse/AMBARI-6354) for system alerting and will notify you when your attention is needed (e.g., a node goes down, remaining disk space is low, etc).
+
+Ambari enables Application Developers and System Integrators to:
+
+* Easily integrate Hadoop provisioning, management, and monitoring capabilities into their own applications with the [Ambari REST APIs](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md).
+
+## Getting Started with Ambari
+
+Follow the [installation guide for Ambari 2.7.6](https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+2.7.6).
+
+Note: Ambari currently supports the 64-bit version of the following Operating Systems:
+
+* RHEL (Redhat Enterprise Linux) 7.4, 7.3, 7.2
+* CentOS 7.4, 7.3, 7.2
+* OEL (Oracle Enterprise Linux) 7.4, 7.3, 7.2
+* Amazon Linux 2
+* SLES (SuSE Linux Enterprise Server) 12 SP3, 12 SP2
+* Ubuntu 14 and 16
+* Debian 9
+
+## Get Involved
+
+Visit the [Ambari Wiki](https://cwiki.apache.org/confluence/display/AMBARI/Ambari) for design documents, roadmap, development guidelines, etc.
+
+[Join the Ambari User Meetup Group](http://www.meetup.com/Apache-Ambari-User-Group). You can see the slides from [April 2, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/109316812/), [June 25, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/119184782/), and [September 25, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/134373312/) meetups.
diff --git a/docs/quick-start/_category_.json b/docs/quick-start/_category_.json
new file mode 100644
index 0000000..b1ce0cc
--- /dev/null
+++ b/docs/quick-start/_category_.json
@@ -0,0 +1,8 @@
+{
+ "label": "Quick Start",
+ "position": 2,
+ "link": {
+ "type": "generated-index",
+ "description": "Ambari Quick Start"
+ }
+}
diff --git a/docs/quick-start/quick-start-for-new-vm-users.md b/docs/quick-start/quick-start-for-new-vm-users.md
new file mode 100644
index 0000000..6b7469c
--- /dev/null
+++ b/docs/quick-start/quick-start-for-new-vm-users.md
@@ -0,0 +1,410 @@
+---
+sidebar_position: 2
+---
+
+# Quick Start for New VM Users
+
+This Quick Start guide is for readers who are new to the use of virtual machines, Apache Ambari, and/or the Apache Hadoop component stack, who would like to install and use a small local Hadoop cluster. The instructions are for a local host machine running OS X.
+
+The following instructions cover four main steps for installing Ambari and HDP using VirtualBox and Vagrant:
+
+1. Install VirtualBox and Vagrant.
+
+2. Install and start one or more Linux virtual machines. Each machine represents a node in a cluster.
+
+3. On one of the virtual machines, download, install, and deploy the version of Ambari you wish to use.
+
+4. Using Ambari, deploy the version of HDP you wish to use.
+
+When you complete the example in this Quick Start, you should have a three-node cluster of virtual machines running Ambari 2.4.1.0 and HDP 2.5.0 (unless you specify different repository versions).
+
+Once VirtualBox and Vagrant are installed, steps 2 through 4 can be done multiple times to change versions, create a larger cluster, and so on. There is no need to repeat step 1 unless you want to upgrade VirtualBox and/or Vagrant later.
+
+Note: these steps were most recently tested on MacOS 10.11.6, El Capitan.
+
+
+## Terminology
+
+A _virtual machine_, or _VM_, is a software program that exhibits the behavior of a separate computer and is capable of running applications and programs within its own environment.
+
+A virtual machine is often called a _guest_, because it runs within another computing environment--usually known as the _host_. For example, if you install three Linux VMs on a Mac, the Mac is the host machine; the three Linux VMs are guests.
+
+Multiple virtual machines can exist within a single host at one time. In the following examples, one or more virtual machines run on a _host_ machine running OS X. OS X is the primary operating system. The virtual machines (guests) are installed under OS X. The virtual machines run Linux in separate environments on OS X. Thus, your Mac is the "host" machine, and the virtual machines that run Ambari and Hadoop are called "guest" machines.
+
+## Prerequisites
+
+You will need the following resources for this Quick Start:
+
+* A solid internet connection, preferably with at least 5 MB/s of available download bandwidth.
+
+* If you are installing the VMs on a Mac, at least 16 GB of memory (assuming 3 GB per VM)
+
+## Install VirtualBox and Vagrant
+
+VirtualBox is a software virtualization package that installs on an operating system as an application. It allows you to run multiple virtual machines at the same time. In this Quick Start you will use VirtualBox to run Linux nodes as virtual machines on OS X:
+
+
+
+Vagrant is a tool that makes it easier to work with virtual machines. It helps automate the work of setting up, running, and removing virtual machine environments. Using Vagrant, you can install and run a preconfigured cluster environment with Ambari and the HDP stack.
+
+1. Download and install VirtualBox from [https://www.virtualbox.org/wiki/Downloads](https://www.virtualbox.org/wiki/Downloads). This Quick Start has been tested on version 5.1.6.
+
+2. Download and install Vagrant from [https://www.vagrantup.com/downloads.html](https://www.vagrantup.com/downloads.html).
+3. Clone the `ambari-vagrant` GitHub repository into a convenient folder on your Mac. Navigate to the folder, and enter the following command from the terminal:
+
+```bash
+git clone https://github.com/u39kun/ambari-vagrant.git
+```
+
+The repository contains scripts for setting up Ambari virtual machines on several Linux distributions.
+
+4. Add virtual machine hostnames and addresses to the `/etc/hosts` file on your computer. The following command copies a set of host names and addresses from `ambari-vagrant/append-to-etc-hosts.txt` to the end of the `/etc/hosts` file:
+
+```bash
+sudo -s 'cat ambari-vagrant/append-to-etc-hosts.txt >> /etc/hosts'
+```
+5. Use the `vagrant` command to create a private key to use with Ambari:
+
+```bash
+vagrant
+```
+
+The `vagrant` command displays Vagrant command information, and then it creates a private key in the file `~/.vagrant.d/insecure_private_key`.
+
+## Start Linux Virtual Machines
+
+The `ambari-vagrant` directory (cloned from GitHub) contains several subdirectories, each for a specific Linux distribution. Each subdirectory has scripts and configuration files for running Ambari and HDP on that version of Linux.
+
+To start one or more virtual machines:
+
+1. Change your current directory to `ambari-vagrant`:
+
+```bash
+cd ambari-vagrant
+```
+
+If you run an `ls` command on the `ambari-vagrant` directory, you will see subdirectories for several different operating systems and operating system versions.
+
+2. `cd` into the OS subdirectory for the OS you wish to use. CentOS is recommended, because it is quicker to launch than other operating systems.
+
+The remainder of this example uses CentOS 7.0. (To install and use a different version or distribution of Linux, specify the other directory name in place of `centos7.0`.)
+
+```bash
+cd centos7.0
+```
+
+**Important**: All VM `vagrant` commands operate within your current directory. Be sure to run them from the local (Mac) subdirectory associated with the VM operating system that you have chosen to use. If you attempt to run a `vagrant` command from another directory, it will not find the VM.
+
+Copy the private key into the directory associated with the chosen operating system.
+
+For this example, which uses `centos7.0`, issue the following command:
+
+```bash
+cp ~/.vagrant.d/insecure_private_key .
+```
+3. (Optional) If you have at least 16 GB of memory on your Mac, consider increasing the amount of memory allocated to the VMs.
+
+Edit the following line in `Vagrantfile` , increasing allocated memory from 3072 to 4096 or more; for example:
+
+```bash
+vb.customize ["modifyvm", :id, "--memory", 4096] # RAM allocated to each VM
+```
+4. Every virtual machine will have a directory called `/vagrant` inside the VM. This corresponds to the `ambari-vagrant/<os>` directory on your local computer, making it easy to transfer files back and forth between your host Mac and the virtual machine. If you have any files to access from within the VM, you can place them in this shared directory.
+
+5. Start one or more VMs, using the `./up.sh` command. Each VM will run one HDP node. Recommendation: if you have at least 16 GB of RAM on your Mac and wish to run a small cluster, start with three nodes.
+
+```bash
+./up.sh
+```
+
+For example, the following command starts 3 VMs: `./up.sh 3`
+
+On an early 2013 MacBook Pro, 2.7 GHz core i7 and 16 GB RAM, this step takes five minutes. For CentOS 7.0, the hostnames are `c7001`, `c7002`, and `c7003`.
+
+Additional notes:
+- If you ran the VMs before and used `vagrant destroy` to remove the VMs, this is the step at which you would recreate and start the VMs.
+
+- The default `Vagrantfile` (in each OS subdirectory) can create up to 10 virtual machines.
+
+- The `up.sh 3` command is equivalent to `vagrant up c700{1..3}`.
+
+- The fully-qualified domain name (FQDN) for each VM has the format `<os-code>[01-10].ambari.apache.org`, where `<os-code>` is `c59` (CentOS 5.9), `c64` (CentOS 6.4), etc. For example, `c5901.ambari.apache.org` will be the FQDN for node 01 running CentOS 5.9.
+
+- The IP address for each VM has the format `192.168.<os-subnet>.1[01-10]`, where `<os-subnet>` is `64` for CentOS 6.4, `70` for CentOS 7.0, and so on. For example, `192.168.70.101` will be the IP address for CentOS 7.0 node `c7001`.
+
+6. Check the status of your VM(s), and review any errors. The following example shows the results of `./up.sh 3` for three VMs running CentOS 7.0:
+
+```
+LMBP:centos7.0 lkg$ vagrant status
+
+Current machine states:
+c7001 running (virtualbox)
+c7002 running (virtualbox)
+c7003 running (virtualbox)
+c7004 not created (virtualbox)
+c7005 not created (virtualbox)
+c7006 not created (virtualbox)
+c7007 not created (virtualbox)
+c7008 not created (virtualbox)
+c7009 not created (virtualbox)
+c7010 not created (virtualbox)
+```
+
+In the preceding list, three virtual machines are installed and running.
+
+7. At this point, you can snapshot the VMs to have a fresh set of running machines to reuse if desired. This is especially helpful when installing Apache Ambari and the HDP stack for the first time; it allows you to back out to fresh VMs and reinstall Ambari and HDP if you encounter errors. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
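+
+A minimal sketch of saving and restoring a snapshot (assuming a Vagrant version that includes the built-in `snapshot` subcommand; the snapshot name `fresh-cluster` is just an example):
+
+```bash
+vagrant snapshot save fresh-cluster      # snapshot every running VM in this directory
+vagrant snapshot restore fresh-cluster   # later, roll the VMs back to that state
+```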
+
+## Access Virtual Machines
+
+Use the following steps when you want to access a running virtual machine:
+
+1. To log on to a virtual machine, use the `vagrant ssh` command on your host machine, and specify the hostname; for example:
+
+```
+LMBP:centos7.0 lkg$ vagrant ssh c7001
+
+Last login: Tue Jan 12 11:20:28 2016
+[vagrant@c7001 ~]$
+```
+
+From this point onward, this terminal window is connected to the virtual machine until you exit the virtual machine. All commands go to the VM, not to your Mac.
+
+_**Recommendation**_: Open a second terminal window for your Mac. This is useful when accessing the Ambari Web UI. To distinguish between the two, terminal windows typically list the computer name or VM hostname on each command-line prompt and at the top of the terminal window.
+
+2. When you first access the VM you will be logged in as user `vagrant`. Switch to the `root` user; be sure to include the space between "su" and "-":
+
+```
+[vagrant@c7001 ~]$ sudo su -
+
+Last login: Sun Sep 25 01:34:28 AEST 2016 on pts/0
+root@c7001:~#
+```
+
+If at any time you wish to return the terminal window to your host machine:
+
+1. Use the `logout` command to log out of root.
+2. Use the `exit` command to return to your host machine (Mac).
+
+At this point, the VMs are still running in the background. You can re-issue the `vagrant ssh` command later, to reconnect, or you can stop the virtual machines. For more information, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Install Ambari on the Virtual Machines
+
+**Prerequisites**: Before installing Ambari, the following software packages must be installed on your VM:
+
+* rpm
+* curl
+* wget
+* pdsh
+
+On CentOS: to check whether a package is installed, run `yum info <package-name>`. To install a package, run `yum install <package-name>`. A quick way to check and install all four prerequisites is shown below.
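+
+A minimal sketch (assuming the packages are available in your configured yum repositories; `pdsh` may require an extra repository such as EPEL):
+
+```bash
+for pkg in rpm curl wget pdsh; do
+  rpm -q "$pkg" >/dev/null 2>&1 || yum install -y "$pkg"
+done
+```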
+
+To install Ambari, you can build it yourself from source (see [Ambari Development](../ambari-dev/index.md)), or you can use published binaries.
+
+As this is a Quick Start Guide to get you going quickly, ready-made, publicly available binaries are referenced here for your convenience. These binaries were built and made publicly available by Hortonworks, a commercial vendor for Hadoop, and using them makes HDP, Hortonworks' distribution, available for installation via Apache Ambari. The instructions still work if you have Ambari binaries from any other vendor, organization, or individual, or if you built the binaries yourself; only the repo URLs need to change. (Contributions that expand this page to cover other ready-made, publicly accessible binaries are welcome.)
+
+From the terminal window on the VM where you want to run the main Ambari service, download the Ambari repository. The following commands download Ambari version 2.5.1.0 and install `ambari-server`. To install a different version of Ambari, specify the appropriate repo URL. Choose the appropriate commands for the operating system on your VMs:
+
+```bash
+# CentOS 6 (for CentOS 7, replace centos6 with centos7 in the repo URL)
+#
+# to test public release 2.5.1
+wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo
+yum install ambari-server -y
+
+# Ubuntu 14 (for Ubuntu 16, replace ubuntu14 with ubuntu16 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.5.1.0/ambari.list
+apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
+apt-get update
+apt-get install ambari-server -y
+
+# SUSE 11 (for SUSE 12, replace suse11 with suse12 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/zypp/repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.5.1.0/ambari.repo
+zypper install ambari-server -y
+```
+
+On an early 2013 MacBook Pro, 2.7 GHz core i7 and 16 GB RAM, this step takes seven minutes. Timing also depends on internet download speeds.
+
+To install Ambari with default settings, set up and start `ambari-server`:
+
+```bash
+ambari-server setup -s
+ambari-server start
+```
+
+To check Ambari Server status, issue the following command: `ambari-server status`
+
+
+After Ambari Server has started, launch a browser on your host machine (Mac). Access the Ambari Web UI at `http://<hostname>.ambari.apache.org:8080`. The `<hostname>` part of the URL specifies the VM where you installed Ambari; for example:
+
+```
+http://c7001.ambari.apache.org:8080
+```
+
+**Note**: The Ambari Server can take some time to launch and be ready to accept connections. Keep trying the URL until you see the login page.
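+
+If you would rather poll from the command line than refresh the browser, a loop like this works (a sketch, using the example hostname above):
+
+```bash
+until curl -sf http://c7001.ambari.apache.org:8080 >/dev/null; do
+  echo "waiting for Ambari Server..."
+  sleep 5
+done
+```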
+
+
+At this point, you can snapshot the VMs to have a cluster with Ambari installed, to rerun later if desired. This is especially helpful when installing Apache Ambari and the HDP stack for the first time; it allows you to back out to fresh VMs running Ambari, and reinstall a fresh HDP stack if you encounter errors. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Install the HDP Stack
+
+The following instructions describe basic steps for using Ambari to install HDP components.
+
+1. On the Ambari screen, log in using the default username `admin` and password `admin`.
+
+2. On the welcome page, choose "Launch Install Wizard."
+3. Specify a name for your cluster, and then click Next.
+
+4. On the Select Version page, choose which version of HDP to install, and then click Next.
+
+5. On the Install Options page, complete the following steps:
+ 1. List the FQDNs of the virtual machines. For example:
+
+```
+c7001.ambari.apache.org
+c7002.ambari.apache.org
+c7003.ambari.apache.org
+```
+
+Alternatively, you can use a range expression:
+
+```
+c70[01-03].ambari.apache.org
+```
+ 2. Upload the `insecure_private_key` file that you created earlier: browse to the `ambari-vagrant` directory, navigate to the operating system folder for your VMs, and choose the key file.
+
+ 3. Change the SSH User Account to `vagrant`.
+
+ 4. Click "Register and Confirm."
+
+6. On the Confirm Hosts page, Ambari displays installation status.
+
+If you see a yellow banner warning that "Some warnings were encountered while performing checks against the registered hosts," click the link to review the warnings.
+
+See the Troubleshooting section (later on this page) for more information.
+
+7. When all host checks pass, close the warning window.
+
+8. Click Next to continue.
+9. On the Choose Services page, uncheck any components that you do not expect to use. If any are required for selected components, Ambari will request to add them back in.
+
+10. On the Assign Masters screen, choose hosts or simply click Next to use default values.
+
+11. On the Assign Slaves and Clients screen, choose hosts or simply click Next to use default values.
+
+12. On the Customize Services screen:
+ 1. Review services with warning notes, such as Hive and Ambari Metrics.
+
+ 2. Specify missing property values (such as admin passwords) as directed by the installation wizard. When all configurations have been addressed, click Next.
+
+13. On the Review screen, review the service definitions, and then click Next.
+
+14. The Install, Start and Test page shows deployment status. This step takes a while; on an early 2013 MacBook Pro, 2.7 GHz core i7 and 16 GB RAM, this step takes 45 minutes.
+
+15. When the cluster installs successfully, you can snapshot the VMs to have a fresh cluster with Ambari and HDP installed, to rerun later if desired. This allows you to experiment with the cluster and quickly restore back to a previous state if you wish. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Troubleshooting
+
+This subsection describes a few error conditions that might occur during Ambari installation and HDP cluster deployment:
+
+### Confirm Hosts
+
+If you see an error similar to the following on the Confirm Hosts page of the Ambari installation wizard, click the link to see the warnings:
+"Some warnings were encountered while performing checks against the 3 registered hosts above. Click here to see the warnings."
+
+**`ntpd` Error**
+
+On the Host Checks window, a warning that the `ntpd` service is not running indicates that you need to start `ntpd` on each host.
+
+
+
+To start the services, open a terminal window on each VM (from your Mac, `vagrant ssh <vm-name>`) and issue the following commands:
+
+```bash
+service ntpd start
+service ntpd status
+```
+
+You should see messages confirming that `ntpd` is running. Navigate back to the Host Checks window of the Ambari installation wizard and click Rerun Checks. When all checks complete successfully, click Close to continue the installation process.
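+
+Optionally, you can also make `ntpd` start automatically at boot so the warning does not reappear after a VM restart (a suggestion beyond what the wizard requires; `chkconfig` applies to CentOS 6):
+
+```bash
+chkconfig ntpd on
+```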
+
+### Install, Start and Test
+
+If the Install, Start and Test step fails with the following error during DataNode deployment:
+
+```
+Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
+Requires: snappy(x86-64) = 1.0.5-1.el6
+Installed: snappy-1.1.0-3.el7.x86_64 (@anaconda/7.2)
+```
+
+Run the following commands under the root account on each VM:
+
+```bash
+yum remove -y snappy-1.1.0-3.el7.x86_64
+yum install snappy-devel -y
+```
+
+### Stopping and Restarting Virtual Machines
+
+Hadoop is a complex ecosystem with a lot of status checks and cross-component messages. This can make it challenging to halt and restart several VMs and restore them later without warnings or errors.
+
+**Recommendations**
+
+If you want to preserve cluster state for a period of time and you plan to stop using your Mac during that time, simply put the Mac to sleep; the cluster should continue from where it left off when you wake the Mac.
+
+When stopping a set of VMs, if you don't need to save cluster state, it can be helpful to stop all services first, stop ambari-server (`ambari-server stop`), and then issue a Vagrant `halt` or `suspend` command.
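+
+For example (a sketch; the first command runs on the Ambari Server VM, the second from the OS directory on your Mac):
+
+```bash
+# on the Ambari Server VM, after stopping services in the web UI
+ambari-server stop
+
+# then, from the ambari-vagrant/<os> directory on your Mac
+vagrant halt
+```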
+
+When restarting a cluster after halting or taking a snapshot, check Ambari server status and restart it if necessary:
+
+```bash
+ambari-server status
+ambari-server start
+```
+
+After logging into the Ambari Web UI, expect to see alert warnings or errors due to timeout conditions. Check the associated messages to determine whether they might affect your use of the virtual cluster. If so, it can be helpful to stop and restart one or more associated components.
+
+# Reference: Basic Vagrant Commands
+
+The following table lists several common Vagrant commands. For more information, see Vagrant [Command-Line Interface](https://www.vagrantup.com/docs/cli/) documentation.
+
+**vagrant up**
+
+Create and configure guest machines. Example: `vagrant up c6406`
+
+`up.sh` is a wrapper for this call. You can use this command to start more VMs after you called `up.sh`.
+
+Note: if you do not specify the `<vm-name>` parameter, Vagrant will attempt to start ten VMs.
+
+**vagrant suspend [<vm-name>]**
+
+Save the current running state of a VM and stop the VM. A suspend effectively saves the _exact point-in-time state_ of a machine. When you issue a `resume` command, the VM begins running immediately from that point, rather than doing a full boot.
+
+When you are ready to begin working with it again, run `vagrant up`. The machine will resume where you left off. The main benefit of `suspend` is that it is very fast; it usually takes only 5 to 10 seconds to stop and start your work. The downside is that a suspended VM continues to use disk space, including the extra space needed to store the VM's state information (its RAM contents while running) on disk.
+
+Optional: Specify a specific VM to suspend.
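+
+Example (assuming a running VM named c6401):
+
+```bash
+vagrant suspend c6401   # suspend a single VM
+vagrant resume          # resume all suspended VMs
+```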
+
+**vagrant halt [<vm-name>]**
+
+Gracefully shut down the guest operating system and power down the VM. Unlike `suspend`, `halt` does not preserve the running state: when you run `vagrant up` again, the machine performs a full boot. The benefit is that a halted VM consumes no RAM and its disk contents are preserved; restarting is still faster than rebuilding a destroyed machine.
+
+Optional: Specify a specific VM to halt.
+
+**vagrant destroy -f [<vm-name>]**
+
+Remove all traces of the guest machine from your system. The `destroy` command stops the guest machine, powers it down, and removes all guest hard disks. When you are ready to begin working with it again, run `vagrant up`. The benefit is that all disk space and RAM consumed by the guest machine are reclaimed; your host machine is left clean. The downside is that the `vagrant up` operation will take extra time: rebuilding the environment takes the longest (compared with `suspend` and `halt`) because it re-imports and re-provisions the machine.
+
+Optional: Specify a specific VM to destroy.
+
+**vagrant ssh**
+
+Starts an SSH session into a running VM.
+
+Example: `vagrant ssh c6401`
+**vagrant status [<vm-name>]**
+
+Shows which VMs are running, suspended, and so on.
+
+**vagrant snapshot**
+
+A Vagrant snapshot saves the current state of a VM so that you can restart the VM from the same point at a future time. Commands include push, pop, save, restore, list, and delete. For more information, see [https://www.vagrantup.com/docs/cli/snapshot.html](https://www.vagrantup.com/docs/cli/snapshot.html).
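+
+For example, with the built-in snapshot subcommands (the snapshot name is illustrative):
+
+```bash
+vagrant snapshot save fresh-cluster      # capture the current state
+vagrant snapshot list                    # list saved snapshots
+vagrant snapshot restore fresh-cluster   # return to that state later
+```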
+
+Note: Upon resuming a snapshot, you may find that time-sensitive services (such as the HBase RegionServer) are down. If this happens, you will need to restart those services.
+
+**vagrant --help**
+
+Shows the full list of available Vagrant commands.
+
+**Recommendation**: After you start the VMs, but before you run anything on them, save a snapshot. This allows you to restore the initial state of your VMs. This process is much faster than starting the VMs from scratch and then reinstalling Ambari and HDP. You can return to the initial state without destroying other named snapshots that you create later.
+
+More information: [https://www.vagrantup.com/docs/getting-started/teardown.html](https://www.vagrantup.com/docs/getting-started/teardown.html)
+
+If you have favorite ways of starting and stopping VMs running a Hadoop cluster, please feel free to share them with the Ambari community. Thanks!
diff --git a/docs/quick-start/quick-start-guide.md b/docs/quick-start/quick-start-guide.md
new file mode 100644
index 0000000..ddab91f
--- /dev/null
+++ b/docs/quick-start/quick-start-guide.md
@@ -0,0 +1,252 @@
+---
+sidebar_position: 1
+---
+
+# Quick Start Guide
+
+This document shows how to quickly set up a cluster using Ambari on your local machine using virtual machines.
+
+This utilizes [VirtualBox](https://www.virtualbox.org/) and [Vagrant](https://www.vagrantup.com/) so you will need to install both.
+
+Note that the steps were tested on MacOS 10.8.4 / 10.8.5.
+
+After you have installed VirtualBox and Vagrant on your computer, check out the "ambari-vagrant" repo on github:
+
+```bash
+git clone https://github.com/u39kun/ambari-vagrant.git
+```
+
+Edit your **/etc/hosts** on your computer so that you will be able to resolve hostnames for the VMs:
+
+```bash
+sudo -s 'cat ambari-vagrant/append-to-etc-hosts.txt >> /etc/hosts'
+```
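+
+You can verify that the entries took effect with a quick lookup (the hostname is an example from the appended entries):
+
+```bash
+ping -c 1 c6801.ambari.apache.org
+```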
+
+Run any `vagrant` command once to generate the Vagrant private key; you will later copy it to your home directory (or some place convenient for you) so that it's easily accessible for uploading via Ambari Web:
+
+```bash
+vagrant
+```
+
+The above command shows the command usage and also creates a private key as ~/.vagrant.d/insecure_private_key. This key will be used in the following steps.
+
+First, change directory to ambari-vagrant:
+
+```bash
+cd ambari-vagrant
+```
+
+You will see subdirectories for different OS's. "cd" into the OS that you want to test. **centos6.8** is recommended as this is quicker to launch than other OS's.
+
+Now you can start VMs with the following command:
+
+```bash
+cd centos6.8
+cp ~/.vagrant.d/insecure_private_key .
+./up.sh
+```
+
+For example, **./up.sh 3** starts 3 VMs. Three is a good number for a machine with 16GB of RAM without taxing the system too much.
+
+With the default **Vagrantfile**, you can specify up to 10 (if your computer can handle it; you can even add more).
+
+VMs will have FQDNs of the form **c68[01-10].ambari.apache.org** when launched from the **centos6.8** directory; the hostname prefix depends on the OS directory you chose.
+
+If it is your first time running a vagrant command, run:
+
+```bash
+vagrant init
+```
+
+Log into the VM:
+
+```bash
+vagrant ssh c6801
+```
+
+Then make yourself root:
+
+```bash
+sudo su -
+```
+
+To install Ambari, you can build it yourself from source (see [Ambari Development](../ambari-dev/index.md)), or you can use published binaries.
+
+As this is a Quick Start Guide to get you going quickly, ready-made, publicly available binaries are referenced in the steps below.
+
+Note that these binaries were built and published by Hortonworks, a commercial Hadoop vendor, and are referenced here for your convenience; using them makes HDP, Hortonworks' distribution, available for installation via Apache Ambari. The instructions still work with Ambari binaries from any other vendor, organization, or individual (only the repo URLs need to change), and contributions that expand this guide to cover other ready-made, publicly accessible binaries are welcome. The instructions also work if you built the binaries yourself.
+
+```bash
+# CentOS 6 (for CentOS 7, replace centos6 with centos7 in the repo URL)
+#
+# to test public release 2.5.1
+wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo
+yum install ambari-server -y
+
+# Ubuntu 14 (for Ubuntu 16, replace ubuntu14 with ubuntu16 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.5.1.0/ambari.list
+apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
+apt-get update
+apt-get install ambari-server -y
+
+# SUSE 11 (for SUSE 12, replace suse11 with suse12 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/zypp/repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.5.1.0/ambari.repo
+zypper install ambari-server -y
+```
+
+Ambari offers many installation options (see [Ambari User Guides](https://cwiki.apache.org/confluence/display/AMBARI/Ambari+User+Guides)), but to get up and running quickly with default settings, you can run the following to set up and start ambari-server.
+
+```bash
+ambari-server setup -s
+ambari-server start
+```
+
+_For frontend developers only: see Frontend Development section below for extra setup instructions._
+
+Once Ambari Server is started, hit [http://c6801.ambari.apache.org:8080](http://c6801.ambari.apache.org:8080) (URL depends on the OS being tested) from your browser on your local computer.
+
+Note that Ambari Server can take some time to fully come up and be ready to accept connections. Keep hitting the URL until you get the login page.
+
+Once you are at the login page, login with the default username **admin** and password **admin**.
+
+On the Install Options page, use the FQDNs of the VMs. For example:
+
+```
+c6801.ambari.apache.org
+c6802.ambari.apache.org
+c6803.ambari.apache.org
+```
+
+Alternatively, you can use a range expression:
+
+```
+c68[01-03].ambari.apache.org
+```
+
+Specify the non-root SSH user **vagrant**, and upload the **insecure_private_key** file that you copied earlier as the private key.
+
+Follow the onscreen instructions to install your cluster.
+
+When done testing, run **vagrant destroy -f** to purge the VMs.
+
+**vagrant up**
+Starts a specific VM. up.sh is a wrapper for this call.
+
+Note: if you don't specify the `<vm-name>` parameter, Vagrant will attempt to start 10 VMs.
+
+**vagrant destroy -f**
+Destroys all VMs launched from the current directory (deletes them from disk as well)
+You can optionally specify a specific VM to destroy
+
+**vagrant suspend**
+Suspends (snapshot) all VMs launched from the current directory so that you can resume them later
+You can optionally specify a specific VM to suspend
+
+**vagrant resume**
+Resumes all suspended VMs launched from the current directory
+You can optionally specify a specific VM to resume
+
+**vagrant status**
+Shows which VMs are running, suspended, etc.
+
+## Modifying RAM for the VMs
+
+Each VM is allocated 2GB of RAM. This can be changed by editing **Vagrantfile**. To change the RAM allocation, modify the following line:
+
+```bash
+vb.customize ["modifyvm", :id, "--memory", 2048]
+```
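+
+For example, to allocate 4GB per VM instead:
+
+```bash
+vb.customize ["modifyvm", :id, "--memory", 4096]
+```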
+
+## Taking Snapshots
+
+Vagrant makes it easy to take snapshots of the entire cluster.
+
+First, install the snapshot plugin:
+
+```bash
+vagrant plugin install vagrant-vbox-snapshot --plugin-version=0.0.2
+```
+
+This enables the "vagrant snapshot" command. Note that the above installs version 0.0.2. The latest plugin version, 0.0.3, does not allow taking snapshots of the whole cluster at the same time (you have to specify a VM name).
+
+Run **vagrant snapshot** to see the syntax.
+
+Note that the plugin tries to take a snapshot of all VMs configured in Vagrantfile. If you are always using 3 VMs, for example, you can comment out c68[04-10] in Vagrantfile so that the snapshot commands only operate on c68[01-03].
+
+Note: Upon resuming a snapshot, you may find that time-sensitive services are down (e.g., the HBase RegionServer).
+
+Tip: After starting the VMs but before you do anything on the VMs, run "vagrant snapshot take init". This way, you can go back to the initial state of the VMs by running "vagrant snapshot go init"; this only takes seconds (much faster than starting the VMs up from scratch by using up.sh or "vagrant up"). Another advantage of this is that you can always go back to the initial state without destroying other named snapshots that you created.
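+
+For example:
+
+```bash
+vagrant snapshot take init   # right after the VMs first come up
+# ... experiment with the cluster ...
+vagrant snapshot go init     # return to the initial state in seconds
+```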
+
+## Misc
+
+All VMs launched will have a directory called **/vagrant** inside the VM. This maps to the **ambari-vagrant/** directory on your local computer. You can use this shared directory mapping to push files, etc.
+
+If you want to test OS's other than what's currently in the **ambari-vagrant** repo, please see [http://www.vagrantbox.es/](http://www.vagrantbox.es/) for all the readily available OS images you can test. Note that Ambari currently works on RHEL 5/6, CentOS 5/6, Oracle Linux 5/6, SUSE 11, and SLES 11. Ubuntu support is work in progress.
+
+## Kerberos Support
+
+Ambari supports adding Kerberos security to an existing Ambari-installed cluster. First, set up any one host as the KDC, as follows:
+
+Install the Kerberos server on the chosen host. For example, for CentOS/RedHat:
+
+```bash
+yum install krb5-server krb5-libs krb5-auth-dialog rng-tools -y
+```
+
+Create the Kerberos database:
+
+```bash
+rngd -r /dev/urandom -o /dev/random
+/usr/sbin/kdb5_util create -s
+```
+
+Update `/etc/krb5.conf` on the KDC host. For example, if your realm is `EXAMPLE.COM` and the KDC host is `c6801.ambari.apache.org`:
+
+```bash
+[realms]
+ EXAMPLE.COM = {
+ admin_server = c6801.ambari.apache.org
+ kdc = c6801.ambari.apache.org
+ }
+```
+
+Restart the Kerberos services. For example, for CentOS/RedHat:
+
+```bash
+/etc/rc.d/init.d/krb5kdc restart
+/etc/rc.d/init.d/kadmin restart
+```
+
+Then create an admin principal; `kadmin.local` will prompt for the new principal's password:
+
+```bash
+$ sudo kadmin.local
+kadmin.local: add_principal admin/admin@EXAMPLE.COM
+WARNING: no policy specified for admin/admin@EXAMPLE.COM; defaulting to no policy
+Enter password for principal "admin/admin@EXAMPLE.COM":
+Re-enter password for principal "admin/admin@EXAMPLE.COM":
+Principal "admin/admin@EXAMPLE.COM" created.
+
+```
+
+Remember the password for this principal; the Ambari Kerberos Wizard will request it later.
+
+Distribute the updated `/etc/krb5.conf` file to the remaining hosts in the cluster.
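+
+One convenient way to distribute the file in this Vagrant setup is via the shared **/vagrant** directory described under Misc above (a sketch):
+
+```bash
+# on the KDC host
+cp /etc/krb5.conf /vagrant/
+
+# on each remaining host
+cp /vagrant/krb5.conf /etc/krb5.conf
+```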
+
+Navigate to _Ambari Dashboard —> Admin —> Kerberos_ to launch the Kerberos Wizard and follow the wizard steps. If you run into errors, the Ambari server logs can be found at `/var/log/ambari-server/ambari-server.log`.
+
+## Pre-Configured Development Environment
+
+Simply edit **Vagrantfile** to launch a VM with all the tools necessary to build Ambari from source.
+
+```bash
+cd ambari-vagrant/centos6.8
+vi Vagrantfile
+```
+
+## Frontend Development
+
+You can use this setup to develop and test out Ambari Web frontend code against a real Ambari Server on a multi-node environment. You need to first fork the apache/ambari repository if you haven't already. Read the [How to Contribute](../ambari-dev/how-to-contribute.md) guide for instructions on how to fork.
+
+On the host machine:
+
+```bash
+cd ambari-vagrant/centos6.8
+# Replace the [forked-repository-url] with your fork's clone url
+git clone [forked-repository-url] ambari
+cd ambari/ambari-web
+npm install
+brunch w
+```
+
+On c6801 (where Ambari Server is installed):
+
+```bash
+cd /usr/lib/ambari-server
+mv web web-orig
+ln -s /vagrant/ambari/ambari-web/public web
+ambari-server restart
+```
+
+With this setup, whenever you change the content of ambari-web files (under ambari-vagrant/ambari/) on the host machine, brunch will pick up the changes in the background and update ambari-vagrant/ambari/ambari-web/public. Because of the symbolic link, the changes are automatically picked up by Ambari Server. All you have to do is hit refresh on the browser to see the frontend code changes reflected.
+
+Not seeing code changes as expected? If you have run the maven command to build Ambari previously, you will see files called app.js.gz and vendor.js.gz under the public folder. You need to delete these files for the frontend code changes to be effective, as the app.js.gz and vendor.js.gz files take precedence over app.js and vendor.js, respectively.
diff --git a/docusaurus.config.js b/docusaurus.config.js
new file mode 100644
index 0000000..97777cf
--- /dev/null
+++ b/docusaurus.config.js
@@ -0,0 +1,163 @@
+// @ts-check
+// Note: type annotations allow type checking and IDEs autocompletion
+
+const lightCodeTheme = require('prism-react-renderer/themes/github');
+const darkCodeTheme = require('prism-react-renderer/themes/dracula');
+
+/** @type {import('@docusaurus/types').Config} */
+const config = {
+ title: 'Apache Ambari',
+ tagline: 'The Apache Ambari project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs.',
+ url: 'https://ambari.apache.org',
+ baseUrl: '/',
+ onBrokenLinks: 'throw',
+ onBrokenMarkdownLinks: 'warn',
+ favicon: 'img/favicon.ico',
+ organizationName: 'apache',
+ projectName: 'ambari',
+
+ // Even if you don't use internationalization, you can use this field to set useful
+ // metadata like html lang. For example, if your site is Chinese, you may want
+ // to replace "en" with "zh-Hans".
+ i18n: {
+ defaultLocale: 'en',
+ locales: ['en'],
+ },
+
+ presets: [
+ [
+ 'classic',
+ /** @type {import('@docusaurus/preset-classic').Options} */
+ ({
+ docs: {
+ sidebarPath: require.resolve('./sidebars.js'),
+ breadcrumbs: true,
+ // Please change this to your repo.
+ // Remove this to remove the "edit this page" links.
+ editUrl:
+ 'https://github.com/vivostar/vivostar.github.io/tree/master/',
+ },
+ blog: {
+ showReadingTime: true,
+ // Please change this to your repo.
+ // Remove this to remove the "edit this page" links.
+ editUrl:
+ 'https://github.com/vivostar/vivostar.github.io/tree/master/',
+ },
+ theme: {
+ customCss: require.resolve('./src/css/custom.css'),
+ },
+ }),
+ ],
+ ],
+
+ themeConfig:
+ /** @type {import('@docusaurus/preset-classic').ThemeConfig} */
+ ({
+ docs: {
+ sidebar: {
+ hideable: true,
+ },
+ },
+ navbar: {
+ title: 'Apache Ambari',
+ logo: {
+ alt: 'Apache Ambari Logo',
+ src: 'img/ambari-logo.png',
+ },
+ items: [
+ {
+ type: 'doc',
+ docId: 'introduction',
+ position: 'left',
+ label: 'Docs',
+ },
+ {
+ href: "https://cwiki.apache.org/confluence/display/AMBARI/Ambari",
+ label: "Wiki",
+ position: "left",
+ },
+
+ {
+ type: 'dropdown',
+ position: 'left',
+ label: 'Releases',
+ items: [
+ {
+ label: '2.7.6',
+ href: 'https://www.apache.org/dyn/closer.cgi/ambari/ambari-2.7.6',
+ },
+ {
+ label: '2.7.5',
+ href: 'https://www.apache.org/dyn/closer.cgi/ambari/ambari-2.7.5',
+ },
+ ],
+ },
+ {
+ type: 'dropdown',
+ position: 'left',
+ label: 'Project Information',
+ items: [
+ {
+ label: 'Old Version Website',
+ target: '_blank',
+ to: '/old/',
+ },
+ {
+ label: 'Swagger API Doc',
+ target: '_blank',
+ to: '/swagger/',
+ },
+ {
+ label: 'Java Doc',
+ target: '_blank',
+ to: '/javadoc/apidocs',
+ },
+ {
+ label: 'Project Team',
+ target: '_blank',
+ to: '/team',
+ },
+ {
+ label: 'JIRA',
+ href: 'https://issues.apache.org/jira/projects/AMBARI/issues',
+ },
+ {
+ label: 'User Group',
+ href: 'https://www.meetup.com/Apache-Ambari-User-Group/',
+ },
+ {
+ label: 'Mailing List',
+ target: '_blank',
+ to: '/old/mail-lists.html',
+ },
+ {
+ label: "Project License",
+ href: 'https://www.apache.org/licenses/'
+ }
+ ],
+ },
+ {
+ type: 'docsVersionDropdown',
+ position: 'right',
+ },
+ {
+ href: 'https://github.com/apache/ambari',
+ label: 'GitHub',
+ position: 'right',
+ },
+ ],
+ },
+ footer: {
+ style: 'dark',
+ copyright: `Copyright © ${new Date().getFullYear()} Apache Ambari. Built with Docusaurus.`,
+ },
+ prism: {
+ theme: lightCodeTheme,
+ darkTheme: darkCodeTheme,
+ },
+ }),
+ plugins: ['docusaurus-plugin-less',],
+};
+
+module.exports = config;
diff --git a/package.json b/package.json
new file mode 100644
index 0000000..23e11d7
--- /dev/null
+++ b/package.json
@@ -0,0 +1,75 @@
+{
+ "name": "ambari-website",
+ "version": "0.0.1",
+ "private": true,
+ "scripts": {
+ "commit": "git add -A && git-cz",
+ "docusaurus": "docusaurus",
+ "start": "docusaurus start",
+ "build": "docusaurus build",
+ "swizzle": "docusaurus swizzle",
+ "deploy": "docusaurus deploy",
+ "clear": "docusaurus clear",
+ "serve": "docusaurus serve",
+ "write-translations": "docusaurus write-translations",
+ "write-heading-ids": "docusaurus write-heading-ids",
+ "typecheck": "tsc"
+ },
+ "config": {
+ "commitizen": {
+ "path": "cz-conventional-changelog"
+ }
+ },
+ "lint-staged": {
+ "*.{ts,tsx,js}": [
+ "eslint --fix",
+ "prettier --write"
+ ],
+ "*.{css}": [
+ "stylelint --fix",
+ "prettier --write"
+ ]
+ },
+ "commitlint": {
+ "extends": [
+ "@commitlint/config-conventional"
+ ]
+ },
+ "dependencies": {
+ "@docusaurus/core": "2.1.0",
+ "@docusaurus/preset-classic": "2.1.0",
+ "@mdx-js/react": "^1.6.22",
+ "clsx": "^1.2.1",
+ "docusaurus-plugin-less": "^2.0.2",
+ "less": "^4.1.3",
+ "less-loader": "^11.1.0",
+ "prism-react-renderer": "^1.3.5",
+ "react": "^17.0.2",
+ "react-dom": "^17.0.2",
+ "react-particles": "^2.5.3",
+ "tsparticles": "^2.5.3"
+ },
+ "devDependencies": {
+ "@commitlint/config-conventional": "^17.1.0",
+ "@docusaurus/module-type-aliases": "2.1.0",
+ "@tsconfig/docusaurus": "^1.0.5",
+ "cz-conventional-changelog": "^3.3.0",
+ "standard-version": "^9.5.0",
+ "typescript": "^4.7.4"
+ },
+ "browserslist": {
+ "production": [
+ ">0.5%",
+ "not dead",
+ "not op_mini all"
+ ],
+ "development": [
+ "last 1 chrome version",
+ "last 1 firefox version",
+ "last 1 safari version"
+ ]
+ },
+ "engines": {
+ "node": ">=16.14"
+ }
+}
diff --git a/pom.xml b/pom.xml
new file mode 100644
index 0000000..af0dd7b
--- /dev/null
+++ b/pom.xml
@@ -0,0 +1,92 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/maven-v4_0_0.xsd">
+ <name>Ambari-Website</name>
+ <description>
+ Ambari is a web-based tool for performing installation, management, and monitoring of Apache Hadoop clusters. The stack of components that are currently supported by
+ Ambari includes Falcon, HBase, HCatalog, HDFS, Hive, MapReduce, Oozie, Pig, Sqoop, Storm, Templeton, Tez, YARN and ZooKeeper.
+ </description>
+ <url>https://ambari.apache.org</url>
+ <modelVersion>4.0.0</modelVersion>
+
+ <groupId>org.apache.ambari</groupId>
+ <version>2.7.6</version>
+ <artifactId>ambari-website</artifactId>
+ <organization>
+ <name>Apache Software Foundation</name>
+ <url>https://www.apache.org/</url>
+ </organization>
+
+ <build>
+ <plugins>
+ <plugin>
+ <artifactId>maven-antrun-plugin</artifactId>
+ <version>3.1.0</version>
+ <executions>
+ <execution>
+ <phase>generate-sources</phase>
+ <configuration>
+ <target>
+ <!--generate old version site-->
+ <delete dir="static/old"/>
+ <exec executable="mvn" dir="../ambari/docs">
+ <arg line="site:site" />
+ </exec>
+ <mkdir dir="static/old" />
+ <copy todir="static/old">
+ <fileset dir="../ambari/docs/target"/>
+ </copy>
+
+ <!--generate swagger api doc-->
+ <delete dir="static/swagger" />
+ <exec executable="mvn" dir="../ambari/ambari-server">
+ <arg line="swagger-codegen:generate@generate-swagger-html2" />
+ </exec>
+ <mkdir dir="static/swagger" />
+ <copy todir="static/swagger">
+ <fileset dir="../ambari/ambari-server/target/generated-sources/swagger"/>
+ </copy>
+
+ <!--generate ambari server javadoc -->
+ <delete dir="static/javadoc/apidocs" />
+ <exec executable="mvn" dir="../ambari/ambari-server">
+ <arg value="javadoc:javadoc"/>
+ <arg value="-Ddoclint=none"/>
+ </exec>
+ <mkdir dir="static/javadoc/apidocs" />
+ <copy todir="static/javadoc/apidocs">
+ <fileset dir="../ambari/ambari-server/target/site/apidocs"/>
+ </copy>
+ </target>
+ </configuration>
+ <goals>
+ <goal>
+ run
+ </goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ </plugins>
+ </build>
+
+
+</project>
diff --git a/sidebars.js b/sidebars.js
new file mode 100644
index 0000000..76c665a
--- /dev/null
+++ b/sidebars.js
@@ -0,0 +1,159 @@
+/**
+ * Creating a sidebar enables you to:
+ - create an ordered group of docs
+ - render a sidebar for each doc of that group
+ - provide next/previous navigation
+
+ The sidebars can be generated from the filesystem, or explicitly defined here.
+
+ Create as many sidebars as you want.
+ */
+
+// @ts-check
+
+/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
+const sidebars = {
+ // ambariSidebar: [{type: 'autogenerated', dirName: '.'}],
+
+ ambariSidebar: [
+ "introduction",
+ {
+ type: "category",
+ label: "Quick Start",
+ collapsible: true,
+ collapsed: false,
+ items: [
+ "quick-start/quick-start-guide",
+ "quick-start/quick-start-for-new-vm-users"
+ ]
+ },
+ {
+ type: "category",
+ label: "Ambari Design",
+ collapsible: true,
+ collapsed: false,
+ link: {type: 'doc', id: 'ambari-design/index'},
+ items: [
+ "ambari-design/alerts",
+ {
+ type: "category",
+ label: "Automated Kerberizaton",
+ link: {type: "doc", id: "ambari-design/kerberos/index"},
+ items:[
+ "ambari-design/kerberos/kerberos_descriptor",
+ "ambari-design/kerberos/kerberos_service",
+ "ambari-design/kerberos/enabling_kerberos"
+ ]
+ },
+ {
+ type: "category",
+ label: "Blueprints",
+ link: {type: "doc", id: "ambari-design/blueprints/index"},
+ items: [
+ "ambari-design/blueprints/blueprint-ha",
+ "ambari-design/blueprints/blueprint-ranger"
+ ]
+ },
+ "ambari-design/enhanced-configs/index",
+ "ambari-design/service-dashboard/index",
+
+ {
+ type: "category",
+ label: "Metrics",
+ link: { type: "doc", id: "ambari-design/metrics/index" },
+ items: [
+ "ambari-design/metrics/metrics-collector-api-specification",
+ "ambari-design/metrics/configuration",
+ "ambari-design/metrics/operations",
+ "ambari-design/metrics/troubleshooting",
+ "ambari-design/metrics/metrics-api-specification",
+ "ambari-design/metrics/stack-defined-metrics",
+ "ambari-design/metrics/upgrading-ambari-metrics-system",
+ "ambari-design/metrics/ambari-server-metrics",
+ "ambari-design/metrics/ambari-metrics-whitelisting"
+ ]
+ },
+
+ "ambari-design/quick-links",
+
+ {
+ type: "category",
+ label: "Stacks and Services",
+ link: {type: "doc", id: "ambari-design/stack-and-services/index"},
+ items: [
+ "ambari-design/stack-and-services/overview",
+ "ambari-design/stack-and-services/custom-services",
+ "ambari-design/stack-and-services/defining-a-custom-stack-and-services",
+ "ambari-design/stack-and-services/extensions",
+ "ambari-design/stack-and-services/how-to-define-stacks-and-services",
+ "ambari-design/stack-and-services/management-packs",
+ "ambari-design/stack-and-services/stack-inheritance",
+ "ambari-design/stack-and-services/stack-properties",
+ "ambari-design/stack-and-services/writing-metainfo",
+ "ambari-design/stack-and-services/faq",
+ "ambari-design/stack-and-services/hooks",
+ "ambari-design/stack-and-services/version-functions-conf-select-and-stack-select"
+ ]
+ },
+
+ "ambari-design/technology-stack",
+
+ {
+ type: "category",
+ label: "Views",
+ link: {type: "doc", id: "ambari-design/views/index"},
+ items: [
+ "ambari-design/views/framework-services",
+ "ambari-design/views/view-api",
+ "ambari-design/views/view-definition"
+ ]
+ }
+
+ ]
+ },
+
+ {
+ type: "category",
+ label: "Ambari Development",
+ link: {type: "doc", id: "ambari-dev/index"},
+ items: [
+ "ambari-dev/development-process-for-new-major-features",
+ "ambari-dev/ambari-code-layout",
+ "ambari-dev/apache-ambari-jira",
+ "ambari-dev/coding-guidelines-for-ambari",
+ "ambari-dev/how-to-commit",
+ "ambari-dev/how-to-contribute",
+ "ambari-dev/code-review-guidelines",
+ "ambari-dev/releasing-ambari",
+ "ambari-dev/admin-view-ambari-admin-development",
+ "ambari-dev/unit-test-reports",
+ "ambari-dev/development-in-docker",
+ "ambari-dev/developer-tools",
+ "ambari-dev/feature-flags",
+ "ambari-dev/verifying-release-candidate"
+ ]
+ },
+
+ {
+ type: "category",
+ label: "Ambari Plugin Contributions",
+ link: {type: "doc", id: "ambari-plugin-contribution/index"},
+ items: [
+ {
+ type: "category",
+ label: "Ambari SCOM Management Pack",
+ link: {type: "doc", id: "ambari-plugin-contribution/scom/index"},
+ items: [
+ "ambari-plugin-contribution/scom/installation"
+ ]
+ },
+ "ambari-plugin-contribution/step-by-step"
+ ]
+ },
+
+ "ambari-alerts"
+ ]
+
+};
+
+module.exports = sidebars;
diff --git a/src/components/Confetti/index.tsx b/src/components/Confetti/index.tsx
new file mode 100644
index 0000000..4975be7
--- /dev/null
+++ b/src/components/Confetti/index.tsx
@@ -0,0 +1,129 @@
+import React from "react";
+import { useCallback } from "react";
+import type { Container, Engine } from "tsparticles-engine";
+import Particles from "react-particles";
+import { loadFull } from "tsparticles";
+import styles from './styles.module.css';
+
+export default function Confetti() {
+ const particlesInit = useCallback(async (engine: Engine) => {
+ // you can initialize the tsParticles instance (engine) here, adding custom shapes or presets
+ // this loads the tsparticles package bundle, it's the easiest method for getting everything ready
+ // starting from v2 you can add only the features you need reducing the bundle size
+ await loadFull(engine);
+ }, []);
+
+ const particlesLoaded = useCallback(async (container: Container | undefined) => {
+ // intentionally empty: the container is available here if post-load tweaks are ever needed
+ }, []);
+
+ // id="confetti"
+ // className={styles.confetti}
+
+ return (
+ <Particles
+ className={styles.confetti}
+ id="confetti"
+ init={particlesInit}
+ loaded={particlesLoaded}
+ options={{
+ fullScreen: {
+ enable: false,
+ },
+ particles: {
+ number: {
+ value: 20,
+ density: {
+ enable: true,
+ value_area: 200
+ }
+ },
+ size: {
+ value: 5,
+ random: true,
+ anim: {
+ enable: true,
+ speed: 1,
+ size_min: 1,
+ sync: false
+ }
+ },
+ shape: {
+ type: "circle",
+ },
+ color: {
+ value: ["#EA5532", "#F6AD3C", "#FFF33F", "#00A95F", "#00ADA9", "#00AFEC", "#4D4398", "#E85298"],
+ },
+ opacity: {
+ value: 1,
+ animation: {
+ enable: false,
+ startValue: "max",
+ destroy: "min",
+ speed: 10,
+ sync: true
+ }
+ },
+ move: {
+ enable: true,
+ speed: {
+ min: 1,
+ max: 5
+ },
+ direction: "top",
+ random: false,
+ straight: false,
+ out_mode: "out",
+ bounce: false,
+ attract: {
+ enable: false,
+ rotateX: 600,
+ rotateY: 1200
+ }
+ },
+ roll: {
+ darken: {
+ enable: true,
+ value: 30
+ },
+ enable: true,
+ speed: {
+ min: 15,
+ max: 25
+ }
+ },
+ tilt: {
+ direction: "random",
+ enable: true,
+ move: true,
+ value: {
+ min: 0,
+ max: 360
+ },
+ animation: {
+ enable: true,
+ speed: 60
+ }
+ },
+ },
+ interactivity: {
+ detect_on: "canvas",
+ events: {
+ onhover: {
+ enable: false,
+ },
+ onclick: {
+ enable: false,
+ },
+ resize: true
+ },
+ },
+ retina_detect: true
+ }}
+
+
+ />
+ );
+ };
\ No newline at end of file
diff --git a/src/components/Confetti/styles.module.css b/src/components/Confetti/styles.module.css
new file mode 100644
index 0000000..3b8a188
--- /dev/null
+++ b/src/components/Confetti/styles.module.css
@@ -0,0 +1,11 @@
+.confetti {
+ margin: 0;
+ padding: 0;
+ display: flex;
+ position: absolute;
+ top: 0;
+ left: 0;
+ bottom: 0;
+ right: 0;
+ z-index: -1;
+}
\ No newline at end of file
diff --git a/src/components/HomepageFeatures/index.tsx b/src/components/HomepageFeatures/index.tsx
new file mode 100644
index 0000000..3a349af
--- /dev/null
+++ b/src/components/HomepageFeatures/index.tsx
@@ -0,0 +1,89 @@
+
+ /*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import React from 'react';
+import clsx from 'clsx';
+import styles from './styles.module.css';
+
+type FeatureItem = {
+ title: string;
+ Svg: React.ComponentType<React.ComponentProps<'svg'>>;
+ description: JSX.Element;
+};
+
+const FeatureList: FeatureItem[] = [
+ {
+ title: 'Provision a Hadoop Cluster',
+ Svg: require('@site/static/img/hadoop-logo.svg').default,
+ description: (
+ <>
+ Ambari provides a step-by-step wizard for installing Hadoop services across
+ any number of hosts.
+ </>
+ ),
+ },
+ {
+ title: 'Manage a Hadoop Cluster',
+ Svg: require('@site/static/img/hammer-and-wrench.svg').default,
+ description: (
+ <>
+ Ambari provides central management for starting, stopping, and reconfiguring
+ Hadoop services across the entire cluster.
+ </>
+ ),
+ },
+ {
+ title: 'Monitor a Hadoop Cluster',
+ Svg: require('@site/static/img/monitor.svg').default,
+ description: (
+ <>
+ Ambari provides a dashboard for monitoring health and status of the Hadoop cluster.
+ </>
+ ),
+ },
+];
+
+function Feature({title, Svg, description}: FeatureItem) {
+ return (
+ <div className={clsx('col col--4')}>
+ <div className="text--center">
+ <Svg className={styles.featureSvg} role="img" />
+ </div>
+ <div className="text--center padding-horiz--md">
+ <h3>{title}</h3>
+ <p>{description}</p>
+ </div>
+ </div>
+ );
+}
+
+export default function HomepageFeatures(): JSX.Element {
+ return (
+ <section className={styles.features}>
+ <div className="container">
+ <div className="row">
+ {FeatureList.map((props, idx) => (
+ <Feature key={idx} {...props} />
+ ))}
+ </div>
+ </div>
+ </section>
+ );
+}
diff --git a/src/components/HomepageFeatures/styles.module.css b/src/components/HomepageFeatures/styles.module.css
new file mode 100644
index 0000000..b248eb2
--- /dev/null
+++ b/src/components/HomepageFeatures/styles.module.css
@@ -0,0 +1,11 @@
+.features {
+ display: flex;
+ align-items: center;
+ padding: 2rem 0;
+ width: 100%;
+}
+
+.featureSvg {
+ height: 200px;
+ width: 200px;
+}
diff --git a/src/css/custom.css b/src/css/custom.css
new file mode 100644
index 0000000..e31822f
--- /dev/null
+++ b/src/css/custom.css
@@ -0,0 +1,60 @@
+/**
+ * Any CSS included here will be global. The classic template
+ * bundles Infima by default. Infima is a CSS framework designed to
+ * work well for content-centric websites.
+ */
+
+/* You can override the default Infima variables here. */
+:root {
+ --ifm-color-primary: #2e8555;
+ --ifm-color-primary-dark: #29784c;
+ --ifm-color-primary-darker: #277148;
+ --ifm-color-primary-darkest: #205d3b;
+ --ifm-color-primary-light: #33925d;
+ --ifm-color-primary-lighter: #359962;
+ --ifm-color-primary-lightest: #3cad6e;
+ --ifm-code-font-size: 95%;
+ --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
+
+ --ifm-color-hero-title: #0066cc;
+}
+
+/* For readability concerns, you should choose a lighter palette in dark mode. */
+[data-theme='dark'] {
+ --ifm-color-primary: #25c2a0;
+ --ifm-color-primary-dark: #21af90;
+ --ifm-color-primary-darker: #1fa588;
+ --ifm-color-primary-darkest: #1a8870;
+ --ifm-color-primary-light: #29d5b0;
+ --ifm-color-primary-lighter: #32d8b4;
+ --ifm-color-primary-lightest: #4fddbf;
+ --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
+}
+
+.docusaurus-highlight-code-line {
+ background-color: rgba(0, 0, 0, 0.1);
+ display: block;
+ margin: 0 calc(-1 * var(--ifm-pre-padding));
+ padding: 0 var(--ifm-pre-padding);
+}
+
+[data-theme='dark'] .docusaurus-highlight-code-line {
+ background-color: rgba(0, 0, 0, 0.3);
+}
+
+
+.hero {
+ color: var(--ifm-color-hero-title);
+ background-color: transparent;
+
+ background-position: right 4rem top 1rem;
+ background-repeat: no-repeat;
+}
+
+.hero .hero__title {
+ font-size: 5rem;
+}
+
+.hero .hero__subtitle {
+ font-size: 2rem;
+}
diff --git a/src/hooks/useMediaQuery.jsx b/src/hooks/useMediaQuery.jsx
new file mode 100644
index 0000000..93d138e
--- /dev/null
+++ b/src/hooks/useMediaQuery.jsx
@@ -0,0 +1,17 @@
+import { useState, useEffect } from "react";
+
+const useMediaQuery = (query) => {
+ const [matches, setMatches] = useState(false);
+ useEffect(() => {
+ const media = window.matchMedia(query);
+ if (media.matches !== matches) {
+ setMatches(media.matches);
+ }
+ const listener = () => setMatches(media.matches);
+ window.addEventListener("resize", listener);
+ return () => window.removeEventListener("resize", listener);
+ }, [matches, query]);
+ return matches;
+}
+
+export default useMediaQuery;
\ No newline at end of file
diff --git a/src/pages/index.module.css b/src/pages/index.module.css
new file mode 100644
index 0000000..30a77b6
--- /dev/null
+++ b/src/pages/index.module.css
@@ -0,0 +1,32 @@
+/**
+ * CSS files with the .module.css suffix will be treated as CSS modules
+ * and scoped locally.
+ */
+
+.heroBanner {
+ padding: 4rem 0;
+ text-align: center;
+ position: relative;
+ overflow: hidden;
+}
+
+@media screen and (max-width: 996px) {
+ .heroBanner {
+ padding: 2rem;
+ }
+}
+
+.buttons {
+ display: flex;
+ align-items: center;
+ justify-content: center;
+
+}
+
+.button_icon{
+ margin-right: 8px;
+ margin-top: -5px;
+ width: 28px;
+ height: 28px;
+ vertical-align: middle;
+}
diff --git a/src/pages/index.tsx b/src/pages/index.tsx
new file mode 100644
index 0000000..c6177bc
--- /dev/null
+++ b/src/pages/index.tsx
@@ -0,0 +1,124 @@
+
+ /*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import React, { useState } from 'react';
+import clsx from 'clsx';
+import Link from '@docusaurus/Link';
+import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
+import useBaseUrl from '@docusaurus/useBaseUrl';
+import Layout from '@theme/Layout';
+import HomepageFeatures from '@site/src/components/HomepageFeatures';
+
+import styles from './index.module.css';
+import Confetti from '../components/Confetti';
+import useMediaQuery from '../hooks/useMediaQuery';
+
+function HomepageHeader() {
+
+ const isMobileScreen = useMediaQuery("(max-width: 800px)")
+
+ const {siteConfig} = useDocusaurusContext();
+
+ const [flag, setFlag] = useState(1);
+
+ function changeFlag(val) {
+ setFlag(val == 1 ? 2 : 1)
+}
+
+ return (
+ <header className={clsx('hero hero--primary', styles.heroBanner)}>
+ <div className="container">
+ <Confetti />
+ <h1 className="hero__title">{siteConfig.title}</h1>
+ <p className="hero__subtitle">{siteConfig.tagline}</p>
+
+ {isMobileScreen?
+ (
+ <>
+ <div className={styles.buttons}>
+ <Link
+ className="button button--primary button--lg margin-bottom--sm"
+ to="/docs/quick-start/quick-start-guide">
+ GET STARTED
+ </Link>
+ </div>
+ <div className={styles.buttons}>
+ <Link
+ className="button button--secondary button--outline button--lg margin-top--sm"
+ to="https://github.com/apache/ambari"
+ >
+ <img className={styles.button_icon} src={useBaseUrl('/img/github' + flag + '.svg')} alt="github"/>
+ <span>GITHUB</span>
+ </Link>
+ </div>
+ <div className={styles.buttons}>
+ <Link
+ className="button button--secondary button--outline button--lg margin-top--sm"
+ to="https://the-asf.slack.com/archives/C014FSPE668">
+ <img className={styles.button_icon} src='/img/slack.svg' alt="slack"/>
+ <span>SLACK</span>
+ </Link>
+ </div>
+ </>
+ )
+ :
+ (
+ <div className={styles.buttons}>
+ <Link
+ className="button button--primary button--lg margin-right--sm"
+ to="/docs/quick-start/quick-start-guide">
+ GET STARTED
+ </Link>
+ <Link
+ className="button button--secondary button--outline button--lg margin-left--sm"
+ to="https://github.com/apache/ambari"
+ onMouseOver={() => changeFlag(1)} onMouseOut={() => changeFlag(2)}
+ >
+ <img className={styles.button_icon} src={useBaseUrl('/img/github' + flag + '.svg')} alt="github"/>
+ <span>GITHUB</span>
+ </Link>
+ <Link
+ className="button button--secondary button--outline button--lg margin-left--sm"
+ to="https://the-asf.slack.com/archives/C014FSPE668">
+ <img className={styles.button_icon} src='/img/slack.svg' alt="slack"/>
+ <span>SLACK</span>
+ </Link>
+ </div>
+ )
+ }
+ </div>
+ </header>
+ );
+}
+
+export default function Home(): JSX.Element {
+ const {siteConfig} = useDocusaurusContext();
+ return (
+
+ <Layout
+ title={`${siteConfig.title}`}
+ description="Description will go into a meta tag in <head />">
+ <HomepageHeader />
+ <main>
+ <HomepageFeatures />
+ </main>
+ </Layout>
+ );
+}
diff --git a/src/pages/team/index.js b/src/pages/team/index.js
new file mode 100644
index 0000000..b19e483
--- /dev/null
+++ b/src/pages/team/index.js
@@ -0,0 +1,57 @@
+import React from 'react';
+import useIsBrowser from '@docusaurus/useIsBrowser';
+import config from "./languages.json";
+import Layout from '@theme/Layout';
+import './index.less';
+
+export default function () {
+ const isBrowser = useIsBrowser();
+ const language = isBrowser && location.pathname.indexOf('/zh-CN/') === 0 ? 'zh-CN' : 'en';
+ const dataSource = config?.[language];
+
+ return (
+ <Layout title="team" description="team page">
+ <div className="container">
+ <div className="block team_page">
+ <h3 className="team_title">Ambari Team</h3>
+ <p className="team_desc" dangerouslySetInnerHTML={ { __html: dataSource.info.desc } }/>
+ <h3 className="team_title">PMC</h3>
+ <p className="team_desc">{dataSource.info.tip}</p>
+ <ul className="character_list">
+ {
+ dataSource.pmc.map((item, i) => (
+ <a href={'https://github.com/' + item.githubId} key={i} target="_blank">
+ <li className="character_item text_center" style={{'listStyle': 'none'}}>
+ <img className="character_avatar" src={item.avatarUrl} alt={item.name}/>
+ <div className="character_desc">
+ <h3 className="character_name">{item.name}</h3>
+ <h3 className="character_id"><span className="githubId">githubId:</span>{item.githubId}</h3>
+ </div>
+ </li>
+ </a>
+ ))
+ }
+ </ul>
+
+ <h3 className="team_title">Committers</h3>
+ <p className="team_desc">{dataSource.info.tip}</p>
+ <ul className="character_list">
+ {
+ dataSource.committer.map((item, i) => (
+ <a href={'https://github.com/' + item.githubId} key={i} target="_blank">
+ <li className="character_item text_center" style={{'listStyle': 'none'}}>
+ <img className="character_avatar" src={item.avatarUrl} alt={item.name}/>
+ <div className="character_desc">
+ <h3 className="character_name">{item.name}</h3>
+ <h3 className="character_id"><span className="githubId">githubId:</span>{item.githubId}</h3>
+ </div>
+ </li>
+ </a>
+ ))
+ }
+ </ul>
+ </div>
+ </div>
+ </Layout>
+ );
+}
diff --git a/src/pages/team/index.less b/src/pages/team/index.less
new file mode 100644
index 0000000..26bf048
--- /dev/null
+++ b/src/pages/team/index.less
@@ -0,0 +1,91 @@
+ @active-color: #3B84F6;//#2863f9
+ @enhance-color: #0F1222;
+
+.team_page {
+
+ a {
+ text-decoration: none;
+ }
+
+ margin-top: 50px;
+
+ .team_title {
+ font-size: 25px;
+ font-weight: 500;
+ color: #0F1222;
+ margin-top: 50px;
+ }
+
+ .team_desc {
+ margin-bottom: 40px;
+ }
+
+ .character_list {
+ display: grid;
+ grid-template-columns: repeat(6, 1fr);
+ grid-column-gap: 15px;
+ grid-row-gap: 15px;
+ padding: 20px 0 0px;
+
+ .character_item {
+ border: 1px solid rgba(205, 221, 250, 0.8);
+ background-color: rgba(205, 222, 234, 0.2);
+ box-shadow: 0 3px 5px rgb(47 85 212 / 8%);
+ border-radius: 2px;
+ min-width: 0;
+ padding: 0 10px 5px;
+ margin: 40px 0 42px 0;
+
+ .character_avatar {
+ width: 120px;
+ height: 120px;
+ background: #D8D8D8;
+ display: inline-block;
+ border-radius: 50%;
+ border: 5px solid rgba(255,255,255,.08);
+ margin: -4rem auto -.5rem;
+ }
+
+ .character_name {
+ color: @enhance-color;
+ line-height: 36px;
+ font-size: 15px;
+ white-space: nowrap;
+ font-weight: 400;
+ overflow: hidden;
+ background-color: #fff;
+ margin-top: 20px;
+ border-radius: 18px;
+ text-overflow: ellipsis;
+ border: 1px solid rgba(205, 221, 250, 0.6);
+ box-shadow: 0 3px 5px rgb(47 85 212 / 5%);
+ }
+
+ .character_id {
+ color: #333;
+ line-height: 36px;
+ font-size: 12px;
+ white-space: nowrap;
+ font-weight: 400;
+ overflow: hidden;
+ border-radius: 18px;
+ background-color: #fff;
+ text-overflow: ellipsis;
+ border: 1px solid rgba(205, 221, 250, 0.6);
+ box-shadow: 0 3px 5px rgb(47 85 212 / 5%);
+ .githubId {
+ color: #666;
+ margin-right: 2px;
+ }
+ }
+
+ .character_link {
+ color: rgba(15, 18, 34, 0.65);
+ font-weight: 400;
+ white-space: nowrap;
+ overflow: hidden;
+ text-overflow: ellipsis;
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/pages/team/languages.json b/src/pages/team/languages.json
new file mode 100644
index 0000000..0858890
--- /dev/null
+++ b/src/pages/team/languages.json
@@ -0,0 +1,168 @@
+{
+ "en": {
+ "info": {
+ "desc": "The Ambari Team",
+ "tip": "(In no particular order)"
+ },
+ "pmc": [
+
+ {
+ "apacheId": "atri",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/1724131?v=4",
+ "email": "atri@apache.org",
+ "gitUrl": "https://github.com/atris",
+ "githubId": "atri",
+ "name": "Atri Sharma"
+ },
+ {
+ "apacheId": "brahma",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/1954406?v=4",
+ "email": "brahma@apache.org",
+ "gitUrl": "https://github.com/brahmareddybattula",
+ "githubId": "brahmareddybattula",
+ "name": "Brahma Reddy Battula"
+ },
+ {
+ "apacheId": "ddas",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/938524?v=4",
+ "email": "ddas@hortonworks.com",
+ "gitUrl": "https://github.com/ddraj",
+ "githubId": "ddas",
+ "name": "Devaraj Das"
+ },
+ {
+ "apacheId": "evansye",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/2102804?v=4",
+ "email": "evansye@apache.org",
+ "gitUrl": "https://github.com/evans-ye",
+ "githubId": "evans-ye",
+ "name": "Evans Ye"
+ },
+ {
+ "apacheId": "guyuqi",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/20575685?v=4",
+ "email": "guyuqi@apache.org",
+ "gitUrl": "https://github.com/guyuqi",
+ "githubId": "guyuqi",
+ "name": "Yuqi Gu"
+ },
+ {
+ "apacheId": "iwasakims",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/1856607?v=4",
+ "email": "iwasakims@apache.org",
+ "gitUrl": "https://github.com/iwasakims",
+ "githubId": "iwasakims",
+ "name": "Masatake Iwasaki"
+ }
+ ,
+ {
+ "apacheId": "junhe",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/17865359?v=4",
+ "email": "junhe@apache.org",
+ "gitUrl": "https://github.com/JunHe77",
+ "githubId": "JunHe77",
+ "name": "Jun He"
+ }
+
+ ,
+ {
+ "apacheId": "masatana",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/240015?v=4",
+ "email": "masatana@apache.org",
+ "gitUrl": "https://github.com/masahirotanaka",
+ "githubId": "masahirotanaka",
+ "name": "Masahiro Tanaka"
+ }
+
+ ,
+ {
+ "apacheId": "mithmatt",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/5906984?v=4",
+ "email": "mithmatt@apache.org",
+ "gitUrl": "https://github.com/mithmatt",
+ "githubId": "mithmatt",
+ "name": "Matt"
+ }
+
+ ,
+ {
+ "apacheId": "rvs",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/34680?v=4",
+ "email": "rvs@apache.org",
+ "gitUrl": "https://github.com/rvs",
+ "githubId": "rvs",
+ "name": "Roman Shaposhnik"
+ },
+ {
+ "apacheId": "sekikn",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/898388?v=4",
+ "email": "sekikn@apache.org",
+ "gitUrl": "https://github.com/sekikn",
+ "githubId": "sekikn",
+ "name": "Kengo Seki"
+ }
+ ,
+ {
+ "apacheId": "umamahesh",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/1719507?v=4",
+ "email": "umamahesh@apache.org",
+ "gitUrl": "https://github.com/umamaheswararao",
+ "githubId": "umamaheswararao",
+ "name": "Uma Maheswara Rao G"
+ }
+ ,
+ {
+ "apacheId": "vgogate",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/913821?v=4",
+ "email": "vgogate@apache.org",
+ "gitUrl": "https://github.com/vgogate",
+ "githubId": "vgogate",
+ "name": "Suhas"
+ }
+ ,
+ {
+ "apacheId": "vishalsuvagia",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/7812238?v=4",
+ "email": "vishalsuvagia@apache.org",
+ "gitUrl": "https://github.com/vishalsuvagia",
+ "githubId": "vishalsuvagia",
+ "name": "Vishal Suvagia"
+ }
+ ,
+ {
+ "apacheId": "weichiu",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/2691807?v=4",
+ "email": "weichiu@apache.org",
+ "gitUrl": "https://github.com/jojochuang",
+ "githubId": "jojochuang",
+ "name": "Wei-Chiu Chuang"
+ },
+ {
+ "apacheId": "wuzhiguo",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/31196226?v=4",
+ "email": "wuzhiguo@apache.org",
+ "gitUrl": "https://github.com/kevinw66",
+ "githubId": "kevinw66",
+ "name": "Zhiguo Wu"
+ }
+ ],
+ "committer": [
+ {
+ "apacheId": "yaolei",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/13212217?v=4",
+ "email": "yaolei@apache.org",
+ "gitUrl": "https://github.com/smallyao",
+ "githubId": "smallyao",
+ "name": "Lei Yao"
+ },
+ {
+ "apacheId": "vjasani",
+ "avatarUrl": "https://avatars.githubusercontent.com/u/34790606?v=4",
+ "email": "vjasani@apache.org",
+ "gitUrl": "https://github.com/virajjasani",
+ "githubId": "virajjasani",
+ "name": "Viraj Jasani"
+ }
+ ]
+ }
+}
\ No newline at end of file
diff --git a/static/.nojekyll b/static/.nojekyll
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/static/.nojekyll
diff --git a/static/img/ambari-logo-1.png b/static/img/ambari-logo-1.png
new file mode 100644
index 0000000..7ce8965
--- /dev/null
+++ b/static/img/ambari-logo-1.png
Binary files differ
diff --git a/static/img/ambari-logo.png b/static/img/ambari-logo.png
new file mode 100644
index 0000000..f051796
--- /dev/null
+++ b/static/img/ambari-logo.png
Binary files differ
diff --git a/static/img/docusaurus.png b/static/img/docusaurus.png
new file mode 100644
index 0000000..f458149
--- /dev/null
+++ b/static/img/docusaurus.png
Binary files differ
diff --git a/static/img/favicon.ico b/static/img/favicon.ico
new file mode 100644
index 0000000..cb6dc78
--- /dev/null
+++ b/static/img/favicon.ico
Binary files differ
diff --git a/static/img/github1.svg b/static/img/github1.svg
new file mode 100644
index 0000000..b44a014
--- /dev/null
+++ b/static/img/github1.svg
@@ -0,0 +1 @@
+<?xml version="1.0" standalone="no"?><!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"><svg t="1640352495431" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="4704" xmlns:xlink="http://www.w3.org/1999/xlink" width="200" height="200"><defs><style type="text/css"></style></defs><path d="M512 85.333333C276.266667 85.333333 85.333333 276.266667 85.333333 512a426.410667 426.410667 0 0 0 291.754667 404.821333c21.333333 3.712 29.312-9.088 29.312-20.309333 0-10.112-0.554667-43.690667-0.554667-79.445333-107.178667 19.754667-134.912-26.112-143.445333-50.133334-4.821333-12.288-25.6-50.133333-43.733333-60.288-14.933333-7.978667-36.266667-27.733333-0.554667-28.245333 33.621333-0.554667 57.6 30.933333 65.621333 43.733333 38.4 64.512 99.754667 46.378667 124.245334 35.2 3.754667-27.733333 14.933333-46.378667 27.221333-57.045333-94.933333-10.666667-194.133333-47.488-194.133333-210.688 0-46.421333 16.512-84.778667 43.733333-114.688-4.266667-10.666667-19.2-54.4 4.266667-113.066667 0 0 35.712-11.178667 117.333333 43.776a395.946667 395.946667 0 0 1 106.666667-14.421333c36.266667 0 72.533333 4.778667 106.666666 14.378667 81.578667-55.466667 117.333333-43.690667 117.333334-43.690667 23.466667 58.666667 8.533333 102.4 4.266666 113.066667 27.178667 29.866667 43.733333 67.712 43.733334 114.645333 0 163.754667-99.712 200.021333-194.645334 210.688 15.445333 13.312 28.8 38.912 28.8 78.933333 0 57.045333-0.554667 102.912-0.554666 117.333334 0 11.178667 8.021333 24.490667 29.354666 20.224A427.349333 427.349333 0 0 0 938.666667 512c0-235.733333-190.933333-426.666667-426.666667-426.666667z" fill="#000000" p-id="4705"></path></svg>
\ No newline at end of file
diff --git a/static/img/github2.svg b/static/img/github2.svg
new file mode 100644
index 0000000..5b16b11
--- /dev/null
+++ b/static/img/github2.svg
@@ -0,0 +1 @@
+<?xml version="1.0" standalone="no"?><!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"><svg t="1640528563362" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="4705" xmlns:xlink="http://www.w3.org/1999/xlink" width="200" height="200"><defs><style type="text/css"></style></defs><path d="M960 512c0 97.76-28.704 185.216-85.664 263.264-56.96 78.016-130.496 131.84-220.64 161.856-10.304 1.824-18.368 0.448-22.848-4.032a22.4 22.4 0 0 1-7.2-17.504v-122.88c0-37.632-10.304-65.44-30.464-82.912a409.856 409.856 0 0 0 59.616-10.368 222.752 222.752 0 0 0 54.72-22.816c18.848-10.784 34.528-23.36 47.104-38.592 12.544-15.232 22.848-35.904 30.912-61.44 8.096-25.568 12.128-54.688 12.128-87.904 0-47.072-15.232-86.976-46.208-120.16 14.368-35.456 13.024-74.912-4.48-118.848-10.752-3.616-26.432-1.344-47.072 6.272s-38.56 16.16-53.824 25.568l-21.984 13.888c-36.32-10.304-73.536-15.232-112.096-15.232s-75.776 4.928-112.096 15.232a444.48 444.48 0 0 0-24.672-15.68c-10.336-6.272-26.464-13.888-48.896-22.432-21.952-8.96-39.008-11.232-50.24-8.064-17.024 43.936-18.368 83.424-4.032 118.848-30.496 33.632-46.176 73.536-46.176 120.608 0 33.216 4.032 62.336 12.128 87.456 8.032 25.12 18.368 45.76 30.496 61.44 12.544 15.68 28.224 28.704 47.072 39.04 18.848 10.304 37.216 17.92 54.72 22.816a409.6 409.6 0 0 0 59.648 10.368c-15.712 13.856-25.12 34.048-28.704 60.064a99.744 99.744 0 0 1-26.464 8.512 178.208 178.208 0 0 1-33.184 2.688c-13.024 0-25.568-4.032-38.144-12.544-12.544-8.512-23.296-20.64-32.256-36.32a97.472 97.472 0 0 0-28.256-30.496c-11.232-8.064-21.088-12.576-28.704-13.92l-11.648-1.792c-8.096 0-13.92 0.928-17.056 2.688-3.136 1.792-4.032 4.032-2.688 6.72s3.136 5.408 5.376 8.096 4.928 4.928 7.616 7.168l4.032 2.688c8.544 4.032 17.056 11.232 25.568 21.984 8.544 10.752 14.368 20.64 18.4 29.6l5.824 13.44c4.928 14.816 13.44 26.912 25.568 35.872 12.096 8.992 25.088 14.816 39.008 17.504 13.888 2.688 27.36 4.032 40.352 4.032s23.776-0.448 32.288-2.24l13.472-2.24c0 14.784 0 32.288 0.416 52.032 0 19.744 0.48 30.496 0.48 31.392a22.624 22.624 0 0 1-7.648 17.472c-4.928 4.48-12.992 5.824-23.296 4.032-90.144-30.048-163.68-83.84-220.64-161.888C92.256 697.216 64 609.312 64 512c0-81.152 20.192-156.064 60.096-224.672s94.176-122.88 163.232-163.232C355.936 84.192 430.816 64 512 64s156.064 20.192 224.672 60.096 122.88 94.176 163.232 163.232C939.808 355.488 960 430.848 960 512" fill="#ffffff" p-id="4706"></path></svg>
\ No newline at end of file
diff --git a/static/img/guides/confirm_hosts.png b/static/img/guides/confirm_hosts.png
new file mode 100644
index 0000000..d94c08a
--- /dev/null
+++ b/static/img/guides/confirm_hosts.png
Binary files differ
diff --git a/static/img/guides/custom_services.png b/static/img/guides/custom_services.png
new file mode 100644
index 0000000..446944b
--- /dev/null
+++ b/static/img/guides/custom_services.png
Binary files differ
diff --git a/static/img/guides/host_checked.png b/static/img/guides/host_checked.png
new file mode 100644
index 0000000..9c8cd86
--- /dev/null
+++ b/static/img/guides/host_checked.png
Binary files differ
diff --git a/static/img/guides/services_issues.png b/static/img/guides/services_issues.png
new file mode 100644
index 0000000..4b7cc96
--- /dev/null
+++ b/static/img/guides/services_issues.png
Binary files differ
diff --git a/static/img/guides/virtual_box_env.png b/static/img/guides/virtual_box_env.png
new file mode 100644
index 0000000..8dce8b4
--- /dev/null
+++ b/static/img/guides/virtual_box_env.png
Binary files differ
diff --git a/static/img/hadoop-logo.svg b/static/img/hadoop-logo.svg
new file mode 100644
index 0000000..e9dd25e
--- /dev/null
+++ b/static/img/hadoop-logo.svg
@@ -0,0 +1,7 @@
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="-80 -100 850 687.962" xmlns:xlink="http://www.w3.org/1999/xlink">
+
+ <g>
+ <title>Layer 1</title>
+    <image href="data:image/png;base64,iVBORw0KGgo…" xlink:href="data:image/png;base64,iVBORw0KGgo…"
jI+c5HFJPUFySLEBUDhXAkG1Gm07fUZIwjY/NX+4isn3abuxdJmIHOif53ofqh1Zmvk4QuLb9EgeMQQtIEMaWIdR125KBqvC6lefngerj2nerMJt5LfQFYfi/o/qUGH2ASdPSJTeTskjRpVV4WYlovXPZduF5GzRs+sSya7e0/5cFZvRkNUciPcqQhfLLF6tznfkCgFGlRXJ6omKDazgi0Ly9oXFaVAskPJRJDD4A62aVThvHFbTQOtlpcpDQx1htnMbL1eMk7Bq9AdKBtesUCnLJLIAeDCQuubTmzZtXpM6d06/ApMBrtMMAUwmJ4HjCC4qXZysYX2tM9zAcYWIVZeB0WiOBhnyh7KsVy4SNDd4fjum4/uje1pvLlKPu4mCVak3NTKNs9GkL6waiYCowf2rIpY6X649VCFlljn7C2W16YicXLaqkLUxXOS6Y9IZVo1EQPSqDFMBAlkacAESYGJjHayahwCxayMWsl4g/azmhDsVzByFJPtJeQAjEUA8VrGE1RYKUBUWK1etvklvd7fRnd2S/2lepIv0BEdn7CflVw1GQDRahY/VbJSWBNbTwOrH4BXep1gICJpLgZQLamR/GqHszPLki14k1a0MRgApCYAKBIMl22lgFRnG/o6WGlakr6L0PURD1nO4/m+016mDpzf6Ys93BvaLw8BpMLcKs1b2XUCxuooGVAV4Gvtb1GdOSVrdh1FiwDj6d/sipeUzXM8wiQ9wHMZqgQFUE6oKgB+rYCMVrMJV158jGau7FDboJulnhefyILFCQ7Qvn6s7asmSNW4DqFibVTbauwK2UYHqIniJf9MsWoVvTNAG9yPbe+QgXMBL+PpDog7fhz2z10CqyZofJwmwgF5rVRVXxRGsv6KyR2iLAfTLOC39UVSvyOLh6w+pi+GWaCStEINnBYuj51CoPr+CClY3KqZXv019j2hHy5GYpElllAd5ma8/XNLDVYhG0gq5WFXGKVkhY9hplQKQqRN3qWxyB1xCfoDssCIaJ0RTBPaTfB2SKz3bsGgkrWA7Br8OfLTPUx+ig9VNSqWAf1DaJZyVa25ASq2RODA2FLaXs0MC8imXYcv7oEPVryDen+CowGTKVvC84L+ySMsNWuQH6+FYzpW3bEQVQXvNYFrJIgCsI5DasYpKsrxPlQwQMXh0J8q4sgcZTGdjzaNtvF0CtdscRrD6QWda5SIi04/YRKtYifU3LOb5IgdrAOkovwGlq8AF7spRkJx4M4JVtsKWxaM6K5RdEuNdrCglAeZsuVIT611qW0XqyIeQFoEMkzQkuZu7TyQ6wCyqrchqJJt1b/tIU0OFzqReHPBhI1aZMaxupgVVIU2BDvAevb0ibPLWU/ADHL4U0++s4z+nt5YkZcFo3nXm7sETJVlzQXu5R2dZBzRYzZ7CoLpuKTWsfhZe57rmYtZE2yUqyXVd6SfXXLnYTunidwfFKt0Ea2ag7eap8GlVOtyst9jY6kNeRTneXbqBGlShQdhz9h3aVavIwVqplPGQ0NHC3ytO6bnKMSFwemd+Rvt4TEVrqmtfve7SY0iXtViDRwBp9KAqIGqXULj6E5r7RYLzGQup+fsqf6/UyOexUhcMfl2KO54UiWKO7qoObnTWJEYFBCtfpojVbXAynkUaIGjp9XF6dfVAWgnZpPIUO1oXc4l7NtatS0JXAfoe9uN+W08RqvCQFgSr99htuYqA1YAO/CIxrS2nMaxSqlLE3sRUcHbubv1JaKP1KhlrFXxiBU2sKlCtH1IO0POVJyOHJAN04JeMMnIHC71iiusE+gAAIABJREFUgCNSBuk55Ale4PRXybUh2Sqxyo7PmtgosMKqWdMJrPF3De/5oAyr2TpwjEd6nCYcq3SE2MNUSMvl0Cmerz95F9tu5CVU4wWrxMdZLRyrFjYpq5DlwLtuwPa8Xw+euU2awkJVIqAlVGgOj9Ly6Y/L5UJP1X0tOFS3LGKHVfA2LY1gQvIyHq1c1INrJDErUIRjlUolqTiYELGEoZq7Q3dQ9aNQxfUeAVhGNwLAsCrlV39Hfet7lEVm+/Xgmkyod9GLY5VKZ0B7cKmb4e9Cuu6gegl7BZOywO1LlKGK5AGgutXf6V9dKhXFu3frwTfQEPlS2VBQKkXW4K/LbGigdq7uoGrdg72BNllW/IUVtLG6jcwH+Cf97RcrUSDHdeEdaCLnfRlW62msGFQm7g2d2qEAwOXVD1QzcrEXUITXq0DKi7ShisxqBd9gUwqIfFULFYbNtOrCPbWEYax0CQE9AGQFUwCFkWvV7AO9CBE46rH9d9tlEcBa6lBF+QC/pChlFT9vBQXq93XhnkqgXLUSRSeNu0ukBbEyM3bOnq/Sxbsoxm5V4mCZDKpb6UMV5Vn9mVHZSp63GpMeYlQX/hmBhtd65FPsKawYAMB8YO5r4IDj1zv8BTQz8d9/sVsO1WeWMsBqGlmB/Wss3sJOJ4EY2KILqFobSPMtJaMwM6A7HBhLhdUQd6asjXOitaAS3/vBIzKoblnNAKpoX8Afo1D9KZv3ECOySgNPwIwusOoFSv3W1LA6CcAg0h4jRkjn1Rzfw84c2dbbZNcqkPJhFlAVPk7suX6XzZvYEYnZD1jgNhY9GNyz2M4GqxfBlCjmQh/siD7ABC9RonS3U7bzYTmFc8nnmEBV2Aov+i/a4gCy+4SMXH5NH7cJe7zsKp14tRRMS/eqaGIg0uA7yaU7wF8v27fnppwTZ9nMBqpoz/X3WVFXYhaK21slYSvLSV1g9QbcV0OAKoVqhQuUncZmvUIF9yMnmHNZHT75vqvfIPA3FzOCKqpl8QuqGoEky8OGePXpAqo7YUHxDgJWKeRXm8EE3st9BRFPmPDaGJazHDmEbVeVEqD6KiuoohpBP6A3gUXJgrU7yCdFusAqTFK0nyY4rZBGGqAqH8Mi3nd3/JrHbdvBDanitJ0A1ZeYQVVYSxwZxA6rGZWiZ1Z6ggE9QDUD5ij2kbxGgbvSW+fEydXTMmDYS7pO+3a56N4/rX4fac/eEsAXqsKn4IW/y4y+KtlZOLl6SnfHKrhB8huFi3nTtCwiHSKBYw6uATF/wE+Lip1RXE/asXi7k/Q0WxlCVUiFL3R/4oBVa36F9AQn9ADVPDgqM1eTHEfh+G/IluXBihQGPNtLDo2KYv3e4jytU2dW1zEnEaneJuKTfIYlVFFd67c4YNXkqNNPIaC4uWqoF3FLC9F12qeQXON58c931MxvTAdJtU6fhges1bankrhbcd9wFukhlj3LFKrCRwCJwMoSqzaoDsJPiX9nc1FFC8Ej14jOQ3NLWnQJjM6YEsFq8FfweG9jOCmRs8uVtGBBhl8JqKJYVEt8gJVr2EJVWGohEa1YYvUAxBKxcsGprbG3xa5wfhFDAGws+57h5Ml7jYT7fQeYz2orXmuNZHx9AwWOBeYI0m1un6hog2PktZ9KFVhbCom8whKrUJP5Ew5ArezqMyuDoYToPiy9mnf1SdK8aBfh37qACuscHwpIHPCcgbO2zAS+8da8ggGfU4yD1D6Fdde9zByqwivwA3yLec7KZLojLT/
BGKfpgcmG+EhoIzowF/sYZ3vWJAXZoF6gzswtkzMIu7YwZ6DY78iMVzqwZjr87r358WAa/PV/orTmqkXsoYo2BvyMA1bPS8tPMkWqs7d0PhRkNRNdiGeXfGK3eUr7dobLQL31jHd55aXgQt+xkwPu4rMFfpfLZXO5/P6CYveukzm++nlAGpoENFyrtNqSxQIPWwU/w/dY11hNJje0fBe7ZQs6TqlAwATZi/gPtjvYK2ee1jrl2QcSMsuZyx2jolY2eOWI4krPp3KBqvAS/BCPI1j9LTvQXIWWP8hq0dGbdlXuryH7Eb/DOMIaB3XN2j5lD0jYLFOXh7zJA/XL5Y/iLLJ8NR+oCutJhatfs8NqR9zJ7FRy/QfrVHp+iuxJWYXVGkz3VMwh5U2bhs/pBwu0R/tHarIXjFPP4NG4QXzKWoGXbYaf4+v0Fdhxg6pWTEoB7pFZ1U4fIXvzkuxDT0b7G0s17ObzgiTMXnfrWk1z4gfqSFNW/A9O284NqsIGQFAIeJcdVmFGBP2p1vlHzfP+jPa09FdM3+6eafR6yA6VX/ldIRpuSFWoSTMO1iGQrFlqm8rbZk6rg+mBovK+c/N94nNrBI72IiBMuHrIDqulDLHaesUS1xNlYw8OVc17GBFEV0JBgOgJpd/MJ/zaf4uTsvNj/eU3DraTyxpiq7dq6H/sXWtQU+kZDiCIiCyIILgUQUHqBS912SQFIgkhRCECQUDkTgyCEBAUZVEQs3KR5a4gFy+zCLKslHW2O07tTPuj7W4v0+nOtr/azrSdttvLbGft9Gd/VXL9vnPec3JOknOSHnh/iWjOd8558n3v5Xmf1zg3ncXgg/zf9vUkVEVbt0AdV7xBVYNenVvqStMybU59+e7qOUYb0ATw2eY5SOfMOs9DT92REVCkid1sme8sTJfMLj6onJqaMk5Vdt/rbTToR5hfZt8xkYctFuoM+A5fWC3EngaHiYCmcbpkeucqc58OAqJF9vCcZY7MBZ3rFP5LYu+yt3aLPG4RENv6Y76wegsHTT1HlyEr29tbmcZbVtiEHzBTxTL4ufmBlYnb5Sq3odurkLorXOQFhk25tDJYf8AXVgkczXJOFK1Pz1Aei40tbINl2BtV2bQerS5xj2toPcEmu5o13HuZS6SGegVSCd2BVlbg9/nCqo4YB7hfhre/myqi0t+tZZ3VqaZAoK1DqcVWZtB3uNBsImWDpQevd/QaoyGTE6D6vxEn8hLDulisTKtf8IVVVDjKTHObcO/nT3ZSvMGsJ6vOpMqpep9V9vlkdobD6FOniwNX2MDJcidN16rcjtQ3E5NFXmNHxAB75XueKQaYzk61Gz9c2UKRMlxY6nOqqpNNuVXetP2bhwjjM+uxkwcFm4zVAuXQaRctOGj/ZpE3WTAa3PyYLw12m6XPER7QDbcNy0ntGIVz5CXXnK1AUndaIRMQzqJeh2z5qhMcaA2b89xIOSXEFdu2d4evyMsMYwX+lKfZFojJidmkF24iLJX1wPHU3FWni+XZNEtDZcovYpHR6DxracqHbFBlU9xyX54rYGe4j8j7DEsEyN7nvTFAoiIqz8y6owH+NOy7ZVW6wkOiwxw2a/7+LL6TN+rYnRZLLHBVSjNwwhkL3Oe3SeSdlogt9N98E1jXyulELumMy2AtrgDrMc+NK65w5bJpd/xiXKaU4H+UP2jI4MZd7bVd85WrMN0SFRQfs1nktYaNYrHor/2ZV6xKigeJj981N0DTBpa3L3f2ucbrpNcFwEciS5sqiGzlkYpahq5rRjkLgNnkNsoQNzkpOuHtvVHbWJz6EWHxcb4ir7ZN+JK/5pu8Ys6BEoOCcRcCrNSOn8NIbXINqdI8BxvjJJG6NEvK7I5UMtpdq51yV1F5Q8vE6ZToHX4H3wg9GkgZQcUm7T3oFx7j5SgFFFjF4o+e8TcwAGWAEiN2g9N9zGXgSZhldHFPZTIjndS6vAqc5Y0Mcq53WEC1vBmQBxcTPc6tIceid4fvSIiP9FuzyPiEHXviNiX7iP6PbPObhFv/I79FVlvdh9ioN+hcYhIOqcor3dCJVO3wAFcBOpCNpL21x7GWqpEFVr+yOR2Iz3FIJEDbT7z1X/E1jI1guUSwDjWw/5D8SiikSrvnhh4kqZTBTl8Ece1niLnSUYdzBxvZFVjN1uoh/T7eLIh463/iu3BFCVbZPMskumIJCklkVfXOhPynb/ZPFKgVKo1GqdFoVIrJAgZcFCWsXDZF+AYNORqMVMoCq1esl0EZOrsFCFVfUqRodlh/wz9WJTdJVZcbbASu5K0j0Ku8UcMWpnVFEwpn8xBauN+OuNtn0vdlKWUssGr9JjYjtx+4VYBYTSDfu6mP5S8ewKpESxJPKB9mmpNM1YH1xYWL7HB6ut81JV45KBJBrkLJhmmddzYqFjbODCqLIkQXIIJ88z/ivxhgy7OSq6L6q4zYnl1gH3VpWzOb2Klo0vWCGbixdkBFfJrWgRUWWLXN3kBJQAcECNUjwGHzd547WbEKFpDjmV51hKCMVbDyn/WYRUK1rl/lFpVCcGNtAUVdqCWwrrLAqo2GgxxK/tsFiNVvQFHz+7x2XBFe9SKwotElusBZ2wayqTJfMidSF+a7jdt1Ig/4fDgFdYOy4bWGOVT11mvUI38ZIUQX4Ch0+594JMFqfdd3YYrm43qQ19reegOMQ2TL9R4AKlUmoJcCZ1RUmA7mWLX546iqoJ8AoRoN3v5PPJNgtb0oCuqmTP9y+Fa77eCU32xoWaTK7Rh0TI/+kxr3rr4Auso0FUOKYlCWzgneCuqwf1OAWA0D7/+DZ55JsFrt1ChtH9z5HoPhq4URmrTO+TMMgymtyu1K2v3QhShlibJgkWHGPNSFPigLECVAqG6lYDT8g2e2NWlveuEKr+2deUaaFNlFai4k3yE181oaaQJQb66W4a2ONtiugQp1vC1ArIZTPIIveJYLJjutw06LjjAM/tsLuBlolgqlAa7Rlp2gqgizex20Q7ULPWaiBYjVvRTP4HPPJa1spDi9U0i93s2Eo1KXr+Rq3RroemMYnZm45ilyJTmdUbfVOMIbRwkERwUI1e3+eI3I/se1eSwfehasyrFy1khNW3zI5OxXcDjupRi6JNbPciCWFB+Rvzmmrp7LtJXW521oE4JY4FmASPxFI8miLz2YtLIl+O/3ylghVTZb77mzn45oJV1Alxly5BBx5QOkRKup3eq8ruo6swOkD41Gg4VYCMAaWMUl9+0+4g95lbOgsPScS+PM0SqruuQ47u9Xcr1oaJhpE+p8B4hEyaSs9gIx0Zpt+uun0r7WRYjmcMGI1zmwAooQ6YCE5pWnUoP9h095lQigilOKpF2LzDyBrAeO99QcNfej3jIcsgECfESilCiHiVZTRrbU5JA2XJkbRL6z5YYpInMMGzK8LVl4UD2GNwQMnUJDgC89xLQipAPypdKVeYOjzVXW88hhf0phsZyPFauha+NzquJfP/uQtxwlWrvMSglW6k3TauuSsaLi8aMzV8kJuVXs+3xY8FAV35NK7fqSaZ/zOt+C5t2vVddrW2YuUO
I1bbrzlsOePq2Kp/WehC6PZ4sPrbU0b08SO2C0mgP7RQap4gaseBcgPG91UwDhWdXgZbpff/szb8CqRHnakk6/aHwxSMzkPDdUnnHc7587mc7bcsG5pgQKuGmOiS+JjClrwVFv/l8DDr+IZTjpN0H4UDXJd01h5YCPvQKsqUjRsrnhTsvdqe4ni09eVowN62oZEVPkfC4WqgTUEx51kukN+BwnHREVWIrirFniIPMefcxYhiUZBMiwIkFV3Gkqfth//sMzD7JXcFPVOd142q/ieanQKuaJz9pcVvLZR+7Gxqhk1dZEVOkLY2uXtRpXq2vrnre3Ot7Cd9XgI8KHapr57gdRIdbfeglWJen52U4ANe9dBe8j3ieghZCawIMsbAxy2XAQG/OmxuQGhy4s6AdLr4szZxDpOB3BvxBcP0AMmbJimT0+hRIDfynxGpMXsQaqOtUD68xh4K6Kxf4hFrAGkXNvl/BK8z2ii365Ai3NtRF+vXMdQFU8bGFAIO2sP/unxItMwwKt1VqPANU65opgAL/PWgTdvJP8u8e4f63WddtP+ayq1rNotYo4BiF06zqAaslZMmH395996E1glSi1eYwaUfNVJzz2hXKcXSWQSwAC8StSv4565U7b2NgVXRPuCt0hVrQOhQitFQCAatUp4MH+y+OMAFKUPZnrIOjXTmZ4coEgcQWarG5XmjgMNIq1UN2EEtm3a+eIKeeAY+sAqrNyW6T90P4AZJ/+TeJ1ljGphbMC1Tkn1RmeXt1NYGF9EL0vyP5CEoFf36aSZdFYj5amTpJoZ2DMethV5UjjBSIx/cVfJV5pGYoJbU5htflArC5sLzpZoJB7xcogd7UVFJJEFCT9wDaqOrqdu2+MPPwqUGgE6+hg8mNZlqN5QWS48ge/k3i1nUhPPeFVCwLd1TmwMhyPvJSELRDHoRfcW1Nffx26ugEh5ACh7apxAFTHlVgA22fnQaR9Itkwl91VePxfKNZKBKtP61sANa+VMVC2I/bYOoBqo5nRqYVa2b/+7gb+XM2u2ttLz2NsKAxbcVSq03pjTa7NC5fX6aYW4H+XlLJ+oCpR2DMhyDn03w38sfFJoJyaXWSqEtNUTcQLibE0jTnnB6rmZpYNo9TUyCAfgUF1NwDVEg05i41k7f6zAUBXyQD2qvWdNiwXir+clCTne8z9I0XrAKrIhL5+YEZS2kfKDQS6Rgawi6gNnSvDNsY4/PX4BDkL1agYwUEV8N9vqKBNoQzpDrq2gUDmlksrulYlld6mLd1H+js1kOqw0M5/0X4AqgMYYa4OorHf3kAgY0uHyGB6lHFRgbH3SEN6Yo6yh+ou4clW7IGgio+uyQdVQmo3MMjU1LQ068wVqbTGAYHfN4wlUgMjNwsOquHA8WIgTFlCtBiR6KpXeJjK4KgU20/rApSQcq2hkKfGZmvddliAQgAHgMLItJr4rNshSRtZjuCO6nyOPriQ1gWYJ7Xxi6H5vD6JwUyRelCAvdVgDe8VWVLZLhx6H1H7mBIYVFNzOcptyGmzAGm1JCUfcRj4tlLCmMRYAYkhIiFCFcgh9wDq3+l5UA37+iTPJzTH+fqidzn6ZEgj2N5mMWD6+WwWfXRltuTDgQ5i/+M7hDgS6LWvCuyqt8EJeFqQyt7GL1YV3IK1X8pVyhjiA9oZ/Y/I09LEYsocvs+B41sogRoRmSwSpsVsYwpVlCaEqAWNaHjF6nuclh+KpUVc+RZAgXWV4AIQhVjp5KdDDgSR665bdoWFbxcJ1XyAwFL/HsXzbgdJl2d4xWq/gsMPV0ulXH3zFLRcgAGrwAE+3yCO/uWl7PEL2xsadSg29uiuiKBvJUT7iIRswEwgfbFjl6sZSVtd4JVun1PA3Wdr8qQ53DkXZBvFswBrVkHVHrBhpIHrdFA1EXkttoT8jw4+sVqXz9lHr3XqcLZr/4+9K2uKIumi4YLLqGiAuzMOyjijjjM6Ll/RQdNAs8rSILIJyr7Ygg3IoqgoKrKKqCw6g0KEKIYRhuGTEf4Ef4sP8xc+GpquzKzMrKzqvE3EmPcVodOs01k3zz333AaeHNAZ9Er1Yt7yW6IURPVs1eiWUMzZcL121YxMDnkTxgbmeEcB2J9eSHE8YEc25Vh9SbovkF7pnNvVdxiGycDdXCgk6SVt9GX1MXxYTYV7S/sNBsASDIrGyqdXqTpYYy5+UxBlp6te/pbrtFUNUg/oC19bUybY0edHUx6YYWAnb/hKCupliDf1/6UwuhwnSW216bEWjAnk11rDhtViRx7gPR0sv0jmNgU+ZVtbHFIYZZ2r8yIvyoABnXMlDtaF6zTI2Ze8eG0Eo4pHjFAtfYS426Nz2VK0//h4H5uxk8Bqj4U7wsOVOFizHQ4IIiB+0XH4POSyySjUXaiysB88NRWwfJ9BDga8Z6FUiE6iqw8XFXDe4RgC+LOLeXgZGB2WlMFzsnzIArE/flIgDQQ5x7raCvmC2tzXhgmrGQ7fSyhdSTVYCkDRrTRfNE5NDyQH7WYq1u8zogistolRO4FyNnKwDoeneLVwR2kelk+ELdXqn4QzBdCr1O3keApMxbpxtUJpIAgD61kRhpPKW4enSzB94e7hygRJVh3V91cmBZgjf4SrWKMVSANBTPt4L5rbGQ7Wr2Fpv550OMrNOGC7pfq5lpVJAYbIn2X918dR2Qyi22xM5D2cRz1YR8NDrzq8shWzgZ695rTaFUkBLvgMP2xSCSstDhAJ64jA1uuqgFbkYP2UGQasFvgHwlXKfUFXLRsgvlqRFOCL8YfjmH5lk0IpXbzSJbD3Cbrc6j3yq0/CgNVsf5vyY5BDr0/LCF8KgLzma3kabMWwIrGJaIu8b23ze5DiVUoBPFbPL47XKAAAUq3cP4tGDq8QMEapvpZdNIwRVLEQxHQ6IUIo0RPcVrRB6Es46FV/r5fEKlkw+V54KQPRqzQtgL5tFTRDtm9KEiCiCBCSMenmVl6keu0qg4Zq0lJf4rg8uipHN+m6BbRoihywrB0bc2uIt6o5QKRyNSO0/w8ogmFNux0GetWP1SZZJ15+FSLHHYPLW8gY4qYAmFmIIgL0WE9UAxqFKvtJwYdckxZGCcvk0sXjk5Q/5i7S7+eljzXtX5g10zoCdD3lHSpWMUnAfgXS5dhlTW0diBGKJZOm1QEXBJ4FLsmh34KSn2ENUKOavNOaWTrRWYBHPBaAJAIiFEaX4yA5IUjoCSQGhe7NqGFYBzi9uvgcQzy/k0ZyKPPQJkCWHE+ZE9ShtwzRZ8bVYE9EKQKCrBVhOOPqFHu1ZdBGil/MB8VqTgCr70I5UYuNHJLXJUrX2Upb2I5rE3Ss+jDbpuMKpKwkQJB6Cja8l6GzPyZAsdoQwKrdylW8+6yHORL1GtjXi9loZdQCBCJNFQOoEUPO9RA7WOMbqK2XzwGhmrCcy12whdP8nAwGNHrB/GNSKZ+mj1786mMsCLNf2aMwGgzSJmha7DHoHQIlqBcWoJA1N3jvGLH0e4mpkwOdGeyRwktDJaohljzAs7DQplgrqqMPElZBNl1pH4TOKV1u1eW0JtcOKfdbx
Gq5cHbqLi7oNJvVvjSspxxgxbSblT7X2uVlrQgbqXZQQTQYq0mr5H9FWkVRM7E7yC+nwV2v8oNYNe23TUh1n8sv8pihFBU9+cIjW0Fm2TUxV9Sn1NaM+F2z3sqC8YYf0Baht2BYLQp6v9Luf4lJybnpmeeKBx5ke6ocFsK3tHoIQxeKhQViB3qduaRvSrzCiOOkq/dFc3PnxCrjW9RSNcFutXIRq/VnB/xRUFDwoOhudk6np6Eqz2E3Asx7MUxNmN2gUs9e0iC6n0cVQjm0lVZiak6BT8DxocOV64AES/G6p7YzyyEvAtJmN9CLAN+oeo3mYkVGCX94kDpYLRmwEH0Z1S541XUu4v/+URpSywtfLK1afn2YogZs0XfpQil7UVgXywEFUDQMQ76cpRZ5Q1TG5qwCweo5BKvTV7whozTrY0fFbb1CLJ9sMxJWWQhxOm562wvEGoVPNM4YpiY1TgqKApfnCKHXqxcgnlNLt7meIOHQ98/UaPUHGyD1ea/P3ClpdOL/42Tpats8eoEsYAvQzFnhQ2W9wg6jF/vTBGvlGOx6BdK+7MGxugzZ+qaJ2Y7rPc9NIVrqrS6cufRwsDGFOsBEeg5w1qhJQWqns7ylvlWNARwFi3HGxX1rVW7HC+R3bwEQQAnk/DIybWmv+1ZSOVcxdXP8dUfL/Hxb2/xoy7WZ8dlLX/7+PNg3lubiT4Vywx+riA/oo34eVlHDUO2kgiceR4wPr9AKxY33CWq35Zuxuclxu5IjHyRlQaPVKXisYsUV5WZhiB8MD8/Fkokm06n2SxpoqTUfGKuSDbNTMzjVVa2de6ziW/m7AicRkesMTy+lKz03NTmJ5FoTcxj36jrQLCAHGKv9cpebzfFa0bSb/NT6vvlc1u86oo2PL2XJbjGvwZNdNPBsMj01HjngKKVuNCMclMwFxGcAY7VH6nKNGussZBJAdzMfq5gR+y6FTXOSdQGshcQmNuQUnBW8vnaApKuOe1BYvSL1YlXFPStnxEppS7FVQdMQq3ZTLtctVojLD6hG2PkMhFgHw+o4bAbgRcxUGstMNhLtC9JiFTSN8ddm4xN03bQCVqxZOF2+cMX/zKGwKrP9pti4NagL+KjZPmJk9a8KmbSUlUZBvi0Vxyp66HXDNIMAYHV6Uao/KFFflcFrstJ6TfexBV3eKQVMWhyiPcqrNcJYvQbmwlIMhtXu1ri4CZtNXMKcHsaRDJnuY5syXrHDsvqfYrUoVlHNRQWMalk2Vr/4hQ+ZYwvZjixBQIKHT0I1Wcyl1KBLRq11B+15prwWg2pZO5QmQJfXoVjd+ts6V2hI/RqYktTvlNYYEE+hn+8hE0GdXeYbibUG71WwpMfqvdRnWimkZxqyajts47KCYvXMwpfreMz2nSd3xW47sS7COlQvBUUA85okX+vEuyba6S9WN/KEQiWrfrWb+lTHbghsMWoaqHXKxKqHilWic35V5OG1e2KOHIzevhDR0QePxOxZe3z7XiZS62qQT5iWM0omscgk/fwk8q2vVuZrQnHmBPXBuipM+0ayPqH0qkztci6dajDvmovZwUSq8wnWa+MeficDqg9otDNq+fVO5AXVii50n8IkG6z0k1UbrrXCYNcDaZYuW1Ah/3iK/frvJZ2NfXeAoOqYQ7elTASrmPBxg4IkJw1gvDZdc9z3V2mjDfcWsesKIgXtEVZ1RO1iX7xutRiP/XIgqN5AluEassxTa5sVInkXLNabs/21T1Bwob0GsoSoFq3o/LyPfai+L4JoXKDmqji1WinYsKgMWMWpq62sp1zPNGBovYX9w1aQmxV+RT7N+S8c2MgmqmD8t+lQxbSoaa8EhRXYetcrQHLjf8wnfZX+GqsZw/+ZRBrAzaLJObeOnZw7VToIVOPv0p0y0HaJccGaSrOacGUlfmHTlb2jxguCd5hICCXKV7NZF7gIa8qGpZ7HzjgYqGbTyyN9li9W/k5bTYlXrMQeTr7XXdGDb+58O3n6ghBWZHsHy578jw2MhQ93wSA1LiHHXOHvqhWtVTvwVtsYhUYz7moHr+TzpuL6cs/Qh45eY0VtdKz8AAAL9UlEQVRIHgzwNBAdq6lFMpYeS1/zo7ZkIKgmdTKSeBR0E+KCtdvYuncoMJrF+pP8arurruTvl3Of652gKnvCiwAjxv5gvBLoiersJBBS45I9DIFEPSoBsuC70YrvqbK2No8j++yKQsqAjtVyl8C5SmMxXBNFUEiNS21gQG4K62G3olqf1lQbi9WyQKw9qDqlvWzPcRTz2ip6EYByLSzJAENqXC7L7rXaaZ1apQwPXbhERiooCsTRDXaw+kIaX5XBSeQY1cejxq9OIRxS49ws69dmtArQXmMJq9hUZk3bqYAodMXaagOrU7K6QfIok6jMVMgGO1mYuRXL3dXMqRmoDsC8x4o9AkPRVuIRvd8yVm9ISgSreOY5LD3AaXI1FYBQLRZrRHlv2c2wG9MEqNqVYKw6tMUiVrNhoFpzURNwJyezlpRiOKgOMLH26hGyhMc1lrGK+QVrhxUKReP4D5aaRdrlMEGG63UF9ikRUfRvFrmah2BIjS9i9/NcRUmIK9ZdYvFXyBGFQfH48VcLWP0HBqr45ZiVxEWKGx4CVQAMX6s5G47GUyKvEBUMhj1W+Gydl5EANPCtyZmDn86QqykHgiqTVvUnq+he1TWHjFU1NMBirN21JWyVAAppOYR/yG7GfSOKXA3M+II4N2eKVhc6/TfF1iSOCkVahRZRa3YLQDUtdJFVuhEIpYSUi9VttX4jrLNqIEY4E16fYwNV39mBKs54KazaS1z/NKWwKkMGQiYFCLP4h7AtHkgn2SwIYfUAr5tnMDS6yijSUTmAbbj+dGozD6shm1rTSMvLeN+Bi63n2EYsB0AHmJTNs/TA8uq6fntYvSr0ElFhHpvW/vJn7On9mzdGbN53mhQPng/xzCownfvI7Qsk61aj8m9V53mnKgbVtB6b47fwPgs17FpSrCH8ryCYoNf4Z+xbzV4O2b/yUjZUM3mzib3YgehqswnVMqeSW0PENpklzXQqE/QqTRPmxkn5ap1kqObzMDaDl9Zu2oQqORtJ9VxJKmoR2AilTTQxX6TiyHcG2EQyawVSu1WKOAh7Tqxz2vYEzhb8D0UpmEkJYuigM4SpZsmMtqVR/COOreYu6BTgKIBUD08cdYFQzZbZxirmDaZtUSiTE8fwB9QUgrCaIQV9/hjnAGIs5c/aZ4mpah5vJAXRzXO12TZUHXV43UOhTEr8LKv8npotNODFfDrZYVJqfU4Wq8oZQuPoryR7efvtQ7VVU00sAEF4YbsGbGqW8pmFoFpciHBsldmS9sKoV7is6uU3ZNtMCKcqPpFVzQ+WFJERUpwBJtlCEF+fZlEfR47t7pXTAcCjqgoJokKb+3971/7U1BGFG0J4JMEkAgmPBAiE8JIIJJoyjRFBFOKALRrkqVRoMIAafDG+KGKLhrYgTqU4dXTG5zjDOP2Jf6v/QqUz2N7dzc3uvXs1N5zvZ7Nucj/2nv3Od84ZkkHVC8gsbpBXlbhZSfNYtffSTtCh
eh82opaAGaa+VA9+OYY7GnrEDtVL66gHbTIqg6oDG8LFcjOAZzwSWGXyQ4Cu62IPbkl4sTI2UuwKVQJeMSd4I+HHXf8V47Z9J7rFzTOEIQs/T4UkUnVmDVnLDDzjgQLkZ2UeFXSo/1Qr/dBnytAN3VWAXmI99L83/cnr4b7u7gfhJDuMTSbo8rrx7GU8eomVqlPL6EJa4JkCOSt2LbM9WS9SYSIoi+p1aECbcCzyyUqRfdUj4r0SBhemX43een7t7em5+Pj3sRv3osPnN5dmQoSANnRjdh4nfm4j8IwDimXLQ0eYfJy0RwwaRd+mrQ9sO8nI1EujkucWBYOBjpHOzl+vrgyuTk5Ort5MQHpQrLjAJr8OT5wcF4T6eqmGbl97UdPiLdmFqfShKm9AOyse0Bjle0V7qUcRskRuDejB+i3dKX+cjap3J5SnKhyritysViRMCvpR9NoifMFmaWh31qiT1GUzzEZVTFVVADkQrXKBiUNvING37pZwfQf91ixoJE3T0PoYG1UXg8pTFUoC+CA7h0O5yGMxF6eQDEYGa9x+9GB9T5Hn7WVh6tLaZ2CqvwVoxgV29Iftlt+1UjgbSri8h2VzaMTq/5B0K91MedDlz0HVBmAZHzhREeYbKWl2ersRkzv+AHrvmziRZHtdTIo9KVTVWYxcmaqDLAAvoEdX4GvOXI0K17cz7a4affIveo+Kts9gkVZfkELVrIqv9ps4xrBFoFZxA9bjqp0zVzdZHVYCQQ1NXo1Ex55IaE9NClW3SNQq2b/932ZacvgwtdRqAIpxQw36825y5mpI1pXYhW5vurX1/iEpjn+qUNW0I6gZKhtKcmUSNbewHpjKE1jDoEdSmu2IsSIop/tINtY1eLa1NdJFVAAeyFVVg0JTjabYddDky5IUEOiKbPUHgF0Kc3VFwvxIUbNIQJbfqBilyuB2WckJPGptP8VgAFgnEdBI7lpoaLTX1VtbGmx6U5Xb6y0sNJtrPsLn85Ug8NWYvW69pdqZnwkHqgIwY4/sHTtX79Nz1cG6QWx0wLN/Vw330NuosRB6mnQUltmBDepKW/n9gU2+foCgvLKjPeg1Z2fOZG//UTobNZ1X1QxDfVId1fhTW2HtIHmEfuCzhBI5Jzabc2Bn7bHr4b6+ExHGFhNE254N3topjzrCc1sJcQxXh/3yYgBCmLIopw7qGfHKDoK9CqAhZWk65pjqmMcYyuQlkCITtQV03JVM1egC0QcFcydUAT15tAV9feBh8WvNnF92x9wWdHd/Sy2GnvqL9GWbmoEGqoCdLBCOvKTUrg7fF6fHS6ppFqIwNGEisCSmhuaJWqkXuqKpBYlmC1+91kVzr0p2BT/LoZijAk0h3ZYSBXx4TfyeHhjopxrUJpzMMjIaSRaqdidNayJapjRlCDMHLrBHAeMdpO+Y5wIGqAhWsWnXpxNXj7Y9CVOUNgl7RBol3gD3YaONWam6RFRVS8EGlQ7Xqx3p/eKVgf4n7T1H2g5/ClHberr6++gM+MO0o1fEUY5GmoFxRq6+IX25GiiEUhkM3iROjNdn57b19+NjY5FIZIyp/H4Wkdyl7hGtvfKvbrIpAJAASBORNfmk1pEzj2LymztLH0iWgUUBa0w9AAYJXiiYOKXKk9VCY3RbOXeFuXXeBK+eDhVBOcLVnxCqpg9clFVGE7/Psrx8x5Fbt4yXrgcz2cTlRABe8KqoFs1mWh9xYOHsFGVv8qENfp0dDZjV9uo9Sqr+cRMzAFSDqqpmOLPofe+BhfnZ4eSe5jc8K+UzMSF4mfKIf48ZACrgcasbew/m+Vmweu5RXOyAvXcH/YQ8j4gW28FFql6+V7DPQQJA/TjgYS2OD05urb+NDRDO1PhTLFGk08jbHi5XrFHkr2K30U8VwpNOB2Ro90kohQusPny6/nw2HosOnx+OxuKXR6dJjia5HNlTig+PT0rW86vYnwyMm0wTZNeZ8vzKwCp3b+V4HfRWkjBgZgN3q8BDTqPA1eXNVYCqOvkykQNf9aHoBSuENwLOgpEo6YU9Ljf301XPYV8EB+PyDRHHyh3830O6Kv2gqbTs43qs8rDfZzThC3f+Rj0JcLsNECir6YlaZ1UOL67y6e3YTNrPOaLWO/SOVLJaAU81fe9axQ4vj3CgScNnP3U6wuKdL7DRqUOXl5UKRAApDEN5S43M25aRm1DkJJfcjAp8NdHFQXLrfugytRvEgXw54WteOb+dtCRIS/w0evpC6ON9KvZ2/gdo3Q/hq0la+Mo3AW8RnY6mtBQBUE34ate6mbuU19Ty3YNe4uleooEHuNvC12Ktm+F8zdHylomyTZKoWgblVbsTmU5bEc19K6dFgV4Rkk7Wslp4arsXmgqrzScmaOVVFSj02m1gHzMBpypEsM2VWou3CaVssMxdXadgfKhlFNFMYAMAfBK1MsvzXVatw+HQOgvKmxW/xlSySBJGcAEAviBqfdRUrYL3P+DL6hEeujigBPqrAr44ioso7lSQqwKkxNFqTVJ9a86HHwmQIshwJGar0QaDgAApdbbWF5JMAHnuekipAlIOjVZh0lfna8gHPRWQqmjO13pser3Noy2wQ7NKAAAAAKgY/wDl2nNAE6uXJQAAAABJRU5ErkJggg==" id="svg_1" height="512" width="683" y="39" x="56.5"/>
+ </g>
+</svg>
\ No newline at end of file
diff --git a/static/img/hammer-and-wrench.svg b/static/img/hammer-and-wrench.svg
new file mode 100644
index 0000000..d9c2b50
--- /dev/null
+++ b/static/img/hammer-and-wrench.svg
@@ -0,0 +1 @@
+<svg viewBox="-10 -20 100 100" xmlns="http://www.w3.org/2000/svg"><path d="m20.9 50.9 30-30c3.5.8 7.3-.1 10-2.8s3.6-6.5 2.8-10l-8.7 8.6-6.1-1.6-1.6-6.1 8.7-8.7c-3.5-.8-7.3.1-10 2.8s-3.6 6.5-2.8 10l-30 30c-3.5-.8-7.3.1-10 2.8-4.2 4.1-4.2 10.9-.1 15s10.9 4.1 15 0c2.7-2.7 3.6-6.5 2.8-10m-8.7 8.6-6.1-1.6-1.6-6.1 4.5-4.5 6.1 1.6 1.6 6.1z" fill="#94989b"/><path d="m28.8 21.9-5.6 5.8-5.5-5.7 5.5-5.8z" fill="#3e4347"/><path d="m16.7 5.1-9.8 10.1c-.4.4-.4 1 0 1.3l3.7 3.8 3.7 3.8c.4.4.9.4 1.3 0l9.8-10.1c.4-.4.4-1 0-1.3l-7.4-7.6c-.3-.4-.9-.4-1.3 0m-16.4 16.9c-.4.4-.4 1 0 1.3l7.3 7.7c.4.4.9.4 1.3 0 0 0 2-2.1 2.1-2.2l-8.6-8.9c-.1 0-2.1 2.1-2.1 2.1" fill="#94989b"/><path d="m10.5 20.4-3.7-3.8s1.2 2.1-2 2.5c-1.3.2-2.1.4-2.5.8l8.6 8.9c.4-.5.6-1.3.8-2.6.4-3.3 2.4-2 2.4-2zm29.1-16.1c-10.1-10.3-21.2 1.2-21.2 1.2l6.5 6.7s6.3-8.5 14.2-6.1c.9.3 1.7.7 2 .5.4-.3-.8-1.6-1.5-2.3" fill="#3e4347"/><path d="m26 24.8-3.6 3.7s1.9 3 5.1 6.3c3.5 3.6 8.2 5.7 12.9 10.5 7 7.2 12.8 15 14.9 17.9.8 1.1.9 1 1.9 0l3-3.1z" fill="#f2b200"/><path d="m26 24.8 3.6-3.7s2.9 1.9 6.1 5.2c3.5 3.6 5.5 8.5 10.2 13.3 7 7.2 14.5 13.2 17.4 15.4 1.1.8 1 1 0 2l-3 3.1z" fill="#ffce31"/></svg>
\ No newline at end of file
diff --git a/static/img/logo.svg b/static/img/logo.svg
new file mode 100644
index 0000000..9db6d0d
--- /dev/null
+++ b/static/img/logo.svg
@@ -0,0 +1 @@
+<svg width="200" height="200" viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg"><g fill="none" fill-rule="evenodd"><path fill="#FFF" d="M99 52h84v34H99z"/><path d="M23 163c-7.398 0-13.843-4.027-17.303-10A19.886 19.886 0 0 0 3 163c0 11.046 8.954 20 20 20h20v-20H23z" fill="#3ECC5F"/><path d="M112.98 57.376L183 53V43c0-11.046-8.954-20-20-20H73l-2.5-4.33c-1.112-1.925-3.889-1.925-5 0L63 23l-2.5-4.33c-1.111-1.925-3.889-1.925-5 0L53 23l-2.5-4.33c-1.111-1.925-3.889-1.925-5 0L43 23c-.022 0-.042.003-.065.003l-4.142-4.141c-1.57-1.571-4.252-.853-4.828 1.294l-1.369 5.104-5.192-1.392c-2.148-.575-4.111 1.389-3.535 3.536l1.39 5.193-5.102 1.367c-2.148.576-2.867 3.259-1.296 4.83l4.142 4.142c0 .021-.003.042-.003.064l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 53l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 63l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 73l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 83l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 93l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 103l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 113l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 123l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 133l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 143l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 153l-4.33 2.5c-1.925 1.111-1.925 3.889 0 5L23 163c0 11.046 8.954 20 20 20h120c11.046 0 20-8.954 20-20V83l-70.02-4.376A10.645 10.645 0 0 1 103 68c0-5.621 4.37-10.273 9.98-10.624" fill="#3ECC5F"/><path fill="#3ECC5F" d="M143 183h30v-40h-30z"/><path d="M193 158c-.219 0-.428.037-.639.064-.038-.15-.074-.301-.116-.451A5 5 0 0 0 190.32 148a4.96 4.96 0 0 0-3.016 1.036 26.531 26.531 0 0 0-.335-.336 4.955 4.955 0 0 0 1.011-2.987 5 5 0 0 0-9.599-1.959c-.148-.042-.297-.077-.445-.115.027-.211.064-.42.064-.639a5 5 0 0 0-5-5 5 5 0 0 0-5 5c0 .219.037.428.064.639-.148.038-.297.073-.445.115a4.998 4.998 0 0 0-9.599 1.959c0 1.125.384 2.151 1.011 2.987-3.717 3.632-6.031 8.693-6.031 14.3 0 11.046 8.954 20 20 20 9.339 0 17.16-6.41 19.361-15.064.211.027.42.064.639.064a5 5 0 0 0 5-5 5 5 0 0 0-5-5" fill="#44D860"/><path fill="#3ECC5F" d="M153 123h30v-20h-30z"/><path d="M193 115.5a2.5 2.5 0 1 0 0-5c-.109 0-.214.019-.319.032-.02-.075-.037-.15-.058-.225a2.501 2.501 0 0 0-.963-4.807c-.569 0-1.088.197-1.508.518a6.653 6.653 0 0 0-.168-.168c.314-.417.506-.931.506-1.494a2.5 2.5 0 0 0-4.8-.979A9.987 9.987 0 0 0 183 103c-5.522 0-10 4.478-10 10s4.478 10 10 10c.934 0 1.833-.138 2.69-.377a2.5 2.5 0 0 0 4.8-.979c0-.563-.192-1.077-.506-1.494.057-.055.113-.111.168-.168.42.321.939.518 1.508.518a2.5 2.5 0 0 0 .963-4.807c.021-.074.038-.15.058-.225.105.013.21.032.319.032" fill="#44D860"/><path d="M63 55.5a2.5 2.5 0 0 1-2.5-2.5c0-4.136-3.364-7.5-7.5-7.5s-7.5 3.364-7.5 7.5a2.5 2.5 0 1 1-5 0c0-6.893 5.607-12.5 12.5-12.5S65.5 46.107 65.5 53a2.5 2.5 0 0 1-2.5 2.5" fill="#000"/><path d="M103 183h60c11.046 0 20-8.954 20-20V93h-60c-11.046 0-20 8.954-20 20v70z" fill="#FFFF50"/><path d="M168.02 124h-50.04a1 1 0 1 1 0-2h50.04a1 1 0 1 1 0 2m0 20h-50.04a1 1 0 1 1 0-2h50.04a1 1 0 1 1 0 2m0 20h-50.04a1 1 0 1 1 0-2h50.04a1 1 0 1 1 0 2m0-49.814h-50.04a1 1 0 1 1 0-2h50.04a1 1 0 1 1 0 2m0 19.814h-50.04a1 1 0 1 1 0-2h50.04a1 1 0 1 1 0 2m0 20h-50.04a1 1 0 1 1 0-2h50.04a1 1 0 1 1 0 2M183 61.611c-.012 0-.022-.006-.034-.005-3.09.105-4.552 3.196-5.842 5.923-1.346 2.85-2.387 4.703-4.093 4.647-1.889-.068-2.969-2.202-4.113-4.46-1.314-2.594-2.814-5.536-5.963-5.426-3.046.104-4.513 2.794-5.807 5.167-1.377 2.528-2.314 4.065-4.121 3.994-1.927-.07-2.951-1.805-4.136-3.813-1.321-2.236-2.848-4.75-5.936-4.664-2.994.103-4.465 2.385-5.763 4.4-1.373 2.13-2.335 3.428-4.165 
3.351-1.973-.07-2.992-1.51-4.171-3.177-1.324-1.873-2.816-3.993-5.895-3.89-2.928.1-4.399 1.97-5.696 3.618-1.232 1.564-2.194 2.802-4.229 2.724a1 1 0 0 0-.072 2c3.017.101 4.545-1.8 5.872-3.487 1.177-1.496 2.193-2.787 4.193-2.855 1.926-.082 2.829 1.115 4.195 3.045 1.297 1.834 2.769 3.914 5.731 4.021 3.103.104 4.596-2.215 5.918-4.267 1.182-1.834 2.202-3.417 4.15-3.484 1.793-.067 2.769 1.35 4.145 3.681 1.297 2.197 2.766 4.686 5.787 4.796 3.125.108 4.634-2.62 5.949-5.035 1.139-2.088 2.214-4.06 4.119-4.126 1.793-.042 2.728 1.595 4.111 4.33 1.292 2.553 2.757 5.445 5.825 5.556l.169.003c3.064 0 4.518-3.075 5.805-5.794 1.139-2.41 2.217-4.68 4.067-4.773v-2z" fill="#000"/><path fill="#3ECC5F" d="M83 183h40v-40H83z"/><path d="M143 158c-.219 0-.428.037-.639.064-.038-.15-.074-.301-.116-.451A5 5 0 0 0 140.32 148a4.96 4.96 0 0 0-3.016 1.036 26.531 26.531 0 0 0-.335-.336 4.955 4.955 0 0 0 1.011-2.987 5 5 0 0 0-9.599-1.959c-.148-.042-.297-.077-.445-.115.027-.211.064-.42.064-.639a5 5 0 0 0-5-5 5 5 0 0 0-5 5c0 .219.037.428.064.639-.148.038-.297.073-.445.115a4.998 4.998 0 0 0-9.599 1.959c0 1.125.384 2.151 1.011 2.987-3.717 3.632-6.031 8.693-6.031 14.3 0 11.046 8.954 20 20 20 9.339 0 17.16-6.41 19.361-15.064.211.027.42.064.639.064a5 5 0 0 0 5-5 5 5 0 0 0-5-5" fill="#44D860"/><path fill="#3ECC5F" d="M83 123h40v-20H83z"/><path d="M133 115.5a2.5 2.5 0 1 0 0-5c-.109 0-.214.019-.319.032-.02-.075-.037-.15-.058-.225a2.501 2.501 0 0 0-.963-4.807c-.569 0-1.088.197-1.508.518a6.653 6.653 0 0 0-.168-.168c.314-.417.506-.931.506-1.494a2.5 2.5 0 0 0-4.8-.979A9.987 9.987 0 0 0 123 103c-5.522 0-10 4.478-10 10s4.478 10 10 10c.934 0 1.833-.138 2.69-.377a2.5 2.5 0 0 0 4.8-.979c0-.563-.192-1.077-.506-1.494.057-.055.113-.111.168-.168.42.321.939.518 1.508.518a2.5 2.5 0 0 0 .963-4.807c.021-.074.038-.15.058-.225.105.013.21.032.319.032" fill="#44D860"/><path d="M143 41.75c-.16 0-.33-.02-.49-.05a2.52 2.52 0 0 1-.47-.14c-.15-.06-.29-.14-.431-.23-.13-.09-.259-.2-.38-.31-.109-.12-.219-.24-.309-.38s-.17-.28-.231-.43a2.619 2.619 0 0 1-.189-.96c0-.16.02-.33.05-.49.03-.16.08-.31.139-.47.061-.15.141-.29.231-.43.09-.13.2-.26.309-.38.121-.11.25-.22.38-.31.141-.09.281-.17.431-.23.149-.06.31-.11.47-.14.32-.07.65-.07.98 0 .159.03.32.08.47.14.149.06.29.14.43.23.13.09.259.2.38.31.11.12.22.25.31.38.09.14.17.28.23.43.06.16.11.31.14.47.029.16.05.33.05.49 0 .66-.271 1.31-.73 1.77-.121.11-.25.22-.38.31-.14.09-.281.17-.43.23a2.565 2.565 0 0 1-.96.19m20-1.25c-.66 0-1.3-.27-1.771-.73a3.802 3.802 0 0 1-.309-.38c-.09-.14-.17-.28-.231-.43a2.619 2.619 0 0 1-.189-.96c0-.66.27-1.3.729-1.77.121-.11.25-.22.38-.31.141-.09.281-.17.431-.23.149-.06.31-.11.47-.14.32-.07.66-.07.98 0 .159.03.32.08.47.14.149.06.29.14.43.23.13.09.259.2.38.31.459.47.73 1.11.73 1.77 0 .16-.021.33-.05.49-.03.16-.08.32-.14.47-.07.15-.14.29-.23.43-.09.13-.2.26-.31.38-.121.11-.25.22-.38.31-.14.09-.281.17-.43.23a2.565 2.565 0 0 1-.96.19" fill="#000"/></g></svg>
\ No newline at end of file
diff --git a/static/img/monitor.svg b/static/img/monitor.svg
new file mode 100644
index 0000000..56c0216
--- /dev/null
+++ b/static/img/monitor.svg
@@ -0,0 +1 @@
+<svg enable-background="new 0 0 32 32" height="25" viewBox="0 0 32 32" width="32" xmlns="http://www.w3.org/2000/svg"><g fill="#ffd561"><path d="m12 8h1v14h-1z"/><path d="m10 8h1v14h-1z"/><path d="m14 8v14h14v-12s-1-2-2-2z"/></g><path d="m28.5 6h-25c-.83 0-1.5.67-1.5 1.5v16c0 .83.67 1.5 1.5 1.5h11.5v2h-5v1h12v-1h-5v-2h11.5c.83 0 1.5-.67 1.5-1.5v-16c0-.83-.67-1.5-1.5-1.5zm.5 15h-23v1h23v1.5c0 .28-.22.5-.5.5h-25c-.28 0-.5-.22-.5-.5v-16c0-.28.22-.5.5-.5h25c.28 0 .5.22.5.5z" fill="#2d2220"/><path d="m28 9.5v10h-1v-10c0-.4-.28-.49-.51-.5h-20.99c-.4 0-.49.27-.5.51v12.49h-1v-12.5c0-.6.4-1.5 1.5-1.5h21c.6 0 1.5.4 1.5 1.5z" fill="#2d2220"/><path d="m11 28h8v1h-8z" fill="#ffd561"/><path d="m20 28h3v1h-3z" fill="#ffd561"/></svg>
\ No newline at end of file
diff --git a/static/img/slack.svg b/static/img/slack.svg
new file mode 100644
index 0000000..9865de6
--- /dev/null
+++ b/static/img/slack.svg
@@ -0,0 +1,12 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<svg width="127px" height="127px" viewBox="0 0 127 127" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+ <title>Slack_icon_2019 (1)</title>
+ <g id="页面-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
+ <g id="Slack_icon_2019-(1)" transform="translate(0.700000, 0.600000)" fill-rule="nonzero">
+ <path d="M26.5,79.4 C26.5,86.7 20.6,92.6 13.3,92.6 C6,92.6 0.1,86.7 0.1,79.4 C0.1,72.1 6,66.2 13.3,66.2 L26.5,66.2 L26.5,79.4 Z M33.1,79.4 C33.1,72.1 39,66.2 46.3,66.2 C53.6,66.2 59.5,72.1 59.5,79.4 L59.5,112.4 C59.5,119.7 53.6,125.6 46.3,125.6 C39,125.6 33.1,119.7 33.1,112.4 L33.1,79.4 Z" id="形状" fill="#E01E5A"></path>
+ <path d="M46.3,26.4 C39,26.4 33.1,20.5 33.1,13.2 C33.1,5.9 39,0 46.3,0 C53.6,0 59.5,5.9 59.5,13.2 L59.5,26.4 L46.3,26.4 Z M46.3,33.1 C53.6,33.1 59.5,39 59.5,46.3 C59.5,53.6 53.6,59.5 46.3,59.5 L13.2,59.5 C5.9,59.5 -3.55271368e-15,53.6 -3.55271368e-15,46.3 C-3.55271368e-15,39 5.9,33.1 13.2,33.1 L46.3,33.1 Z" id="形状" fill="#36C5F0"></path>
+ <path d="M99.2,46.3 C99.2,39 105.1,33.1 112.4,33.1 C119.7,33.1 125.6,39 125.6,46.3 C125.6,53.6 119.7,59.5 112.4,59.5 L99.2,59.5 L99.2,46.3 Z M92.6,46.3 C92.6,53.6 86.7,59.5 79.4,59.5 C72.1,59.5 66.2,53.6 66.2,46.3 L66.2,13.2 C66.2,5.9 72.1,0 79.4,0 C86.7,0 92.6,5.9 92.6,13.2 L92.6,46.3 L92.6,46.3 Z" id="形状" fill="#2EB67D"></path>
+ <path d="M79.4,99.2 C86.7,99.2 92.6,105.1 92.6,112.4 C92.6,119.7 86.7,125.6 79.4,125.6 C72.1,125.6 66.2,119.7 66.2,112.4 L66.2,99.2 L79.4,99.2 Z M79.4,92.6 C72.1,92.6 66.2,86.7 66.2,79.4 C66.2,72.1 72.1,66.2 79.4,66.2 L112.5,66.2 C119.8,66.2 125.7,72.1 125.7,79.4 C125.7,86.7 119.8,92.6 112.5,92.6 L79.4,92.6 Z" id="形状" fill="#ECB22E"></path>
+ </g>
+ </g>
+</svg>
\ No newline at end of file
diff --git a/static/img/undraw_docusaurus_mountain.svg b/static/img/undraw_docusaurus_mountain.svg
new file mode 100644
index 0000000..af961c4
--- /dev/null
+++ b/static/img/undraw_docusaurus_mountain.svg
@@ -0,0 +1,171 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1088" height="687.962" viewBox="0 0 1088 687.962">
+ <title>Easy to Use</title>
+ <g id="Group_12" data-name="Group 12" transform="translate(-57 -56)">
+ <g id="Group_11" data-name="Group 11" transform="translate(57 56)">
+ <path id="Path_83" data-name="Path 83" d="M1017.81,560.461c-5.27,45.15-16.22,81.4-31.25,110.31-20,38.52-54.21,54.04-84.77,70.28a193.275,193.275,0,0,1-27.46,11.94c-55.61,19.3-117.85,14.18-166.74,3.99a657.282,657.282,0,0,0-104.09-13.16q-14.97-.675-29.97-.67c-15.42.02-293.07,5.29-360.67-131.57-16.69-33.76-28.13-75-32.24-125.27-11.63-142.12,52.29-235.46,134.74-296.47,155.97-115.41,369.76-110.57,523.43,7.88C941.15,276.621,1036.99,396.031,1017.81,560.461Z" transform="translate(-56 -106.019)" fill="#3f3d56"/>
+ <path id="Path_84" data-name="Path 84" d="M986.56,670.771c-20,38.52-47.21,64.04-77.77,80.28a193.272,193.272,0,0,1-27.46,11.94c-55.61,19.3-117.85,14.18-166.74,3.99a657.3,657.3,0,0,0-104.09-13.16q-14.97-.675-29.97-.67-23.13.03-46.25,1.72c-100.17,7.36-253.82-6.43-321.42-143.29L382,283.981,444.95,445.6l20.09,51.59,55.37-75.98L549,381.981l130.2,149.27,36.8-81.27L970.78,657.9l14.21,11.59Z" transform="translate(-56 -106.019)" fill="#f2f2f2"/>
+ <path id="Path_85" data-name="Path 85" d="M302,282.962l26-57,36,83-31-60Z" opacity="0.1"/>
+ <path id="Path_86" data-name="Path 86" d="M610.5,753.821q-14.97-.675-29.97-.67L465.04,497.191Z" transform="translate(-56 -106.019)" opacity="0.1"/>
+ <path id="Path_87" data-name="Path 87" d="M464.411,315.191,493,292.962l130,150-132-128Z" opacity="0.1"/>
+ <path id="Path_88" data-name="Path 88" d="M908.79,751.051a193.265,193.265,0,0,1-27.46,11.94L679.2,531.251Z" transform="translate(-56 -106.019)" opacity="0.1"/>
+ <circle id="Ellipse_11" data-name="Ellipse 11" cx="3" cy="3" r="3" transform="translate(479 98.962)" fill="#f2f2f2"/>
+ <circle id="Ellipse_12" data-name="Ellipse 12" cx="3" cy="3" r="3" transform="translate(396 201.962)" fill="#f2f2f2"/>
+ <circle id="Ellipse_13" data-name="Ellipse 13" cx="2" cy="2" r="2" transform="translate(600 220.962)" fill="#f2f2f2"/>
+ <circle id="Ellipse_14" data-name="Ellipse 14" cx="2" cy="2" r="2" transform="translate(180 265.962)" fill="#f2f2f2"/>
+ <circle id="Ellipse_15" data-name="Ellipse 15" cx="2" cy="2" r="2" transform="translate(612 96.962)" fill="#f2f2f2"/>
+ <circle id="Ellipse_16" data-name="Ellipse 16" cx="2" cy="2" r="2" transform="translate(736 192.962)" fill="#f2f2f2"/>
+ <circle id="Ellipse_17" data-name="Ellipse 17" cx="2" cy="2" r="2" transform="translate(858 344.962)" fill="#f2f2f2"/>
+ <path id="Path_89" data-name="Path 89" d="M306,121.222h-2.76v-2.76h-1.48v2.76H299V122.7h2.76v2.759h1.48V122.7H306Z" fill="#f2f2f2"/>
+ <path id="Path_90" data-name="Path 90" d="M848,424.222h-2.76v-2.76h-1.48v2.76H841V425.7h2.76v2.759h1.48V425.7H848Z" fill="#f2f2f2"/>
+ <path id="Path_91" data-name="Path 91" d="M1144,719.981c0,16.569-243.557,74-544,74s-544-57.431-544-74,243.557,14,544,14S1144,703.413,1144,719.981Z" transform="translate(-56 -106.019)" fill="#3f3d56"/>
+ <path id="Path_92" data-name="Path 92" d="M1144,719.981c0,16.569-243.557,74-544,74s-544-57.431-544-74,243.557,14,544,14S1144,703.413,1144,719.981Z" transform="translate(-56 -106.019)" opacity="0.1"/>
+ <ellipse id="Ellipse_18" data-name="Ellipse 18" cx="544" cy="30" rx="544" ry="30" transform="translate(0 583.962)" fill="#3f3d56"/>
+ <path id="Path_93" data-name="Path 93" d="M624,677.981c0,33.137-14.775,24-33,24s-33,9.137-33-24,33-96,33-96S624,644.844,624,677.981Z" transform="translate(-56 -106.019)" fill="#ff6584"/>
+ <path id="Path_94" data-name="Path 94" d="M606,690.66c0,15.062-6.716,10.909-15,10.909s-15,4.153-15-10.909,15-43.636,15-43.636S606,675.6,606,690.66Z" transform="translate(-56 -106.019)" opacity="0.1"/>
+ <rect id="Rectangle_97" data-name="Rectangle 97" width="92" height="18" rx="9" transform="translate(489 604.962)" fill="#2f2e41"/>
+ <rect id="Rectangle_98" data-name="Rectangle 98" width="92" height="18" rx="9" transform="translate(489 586.962)" fill="#2f2e41"/>
+ <path id="Path_95" data-name="Path 95" d="M193,596.547c0,55.343,34.719,100.126,77.626,100.126" transform="translate(-56 -106.019)" fill="#3f3d56"/>
+ <path id="Path_96" data-name="Path 96" d="M270.626,696.673c0-55.965,38.745-101.251,86.626-101.251" transform="translate(-56 -106.019)" fill="#6c63ff"/>
+ <path id="Path_97" data-name="Path 97" d="M221.125,601.564c0,52.57,22.14,95.109,49.5,95.109" transform="translate(-56 -106.019)" fill="#6c63ff"/>
+ <path id="Path_98" data-name="Path 98" d="M270.626,696.673c0-71.511,44.783-129.377,100.126-129.377" transform="translate(-56 -106.019)" fill="#3f3d56"/>
+ <path id="Path_99" data-name="Path 99" d="M254.3,697.379s11.009-.339,14.326-2.7,16.934-5.183,17.757-1.395,16.544,18.844,4.115,18.945-28.879-1.936-32.19-3.953S254.3,697.379,254.3,697.379Z" transform="translate(-56 -106.019)" fill="#a8a8a8"/>
+ <path id="Path_100" data-name="Path 100" d="M290.716,710.909c-12.429.1-28.879-1.936-32.19-3.953-2.522-1.536-3.527-7.048-3.863-9.591l-.368.014s.7,8.879,4.009,10.9,19.761,4.053,32.19,3.953c3.588-.029,4.827-1.305,4.759-3.2C294.755,710.174,293.386,710.887,290.716,710.909Z" transform="translate(-56 -106.019)" opacity="0.2"/>
+ <path id="Path_101" data-name="Path 101" d="M777.429,633.081c0,38.029,23.857,68.8,53.341,68.8" transform="translate(-56 -106.019)" fill="#3f3d56"/>
+ <path id="Path_102" data-name="Path 102" d="M830.769,701.882c0-38.456,26.623-69.575,59.525-69.575" transform="translate(-56 -106.019)" fill="#6c63ff"/>
+ <path id="Path_103" data-name="Path 103" d="M796.755,636.528c0,36.124,15.213,65.354,34.014,65.354" transform="translate(-56 -106.019)" fill="#6c63ff"/>
+ <path id="Path_104" data-name="Path 104" d="M830.769,701.882c0-49.139,30.773-88.9,68.8-88.9" transform="translate(-56 -106.019)" fill="#3f3d56"/>
+ <path id="Path_105" data-name="Path 105" d="M819.548,702.367s7.565-.233,9.844-1.856,11.636-3.562,12.2-.958,11.368,12.949,2.828,13.018-19.844-1.33-22.119-2.716S819.548,702.367,819.548,702.367Z" transform="translate(-56 -106.019)" fill="#a8a8a8"/>
+ <path id="Path_106" data-name="Path 106" d="M844.574,711.664c-8.54.069-19.844-1.33-22.119-2.716-1.733-1.056-2.423-4.843-2.654-6.59l-.253.01s.479,6.1,2.755,7.487,13.579,2.785,22.119,2.716c2.465-.02,3.317-.9,3.27-2.2C847.349,711.159,846.409,711.649,844.574,711.664Z" transform="translate(-56 -106.019)" opacity="0.2"/>
+ <path id="Path_107" data-name="Path 107" d="M949.813,724.718s11.36-1.729,14.5-4.591,16.89-7.488,18.217-3.667,19.494,17.447,6.633,19.107-30.153,1.609-33.835-.065S949.813,724.718,949.813,724.718Z" transform="translate(-56 -106.019)" fill="#a8a8a8"/>
+ <path id="Path_108" data-name="Path 108" d="M989.228,734.173c-12.86,1.659-30.153,1.609-33.835-.065-2.8-1.275-4.535-6.858-5.2-9.45l-.379.061s1.833,9.109,5.516,10.783,20.975,1.725,33.835.065c3.712-.479,4.836-1.956,4.529-3.906C993.319,732.907,991.991,733.817,989.228,734.173Z" transform="translate(-56 -106.019)" opacity="0.2"/>
+ <path id="Path_109" data-name="Path 109" d="M670.26,723.9s9.587-1.459,12.237-3.875,14.255-6.32,15.374-3.095,16.452,14.725,5.6,16.125-25.448,1.358-28.555-.055S670.26,723.9,670.26,723.9Z" transform="translate(-56 -106.019)" fill="#a8a8a8"/>
+ <path id="Path_110" data-name="Path 110" d="M703.524,731.875c-10.853,1.4-25.448,1.358-28.555-.055-2.367-1.076-3.827-5.788-4.39-7.976l-.32.051s1.547,7.687,4.655,9.1,17.7,1.456,28.555.055c3.133-.4,4.081-1.651,3.822-3.3C706.977,730.807,705.856,731.575,703.524,731.875Z" transform="translate(-56 -106.019)" opacity="0.2"/>
+ <path id="Path_111" data-name="Path 111" d="M178.389,719.109s7.463-1.136,9.527-3.016,11.1-4.92,11.969-2.409,12.808,11.463,4.358,12.553-19.811,1.057-22.23-.043S178.389,719.109,178.389,719.109Z" transform="translate(-56 -106.019)" fill="#a8a8a8"/>
+ <path id="Path_112" data-name="Path 112" d="M204.285,725.321c-8.449,1.09-19.811,1.057-22.23-.043-1.842-.838-2.979-4.506-3.417-6.209l-.249.04s1.2,5.984,3.624,7.085,13.781,1.133,22.23.043c2.439-.315,3.177-1.285,2.976-2.566C206.973,724.489,206.1,725.087,204.285,725.321Z" transform="translate(-56 -106.019)" opacity="0.2"/>
+ <path id="Path_113" data-name="Path 113" d="M439.7,707.337c0,30.22-42.124,20.873-93.7,20.873s-93.074,9.347-93.074-20.873,42.118-36.793,93.694-36.793S439.7,677.117,439.7,707.337Z" transform="translate(-56 -106.019)" opacity="0.1"/>
+ <path id="Path_114" data-name="Path 114" d="M439.7,699.9c0,30.22-42.124,20.873-93.7,20.873s-93.074,9.347-93.074-20.873S295.04,663.1,346.616,663.1,439.7,669.676,439.7,699.9Z" transform="translate(-56 -106.019)" fill="#3f3d56"/>
+ </g>
+ <g id="docusaurus_keytar" transform="translate(312.271 493.733)">
+ <path id="Path_40" data-name="Path 40" d="M99,52h91.791V89.153H99Z" transform="translate(5.904 -14.001)" fill="#fff" fill-rule="evenodd"/>
+ <path id="Path_41" data-name="Path 41" d="M24.855,163.927A21.828,21.828,0,0,1,5.947,153a21.829,21.829,0,0,0,18.908,32.782H46.71V163.927Z" transform="translate(-3 -4.634)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_42" data-name="Path 42" d="M121.861,61.1l76.514-4.782V45.39A21.854,21.854,0,0,0,176.52,23.535H78.173L75.441,18.8a3.154,3.154,0,0,0-5.464,0l-2.732,4.732L64.513,18.8a3.154,3.154,0,0,0-5.464,0l-2.732,4.732L53.586,18.8a3.154,3.154,0,0,0-5.464,0L45.39,23.535c-.024,0-.046,0-.071,0l-4.526-4.525a3.153,3.153,0,0,0-5.276,1.414l-1.5,5.577-5.674-1.521a3.154,3.154,0,0,0-3.863,3.864L26,34.023l-5.575,1.494a3.155,3.155,0,0,0-1.416,5.278l4.526,4.526c0,.023,0,.046,0,.07L18.8,48.122a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,59.05a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,69.977a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,80.9a3.154,3.154,0,0,0,0,5.464L23.535,89.1,18.8,91.832a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,102.76a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,113.687a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,124.615a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,135.542a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,146.469a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,157.4a3.154,3.154,0,0,0,0,5.464l4.732,2.732L18.8,168.324a3.154,3.154,0,0,0,0,5.464l4.732,2.732A21.854,21.854,0,0,0,45.39,198.375H176.52a21.854,21.854,0,0,0,21.855-21.855V89.1l-76.514-4.782a11.632,11.632,0,0,1,0-23.219" transform="translate(-1.681 -17.226)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_43" data-name="Path 43" d="M143,186.71h32.782V143H143Z" transform="translate(9.984 -5.561)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_44" data-name="Path 44" d="M196.71,159.855a5.438,5.438,0,0,0-.7.07c-.042-.164-.081-.329-.127-.493a5.457,5.457,0,1,0-5.4-9.372q-.181-.185-.366-.367a5.454,5.454,0,1,0-9.384-5.4c-.162-.046-.325-.084-.486-.126a5.467,5.467,0,1,0-10.788,0c-.162.042-.325.08-.486.126a5.457,5.457,0,1,0-9.384,5.4,21.843,21.843,0,1,0,36.421,21.02,5.452,5.452,0,1,0,.7-10.858" transform="translate(10.912 -6.025)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_45" data-name="Path 45" d="M153,124.855h32.782V103H153Z" transform="translate(10.912 -9.271)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_46" data-name="Path 46" d="M194.855,116.765a2.732,2.732,0,1,0,0-5.464,2.811,2.811,0,0,0-.349.035c-.022-.082-.04-.164-.063-.246a2.733,2.733,0,0,0-1.052-5.253,2.7,2.7,0,0,0-1.648.566q-.09-.093-.184-.184a2.7,2.7,0,0,0,.553-1.633,2.732,2.732,0,0,0-5.245-1.07,10.928,10.928,0,1,0,0,21.031,2.732,2.732,0,0,0,5.245-1.07,2.7,2.7,0,0,0-.553-1.633q.093-.09.184-.184a2.7,2.7,0,0,0,1.648.566,2.732,2.732,0,0,0,1.052-5.253c.023-.081.042-.164.063-.246a2.814,2.814,0,0,0,.349.035" transform="translate(12.767 -9.377)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_47" data-name="Path 47" d="M65.087,56.891a2.732,2.732,0,0,1-2.732-2.732,8.2,8.2,0,0,0-16.391,0,2.732,2.732,0,0,1-5.464,0,13.659,13.659,0,0,1,27.319,0,2.732,2.732,0,0,1-2.732,2.732" transform="translate(0.478 -15.068)" fill-rule="evenodd"/>
+ <path id="Path_48" data-name="Path 48" d="M103,191.347h65.565a21.854,21.854,0,0,0,21.855-21.855V93H124.855A21.854,21.854,0,0,0,103,114.855Z" transform="translate(6.275 -10.199)" fill="#ffff50" fill-rule="evenodd"/>
+ <path id="Path_49" data-name="Path 49" d="M173.216,129.787H118.535a1.093,1.093,0,1,1,0-2.185h54.681a1.093,1.093,0,0,1,0,2.185m0,21.855H118.535a1.093,1.093,0,1,1,0-2.186h54.681a1.093,1.093,0,0,1,0,2.186m0,21.855H118.535a1.093,1.093,0,1,1,0-2.185h54.681a1.093,1.093,0,0,1,0,2.185m0-54.434H118.535a1.093,1.093,0,1,1,0-2.185h54.681a1.093,1.093,0,0,1,0,2.185m0,21.652H118.535a1.093,1.093,0,1,1,0-2.186h54.681a1.093,1.093,0,0,1,0,2.186m0,21.855H118.535a1.093,1.093,0,1,1,0-2.186h54.681a1.093,1.093,0,0,1,0,2.186M189.585,61.611c-.013,0-.024-.007-.037-.005-3.377.115-4.974,3.492-6.384,6.472-1.471,3.114-2.608,5.139-4.473,5.078-2.064-.074-3.244-2.406-4.494-4.874-1.436-2.835-3.075-6.049-6.516-5.929-3.329.114-4.932,3.053-6.346,5.646-1.5,2.762-2.529,4.442-4.5,4.364-2.106-.076-3.225-1.972-4.52-4.167-1.444-2.443-3.112-5.191-6.487-5.1-3.272.113-4.879,2.606-6.3,4.808-1.5,2.328-2.552,3.746-4.551,3.662-2.156-.076-3.27-1.65-4.558-3.472-1.447-2.047-3.077-4.363-6.442-4.251-3.2.109-4.807,2.153-6.224,3.954-1.346,1.709-2.4,3.062-4.621,2.977a1.093,1.093,0,0,0-.079,2.186c3.3.11,4.967-1.967,6.417-3.81,1.286-1.635,2.4-3.045,4.582-3.12,2.1-.09,3.091,1.218,4.584,3.327,1.417,2,3.026,4.277,6.263,4.394,3.391.114,5.022-2.42,6.467-4.663,1.292-2,2.406-3.734,4.535-3.807,1.959-.073,3.026,1.475,4.529,4.022,1.417,2.4,3.023,5.121,6.324,5.241,3.415.118,5.064-2.863,6.5-5.5,1.245-2.282,2.419-4.437,4.5-4.509,1.959-.046,2.981,1.743,4.492,4.732,1.412,2.79,3.013,5.95,6.365,6.071l.185,0c3.348,0,4.937-3.36,6.343-6.331,1.245-2.634,2.423-5.114,4.444-5.216Z" transform="translate(7.109 -13.11)" fill-rule="evenodd"/>
+ <path id="Path_50" data-name="Path 50" d="M83,186.71h43.71V143H83Z" transform="translate(4.42 -5.561)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <g id="Group_8" data-name="Group 8" transform="matrix(0.966, -0.259, 0.259, 0.966, 109.327, 91.085)">
+ <rect id="Rectangle_3" data-name="Rectangle 3" width="92.361" height="36.462" rx="2" transform="translate(0 0)" fill="#d8d8d8"/>
+ <g id="Group_2" data-name="Group 2" transform="translate(1.531 23.03)">
+ <rect id="Rectangle_4" data-name="Rectangle 4" width="5.336" height="5.336" rx="1" transform="translate(16.797 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_5" data-name="Rectangle 5" width="5.336" height="5.336" rx="1" transform="translate(23.12 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_6" data-name="Rectangle 6" width="5.336" height="5.336" rx="1" transform="translate(29.444 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_7" data-name="Rectangle 7" width="5.336" height="5.336" rx="1" transform="translate(35.768 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_8" data-name="Rectangle 8" width="5.336" height="5.336" rx="1" transform="translate(42.091 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_9" data-name="Rectangle 9" width="5.336" height="5.336" rx="1" transform="translate(48.415 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_10" data-name="Rectangle 10" width="5.336" height="5.336" rx="1" transform="translate(54.739 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_11" data-name="Rectangle 11" width="5.336" height="5.336" rx="1" transform="translate(61.063 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_12" data-name="Rectangle 12" width="5.336" height="5.336" rx="1" transform="translate(67.386 0)" fill="#4a4a4a"/>
+ <path id="Path_51" data-name="Path 51" d="M1.093,0H14.518a1.093,1.093,0,0,1,1.093,1.093V4.243a1.093,1.093,0,0,1-1.093,1.093H1.093A1.093,1.093,0,0,1,0,4.243V1.093A1.093,1.093,0,0,1,1.093,0ZM75,0H88.426a1.093,1.093,0,0,1,1.093,1.093V4.243a1.093,1.093,0,0,1-1.093,1.093H75a1.093,1.093,0,0,1-1.093-1.093V1.093A1.093,1.093,0,0,1,75,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ </g>
+ <g id="Group_3" data-name="Group 3" transform="translate(1.531 10.261)">
+ <path id="Path_52" data-name="Path 52" d="M1.093,0H6.218A1.093,1.093,0,0,1,7.31,1.093V4.242A1.093,1.093,0,0,1,6.218,5.335H1.093A1.093,1.093,0,0,1,0,4.242V1.093A1.093,1.093,0,0,1,1.093,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <rect id="Rectangle_13" data-name="Rectangle 13" width="5.336" height="5.336" rx="1" transform="translate(8.299 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_14" data-name="Rectangle 14" width="5.336" height="5.336" rx="1" transform="translate(14.623 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_15" data-name="Rectangle 15" width="5.336" height="5.336" rx="1" transform="translate(20.947 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_16" data-name="Rectangle 16" width="5.336" height="5.336" rx="1" transform="translate(27.271 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_17" data-name="Rectangle 17" width="5.336" height="5.336" rx="1" transform="translate(33.594 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_18" data-name="Rectangle 18" width="5.336" height="5.336" rx="1" transform="translate(39.918 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_19" data-name="Rectangle 19" width="5.336" height="5.336" rx="1" transform="translate(46.242 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_20" data-name="Rectangle 20" width="5.336" height="5.336" rx="1" transform="translate(52.565 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_21" data-name="Rectangle 21" width="5.336" height="5.336" rx="1" transform="translate(58.888 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_22" data-name="Rectangle 22" width="5.336" height="5.336" rx="1" transform="translate(65.212 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_23" data-name="Rectangle 23" width="5.336" height="5.336" rx="1" transform="translate(71.536 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_24" data-name="Rectangle 24" width="5.336" height="5.336" rx="1" transform="translate(77.859 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_25" data-name="Rectangle 25" width="5.336" height="5.336" rx="1" transform="translate(84.183 0)" fill="#4a4a4a"/>
+ </g>
+ <g id="Group_4" data-name="Group 4" transform="translate(91.05 9.546) rotate(180)">
+ <path id="Path_53" data-name="Path 53" d="M1.093,0H6.219A1.093,1.093,0,0,1,7.312,1.093v3.15A1.093,1.093,0,0,1,6.219,5.336H1.093A1.093,1.093,0,0,1,0,4.243V1.093A1.093,1.093,0,0,1,1.093,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <rect id="Rectangle_26" data-name="Rectangle 26" width="5.336" height="5.336" rx="1" transform="translate(8.299 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_27" data-name="Rectangle 27" width="5.336" height="5.336" rx="1" transform="translate(14.623 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_28" data-name="Rectangle 28" width="5.336" height="5.336" rx="1" transform="translate(20.947 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_29" data-name="Rectangle 29" width="5.336" height="5.336" rx="1" transform="translate(27.271 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_30" data-name="Rectangle 30" width="5.336" height="5.336" rx="1" transform="translate(33.594 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_31" data-name="Rectangle 31" width="5.336" height="5.336" rx="1" transform="translate(39.918 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_32" data-name="Rectangle 32" width="5.336" height="5.336" rx="1" transform="translate(46.242 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_33" data-name="Rectangle 33" width="5.336" height="5.336" rx="1" transform="translate(52.565 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_34" data-name="Rectangle 34" width="5.336" height="5.336" rx="1" transform="translate(58.889 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_35" data-name="Rectangle 35" width="5.336" height="5.336" rx="1" transform="translate(65.213 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_36" data-name="Rectangle 36" width="5.336" height="5.336" rx="1" transform="translate(71.537 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_37" data-name="Rectangle 37" width="5.336" height="5.336" rx="1" transform="translate(77.86 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_38" data-name="Rectangle 38" width="5.336" height="5.336" rx="1" transform="translate(84.183 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_39" data-name="Rectangle 39" width="5.336" height="5.336" rx="1" transform="translate(8.299 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_40" data-name="Rectangle 40" width="5.336" height="5.336" rx="1" transform="translate(14.623 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_41" data-name="Rectangle 41" width="5.336" height="5.336" rx="1" transform="translate(20.947 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_42" data-name="Rectangle 42" width="5.336" height="5.336" rx="1" transform="translate(27.271 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_43" data-name="Rectangle 43" width="5.336" height="5.336" rx="1" transform="translate(33.594 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_44" data-name="Rectangle 44" width="5.336" height="5.336" rx="1" transform="translate(39.918 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_45" data-name="Rectangle 45" width="5.336" height="5.336" rx="1" transform="translate(46.242 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_46" data-name="Rectangle 46" width="5.336" height="5.336" rx="1" transform="translate(52.565 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_47" data-name="Rectangle 47" width="5.336" height="5.336" rx="1" transform="translate(58.889 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_48" data-name="Rectangle 48" width="5.336" height="5.336" rx="1" transform="translate(65.213 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_49" data-name="Rectangle 49" width="5.336" height="5.336" rx="1" transform="translate(71.537 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_50" data-name="Rectangle 50" width="5.336" height="5.336" rx="1" transform="translate(77.86 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_51" data-name="Rectangle 51" width="5.336" height="5.336" rx="1" transform="translate(84.183 0)" fill="#4a4a4a"/>
+ </g>
+ <g id="Group_6" data-name="Group 6" transform="translate(1.531 16.584)">
+ <path id="Path_54" data-name="Path 54" d="M1.093,0h7.3A1.093,1.093,0,0,1,9.485,1.093v3.15A1.093,1.093,0,0,1,8.392,5.336h-7.3A1.093,1.093,0,0,1,0,4.243V1.094A1.093,1.093,0,0,1,1.093,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <g id="Group_5" data-name="Group 5" transform="translate(10.671 0)">
+ <rect id="Rectangle_52" data-name="Rectangle 52" width="5.336" height="5.336" rx="1" fill="#4a4a4a"/>
+ <rect id="Rectangle_53" data-name="Rectangle 53" width="5.336" height="5.336" rx="1" transform="translate(6.324 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_54" data-name="Rectangle 54" width="5.336" height="5.336" rx="1" transform="translate(12.647 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_55" data-name="Rectangle 55" width="5.336" height="5.336" rx="1" transform="translate(18.971 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_56" data-name="Rectangle 56" width="5.336" height="5.336" rx="1" transform="translate(25.295 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_57" data-name="Rectangle 57" width="5.336" height="5.336" rx="1" transform="translate(31.619 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_58" data-name="Rectangle 58" width="5.336" height="5.336" rx="1" transform="translate(37.942 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_59" data-name="Rectangle 59" width="5.336" height="5.336" rx="1" transform="translate(44.265 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_60" data-name="Rectangle 60" width="5.336" height="5.336" rx="1" transform="translate(50.589 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_61" data-name="Rectangle 61" width="5.336" height="5.336" rx="1" transform="translate(56.912 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_62" data-name="Rectangle 62" width="5.336" height="5.336" rx="1" transform="translate(63.236 0)" fill="#4a4a4a"/>
+ </g>
+ <path id="Path_55" data-name="Path 55" d="M1.094,0H8A1.093,1.093,0,0,1,9.091,1.093v3.15A1.093,1.093,0,0,1,8,5.336H1.093A1.093,1.093,0,0,1,0,4.243V1.094A1.093,1.093,0,0,1,1.093,0Z" transform="translate(80.428 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ </g>
+ <g id="Group_7" data-name="Group 7" transform="translate(1.531 29.627)">
+ <rect id="Rectangle_63" data-name="Rectangle 63" width="5.336" height="5.336" rx="1" transform="translate(0 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_64" data-name="Rectangle 64" width="5.336" height="5.336" rx="1" transform="translate(6.324 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_65" data-name="Rectangle 65" width="5.336" height="5.336" rx="1" transform="translate(12.647 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_66" data-name="Rectangle 66" width="5.336" height="5.336" rx="1" transform="translate(18.971 0)" fill="#4a4a4a"/>
+ <path id="Path_56" data-name="Path 56" d="M1.093,0H31.515a1.093,1.093,0,0,1,1.093,1.093V4.244a1.093,1.093,0,0,1-1.093,1.093H1.093A1.093,1.093,0,0,1,0,4.244V1.093A1.093,1.093,0,0,1,1.093,0ZM34.687,0h3.942a1.093,1.093,0,0,1,1.093,1.093V4.244a1.093,1.093,0,0,1-1.093,1.093H34.687a1.093,1.093,0,0,1-1.093-1.093V1.093A1.093,1.093,0,0,1,34.687,0Z" transform="translate(25.294 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <rect id="Rectangle_67" data-name="Rectangle 67" width="5.336" height="5.336" rx="1" transform="translate(66.003 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_68" data-name="Rectangle 68" width="5.336" height="5.336" rx="1" transform="translate(72.327 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_69" data-name="Rectangle 69" width="5.336" height="5.336" rx="1" transform="translate(84.183 0)" fill="#4a4a4a"/>
+ <path id="Path_57" data-name="Path 57" d="M5.336,0V1.18A1.093,1.093,0,0,1,4.243,2.273H1.093A1.093,1.093,0,0,1,0,1.18V0Z" transform="translate(83.59 2.273) rotate(180)" fill="#4a4a4a"/>
+ <path id="Path_58" data-name="Path 58" d="M5.336,0V1.18A1.093,1.093,0,0,1,4.243,2.273H1.093A1.093,1.093,0,0,1,0,1.18V0Z" transform="translate(78.255 3.063)" fill="#4a4a4a"/>
+ </g>
+ <rect id="Rectangle_70" data-name="Rectangle 70" width="88.927" height="2.371" rx="1.085" transform="translate(1.925 1.17)" fill="#4a4a4a"/>
+ <rect id="Rectangle_71" data-name="Rectangle 71" width="4.986" height="1.581" rx="0.723" transform="translate(4.1 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_72" data-name="Rectangle 72" width="4.986" height="1.581" rx="0.723" transform="translate(10.923 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_73" data-name="Rectangle 73" width="4.986" height="1.581" rx="0.723" transform="translate(16.173 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_74" data-name="Rectangle 74" width="4.986" height="1.581" rx="0.723" transform="translate(21.421 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_75" data-name="Rectangle 75" width="4.986" height="1.581" rx="0.723" transform="translate(26.671 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_76" data-name="Rectangle 76" width="4.986" height="1.581" rx="0.723" transform="translate(33.232 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_77" data-name="Rectangle 77" width="4.986" height="1.581" rx="0.723" transform="translate(38.48 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_78" data-name="Rectangle 78" width="4.986" height="1.581" rx="0.723" transform="translate(43.73 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_79" data-name="Rectangle 79" width="4.986" height="1.581" rx="0.723" transform="translate(48.978 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_80" data-name="Rectangle 80" width="4.986" height="1.581" rx="0.723" transform="translate(55.54 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_81" data-name="Rectangle 81" width="4.986" height="1.581" rx="0.723" transform="translate(60.788 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_82" data-name="Rectangle 82" width="4.986" height="1.581" rx="0.723" transform="translate(66.038 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_83" data-name="Rectangle 83" width="4.986" height="1.581" rx="0.723" transform="translate(72.599 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_84" data-name="Rectangle 84" width="4.986" height="1.581" rx="0.723" transform="translate(77.847 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_85" data-name="Rectangle 85" width="4.986" height="1.581" rx="0.723" transform="translate(83.097 1.566)" fill="#d8d8d8" opacity="0.136"/>
+ </g>
+ <path id="Path_59" data-name="Path 59" d="M146.71,159.855a5.439,5.439,0,0,0-.7.07c-.042-.164-.081-.329-.127-.493a5.457,5.457,0,1,0-5.4-9.372q-.181-.185-.366-.367a5.454,5.454,0,1,0-9.384-5.4c-.162-.046-.325-.084-.486-.126a5.467,5.467,0,1,0-10.788,0c-.162.042-.325.08-.486.126a5.457,5.457,0,1,0-9.384,5.4,21.843,21.843,0,1,0,36.421,21.02,5.452,5.452,0,1,0,.7-10.858" transform="translate(6.275 -6.025)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_60" data-name="Path 60" d="M83,124.855h43.71V103H83Z" transform="translate(4.42 -9.271)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_61" data-name="Path 61" d="M134.855,116.765a2.732,2.732,0,1,0,0-5.464,2.811,2.811,0,0,0-.349.035c-.022-.082-.04-.164-.063-.246a2.733,2.733,0,0,0-1.052-5.253,2.7,2.7,0,0,0-1.648.566q-.09-.093-.184-.184a2.7,2.7,0,0,0,.553-1.633,2.732,2.732,0,0,0-5.245-1.07,10.928,10.928,0,1,0,0,21.031,2.732,2.732,0,0,0,5.245-1.07,2.7,2.7,0,0,0-.553-1.633q.093-.09.184-.184a2.7,2.7,0,0,0,1.648.566,2.732,2.732,0,0,0,1.052-5.253c.023-.081.042-.164.063-.246a2.811,2.811,0,0,0,.349.035" transform="translate(7.202 -9.377)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_62" data-name="Path 62" d="M143.232,42.33a2.967,2.967,0,0,1-.535-.055,2.754,2.754,0,0,1-.514-.153,2.838,2.838,0,0,1-.471-.251,4.139,4.139,0,0,1-.415-.339,3.2,3.2,0,0,1-.338-.415A2.7,2.7,0,0,1,140.5,39.6a2.968,2.968,0,0,1,.055-.535,3.152,3.152,0,0,1,.152-.514,2.874,2.874,0,0,1,.252-.47,2.633,2.633,0,0,1,.753-.754,2.837,2.837,0,0,1,.471-.251,2.753,2.753,0,0,1,.514-.153,2.527,2.527,0,0,1,1.071,0,2.654,2.654,0,0,1,.983.4,4.139,4.139,0,0,1,.415.339,4.019,4.019,0,0,1,.339.415,2.786,2.786,0,0,1,.251.47,2.864,2.864,0,0,1,.208,1.049,2.77,2.77,0,0,1-.8,1.934,4.139,4.139,0,0,1-.415.339,2.722,2.722,0,0,1-1.519.459m21.855-1.366a2.789,2.789,0,0,1-1.935-.8,4.162,4.162,0,0,1-.338-.415,2.7,2.7,0,0,1-.459-1.519,2.789,2.789,0,0,1,.8-1.934,4.139,4.139,0,0,1,.415-.339,2.838,2.838,0,0,1,.471-.251,2.752,2.752,0,0,1,.514-.153,2.527,2.527,0,0,1,1.071,0,2.654,2.654,0,0,1,.983.4,4.139,4.139,0,0,1,.415.339,2.79,2.79,0,0,1,.8,1.934,3.069,3.069,0,0,1-.055.535,2.779,2.779,0,0,1-.153.514,3.885,3.885,0,0,1-.251.47,4.02,4.02,0,0,1-.339.415,4.138,4.138,0,0,1-.415.339,2.722,2.722,0,0,1-1.519.459" transform="translate(9.753 -15.532)" fill-rule="evenodd"/>
+ </g>
+ </g>
+</svg>
diff --git a/static/img/undraw_docusaurus_react.svg b/static/img/undraw_docusaurus_react.svg
new file mode 100644
index 0000000..94b5cf0
--- /dev/null
+++ b/static/img/undraw_docusaurus_react.svg
@@ -0,0 +1,170 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1041.277" height="554.141" viewBox="0 0 1041.277 554.141">
+ <title>Powered by React</title>
+ <g id="Group_24" data-name="Group 24" transform="translate(-440 -263)">
+ <g id="Group_23" data-name="Group 23" transform="translate(439.989 262.965)">
+ <path id="Path_299" data-name="Path 299" d="M1040.82,611.12q-1.74,3.75-3.47,7.4-2.7,5.67-5.33,11.12c-.78,1.61-1.56,3.19-2.32,4.77-8.6,17.57-16.63,33.11-23.45,45.89A73.21,73.21,0,0,1,942.44,719l-151.65,1.65h-1.6l-13,.14-11.12.12-34.1.37h-1.38l-17.36.19h-.53l-107,1.16-95.51,1-11.11.12-69,.75H429l-44.75.48h-.48l-141.5,1.53-42.33.46a87.991,87.991,0,0,1-10.79-.54h0c-1.22-.14-2.44-.3-3.65-.49a87.38,87.38,0,0,1-51.29-27.54C116,678.37,102.75,655,93.85,629.64q-1.93-5.49-3.6-11.12C59.44,514.37,97,380,164.6,290.08q4.25-5.64,8.64-11l.07-.08c20.79-25.52,44.1-46.84,68.93-62,44-26.91,92.75-34.49,140.7-11.9,40.57,19.12,78.45,28.11,115.17,30.55,3.71.24,7.42.42,11.11.53,84.23,2.65,163.17-27.7,255.87-47.29,3.69-.78,7.39-1.55,11.12-2.28,66.13-13.16,139.49-20.1,226.73-5.51a189.089,189.089,0,0,1,26.76,6.4q5.77,1.86,11.12,4c41.64,16.94,64.35,48.24,74,87.46q1.37,5.46,2.37,11.11C1134.3,384.41,1084.19,518.23,1040.82,611.12Z" transform="translate(-79.34 -172.91)" fill="#f2f2f2"/>
+ <path id="Path_300" data-name="Path 300" d="M576.36,618.52a95.21,95.21,0,0,1-1.87,11.12h93.7V618.52Zm-78.25,62.81,11.11-.09V653.77c-3.81-.17-7.52-.34-11.11-.52ZM265.19,618.52v11.12h198.5V618.52ZM1114.87,279h-74V191.51q-5.35-2.17-11.12-4V279H776.21V186.58c-3.73.73-7.43,1.5-11.12,2.28V279H509.22V236.15c-3.69-.11-7.4-.29-11.11-.53V279H242.24V217c-24.83,15.16-48.14,36.48-68.93,62h-.07v.08q-4.4,5.4-8.64,11h8.64V618.52h-83q1.66,5.63,3.6,11.12h79.39v93.62a87,87,0,0,0,12.2,2.79c1.21.19,2.43.35,3.65.49h0a87.991,87.991,0,0,0,10.79.54l42.33-.46v-97H498.11v94.21l11.11-.12V629.64H765.09V721l11.12-.12V629.64H1029.7v4.77c.76-1.58,1.54-3.16,2.32-4.77q2.63-5.45,5.33-11.12,1.73-3.64,3.47-7.4v-321h76.42Q1116.23,284.43,1114.87,279ZM242.24,618.52V290.08H498.11V618.52Zm267,0V290.08H765.09V618.52Zm520.48,0H776.21V290.08H1029.7Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_301" data-name="Path 301" d="M863.09,533.65v13l-151.92,1.4-1.62.03-57.74.53-1.38.02-17.55.15h-.52l-106.98.99L349.77,551.4h-.15l-44.65.42-.48.01-198.4,1.82v-15l46.65-28,93.6-.78,2-.01.66-.01,2-.03,44.94-.37,2.01-.01.64-.01,2-.01L315,509.3l.38-.01,35.55-.3h.29l277.4-2.34,6.79-.05h.68l5.18-.05,37.65-.31,2-.03,1.85-.02h.96l11.71-.09,2.32-.03,3.11-.02,9.75-.09,15.47-.13,2-.02,3.48-.02h.65l74.71-.64Z" fill="#65617d"/>
+ <path id="Path_302" data-name="Path 302" d="M863.09,533.65v13l-151.92,1.4-1.62.03-57.74.53-1.38.02-17.55.15h-.52l-106.98.99L349.77,551.4h-.15l-44.65.42-.48.01-198.4,1.82v-15l46.65-28,93.6-.78,2-.01.66-.01,2-.03,44.94-.37,2.01-.01.64-.01,2-.01L315,509.3l.38-.01,35.55-.3h.29l277.4-2.34,6.79-.05h.68l5.18-.05,37.65-.31,2-.03,1.85-.02h.96l11.71-.09,2.32-.03,3.11-.02,9.75-.09,15.47-.13,2-.02,3.48-.02h.65l74.71-.64Z" opacity="0.2"/>
+ <path id="Path_303" data-name="Path 303" d="M375.44,656.57v24.49a6.13,6.13,0,0,1-3.5,5.54,6,6,0,0,1-2.5.6l-34.9.74a6,6,0,0,1-2.7-.57,6.12,6.12,0,0,1-3.57-5.57V656.57Z" transform="translate(-79.34 -172.91)" fill="#3f3d56"/>
+ <path id="Path_304" data-name="Path 304" d="M375.44,656.57v24.49a6.13,6.13,0,0,1-3.5,5.54,6,6,0,0,1-2.5.6l-34.9.74a6,6,0,0,1-2.7-.57,6.12,6.12,0,0,1-3.57-5.57V656.57Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_305" data-name="Path 305" d="M377.44,656.57v24.49a6.13,6.13,0,0,1-3.5,5.54,6,6,0,0,1-2.5.6l-34.9.74a6,6,0,0,1-2.7-.57,6.12,6.12,0,0,1-3.57-5.57V656.57Z" transform="translate(-79.34 -172.91)" fill="#3f3d56"/>
+ <rect id="Rectangle_137" data-name="Rectangle 137" width="47.17" height="31.5" transform="translate(680.92 483.65)" fill="#3f3d56"/>
+ <rect id="Rectangle_138" data-name="Rectangle 138" width="47.17" height="31.5" transform="translate(680.92 483.65)" opacity="0.1"/>
+ <rect id="Rectangle_139" data-name="Rectangle 139" width="47.17" height="31.5" transform="translate(678.92 483.65)" fill="#3f3d56"/>
+ <path id="Path_306" data-name="Path 306" d="M298.09,483.65v4.97l-47.17,1.26v-6.23Z" opacity="0.1"/>
+ <path id="Path_307" data-name="Path 307" d="M460.69,485.27v168.2a4,4,0,0,1-3.85,3.95l-191.65,5.1h-.05a4,4,0,0,1-3.95-3.95V485.27a4,4,0,0,1,3.95-3.95h191.6a4,4,0,0,1,3.95,3.95Z" transform="translate(-79.34 -172.91)" fill="#65617d"/>
+ <path id="Path_308" data-name="Path 308" d="M265.19,481.32v181.2h-.05a4,4,0,0,1-3.95-3.95V485.27a4,4,0,0,1,3.95-3.95Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_309" data-name="Path 309" d="M194.59,319.15h177.5V467.4l-177.5,4Z" fill="#39374d"/>
+ <path id="Path_310" data-name="Path 310" d="M726.09,483.65v6.41l-47.17-1.26v-5.15Z" opacity="0.1"/>
+ <path id="Path_311" data-name="Path 311" d="M867.69,485.27v173.3a4,4,0,0,1-4,3.95h0L672,657.42a4,4,0,0,1-3.85-3.95V485.27a4,4,0,0,1,3.95-3.95H863.7a4,4,0,0,1,3.99,3.95Z" transform="translate(-79.34 -172.91)" fill="#65617d"/>
+ <path id="Path_312" data-name="Path 312" d="M867.69,485.27v173.3a4,4,0,0,1-4,3.95h0V481.32h0a4,4,0,0,1,4,3.95Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_313" data-name="Path 313" d="M775.59,319.15H598.09V467.4l177.5,4Z" fill="#39374d"/>
+ <path id="Path_314" data-name="Path 314" d="M663.19,485.27v168.2a4,4,0,0,1-3.85,3.95l-191.65,5.1h0a4,4,0,0,1-4-3.95V485.27a4,4,0,0,1,3.95-3.95h191.6A4,4,0,0,1,663.19,485.27Z" transform="translate(-79.34 -172.91)" fill="#65617d"/>
+ <path id="Path_315" data-name="Path 315" d="M397.09,319.15h177.5V467.4l-177.5,4Z" fill="#4267b2"/>
+ <path id="Path_316" data-name="Path 316" d="M863.09,533.65v13l-151.92,1.4-1.62.03-57.74.53-1.38.02-17.55.15h-.52l-106.98.99L349.77,551.4h-.15l-44.65.42-.48.01-198.4,1.82v-15l202.51-1.33h.48l40.99-.28h.19l283.08-1.87h.29l.17-.01h.47l4.79-.03h1.46l74.49-.5,4.4-.02.98-.01Z" opacity="0.1"/>
+ <circle id="Ellipse_111" data-name="Ellipse 111" cx="51.33" cy="51.33" r="51.33" transform="translate(435.93 246.82)" fill="#fbbebe"/>
+ <path id="Path_317" data-name="Path 317" d="M617.94,550.07s-99.5,12-90,0c3.44-4.34,4.39-17.2,4.2-31.85-.06-4.45-.22-9.06-.45-13.65-1.1-22-3.75-43.5-3.75-43.5s87-41,77-8.5c-4,13.13-2.69,31.57.35,48.88.89,5.05,1.92,10,3,14.7a344.66,344.66,0,0,0,9.65,33.92Z" transform="translate(-79.34 -172.91)" fill="#fbbebe"/>
+ <path id="Path_318" data-name="Path 318" d="M585.47,546c11.51-2.13,23.7-6,34.53-1.54,2.85,1.17,5.47,2.88,8.39,3.86s6.12,1.22,9.16,1.91c10.68,2.42,19.34,10.55,24.9,20s8.44,20.14,11.26,30.72l6.9,25.83c6,22.45,12,45.09,13.39,68.3a2437.506,2437.506,0,0,1-250.84,1.43c5.44-10.34,11-21.31,10.54-33s-7.19-23.22-4.76-34.74c1.55-7.34,6.57-13.39,9.64-20.22,8.75-19.52,1.94-45.79,17.32-60.65,6.92-6.68,17-9.21,26.63-8.89,12.28.41,24.85,4.24,37,6.11C555.09,547.48,569.79,548.88,585.47,546Z" transform="translate(-79.34 -172.91)" fill="#ff6584"/>
+ <path id="Path_319" data-name="Path 319" d="M716.37,657.17l-.1,1.43v.1l-.17,2.3-1.33,18.51-1.61,22.3-.46,6.28-1,13.44v.17l-107,1-175.59,1.9v.84h-.14v-1.12l.45-14.36.86-28.06.74-23.79.07-2.37a10.53,10.53,0,0,1,11.42-10.17c4.72.4,10.85.89,18.18,1.41l3,.22c42.33,2.94,120.56,6.74,199.5,2,1.66-.09,3.33-.19,5-.31,12.24-.77,24.47-1.76,36.58-3a10.53,10.53,0,0,1,11.6,11.23Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_320" data-name="Path 320" d="M429.08,725.44v-.84l175.62-1.91,107-1h.3v-.17l1-13.44.43-6,1.64-22.61,1.29-17.9v-.44a10.617,10.617,0,0,0-.11-2.47.3.3,0,0,0,0-.1,10.391,10.391,0,0,0-2-4.64,10.54,10.54,0,0,0-9.42-4c-12.11,1.24-24.34,2.23-36.58,3-1.67.12-3.34.22-5,.31-78.94,4.69-157.17.89-199.5-2l-3-.22c-7.33-.52-13.46-1-18.18-1.41a10.54,10.54,0,0,0-11.24,8.53,11,11,0,0,0-.18,1.64l-.68,22.16L429.54,710l-.44,14.36v1.12Z" transform="translate(-79.34 -172.91)" fill="#3f3d56"/>
+ <path id="Path_321" data-name="Path 321" d="M716.67,664.18l-1.23,15.33-1.83,22.85-.46,5.72-1,12.81-.06.64v.17h0l-.15,1.48.11-1.48h-.29l-107,1-175.65,1.9v-.28l.49-14.36,1-28.06.64-18.65A6.36,6.36,0,0,1,434.3,658a6.25,6.25,0,0,1,3.78-.9c2.1.17,4.68.37,7.69.59,4.89.36,10.92.78,17.94,1.22,13,.82,29.31,1.7,48,2.42,52,2,122.2,2.67,188.88-3.17,3-.26,6.1-.55,9.13-.84a6.26,6.26,0,0,1,3.48.66,5.159,5.159,0,0,1,.86.54,6.14,6.14,0,0,1,2,2.46,3.564,3.564,0,0,1,.25.61A6.279,6.279,0,0,1,716.67,664.18Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_322" data-name="Path 322" d="M377.44,677.87v3.19a6.13,6.13,0,0,1-3.5,5.54l-40.1.77a6.12,6.12,0,0,1-3.57-5.57v-3Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_323" data-name="Path 323" d="M298.59,515.57l-52.25,1V507.9l52.25-1Z" fill="#3f3d56"/>
+ <path id="Path_324" data-name="Path 324" d="M298.59,515.57l-52.25,1V507.9l52.25-1Z" opacity="0.1"/>
+ <path id="Path_325" data-name="Path 325" d="M300.59,515.57l-52.25,1V507.9l52.25-1Z" fill="#3f3d56"/>
+ <path id="Path_326" data-name="Path 326" d="M758.56,679.87v3.19a6.13,6.13,0,0,0,3.5,5.54l40.1.77a6.12,6.12,0,0,0,3.57-5.57v-3Z" transform="translate(-79.34 -172.91)" opacity="0.1"/>
+ <path id="Path_327" data-name="Path 327" d="M678.72,517.57l52.25,1V509.9l-52.25-1Z" opacity="0.1"/>
+ <path id="Path_328" data-name="Path 328" d="M676.72,517.57l52.25,1V509.9l-52.25-1Z" fill="#3f3d56"/>
+ <path id="Path_329" data-name="Path 329" d="M534.13,486.79c.08,7-3.16,13.6-5.91,20.07a163.491,163.491,0,0,0-12.66,74.71c.73,11,2.58,22,.73,32.9s-8.43,21.77-19,24.9c17.53,10.45,41.26,9.35,57.76-2.66,8.79-6.4,15.34-15.33,21.75-24.11a97.86,97.86,0,0,1-13.31,44.75A103.43,103.43,0,0,0,637,616.53c4.31-5.81,8.06-12.19,9.72-19.23,3.09-13-1.22-26.51-4.51-39.5a266.055,266.055,0,0,1-6.17-33c-.43-3.56-.78-7.22.1-10.7,1-4.07,3.67-7.51,5.64-11.22,5.6-10.54,5.73-23.3,2.86-34.88s-8.49-22.26-14.06-32.81c-4.46-8.46-9.3-17.31-17.46-22.28-5.1-3.1-11-4.39-16.88-5.64l-25.37-5.43c-5.55-1.19-11.26-2.38-16.87-1.51-9.47,1.48-16.14,8.32-22,15.34-4.59,5.46-15.81,15.71-16.6,22.86-.72,6.59,5.1,17.63,6.09,24.58,1.3,9,2.22,6,7.3,11.52C532,478.05,534.07,482,534.13,486.79Z" transform="translate(-79.34 -172.91)" fill="#3f3d56"/>
+ </g>
+ <g id="docusaurus_keytar" transform="translate(670.271 615.768)">
+ <path id="Path_40" data-name="Path 40" d="M99,52h43.635V69.662H99Z" transform="translate(-49.132 -33.936)" fill="#fff" fill-rule="evenodd"/>
+ <path id="Path_41" data-name="Path 41" d="M13.389,158.195A10.377,10.377,0,0,1,4.4,153a10.377,10.377,0,0,0,8.988,15.584H23.779V158.195Z" transform="translate(-3 -82.47)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_42" data-name="Path 42" d="M66.967,38.083l36.373-2.273V30.615A10.389,10.389,0,0,0,92.95,20.226H46.2l-1.3-2.249a1.5,1.5,0,0,0-2.6,0L41,20.226l-1.3-2.249a1.5,1.5,0,0,0-2.6,0l-1.3,2.249-1.3-2.249a1.5,1.5,0,0,0-2.6,0l-1.3,2.249-.034,0-2.152-2.151a1.5,1.5,0,0,0-2.508.672L25.21,21.4l-2.7-.723a1.5,1.5,0,0,0-1.836,1.837l.722,2.7-2.65.71a1.5,1.5,0,0,0-.673,2.509l2.152,2.152c0,.011,0,.022,0,.033l-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6L20.226,41l-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3-2.249,1.3a1.5,1.5,0,0,0,0,2.6l2.249,1.3A10.389,10.389,0,0,0,30.615,103.34H92.95A10.389,10.389,0,0,0,103.34,92.95V51.393L66.967,49.12a5.53,5.53,0,0,1,0-11.038" transform="translate(-9.836 -17.226)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_43" data-name="Path 43" d="M143,163.779h15.584V143H143Z" transform="translate(-70.275 -77.665)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_44" data-name="Path 44" d="M173.779,148.389a2.582,2.582,0,0,0-.332.033c-.02-.078-.038-.156-.06-.234a2.594,2.594,0,1,0-2.567-4.455q-.086-.088-.174-.175a2.593,2.593,0,1,0-4.461-2.569c-.077-.022-.154-.04-.231-.06a2.6,2.6,0,1,0-5.128,0c-.077.02-.154.038-.231.06a2.594,2.594,0,1,0-4.461,2.569,10.384,10.384,0,1,0,17.314,9.992,2.592,2.592,0,1,0,.332-5.161" transform="translate(-75.08 -75.262)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_45" data-name="Path 45" d="M153,113.389h15.584V103H153Z" transform="translate(-75.08 -58.444)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_46" data-name="Path 46" d="M183.389,108.944a1.3,1.3,0,1,0,0-2.6,1.336,1.336,0,0,0-.166.017c-.01-.039-.019-.078-.03-.117a1.3,1.3,0,0,0-.5-2.5,1.285,1.285,0,0,0-.783.269q-.043-.044-.087-.087a1.285,1.285,0,0,0,.263-.776,1.3,1.3,0,0,0-2.493-.509,5.195,5.195,0,1,0,0,10,1.3,1.3,0,0,0,2.493-.509,1.285,1.285,0,0,0-.263-.776q.044-.043.087-.087a1.285,1.285,0,0,0,.783.269,1.3,1.3,0,0,0,.5-2.5c.011-.038.02-.078.03-.117a1.337,1.337,0,0,0,.166.017" transform="translate(-84.691 -57.894)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_47" data-name="Path 47" d="M52.188,48.292a1.3,1.3,0,0,1-1.3-1.3,3.9,3.9,0,0,0-7.792,0,1.3,1.3,0,1,1-2.6,0,6.493,6.493,0,0,1,12.987,0,1.3,1.3,0,0,1-1.3,1.3" transform="translate(-21.02 -28.41)" fill-rule="evenodd"/>
+ <path id="Path_48" data-name="Path 48" d="M103,139.752h31.168a10.389,10.389,0,0,0,10.389-10.389V93H113.389A10.389,10.389,0,0,0,103,103.389Z" transform="translate(-51.054 -53.638)" fill="#ffff50" fill-rule="evenodd"/>
+ <path id="Path_49" data-name="Path 49" d="M141.1,94.017H115.106a.519.519,0,1,1,0-1.039H141.1a.519.519,0,0,1,0,1.039m0,10.389H115.106a.519.519,0,1,1,0-1.039H141.1a.519.519,0,0,1,0,1.039m0,10.389H115.106a.519.519,0,1,1,0-1.039H141.1a.519.519,0,0,1,0,1.039m0-25.877H115.106a.519.519,0,1,1,0-1.039H141.1a.519.519,0,0,1,0,1.039m0,10.293H115.106a.519.519,0,1,1,0-1.039H141.1a.519.519,0,0,1,0,1.039m0,10.389H115.106a.519.519,0,1,1,0-1.039H141.1a.519.519,0,0,1,0,1.039m7.782-47.993c-.006,0-.011,0-.018,0-1.605.055-2.365,1.66-3.035,3.077-.7,1.48-1.24,2.443-2.126,2.414-.981-.035-1.542-1.144-2.137-2.317-.683-1.347-1.462-2.876-3.1-2.819-1.582.054-2.344,1.451-3.017,2.684-.715,1.313-1.2,2.112-2.141,2.075-1-.036-1.533-.938-2.149-1.981-.686-1.162-1.479-2.467-3.084-2.423-1.555.053-2.319,1.239-2.994,2.286-.713,1.106-1.213,1.781-2.164,1.741-1.025-.036-1.554-.784-2.167-1.65-.688-.973-1.463-2.074-3.062-2.021a3.815,3.815,0,0,0-2.959,1.879c-.64.812-1.14,1.456-2.2,1.415a.52.52,0,0,0-.037,1.039,3.588,3.588,0,0,0,3.05-1.811c.611-.777,1.139-1.448,2.178-1.483,1-.043,1.47.579,2.179,1.582.674.953,1.438,2.033,2.977,2.089,1.612.054,2.387-1.151,3.074-2.217.614-.953,1.144-1.775,2.156-1.81.931-.035,1.438.7,2.153,1.912.674,1.141,1.437,2.434,3.006,2.491,1.623.056,2.407-1.361,3.09-2.616.592-1.085,1.15-2.109,2.14-2.143.931-.022,1.417.829,2.135,2.249.671,1.326,1.432,2.828,3.026,2.886l.088,0c1.592,0,2.347-1.6,3.015-3.01.592-1.252,1.152-2.431,2.113-2.479Z" transform="translate(-55.378 -38.552)" fill-rule="evenodd"/>
+ <path id="Path_50" data-name="Path 50" d="M83,163.779h20.779V143H83Z" transform="translate(-41.443 -77.665)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <g id="Group_8" data-name="Group 8" transform="matrix(0.966, -0.259, 0.259, 0.966, 51.971, 43.3)">
+ <rect id="Rectangle_3" data-name="Rectangle 3" width="43.906" height="17.333" rx="2" transform="translate(0 0)" fill="#d8d8d8"/>
+ <g id="Group_2" data-name="Group 2" transform="translate(0.728 10.948)">
+ <rect id="Rectangle_4" data-name="Rectangle 4" width="2.537" height="2.537" rx="1" transform="translate(7.985 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_5" data-name="Rectangle 5" width="2.537" height="2.537" rx="1" transform="translate(10.991 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_6" data-name="Rectangle 6" width="2.537" height="2.537" rx="1" transform="translate(13.997 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_7" data-name="Rectangle 7" width="2.537" height="2.537" rx="1" transform="translate(17.003 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_8" data-name="Rectangle 8" width="2.537" height="2.537" rx="1" transform="translate(20.009 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_9" data-name="Rectangle 9" width="2.537" height="2.537" rx="1" transform="translate(23.015 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_10" data-name="Rectangle 10" width="2.537" height="2.537" rx="1" transform="translate(26.021 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_11" data-name="Rectangle 11" width="2.537" height="2.537" rx="1" transform="translate(29.028 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_12" data-name="Rectangle 12" width="2.537" height="2.537" rx="1" transform="translate(32.034 0)" fill="#4a4a4a"/>
+ <path id="Path_51" data-name="Path 51" d="M.519,0H6.9A.519.519,0,0,1,7.421.52v1.5a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,2.017V.519A.519.519,0,0,1,.519,0ZM35.653,0h6.383a.519.519,0,0,1,.519.519v1.5a.519.519,0,0,1-.519.519H35.652a.519.519,0,0,1-.519-.519V.519A.519.519,0,0,1,35.652,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ </g>
+ <g id="Group_3" data-name="Group 3" transform="translate(0.728 4.878)">
+ <path id="Path_52" data-name="Path 52" d="M.519,0H2.956a.519.519,0,0,1,.519.519v1.5a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,2.017V.519A.519.519,0,0,1,.519,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <rect id="Rectangle_13" data-name="Rectangle 13" width="2.537" height="2.537" rx="1" transform="translate(3.945 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_14" data-name="Rectangle 14" width="2.537" height="2.537" rx="1" transform="translate(6.951 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_15" data-name="Rectangle 15" width="2.537" height="2.537" rx="1" transform="translate(9.958 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_16" data-name="Rectangle 16" width="2.537" height="2.537" rx="1" transform="translate(12.964 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_17" data-name="Rectangle 17" width="2.537" height="2.537" rx="1" transform="translate(15.97 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_18" data-name="Rectangle 18" width="2.537" height="2.537" rx="1" transform="translate(18.976 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_19" data-name="Rectangle 19" width="2.537" height="2.537" rx="1" transform="translate(21.982 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_20" data-name="Rectangle 20" width="2.537" height="2.537" rx="1" transform="translate(24.988 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_21" data-name="Rectangle 21" width="2.537" height="2.537" rx="1" transform="translate(27.994 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_22" data-name="Rectangle 22" width="2.537" height="2.537" rx="1" transform="translate(31 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_23" data-name="Rectangle 23" width="2.537" height="2.537" rx="1" transform="translate(34.006 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_24" data-name="Rectangle 24" width="2.537" height="2.537" rx="1" transform="translate(37.012 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_25" data-name="Rectangle 25" width="2.537" height="2.537" rx="1" transform="translate(40.018 0)" fill="#4a4a4a"/>
+ </g>
+ <g id="Group_4" data-name="Group 4" transform="translate(43.283 4.538) rotate(180)">
+ <path id="Path_53" data-name="Path 53" d="M.519,0H2.956a.519.519,0,0,1,.519.519v1.5a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,2.017V.519A.519.519,0,0,1,.519,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <rect id="Rectangle_26" data-name="Rectangle 26" width="2.537" height="2.537" rx="1" transform="translate(3.945 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_27" data-name="Rectangle 27" width="2.537" height="2.537" rx="1" transform="translate(6.951 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_28" data-name="Rectangle 28" width="2.537" height="2.537" rx="1" transform="translate(9.958 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_29" data-name="Rectangle 29" width="2.537" height="2.537" rx="1" transform="translate(12.964 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_30" data-name="Rectangle 30" width="2.537" height="2.537" rx="1" transform="translate(15.97 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_31" data-name="Rectangle 31" width="2.537" height="2.537" rx="1" transform="translate(18.976 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_32" data-name="Rectangle 32" width="2.537" height="2.537" rx="1" transform="translate(21.982 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_33" data-name="Rectangle 33" width="2.537" height="2.537" rx="1" transform="translate(24.988 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_34" data-name="Rectangle 34" width="2.537" height="2.537" rx="1" transform="translate(27.994 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_35" data-name="Rectangle 35" width="2.537" height="2.537" rx="1" transform="translate(31.001 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_36" data-name="Rectangle 36" width="2.537" height="2.537" rx="1" transform="translate(34.007 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_37" data-name="Rectangle 37" width="2.537" height="2.537" rx="1" transform="translate(37.013 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_38" data-name="Rectangle 38" width="2.537" height="2.537" rx="1" transform="translate(40.018 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_39" data-name="Rectangle 39" width="2.537" height="2.537" rx="1" transform="translate(3.945 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_40" data-name="Rectangle 40" width="2.537" height="2.537" rx="1" transform="translate(6.951 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_41" data-name="Rectangle 41" width="2.537" height="2.537" rx="1" transform="translate(9.958 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_42" data-name="Rectangle 42" width="2.537" height="2.537" rx="1" transform="translate(12.964 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_43" data-name="Rectangle 43" width="2.537" height="2.537" rx="1" transform="translate(15.97 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_44" data-name="Rectangle 44" width="2.537" height="2.537" rx="1" transform="translate(18.976 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_45" data-name="Rectangle 45" width="2.537" height="2.537" rx="1" transform="translate(21.982 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_46" data-name="Rectangle 46" width="2.537" height="2.537" rx="1" transform="translate(24.988 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_47" data-name="Rectangle 47" width="2.537" height="2.537" rx="1" transform="translate(27.994 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_48" data-name="Rectangle 48" width="2.537" height="2.537" rx="1" transform="translate(31.001 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_49" data-name="Rectangle 49" width="2.537" height="2.537" rx="1" transform="translate(34.007 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_50" data-name="Rectangle 50" width="2.537" height="2.537" rx="1" transform="translate(37.013 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_51" data-name="Rectangle 51" width="2.537" height="2.537" rx="1" transform="translate(40.018 0)" fill="#4a4a4a"/>
+ </g>
+ <g id="Group_6" data-name="Group 6" transform="translate(0.728 7.883)">
+ <path id="Path_54" data-name="Path 54" d="M.519,0h3.47a.519.519,0,0,1,.519.519v1.5a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,2.017V.52A.519.519,0,0,1,.519,0Z" transform="translate(0 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <g id="Group_5" data-name="Group 5" transform="translate(5.073 0)">
+ <rect id="Rectangle_52" data-name="Rectangle 52" width="2.537" height="2.537" rx="1" transform="translate(0 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_53" data-name="Rectangle 53" width="2.537" height="2.537" rx="1" transform="translate(3.006 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_54" data-name="Rectangle 54" width="2.537" height="2.537" rx="1" transform="translate(6.012 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_55" data-name="Rectangle 55" width="2.537" height="2.537" rx="1" transform="translate(9.018 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_56" data-name="Rectangle 56" width="2.537" height="2.537" rx="1" transform="translate(12.025 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_57" data-name="Rectangle 57" width="2.537" height="2.537" rx="1" transform="translate(15.031 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_58" data-name="Rectangle 58" width="2.537" height="2.537" rx="1" transform="translate(18.037 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_59" data-name="Rectangle 59" width="2.537" height="2.537" rx="1" transform="translate(21.042 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_60" data-name="Rectangle 60" width="2.537" height="2.537" rx="1" transform="translate(24.049 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_61" data-name="Rectangle 61" width="2.537" height="2.537" rx="1" transform="translate(27.055 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_62" data-name="Rectangle 62" width="2.537" height="2.537" rx="1" transform="translate(30.061 0)" fill="#4a4a4a"/>
+ </g>
+ <path id="Path_55" data-name="Path 55" d="M.52,0H3.8a.519.519,0,0,1,.519.519v1.5a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,2.017V.52A.519.519,0,0,1,.519,0Z" transform="translate(38.234 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ </g>
+ <g id="Group_7" data-name="Group 7" transform="translate(0.728 14.084)">
+ <rect id="Rectangle_63" data-name="Rectangle 63" width="2.537" height="2.537" rx="1" transform="translate(0 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_64" data-name="Rectangle 64" width="2.537" height="2.537" rx="1" transform="translate(3.006 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_65" data-name="Rectangle 65" width="2.537" height="2.537" rx="1" transform="translate(6.012 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_66" data-name="Rectangle 66" width="2.537" height="2.537" rx="1" transform="translate(9.018 0)" fill="#4a4a4a"/>
+ <path id="Path_56" data-name="Path 56" d="M.519,0H14.981A.519.519,0,0,1,15.5.519v1.5a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,2.018V.519A.519.519,0,0,1,.519,0Zm15.97,0h1.874a.519.519,0,0,1,.519.519v1.5a.519.519,0,0,1-.519.519H16.489a.519.519,0,0,1-.519-.519V.519A.519.519,0,0,1,16.489,0Z" transform="translate(12.024 0)" fill="#4a4a4a" fill-rule="evenodd"/>
+ <rect id="Rectangle_67" data-name="Rectangle 67" width="2.537" height="2.537" rx="1" transform="translate(31.376 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_68" data-name="Rectangle 68" width="2.537" height="2.537" rx="1" transform="translate(34.382 0)" fill="#4a4a4a"/>
+ <rect id="Rectangle_69" data-name="Rectangle 69" width="2.537" height="2.537" rx="1" transform="translate(40.018 0)" fill="#4a4a4a"/>
+ <path id="Path_57" data-name="Path 57" d="M2.537,0V.561a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,.561V0Z" transform="translate(39.736 1.08) rotate(180)" fill="#4a4a4a"/>
+ <path id="Path_58" data-name="Path 58" d="M2.537,0V.561a.519.519,0,0,1-.519.519H.519A.519.519,0,0,1,0,.561V0Z" transform="translate(37.2 1.456)" fill="#4a4a4a"/>
+ </g>
+ <rect id="Rectangle_70" data-name="Rectangle 70" width="42.273" height="1.127" rx="0.564" transform="translate(0.915 0.556)" fill="#4a4a4a"/>
+ <rect id="Rectangle_71" data-name="Rectangle 71" width="2.37" height="0.752" rx="0.376" transform="translate(1.949 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_72" data-name="Rectangle 72" width="2.37" height="0.752" rx="0.376" transform="translate(5.193 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_73" data-name="Rectangle 73" width="2.37" height="0.752" rx="0.376" transform="translate(7.688 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_74" data-name="Rectangle 74" width="2.37" height="0.752" rx="0.376" transform="translate(10.183 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_75" data-name="Rectangle 75" width="2.37" height="0.752" rx="0.376" transform="translate(12.679 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_76" data-name="Rectangle 76" width="2.37" height="0.752" rx="0.376" transform="translate(15.797 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_77" data-name="Rectangle 77" width="2.37" height="0.752" rx="0.376" transform="translate(18.292 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_78" data-name="Rectangle 78" width="2.37" height="0.752" rx="0.376" transform="translate(20.788 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_79" data-name="Rectangle 79" width="2.37" height="0.752" rx="0.376" transform="translate(23.283 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_80" data-name="Rectangle 80" width="2.37" height="0.752" rx="0.376" transform="translate(26.402 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_81" data-name="Rectangle 81" width="2.37" height="0.752" rx="0.376" transform="translate(28.897 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_82" data-name="Rectangle 82" width="2.37" height="0.752" rx="0.376" transform="translate(31.393 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_83" data-name="Rectangle 83" width="2.37" height="0.752" rx="0.376" transform="translate(34.512 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_84" data-name="Rectangle 84" width="2.37" height="0.752" rx="0.376" transform="translate(37.007 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ <rect id="Rectangle_85" data-name="Rectangle 85" width="2.37" height="0.752" rx="0.376" transform="translate(39.502 0.744)" fill="#d8d8d8" opacity="0.136"/>
+ </g>
+ <path id="Path_59" data-name="Path 59" d="M123.779,148.389a2.583,2.583,0,0,0-.332.033c-.02-.078-.038-.156-.06-.234a2.594,2.594,0,1,0-2.567-4.455q-.086-.088-.174-.175a2.593,2.593,0,1,0-4.461-2.569c-.077-.022-.154-.04-.231-.06a2.6,2.6,0,1,0-5.128,0c-.077.02-.154.038-.231.06a2.594,2.594,0,1,0-4.461,2.569,10.384,10.384,0,1,0,17.314,9.992,2.592,2.592,0,1,0,.332-5.161" transform="translate(-51.054 -75.262)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_60" data-name="Path 60" d="M83,113.389h20.779V103H83Z" transform="translate(-41.443 -58.444)" fill="#3ecc5f" fill-rule="evenodd"/>
+ <path id="Path_61" data-name="Path 61" d="M123.389,108.944a1.3,1.3,0,1,0,0-2.6,1.338,1.338,0,0,0-.166.017c-.01-.039-.019-.078-.03-.117a1.3,1.3,0,0,0-.5-2.5,1.285,1.285,0,0,0-.783.269q-.043-.044-.087-.087a1.285,1.285,0,0,0,.263-.776,1.3,1.3,0,0,0-2.493-.509,5.195,5.195,0,1,0,0,10,1.3,1.3,0,0,0,2.493-.509,1.285,1.285,0,0,0-.263-.776q.044-.043.087-.087a1.285,1.285,0,0,0,.783.269,1.3,1.3,0,0,0,.5-2.5c.011-.038.02-.078.03-.117a1.335,1.335,0,0,0,.166.017" transform="translate(-55.859 -57.894)" fill="#44d860" fill-rule="evenodd"/>
+ <path id="Path_62" data-name="Path 62" d="M141.8,38.745a1.41,1.41,0,0,1-.255-.026,1.309,1.309,0,0,1-.244-.073,1.349,1.349,0,0,1-.224-.119,1.967,1.967,0,0,1-.2-.161,1.52,1.52,0,0,1-.161-.2,1.282,1.282,0,0,1-.218-.722,1.41,1.41,0,0,1,.026-.255,1.5,1.5,0,0,1,.072-.244,1.364,1.364,0,0,1,.12-.223,1.252,1.252,0,0,1,.358-.358,1.349,1.349,0,0,1,.224-.119,1.309,1.309,0,0,1,.244-.073,1.2,1.2,0,0,1,.509,0,1.262,1.262,0,0,1,.468.192,1.968,1.968,0,0,1,.2.161,1.908,1.908,0,0,1,.161.2,1.322,1.322,0,0,1,.12.223,1.361,1.361,0,0,1,.1.5,1.317,1.317,0,0,1-.379.919,1.968,1.968,0,0,1-.2.161,1.346,1.346,0,0,1-.223.119,1.332,1.332,0,0,1-.5.1m10.389-.649a1.326,1.326,0,0,1-.92-.379,1.979,1.979,0,0,1-.161-.2,1.282,1.282,0,0,1-.218-.722,1.326,1.326,0,0,1,.379-.919,1.967,1.967,0,0,1,.2-.161,1.351,1.351,0,0,1,.224-.119,1.308,1.308,0,0,1,.244-.073,1.2,1.2,0,0,1,.509,0,1.262,1.262,0,0,1,.468.192,1.967,1.967,0,0,1,.2.161,1.326,1.326,0,0,1,.379.919,1.461,1.461,0,0,1-.026.255,1.323,1.323,0,0,1-.073.244,1.847,1.847,0,0,1-.119.223,1.911,1.911,0,0,1-.161.2,1.967,1.967,0,0,1-.2.161,1.294,1.294,0,0,1-.722.218" transform="translate(-69.074 -26.006)" fill-rule="evenodd"/>
+ </g>
+ <g id="React-icon" transform="translate(906.3 541.56)">
+ <path id="Path_330" data-name="Path 330" d="M263.668,117.179c0-5.827-7.3-11.35-18.487-14.775,2.582-11.4,1.434-20.477-3.622-23.382a7.861,7.861,0,0,0-4.016-1v4a4.152,4.152,0,0,1,2.044.466c2.439,1.4,3.5,6.724,2.672,13.574-.2,1.685-.52,3.461-.914,5.272a86.9,86.9,0,0,0-11.386-1.954,87.469,87.469,0,0,0-7.459-8.965c5.845-5.433,11.332-8.41,15.062-8.41V78h0c-4.931,0-11.386,3.514-17.913,9.611-6.527-6.061-12.982-9.539-17.913-9.539v4c3.712,0,9.216,2.959,15.062,8.356a84.687,84.687,0,0,0-7.405,8.947,83.732,83.732,0,0,0-11.4,1.972c-.412-1.793-.717-3.532-.932-5.2-.843-6.85.2-12.175,2.618-13.592a3.991,3.991,0,0,1,2.062-.466v-4h0a8,8,0,0,0-4.052,1c-5.039,2.9-6.168,11.96-3.568,23.328-11.153,3.443-18.415,8.947-18.415,14.757,0,5.828,7.3,11.35,18.487,14.775-2.582,11.4-1.434,20.477,3.622,23.382a7.882,7.882,0,0,0,4.034,1c4.931,0,11.386-3.514,17.913-9.611,6.527,6.061,12.982,9.539,17.913,9.539a8,8,0,0,0,4.052-1c5.039-2.9,6.168-11.96,3.568-23.328C256.406,128.511,263.668,122.988,263.668,117.179Zm-23.346-11.96c-.663,2.313-1.488,4.7-2.421,7.083-.735-1.434-1.506-2.869-2.349-4.3-.825-1.434-1.7-2.833-2.582-4.2C235.517,104.179,237.974,104.645,240.323,105.219Zm-8.212,19.1c-1.4,2.421-2.833,4.716-4.321,6.85-2.672.233-5.379.359-8.1.359-2.708,0-5.415-.126-8.069-.341q-2.232-3.2-4.339-6.814-2.044-3.523-3.73-7.136c1.112-2.4,2.367-4.805,3.712-7.154,1.4-2.421,2.833-4.716,4.321-6.85,2.672-.233,5.379-.359,8.1-.359,2.708,0,5.415.126,8.069.341q2.232,3.2,4.339,6.814,2.044,3.523,3.73,7.136C234.692,119.564,233.455,121.966,232.11,124.315Zm5.792-2.331c.968,2.4,1.793,4.805,2.474,7.136-2.349.574-4.823,1.058-7.387,1.434.879-1.381,1.757-2.8,2.582-4.25C236.4,124.871,237.167,123.419,237.9,121.984ZM219.72,141.116a73.921,73.921,0,0,1-4.985-5.738c1.614.072,3.263.126,4.931.126,1.685,0,3.353-.036,4.985-.126A69.993,69.993,0,0,1,219.72,141.116ZM206.38,130.555c-2.546-.377-5-.843-7.352-1.417.663-2.313,1.488-4.7,2.421-7.083.735,1.434,1.506,2.869,2.349,4.3S205.5,129.192,206.38,130.555ZM219.63,93.241a73.924,73.924,0,0,1,4.985,5.738c-1.614-.072-3.263-.126-4.931-.126-1.686,0-3.353.036-4.985.126A69.993,69.993,0,0,1,219.63,93.241ZM206.362,103.8c-.879,1.381-1.757,2.8-2.582,4.25-.825,1.434-1.6,2.869-2.331,4.3-.968-2.4-1.793-4.805-2.474-7.136C201.323,104.663,203.8,104.179,206.362,103.8Zm-16.227,22.449c-6.348-2.708-10.454-6.258-10.454-9.073s4.106-6.383,10.454-9.073c1.542-.663,3.228-1.255,4.967-1.811a86.122,86.122,0,0,0,4.034,10.92,84.9,84.9,0,0,0-3.981,10.866C193.38,127.525,191.694,126.915,190.134,126.252Zm9.647,25.623c-2.439-1.4-3.5-6.724-2.672-13.574.2-1.686.52-3.461.914-5.272a86.9,86.9,0,0,0,11.386,1.954,87.465,87.465,0,0,0,7.459,8.965c-5.845,5.433-11.332,8.41-15.062,8.41A4.279,4.279,0,0,1,199.781,151.875Zm42.532-13.663c.843,6.85-.2,12.175-2.618,13.592a3.99,3.99,0,0,1-2.062.466c-3.712,0-9.216-2.959-15.062-8.356a84.689,84.689,0,0,0,7.405-8.947,83.731,83.731,0,0,0,11.4-1.972A50.194,50.194,0,0,1,242.313,138.212Zm6.9-11.96c-1.542.663-3.228,1.255-4.967,1.811a86.12,86.12,0,0,0-4.034-10.92,84.9,84.9,0,0,0,3.981-10.866c1.775.556,3.461,1.165,5.039,1.829,6.348,2.708,10.454,6.258,10.454,9.073C259.67,119.994,255.564,123.562,249.216,126.252Z" fill="#61dafb"/>
+ <path id="Path_331" data-name="Path 331" d="M320.8,78.4Z" transform="translate(-119.082 -0.328)" fill="#61dafb"/>
+ <circle id="Ellipse_112" data-name="Ellipse 112" cx="8.194" cy="8.194" r="8.194" transform="translate(211.472 108.984)" fill="#61dafb"/>
+ <path id="Path_332" data-name="Path 332" d="M520.5,78.1Z" transform="translate(-282.975 -0.082)" fill="#61dafb"/>
+ </g>
+ </g>
+</svg>
diff --git a/static/img/undraw_docusaurus_tree.svg b/static/img/undraw_docusaurus_tree.svg
new file mode 100644
index 0000000..d9161d3
--- /dev/null
+++ b/static/img/undraw_docusaurus_tree.svg
@@ -0,0 +1,40 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="1129" height="663" viewBox="0 0 1129 663">
+ <title>Focus on What Matters</title>
+ <circle cx="321" cy="321" r="321" fill="#f2f2f2" />
+ <ellipse cx="559" cy="635.49998" rx="514" ry="27.50002" fill="#3f3d56" />
+ <ellipse cx="558" cy="627" rx="460" ry="22" opacity="0.2" />
+ <rect x="131" y="152.5" width="840" height="50" fill="#3f3d56" />
+ <path d="M166.5,727.3299A21.67009,21.67009,0,0,0,188.1701,749H984.8299A21.67009,21.67009,0,0,0,1006.5,727.3299V296h-840Z" transform="translate(-35.5 -118.5)" fill="#3f3d56" />
+ <path d="M984.8299,236H188.1701A21.67009,21.67009,0,0,0,166.5,257.6701V296h840V257.6701A21.67009,21.67009,0,0,0,984.8299,236Z" transform="translate(-35.5 -118.5)" fill="#3f3d56" />
+ <path d="M984.8299,236H188.1701A21.67009,21.67009,0,0,0,166.5,257.6701V296h840V257.6701A21.67009,21.67009,0,0,0,984.8299,236Z" transform="translate(-35.5 -118.5)" opacity="0.2" />
+ <circle cx="181" cy="147.5" r="13" fill="#3f3d56" />
+ <circle cx="217" cy="147.5" r="13" fill="#3f3d56" />
+ <circle cx="253" cy="147.5" r="13" fill="#3f3d56" />
+ <rect x="168" y="213.5" width="337" height="386" rx="5.33505" fill="#606060" />
+ <rect x="603" y="272.5" width="284" height="22" rx="5.47638" fill="#2e8555" />
+ <rect x="537" y="352.5" width="416" height="15" rx="5.47638" fill="#2e8555" />
+ <rect x="537" y="396.5" width="416" height="15" rx="5.47638" fill="#2e8555" />
+ <rect x="537" y="440.5" width="416" height="15" rx="5.47638" fill="#2e8555" />
+ <rect x="537" y="484.5" width="416" height="15" rx="5.47638" fill="#2e8555" />
+ <rect x="865" y="552.5" width="88" height="26" rx="7.02756" fill="#3ecc5f" />
+ <path d="M1088.60287,624.61594a30.11371,30.11371,0,0,0,3.98291-15.266c0-13.79652-8.54358-24.98081-19.08256-24.98081s-19.08256,11.18429-19.08256,24.98081a30.11411,30.11411,0,0,0,3.98291,15.266,31.248,31.248,0,0,0,0,30.53213,31.248,31.248,0,0,0,0,30.53208,31.248,31.248,0,0,0,0,30.53208,30.11408,30.11408,0,0,0-3.98291,15.266c0,13.79652,8.54353,24.98081,19.08256,24.98081s19.08256-11.18429,19.08256-24.98081a30.11368,30.11368,0,0,0-3.98291-15.266,31.248,31.248,0,0,0,0-30.53208,31.248,31.248,0,0,0,0-30.53208,31.248,31.248,0,0,0,0-30.53213Z" transform="translate(-35.5 -118.5)" fill="#3f3d56" />
+ <ellipse cx="1038.00321" cy="460.31783" rx="19.08256" ry="24.9808" fill="#3f3d56" />
+ <ellipse cx="1038.00321" cy="429.78574" rx="19.08256" ry="24.9808" fill="#3f3d56" />
+ <path d="M1144.93871,339.34489a91.61081,91.61081,0,0,0,7.10658-10.46092l-50.141-8.23491,54.22885.4033a91.566,91.566,0,0,0,1.74556-72.42605l-72.75449,37.74139,67.09658-49.32086a91.41255,91.41255,0,1,0-150.971,102.29805,91.45842,91.45842,0,0,0-10.42451,16.66946l65.0866,33.81447-69.40046-23.292a91.46011,91.46011,0,0,0,14.73837,85.83669,91.40575,91.40575,0,1,0,143.68892,0,91.41808,91.41808,0,0,0,0-113.02862Z" transform="translate(-35.5 -118.5)" fill="#3ecc5f" fill-rule="evenodd" />
+ <path d="M981.6885,395.8592a91.01343,91.01343,0,0,0,19.56129,56.51431,91.40575,91.40575,0,1,0,143.68892,0C1157.18982,436.82067,981.6885,385.60008,981.6885,395.8592Z" transform="translate(-35.5 -118.5)" opacity="0.1" />
+ <path d="M365.62,461.43628H477.094v45.12043H365.62Z" transform="translate(-35.5 -118.5)" fill="#fff" fill-rule="evenodd" />
+ <path d="M264.76252,608.74122a26.50931,26.50931,0,0,1-22.96231-13.27072,26.50976,26.50976,0,0,0,22.96231,39.81215H291.304V608.74122Z" transform="translate(-35.5 -118.5)" fill="#3ecc5f" fill-rule="evenodd" />
+ <path d="M384.17242,468.57061l92.92155-5.80726V449.49263a26.54091,26.54091,0,0,0-26.54143-26.54143H331.1161l-3.31768-5.74622a3.83043,3.83043,0,0,0-6.63536,0l-3.31768,5.74622-3.31767-5.74622a3.83043,3.83043,0,0,0-6.63536,0l-3.31768,5.74622L301.257,417.205a3.83043,3.83043,0,0,0-6.63536,0L291.304,422.9512c-.02919,0-.05573.004-.08625.004l-5.49674-5.49541a3.8293,3.8293,0,0,0-6.4071,1.71723l-1.81676,6.77338L270.607,424.1031a3.82993,3.82993,0,0,0-4.6912,4.69253l1.84463,6.89148-6.77072,1.81411a3.8315,3.8315,0,0,0-1.71988,6.40975l5.49673,5.49673c0,.02787-.004.05574-.004.08493l-5.74622,3.31768a3.83043,3.83043,0,0,0,0,6.63536l5.74621,3.31768L259.0163,466.081a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768-5.74622,3.31767a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768-5.74622,3.31768a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768-5.74622,3.31767a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768-5.74622,3.31768a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768-5.74622,3.31768a3.83042,3.83042,0,0,0,0,6.63535l5.74622,3.31768-5.74622,3.31768a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768L259.0163,558.976a3.83042,3.83042,0,0,0,0,6.63535l5.74622,3.31768-5.74622,3.31768a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768-5.74622,3.31768a3.83042,3.83042,0,0,0,0,6.63535l5.74622,3.31768-5.74622,3.31768a3.83043,3.83043,0,0,0,0,6.63536l5.74622,3.31768A26.54091,26.54091,0,0,0,291.304,635.28265H450.55254A26.5409,26.5409,0,0,0,477.094,608.74122V502.5755l-92.92155-5.80727a14.12639,14.12639,0,0,1,0-28.19762" transform="translate(-35.5 -118.5)" fill="#3ecc5f" fill-rule="evenodd" />
+ <path d="M424.01111,635.28265h39.81214V582.19979H424.01111Z" transform="translate(-35.5 -118.5)" fill="#3ecc5f" fill-rule="evenodd" />
+ <path d="M490.36468,602.10586a6.60242,6.60242,0,0,0-.848.08493c-.05042-.19906-.09821-.39945-.15393-.59852A6.62668,6.62668,0,1,0,482.80568,590.21q-.2203-.22491-.44457-.44589a6.62391,6.62391,0,1,0-11.39689-6.56369c-.1964-.05575-.39414-.10218-.59056-.15262a6.63957,6.63957,0,1,0-13.10086,0c-.1964.05042-.39414.09687-.59056.15262a6.62767,6.62767,0,1,0-11.39688,6.56369,26.52754,26.52754,0,1,0,44.23127,25.52756,6.6211,6.6211,0,1,0,.848-13.18579" transform="translate(-35.5 -118.5)" fill="#44d860" fill-rule="evenodd" />
+ <path d="M437.28182,555.65836H477.094V529.11693H437.28182Z" transform="translate(-35.5 -118.5)" fill="#3ecc5f" fill-rule="evenodd" />
+ <path d="M490.36468,545.70532a3.31768,3.31768,0,0,0,0-6.63536,3.41133,3.41133,0,0,0-.42333.04247c-.02655-.09953-.04911-.19907-.077-.29859a3.319,3.319,0,0,0-1.278-6.37923,3.28174,3.28174,0,0,0-2.00122.68742q-.10947-.11346-.22294-.22295a3.282,3.282,0,0,0,.67149-1.98265,3.31768,3.31768,0,0,0-6.37-1.2992,13.27078,13.27078,0,1,0,0,25.54082,3.31768,3.31768,0,0,0,6.37-1.2992,3.282,3.282,0,0,0-.67149-1.98265q.11347-.10947.22294-.22294a3.28174,3.28174,0,0,0,2.00122.68742,3.31768,3.31768,0,0,0,1.278-6.37923c.02786-.0982.05042-.19907.077-.29859a3.41325,3.41325,0,0,0,.42333.04246" transform="translate(-35.5 -118.5)" fill="#44d860" fill-rule="evenodd" />
+ <path d="M317.84538,466.081a3.31768,3.31768,0,0,1-3.31767-3.31768,9.953,9.953,0,1,0-19.90608,0,3.31768,3.31768,0,1,1-6.63535,0,16.58839,16.58839,0,1,1,33.17678,0,3.31768,3.31768,0,0,1-3.31768,3.31768" transform="translate(-35.5 -118.5)" fill-rule="evenodd" />
+ <path d="M370.92825,635.28265h79.62429A26.5409,26.5409,0,0,0,477.094,608.74122v-92.895H397.46968a26.54091,26.54091,0,0,0-26.54143,26.54143Z" transform="translate(-35.5 -118.5)" fill="#ffff50" fill-rule="evenodd" />
+ <path d="M457.21444,556.98543H390.80778a1.32707,1.32707,0,0,1,0-2.65414h66.40666a1.32707,1.32707,0,0,1,0,2.65414m0,26.54143H390.80778a1.32707,1.32707,0,1,1,0-2.65414h66.40666a1.32707,1.32707,0,0,1,0,2.65414m0,26.54143H390.80778a1.32707,1.32707,0,1,1,0-2.65414h66.40666a1.32707,1.32707,0,0,1,0,2.65414m0-66.10674H390.80778a1.32707,1.32707,0,0,1,0-2.65414h66.40666a1.32707,1.32707,0,0,1,0,2.65414m0,26.29459H390.80778a1.32707,1.32707,0,0,1,0-2.65414h66.40666a1.32707,1.32707,0,0,1,0,2.65414m0,26.54143H390.80778a1.32707,1.32707,0,0,1,0-2.65414h66.40666a1.32707,1.32707,0,0,1,0,2.65414M477.094,474.19076c-.01592,0-.0292-.008-.04512-.00663-4.10064.13934-6.04083,4.24132-7.75274,7.86024-1.78623,3.78215-3.16771,6.24122-5.43171,6.16691-2.50685-.09024-3.94007-2.92222-5.45825-5.91874-1.74377-3.44243-3.73438-7.34667-7.91333-7.20069-4.04227.138-5.98907,3.70784-7.70631,6.857-1.82738,3.35484-3.07084,5.39455-5.46887,5.30033-2.55727-.09289-3.91619-2.39536-5.48877-5.06013-1.75306-2.96733-3.77951-6.30359-7.8775-6.18946-3.97326.13669-5.92537,3.16507-7.64791,5.83912-1.82207,2.82666-3.09872,4.5492-5.52725,4.447-2.61832-.09289-3.9706-2.00388-5.53522-4.21611-1.757-2.4856-3.737-5.299-7.82308-5.16231-3.88567.13271-5.83779,2.61434-7.559,4.80135-1.635,2.07555-2.9116,3.71846-5.61218,3.615a1.32793,1.32793,0,1,0-.09555,2.65414c4.00377.134,6.03154-2.38873,7.79257-4.6275,1.562-1.9853,2.91027-3.69855,5.56441-3.78879,2.55594-.10882,3.75429,1.47968,5.56707,4.04093,1.7212,2.43385,3.67465,5.19416,7.60545,5.33616,4.11789.138,6.09921-2.93946,7.8536-5.66261,1.56861-2.43385,2.92221-4.53461,5.50734-4.62352,2.37944-.08892,3.67466,1.79154,5.50072,4.885,1.72121,2.91557,3.67069,6.21865,7.67977,6.36463,4.14709.14332,6.14965-3.47693,7.89475-6.68181,1.51155-2.77092,2.93814-5.38791,5.46621-5.4755,2.37944-.05573,3.62025,2.11668,5.45558,5.74622,1.71459,3.388,3.65875,7.22591,7.73019,7.37321l.22429.004c4.06614,0,5.99571-4.08074,7.70364-7.68905,1.51154-3.19825,2.94211-6.21069,5.3972-6.33411Z" transform="translate(-35.5 -118.5)" fill-rule="evenodd" />
+ <path d="M344.38682,635.28265h53.08286V582.19979H344.38682Z" transform="translate(-35.5 -118.5)" fill="#3ecc5f" fill-rule="evenodd" />
+ <path d="M424.01111,602.10586a6.60242,6.60242,0,0,0-.848.08493c-.05042-.19906-.09821-.39945-.15394-.59852A6.62667,6.62667,0,1,0,416.45211,590.21q-.2203-.22491-.44458-.44589a6.62391,6.62391,0,1,0-11.39689-6.56369c-.1964-.05575-.39413-.10218-.59054-.15262a6.63957,6.63957,0,1,0-13.10084,0c-.19641.05042-.39414.09687-.59055.15262a6.62767,6.62767,0,1,0-11.39689,6.56369,26.52755,26.52755,0,1,0,44.2313,25.52756,6.6211,6.6211,0,1,0,.848-13.18579" transform="translate(-35.5 -118.5)" fill="#44d860" fill-rule="evenodd" />
+ <path d="M344.38682,555.65836h53.08286V529.11693H344.38682Z" transform="translate(-35.5 -118.5)" fill="#3ecc5f" fill-rule="evenodd" />
+ <path d="M410.74039,545.70532a3.31768,3.31768,0,1,0,0-6.63536,3.41133,3.41133,0,0,0-.42333.04247c-.02655-.09953-.04911-.19907-.077-.29859a3.319,3.319,0,0,0-1.278-6.37923,3.28174,3.28174,0,0,0-2.00122.68742q-.10947-.11346-.22294-.22295a3.282,3.282,0,0,0,.67149-1.98265,3.31768,3.31768,0,0,0-6.37-1.2992,13.27078,13.27078,0,1,0,0,25.54082,3.31768,3.31768,0,0,0,6.37-1.2992,3.282,3.282,0,0,0-.67149-1.98265q.11347-.10947.22294-.22294a3.28174,3.28174,0,0,0,2.00122.68742,3.31768,3.31768,0,0,0,1.278-6.37923c.02786-.0982.05042-.19907.077-.29859a3.41325,3.41325,0,0,0,.42333.04246" transform="translate(-35.5 -118.5)" fill="#44d860" fill-rule="evenodd" />
+ <path d="M424.01111,447.8338a3.60349,3.60349,0,0,1-.65028-.06636,3.34415,3.34415,0,0,1-.62372-.18579,3.44679,3.44679,0,0,1-.572-.30522,5.02708,5.02708,0,0,1-.50429-.4114,3.88726,3.88726,0,0,1-.41007-.50428,3.27532,3.27532,0,0,1-.55737-1.84463,3.60248,3.60248,0,0,1,.06636-.65027,3.82638,3.82638,0,0,1,.18447-.62373,3.48858,3.48858,0,0,1,.30656-.57064,3.197,3.197,0,0,1,.91436-.91568,3.44685,3.44685,0,0,1,.572-.30523,3.344,3.344,0,0,1,.62372-.18578,3.06907,3.06907,0,0,1,1.30053,0,3.22332,3.22332,0,0,1,1.19436.491,5.02835,5.02835,0,0,1,.50429.41139,4.8801,4.8801,0,0,1,.41139.50429,3.38246,3.38246,0,0,1,.30522.57064,3.47806,3.47806,0,0,1,.25215,1.274A3.36394,3.36394,0,0,1,426.36,446.865a5.02708,5.02708,0,0,1-.50429.4114,3.3057,3.3057,0,0,1-1.84463.55737m26.54143-1.65884a3.38754,3.38754,0,0,1-2.35024-.96877,5.04185,5.04185,0,0,1-.41007-.50428,3.27532,3.27532,0,0,1-.55737-1.84463,3.38659,3.38659,0,0,1,.96744-2.34892,5.02559,5.02559,0,0,1,.50429-.41139,3.44685,3.44685,0,0,1,.572-.30523,3.3432,3.3432,0,0,1,.62373-.18579,3.06952,3.06952,0,0,1,1.30052,0,3.22356,3.22356,0,0,1,1.19436.491,5.02559,5.02559,0,0,1,.50429.41139,3.38792,3.38792,0,0,1,.96876,2.34892,3.72635,3.72635,0,0,1-.06636.65026,3.37387,3.37387,0,0,1-.18579.62373,4.71469,4.71469,0,0,1-.30522.57064,4.8801,4.8801,0,0,1-.41139.50429,5.02559,5.02559,0,0,1-.50429.41139,3.30547,3.30547,0,0,1-1.84463.55737" transform="translate(-35.5 -118.5)" fill-rule="evenodd" />
+</svg>
diff --git a/static/pdf/Ambari_Architecture.pdf b/static/pdf/Ambari_Architecture.pdf
new file mode 100644
index 0000000..9d3f86b
--- /dev/null
+++ b/static/pdf/Ambari_Architecture.pdf
Binary files differ
diff --git a/tsconfig.json b/tsconfig.json
new file mode 100644
index 0000000..6f47569
--- /dev/null
+++ b/tsconfig.json
@@ -0,0 +1,7 @@
+{
+ // This file is not used in compilation. It is here just for a nice editor experience.
+ "extends": "@tsconfig/docusaurus/tsconfig.json",
+ "compilerOptions": {
+ "baseUrl": "."
+ }
+}
diff --git a/versioned_docs/version-2.7.5/ambari-alerts.md b/versioned_docs/version-2.7.5/ambari-alerts.md
new file mode 100644
index 0000000..04288af
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-alerts.md
@@ -0,0 +1,13 @@
+# Ambari Alerts
+
+Help page for Alerts defined in Ambari.
+
+## Ambari Agent Heartbeat
+
+**Service**: Ambari
+**Component**: Ambari Server
+**Type**: SERVER
+**Groups**: AMBARI Default
+**Description**: This alert is triggered if the server has lost contact with an agent.
+
+If this alert is generated, the alert text will contain the host name (e.g. "c6401.ambari.apache.org is not sending heartbeats."). Check that the agent is running; if it is running, tail its log to see whether it is receiving a heartbeat response from the server. Also verify that the server host name in the /etc/ambari-agent/conf/ambari-agent.ini file is correct and that the server is reachable.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/alerts.md b/versioned_docs/version-2.7.5/ambari-design/alerts.md
new file mode 100644
index 0000000..95655a2
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/alerts.md
@@ -0,0 +1,208 @@
+---
+
+title: Alerts
+
+---
+WEB and METRIC alert types include a `connection_timeout` property on the alert definition (see `AlertDefinition : source : uri : connection_timeout` below). The value is in seconds and defaults to 5.0. If you need to modify the connection timeout, update the `source` block of the alert definition through the Ambari REST API.
+
+```json
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/alert_definitions/42",
+ "AlertDefinition" : {
+ "cluster_name" : "MyCluster",
+ "component_name" : "APP_TIMELINE_SERVER",
+ "description" : "This host-level alert is triggered if the App Timeline Server Web UI is unreachable.",
+ "enabled" : true,
+ "id" : 42,
+ "ignore_host" : false,
+ "interval" : 1,
+ "label" : "App Timeline Web UI",
+ "name" : "yarn_app_timeline_server_webui",
+ "scope" : "ANY",
+ "service_name" : "YARN",
+ "source" : {
+ "reporting" : {
+ "ok" : {
+ "text" : "HTTP {0} response in {2:.3f}s"
+ },
+ "warning" : {
+ "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
+ },
+ "critical" : {
+ "text" : "Connection failed to {1} ({3})"
+ }
+ },
+ "type" : "WEB",
+ "uri" : {
+ "http" : "{{yarn-site/yarn.timeline-service.webapp.address}}",
+ "https" : "{{yarn-site/yarn.timeline-service.webapp.https.address}}",
+ "https_property" : "{{yarn-site/yarn.http.policy}}",
+ "https_property_value" : "HTTPS_ONLY",
+ "kerberos_keytab" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.keytab}}",
+ "kerberos_principal" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.principal}}",
+ "default_port" : 0.0,
+ "connection_timeout" : 5.0
+ }
+ }
+ }
+}
+```
+
+To update the `connection_timeout` with the API, you must PUT the entire contents of the `source` block in your call. For example, to change the `connection_timeout` to **6.5** seconds, PUT the following:
+
+```
+PUT /api/v1/clusters/MyCluster/alert_definitions/42
+
+{
+"AlertDefinition" : {
+ "source" : {
+ "reporting" : {
+ "ok" : {
+ "text" : "HTTP {0} response in {2:.3f}s"
+ },
+ "warning" : {
+ "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
+ },
+ "critical" : {
+ "text" : "Connection failed to {1} ({3})"
+ }
+ },
+ "type" : "WEB",
+ "uri" : {
+ "http" : "{{yarn-site/yarn.timeline-service.webapp.address}}",
+ "https" : "{{yarn-site/yarn.timeline-service.webapp.https.address}}",
+ "https_property" : "{{yarn-site/yarn.http.policy}}",
+ "https_property_value" : "HTTPS_ONLY",
+ "kerberos_keytab" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.keytab}}",
+ "kerberos_principal" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.principal}}",
+ "default_port" : 0.0,
+ "connection_timeout" : 6.5
+ }
+ }
+ }
+}
+```
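+
+As a sketch, the same update could be issued with curl, assuming an Ambari Server at c6401.ambari.apache.org:8080, admin credentials, and the request body above saved to a local file (Ambari requires the `X-Requested-By` header on modifying requests; the file name here is arbitrary):
+
+```
+# alert_definition_42.json holds the AlertDefinition body shown above
+curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
+  -d @alert_definition_42.json \
+  'http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/alert_definitions/42'
+```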
+
+## Creating a Script-based Alert Dispatcher
+
+This document describes how to enable a custom script-based alert dispatcher that can respond to Ambari alerts. The dispatcher invokes a script with the parameters of the alert passed as command-line arguments.
+
+The dispatcher must know the location of the script that is being executed. This is configured through `ambari.properties` by setting either:
+
+* `notification.dispatch.alert.script`
+* a custom property key that points to the script, such as `foo.bar.alert.dispatch.script`
+
+Some examples of this are:
+
+```
+notification.dispatch.alert.script=/contrib/ambari-alerts/scripts/default_logger.py
+com.mycompany.dispatch.syslog.script=/contrib/ambari-alerts/scripts/legacy_sys_logger.py
+com.mycompany.dispatch.shell.script=/contrib/ambari-alerts/scripts/shell_logger.sh
+```
+
+When an alert instance changes state and Ambari needs to dispatch that alert state change, the custom script will be invoked:
+
+```python
+import sys
+
+# main method which is called when invoked on the command line
+# :param definitionName: the alert definition unique ID
+# :param definitionLabel: the human readable alert definition label
+# :param serviceName: the service that the alert definition belongs to
+# :param alertState: the state of the alert (OK, WARNING, etc)
+# :param alertText: the text of the alert
+def main():
+    definitionName = sys.argv[1]
+    definitionLabel = sys.argv[2]
+    serviceName = sys.argv[3]
+    alertState = sys.argv[4]
+    alertText = sys.argv[5]
+    # ... dispatch the alert here (log it, forward it to syslog, etc.) ...
+
+if __name__ == "__main__":
+    main()
+```
+
+To route notifications to this script, create an alert target whose `notification_type` is `ALERT_SCRIPT`:
+
+```
+POST api/v1/alert_targets
+
+ {
+ "AlertTarget": {
+ "name": "syslogger",
+ "description": "Syslog Target",
+ "notification_type": "ALERT_SCRIPT",
+ "global": true
+ }
+ }
+```
+
+The above call will create a global alert target that will dispatch all alerts across all alert groups. Without specifying `ambari.dispatch-property.script` as a property of the alert target, Ambari will look for the default configuration key of `notification.dispatch.alert.script` in `ambari.properties`.
+
+```
+POST api/v1/alert_targets
+
+ {
+ "AlertTarget": {
+ "name": "syslogger",
+ "description": "Syslog Target",
+ "notification_type": "ALERT_SCRIPT",
+ "global": true,
+ "properties": {
+ "ambari.dispatch-property.script": "com.mycompany.dispatch.syslog.script"
+ }
+ }
+ }
+```
+
+The above call also creates a global alert target. However, a specific script key is being specified. The result is that `ambari.properties` should contain something similar to:
+
+```
+com.mycompany.dispatch.syslog.script=/contrib/ambari-alerts/scripts/legacy_sys_logger.py
+```
+
+## Customizing the Alert Template
+Ambari 2.0 leverages its own alerting system to convey the state of various aspects of managed clusters. The notification template content produced by Ambari is tightly coupled to a notification type. Email and SNMP both have customizable templates that can be used to generate content. This document describes the steps necessary to change the template used by Ambari 2.0 when creating alert notifications.
+
+This procedure is targeted at Ambari Administrators that have access to the Ambari Server file system and the `ambari.properties` file. Those Administrators can create new templates or change the existing templates that are used when generating alert notification content. At this time, there is no mechanism to expose this flexibility via the APIs or the web client to end users.
+
+By default, an [alert-templates.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/alert-templates.xml) ships with Ambari 2.0, bundled inside of the Ambari Server JAR. This file contains all of the templates for every known type of notification (for example, EMAIL and SNMP). Because the file is bundled in the Ambari Server JAR, the template is not exposed on disk, but we can use that file as a reference example.
+
+When you customize the alert template, you are effectively overriding the template bundled by default. To override the alert templates XML (the property name below is the standard Ambari Server mechanism; the file path is illustrative):
+
+1. Save a complete copy of the template file on the Ambari Server file system, for example at `/var/lib/ambari-server/resources/alert-templates.xml`.
+
+2. Point the Ambari Server at that file by setting `alerts.template.file` in `ambari.properties`, e.g. `alerts.template.file=/var/lib/ambari-server/resources/alert-templates.xml`.
+
+3. Restart the Ambari Server for the change to take effect.
+
+Some alert notification types, such as EMAIL, automatically combine all pending alerts into a single outbound notification (a "**digest**"). Others, like SNMP, never combine pending alerts and always create a 1:1 notification for every alert in the system (an "**individual**" notification). All alert notification types are specified in the same alert templates file, but the specific alert template for each notification type will most likely vary greatly.
+
+The template file uses Apache Velocity to render all tokenized content, and its structure is defined as follows: each `<alert-template></alert-template>` element declares the type of alert notification it should be used for. The following variables are available for use in your template:
+
+Variable |Description
+---------------------------------------------|-------------------------------------------------
+$alert.getAlertDefinition() |The definition that the alert is an instance of.
+$alert.getAlertName() |The name of the alert.
+$alert.getAlertState() |The alert state (OK\|WARNING\|CRITICAL\|UNKNOWN)
+$alert.getAlertText() |The specific alert text.
+$alert.getComponentName() |The component, if any, that the alert is defined for.
+$alert.getHostName() |The hostname, if any, that the alert was triggered for.
+$alert.getServiceName() |The name of the service that the alert is defined for.
+$alert.hasComponentName() |True if the alert is for a specific service component.
+$alert.hasHostName() |True if the alert was triggered for a specific host.
+$ambari.getServerHostName() |The Ambari Server hostname.
+$ambari.getServerUrl() |The Ambari Server URL.
+$ambari.getServerVersion() |The Ambari Server version.
+$dispatch.getTargetDescription() |The notification target description.
+$dispatch.getTargetName() |The notification target name.
+$summary.getAlerts() |A list of all of the alerts in the notification.
+$summary.getAlerts(service,alertState) |A list of all alerts for a given service or alert state (OK\|WARNING\|CRITICAL\|UNKNOWN).
+$summary.getCriticalCount() |The CRITICAL alert count.
+$summary.getOkCount() |The OK alert count.
+$summary.getServices() |A list of all services that are reporting an alert in the notification.
+$summary.getServicesByAlertState(alertState) |A list of all services for a given alert state (OK\|WARNING\|CRITICAL\|UNKNOWN).
+$summary.getTotalCount() |The total alert count.
+$summary.getUnknownCount() |The UNKNOWN alert count.
+$summary.getWarningCount() |The WARNING alert count.
+
+The following example illustrates how to change the subject line of all outbound email notifications to include a hard coded identifier:
+
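+In the bundled template file, the EMAIL template's `<subject>` element contains the Velocity expression that renders the subject line. A sketch of such an override, assuming "Petstore" is the hard-coded identifier you want to include (the element names follow the bundled `alert-templates.xml`):
+
+```xml
+<alert-template type="EMAIL">
+  <subject>
+    <![CDATA[Petstore Ambari alert summary: $summary.getTotalCount() alerts]]>
+  </subject>
+  <!-- the <body> element is unchanged from the bundled template -->
+</alert-template>
+```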
diff --git a/versioned_docs/version-2.7.5/ambari-design/ambari-architecture.md b/versioned_docs/version-2.7.5/ambari-design/ambari-architecture.md
new file mode 100644
index 0000000..042fb32
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/ambari-architecture.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 1
+---
+
+# Ambari Architecture
+
+ <embed src="/pdf/Ambari_Architecture.pdf" type="application/pdf" width="960px" height="700px"></embed>
+
+
+
diff --git a/versioned_docs/version-2.7.5/ambari-design/blueprints/blueprint-ha.md b/versioned_docs/version-2.7.5/ambari-design/blueprints/blueprint-ha.md
new file mode 100644
index 0000000..7f6d869
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/blueprints/blueprint-ha.md
@@ -0,0 +1,550 @@
+
+# Blueprint Support for HA Clusters
+
+## Summary
+
+As of Ambari 2.0, Blueprints are able to deploy the following components with HA:
+
++ HDFS NameNode HA
++ YARN ResourceManager HA
++ HBase RegionServer HA
+
+As of Ambari 2.1, Blueprints are able to deploy the following components with HA:
+
++ Hive Components ([AMBARI-10489](https://issues.apache.org/jira/browse/AMBARI-10489))
++ Storm Nimbus ([AMBARI-11087](https://issues.apache.org/jira/browse/AMBARI-11087))
++ Oozie Server ([AMBARI-6683](https://issues.apache.org/jira/browse/AMBARI-6683))
+
+This functionality currently requires providing fine-grained configurations. This document provides examples.
+
+### FAQ
+
+#### Compatibility with Ambari UI
+
+While this feature does not require the Ambari UI to function, the Blueprints HA feature is completely compatible with the Ambari UI. An HA cluster created via Blueprints can be monitored and configured via the Ambari UI, just as any other Blueprints cluster would function.
+
+#### Supported Stack Versions
+
+This feature is supported as of HDP 2.1 and newer releases. Previous versions of HDP have not been tested with this feature.
+
+### Examples
+
+### Blueprint Example: HDFS NameNode HA Cluster
+
+HDFS NameNode HA allows a cluster to be configured such that a NameNode is not a single point of failure.
+
+For more details on HDFS NameNode HA, see the [Apache Hadoop documentation](http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).
+
+In an Ambari-deployed HDFS NameNode HA cluster:
+
++ 2 NameNodes are deployed: an “active” and a “passive” NameNode.
++ If the active NameNode stops functioning properly, the passive node’s Zookeeper client will detect this, and the passive node will become the new active node.
++ HDFS relies on Zookeeper to manage the details of failover in these cases.
+
+#### How
+
+The Blueprints HA feature will automatically invoke all required commands and setup steps for an HDFS NameNode HA cluster, provided that the correct configuration is provided in the Blueprint. The shared edit logs of each NameNode are managed by the Quorum Journal Manager, rather than NFS shared storage. The use of NFS shared storage in an HDFS HA setup is not supported by Ambari Blueprints, and is generally not encouraged.
+
+The following HDFS stack components should be included in any host group in a Blueprint that supports an HA HDFS NameNode:
+
+1. NAMENODE
+
+2. ZKFC
+
+3. ZOOKEEPER_SERVER
+
+4. JOURNALNODE
+
+
+#### Configuring Active and Standby NameNodes
+
+The HDFS “NAMENODE” component must be assigned to two servers, either via two separate host groups, or to a host group that maps to two physical servers in the Cluster Creation Template for this cluster.
+
+By default, the Blueprint processor will assign the “active” NameNode to one host, and the “standby” NameNode to another. The user of an HA Blueprint does not need to configure the initial status of each NameNode, since this can be assigned automatically.
+
+If desired, the user can configure the initial state of each NameNode by adding the following configuration properties in the “hadoop-env” namespace (see the example snippet after this list):
+
+1. `dfs_ha_initial_namenode_active` - This property should contain the host name of the “active” NameNode in this cluster.
+
+2. `dfs_ha_initial_namenode_standby` - This property should contain the host name of the “passive” (standby) NameNode in this cluster.
+
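+A minimal sketch of these properties in a Blueprint's `hadoop-env` configuration (the host names are illustrative and should match hosts to which the NAMENODE component is assigned):
+
+```json
+{
+  "hadoop-env" : {
+    "properties" : {
+      "dfs_ha_initial_namenode_active" : "c6402.ambari.apache.org",
+      "dfs_ha_initial_namenode_standby" : "c6404.ambari.apache.org"
+    }
+  }
+}
+```
+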
+:::caution
+These properties should only be used when the initial state of the active or standby NameNodes needs to be configured to a specific node. This setting is only guaranteed to be accurate in the initial state of the cluster. Over time, the active/standby state of each NameNode may change as failover events occur in the cluster.
+
+The active or standby status of a NameNode is not recorded or expressed when an HDFS HA Cluster is being exported to a Blueprint, using the Blueprint REST API endpoint. Since clusters change over time, this state is only accurate in the initial startup of the cluster.
+
+Generally, it is assumed that most users will not need to choose the active or standby status of each NameNode, so the default behavior in Blueprints HA is to assign the status of each node automatically.
+:::
+
+#### Example Blueprint
+
+This is a minimal blueprint with HDFS HA: [hdfs_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/hdfs_ha_blueprint.json?version=4&modificationDate=1434548806000&api=v2)
+
+These are the base configurations required. See the blueprint above for more details:
+```json
+"configurations": [
+  { "core-site": {
+      "properties" : {
+        "fs.defaultFS" : "hdfs://mycluster",
+        "ha.zookeeper.quorum" : "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_3%:2181"
+      }
+    }
+  },
+  { "hdfs-site": {
+      "properties" : {
+        "dfs.client.failover.proxy.provider.mycluster" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+        "dfs.ha.automatic-failover.enabled" : "true",
+        "dfs.ha.fencing.methods" : "shell(/bin/true)",
+        "dfs.ha.namenodes.mycluster" : "nn1,nn2",
+        "dfs.namenode.http-address" : "%HOSTGROUP::master_1%:50070",
+        "dfs.namenode.http-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50070",
+        "dfs.namenode.http-address.mycluster.nn2" : "%HOSTGROUP::master_3%:50070",
+        "dfs.namenode.https-address" : "%HOSTGROUP::master_1%:50470",
+        "dfs.namenode.https-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50470",
+        "dfs.namenode.https-address.mycluster.nn2" : "%HOSTGROUP::master_3%:50470",
+        "dfs.namenode.rpc-address.mycluster.nn1" : "%HOSTGROUP::master_1%:8020",
+        "dfs.namenode.rpc-address.mycluster.nn2" : "%HOSTGROUP::master_3%:8020",
+        "dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::master_2%:8485;%HOSTGROUP::master_3%:8485/mycluster",
+        "dfs.nameservices" : "mycluster"
+      }
+    }
+  }
+]
+```
+
+#### HostName Topology Substitution in Configuration Property Values
+
+The host-related properties should be set using the “HOSTGROUP” syntax to refer to a given Blueprint’s host group, in order to map each NameNode’s actual host (defined in the Cluster Creation Template) to the properties in hdfs-site that require these host mappings.
+
+The syntax for these properties is:
+
+```
+%HOSTGROUP::HOST_GROUP_NAME%:PORT
+```
+
+For example, consider the following property from the snippet above:
+
+```
+"dfs.namenode.http-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50070"
+```
+
+This property value is interpreted by the Blueprint processor to refer to the host that maps to the “master_1” host group, which should include a “NAMENODE” among its list of components. The address property listed above maps to the host for “master_1”, at port “50070”.
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "blueprint-hdfs-ha" resource.
+
+```
+POST /api/v1/blueprints/blueprint-hdfs-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+#### Example Cluster Creation Template
+
+```json
+{
+ "blueprint": "blueprint-hdfs-ha",
+ "default_password": "changethis",
+ "host_groups": [
+ { "hosts": [
+ { "fqdn": "c6401.ambari.apache.org" }
+ ], "name": "gateway"
+ },
+ { "hosts": [
+ { "fqdn": "c6402.ambari.apache.org" }
+ ], "name": "master_1"
+ },
+ { "hosts": [
+ { "fqdn": "c6403.ambari.apache.org" }
+ ], "name": "master_2"
+ },
+ { "hosts": [
+ { "fqdn": "c6404.ambari.apache.org" }
+ ], "name": "master_3"
+ },
+ { "hosts": [
+ { "fqdn": "c6405.ambari.apache.org" }
+ ],
+ "name": "slave_1"
+ }
+ ]
+}
+```
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+```
+POST /api/v1/clusters/my-hdfs-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-hdfs-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
+
+### Blueprint Example: Yarn ResourceManager HA Cluster
+
+#### Summary
+
+Yarn ResourceManager High Availability (HA) adds support for deploying two Yarn ResourceManagers in a given Yarn cluster. This support removes the single point of failure that exists when a single ResourceManager is used.
+
+The Yarn ResourceManager support for HA is somewhat similar to HDFS NameNode HA in that an “active/standby” architecture is adopted, with Zookeeper used to handle the failover scenarios between the two ResourceManager instances.
+
+The following Apache Hadoop documentation describes the steps required to set up Yarn ResourceManager HA manually:
+
+[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html)
+
+:::caution
+Ambari Blueprints will handle much of the details of server setup listed in this documentation, but the user of Ambari will need to define the various configuration properties listed in this article (yarn.resourcemanager.ha.enabled, yarn.resourcemanager.hostname.$NAME_OF_RESOURCE_MANAGER, etc). The example Blueprints listed below will demonstrate the configuration properties that must be included in the Blueprint for this feature to startup the HA cluster properly.
+:::
+
+The following stack components should be included in any host group in a Blueprint that supports an HA Yarn ResourceManager:
+
+1. RESOURCEMANAGER
+
+2. ZOOKEEPER_SERVER
+
+
+#### Initial setup of Active and Standby ResourceManagers
+
+The Yarn ResourceManager HA feature depends upon Zookeeper in order to manage the details of the “active” or “standby” status of a given ResourceManager. When the two ResourceManagers are starting up initially, the first ResourceManager instance to acquire a Zookeeper lock, called a “znode”, will become the “active” ResourceManager for the cluster, with the other instance assuming the role of the “standby” ResourceManager.
+
+:::caution
+The Blueprints HA feature does not support configuring the initial “active” or “standby” status of the ResourceManagers deployed in a Yarn HA cluster. The first instance to obtain the Zookeeper lock will become the “active” node. This allows the user to specify the host groups that contain the 2 ResourceManager instances, but also shields the user from the need to select the first “active” node.
+
+After the cluster has started up initially, the state of the “active” and “standby” ResourceManagers may change over time. The initial “active” server is not guaranteed to remain the “active” node over the lifetime of the cluster. During a failover event, the “standby” node may be required to fulfill the role of the “active” server.
+
+The active or standby status of a ResourceManager is not recorded or expressed when a Yarn cluster is being exported to a Blueprint, using the Blueprint REST API endpoint. Since clusters change over time, this state is only accurate in the initial startup of the cluster.
+:::
+
+#### Example Blueprint
+
+The following link includes an example Blueprint for a 3-node Yarn ResourceManager HA Cluster:
+
+[yarn_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/yarn_ha_blueprint.json?version=2&modificationDate=1432208770000&api=v2)
+
+```json
+{
+ "Blueprints": {
+ "stack_name": "HDP",
+ "stack_version": "2.2"
+ },
+ "host_groups": [
+ {
+ "name": "gateway",
+ "cardinality" : "1",
+ "components": [
+ { "name": "HDFS_CLIENT" },
+ { "name": "MAPREDUCE2_CLIENT" },
+ { "name": "METRICS_COLLECTOR" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "TEZ_CLIENT" },
+ { "name": "YARN_CLIENT" },
+ { "name": "ZOOKEEPER_CLIENT" }
+ ]
+ },
+ {
+ "name": "master_1",
+ "cardinality" : "1",
+ "components": [
+ { "name": "HISTORYSERVER" },
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "NAMENODE" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "master_2",
+ "cardinality" : "1",
+ "components": [
+ { "name": "APP_TIMELINE_SERVER" },
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "RESOURCEMANAGER" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "master_3",
+ "cardinality" : "1",
+ "components": [
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "RESOURCEMANAGER" },
+ { "name": "SECONDARY_NAMENODE" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "slave_1",
+ "components": [
+ { "name": "DATANODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "NODEMANAGER" }
+ ]
+ }
+ ],
+ "configurations": [
+ {
+ "core-site": {
+ "properties" : {
+ "fs.defaultFS" : "hdfs://%HOSTGROUP::master_1%:8020"
+ }}
+ },{
+ "yarn-site" : {
+ "properties" : {
+ "hadoop.registry.rm.enabled" : "false",
+ "hadoop.registry.zk.quorum" : "%HOSTGROUP::master_3%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_1%:2181",
+ "yarn.log.server.url" : "http://%HOSTGROUP::master_2%:19888/jobhistory/logs",
+ "yarn.resourcemanager.address" : "%HOSTGROUP::master_2%:8050",
+ "yarn.resourcemanager.admin.address" : "%HOSTGROUP::master_2%:8141",
+ "yarn.resourcemanager.cluster-id" : "yarn-cluster",
+ "yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
+ "yarn.resourcemanager.ha.enabled" : "true",
+ "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
+ "yarn.resourcemanager.hostname" : "%HOSTGROUP::master_2%",
+ "yarn.resourcemanager.recovery.enabled" : "true",
+ "yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::master_2%:8025",
+ "yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::master_2%:8030",
+ "yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
+ "yarn.resourcemanager.webapp.address" : "%HOSTGROUP::master_2%:8088",
+ "yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::master_2%:8090",
+ "yarn.timeline-service.address" : "%HOSTGROUP::master_2%:10200",
+ "yarn.timeline-service.webapp.address" : "%HOSTGROUP::master_2%:8188",
+ "yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::master_2%:8190"
+ }
+ }
+ }
+ ]
+}
+```
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "blueprint-yarn-ha" resource.
+
+```
+POST /api/v1/blueprints/blueprint-yarn-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+
+```
+
+#### Example Cluster Creation Template
+
+```json
+{
+ "blueprint": "blueprint-yarn-ha",
+ "default_password": "changethis",
+ "configurations": [
+ { "yarn-site" : {
+ "yarn.resourcemanager.zk-address" : "c6402.ambari.apache.org:2181,c6403.ambari.apache.org:2181,c6404.ambari.apache.org:2181”,
+ ”yarn.resourcemanager.hostname.rm1" : "c6403.ambari.apache.org",
+ "yarn.resourcemanager.hostname.rm2" : "c6404.ambari.apache.org"
+ }}
+ ],
+ "host_groups": [
+ { "hosts": [
+ { "fqdn": "c6401.ambari.apache.org" }
+ ], "name": "gateway"
+ },
+ { "hosts": [
+ { "fqdn": "c6402.ambari.apache.org" }
+ ], "name": "master_1"
+ },
+ { "hosts": [
+ { "fqdn": "c6403.ambari.apache.org" }
+ ], "name": "master_2"
+ },
+ { "hosts": [
+ { "fqdn": "c6404.ambari.apache.org" }
+ ], "name": "master_3"
+ },
+ { "hosts": [
+ { "fqdn": "c6405.ambari.apache.org" }
+ ],
+ "name": "slave_1"
+ }
+ ]
+}
+```
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/my-yarn-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-yarn-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
+
+
+### Blueprint Example: HBase RegionServer HA Cluster
+
+#### Summary
+
+HBase provides a High Availability feature for reads across HBase Region Servers.
+
+The following link to the Apache HBase documentation provides more information on HA support in HBase:
+
+[http://hbase.apache.org/book.html#arch.timelineconsistent.reads](http://hbase.apache.org/book.html#arch.timelineconsistent.reads)
+
+:::caution
+The documentation listed here explains how to deploy an HBase RegionServer HA cluster via a Blueprint, but there are separate application-specific steps that must be taken in order to enable this feature for a specific table in HBase. A table must be created with replication enabled, so that multiple Region Servers can handle the keys for this table.
+:::
+
+For more information on how to define an HBase table with replication enabled (after the cluster has been created), please refer to the following HBase documentation:
+
+[http://hbase.apache.org/book.html#_creating_a_table_with_region_replication](http://hbase.apache.org/book.html#_creating_a_table_with_region_replication)
+
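+As a sketch, a table with two region replicas can be created from the HBase shell once the cluster is running (the table and column family names here are illustrative):
+
+```
+create 't1', 'cf1', {REGION_REPLICATION => 2}
+```
+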
+The following stack components should be included in any host group in a Blueprint that supports an HA HBase RegionServer:
+
+1. HBASE_REGIONSERVER
+
+
+At least two “HBASE_REGIONSERVER” components must be deployed in order to enable this feature, so that table information can be replicated across more than one Region Server.
+
+#### Example Blueprint
+
+The following link includes an example Blueprint for a 2-node HBase RegionServer HA Cluster:
+
+[hbase_rs_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/hbase_rs_ha_blueprint.json?version=1&modificationDate=1427136904000&api=v2)
+
+The following JSON snippet includes the “hbase-site” configuration typically required for a cluster that utilizes the HBase RegionServer HA feature:
+
+```json
+{
+ "configurations" : [
+ {
+ "hbase-site" : {
+ ...
+ "hbase.regionserver.global.memstore.lowerLimit" : "0.38",
+ "hbase.regionserver.global.memstore.upperLimit" : "0.4",
+ "hbase.regionserver.handler.count" : "60",
+ "hbase.regionserver.info.port" : "60030",
+ "hbase.regionserver.storefile.refresh.period" : "20",
+ "hbase.rootdir" : "hdfs://%HOSTGROUP::host_group_1%:8020/apps/hbase/data",
+ "hbase.security.authentication" : "simple",
+ "hbase.security.authorization" : "false",
+ "hbase.superuser" : "hbase",
+ "hbase.tmp.dir" : "/hadoop/hbase",
+ "hbase.zookeeper.property.clientPort" : "2181",
+ "hbase.zookeeper.quorum" : "%HOSTGROUP::host_group_1%,%HOSTGROUP::host_group_2%",
+ "hbase.zookeeper.useMulti" : "true",
+ "hfile.block.cache.size" : "0.40",
+ "zookeeper.session.timeout" : "30000",
+ "zookeeper.znode.parent" : "/hbase-unsecure"
+ }
+
+ }
+ ]
+}
+```
+:::caution
+The JSON example above is not a complete set of “hbase-site” configurations, but rather shows the configuration settings that are relevant to HBase RegionServer HA. In particular, the “hbase.regionserver.storefile.refresh.period” setting is the most relevant to HBase RegionServer HA, since this property must be set to a value greater than zero in order for the HA feature to be enabled.
+:::
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "blueprint-hbase-rs-ha" resource.
+
+```
+POST /api/v1/blueprints/blueprint-hbase-rs-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+#### Example Cluster Creation Template
+```json
+{
+ "blueprint" : "blueprint-hbase-rs-ha",
+ "default_password" : "default",
+ "host_groups" :[
+ {
+ "name" : "host_group_1",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ },
+ {
+ "name" : "host_group_2",
+ "hosts" : [
+ {
+ "fqdn" : "c6402.ambari.apache.org"
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/my-hbase-rs-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-hbase-rs-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/blueprints/blueprint-ranger.md b/versioned_docs/version-2.7.5/ambari-design/blueprints/blueprint-ranger.md
new file mode 100644
index 0000000..e14f468
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/blueprints/blueprint-ranger.md
@@ -0,0 +1,423 @@
+---
+title: Blueprint support for Ranger
+---
+Starting with HDP 2.3, Ranger can be deployed using Blueprints in two ways: either by using the stack advisor, or by setting all of the needed properties in the Blueprint.
+
+## Deploy Ranger with the use of stack advisor
+
+The stack advisor simplifies the deployment of Ranger, as it automatically sets the needed properties; the user therefore has to provide only a minimal set of configurations. The configuration properties that must be provided in either the Blueprint or the cluster creation template are listed below (a minimal example follows the list):
+
+* admin-properties:
+ - DB_FLAVOR - the default is MYSQL. There is no need to provide this if MYSQL is to be used as the database server for the Ranger databases. Consult the Ranger documentation for supported database servers. Also ensure the ambari-server host has the appropriate JDBC driver installed for the selected database server type (e.g.: ambari-server setup --jdbc-driver)
+ - db_host - set the host:port of the database server that Ranger Admin will use
+ - db_root_user - the db user with root access that will be used during deployment to create the databases used by Ranger. By default root is used if this property is not specified.
+
+ - db_root_password - the password for the root user
+ - db_password - the password that will be used to access the Ranger database
+ - audit_db_password - the password that will be used to access the Ranger audit database
+* ranger-env
+ - ranger_admin_password - the password for the Ambari user that is created for creating repositories and policies in Ranger Admin for each plugin
+ - ranger-yarn-plugin-enabled - Enable/Disable YARN Ranger plugin. The default is Disable.
+
+ - ranger-hdfs-plugin-enabled - Enable/Disable HDFS Ranger plugin. The default is Disable.
+
+ - ranger-hbase-plugin-enabled - Enable/Disable HBase Ranger plugin. The default is Disable.
+
+ - ... - check Ranger documentation for the list of supported ranger plugins
+* kms-properties
+ - DB_FLAVOR - the default is MYSQL. There is no need to provide this if MYSQL is to be used as the database server for the Ranger KMS databases. Consult the Ranger KMS documentation for supported database servers. Also ensure the ambari-server host has the appropriate JDBC driver installed for the selected database server type (e.g.: ambari-server setup --jdbc-driver)
+ - SQL_CONNECTOR_JAR - the default is /usr/share/java/mysql-connector-java.jar
+ - KMS_MASTER_KEY_PASSWD
+ - db_host - the host:port of the database server that Ranger KMS will use
+ - db_root_user - the db user with root access that will be used during deployment to create the databases used by Ranger KMS. By default root is used if this property is not specified.
+
+ - db_root_password - database password for root user
+ - db_password - database password for the Ranger KMS schema
+
+* hadoop-env
+ - keyserver_port - Port number where Key Management Server is available
+ - keyserver_host - Hostnames where Key Management Server is installed
+
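+As a minimal sketch, a stack-advisor-driven deployment only needs something like the following in the Blueprint or cluster creation template (the host, port, and password values are illustrative):
+
+```json
+"configurations" : [
+  {
+    "admin-properties" : {
+      "properties" : {
+        "db_host" : "db.example.com:3306",
+        "db_root_user" : "root",
+        "db_root_password" : "changethis",
+        "db_password" : "changethis",
+        "audit_db_password" : "changethis"
+      }
+    }
+  },
+  {
+    "ranger-env" : {
+      "properties" : {
+        "ranger_admin_password" : "changethis",
+        "ranger-hdfs-plugin-enabled" : "Yes"
+      }
+    }
+  }
+]
+```
+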
+## Deploy Ranger without the use of stack advisor
+
+Without the stack advisor, all the configs related to Ranger, Ranger KMS, and the Ranger plugins that don't have default values must be set in the Blueprint or cluster creation template. Consult the Ranger and Ranger plugin documentation for all properties.
+
+An example of such a Blueprint where everything is set manually (note that this covers just a subset of the currently supported configuration properties and Ranger plugins):
+
+```json
+{
+ "configurations" : [
+ {
+ "admin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "DB_FLAVOR" : "MYSQL",
+ "audit_db_name" : "ranger_audit",
+ "db_name" : "ranger",
+ "audit_db_user" : "rangerlogger",
+ "SQL_CONNECTOR_JAR" : "/usr/share/java/mysql-connector-java.jar",
+ "db_user" : "rangeradmin",
+ "policymgr_external_url" : "http://%HOSTGROUP::host_group_1%:6080",
+ "db_host" : "172.17.0.9:3306",
+ "db_root_user" : "root"
+ }
+ }
+ },
+ {
+ "ranger-kms-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.kms.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient",
+ "ranger.plugin.kms.service.name" : "{{repo_name}}",
+ "ranger.plugin.kms.policy.rest.url" : "{{policymgr_mgr_url}}"
+ }
+ }
+ },
+ {
+ "kms-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "hadoop.kms.security.authorization.manager" : "org.apache.ranger.authorization.kms.authorizer.RangerKmsAuthorizer",
+ "hadoop.kms.key.provider.uri" : "dbks://http@localhost:9292/kms"
+ }
+ }
+ },
+ {
+ "ranger-hdfs-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "hadoop",
+ "ranger-hdfs-plugin-enabled" : "Yes",
+ "common.name.for.certificate" : "",
+ "policy_user" : "ambari-qa",
+ "hadoop.rpc.protection" : ""
+ }
+ }
+ },
+ {
+ "ranger-admin-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.ldap.group.searchfilter" : "{{ranger_ug_ldap_group_searchfilter}}",
+ "ranger.ldap.group.searchbase" : "{{ranger_ug_ldap_group_searchbase}}",
+ "ranger.sso.enabled" : "false",
+ "ranger.externalurl" : "{{ranger_external_url}}",
+ "ranger.sso.browser.useragent" : "Mozilla,chrome",
+ "ranger.service.https.attrib.ssl.enabled" : "false",
+ "ranger.ldap.ad.referral" : "ignore",
+ "ranger.jpa.jdbc.url" : "jdbc:mysql://172.17.0.9:3306/ranger",
+ "ranger.https.attrib.keystore.file" : "/etc/ranger/admin/conf/ranger-admin-keystore.jks",
+ "ranger.ldap.user.searchfilter" : "{{ranger_ug_ldap_user_searchfilter}}",
+ "ranger.jpa.jdbc.driver" : "com.mysql.jdbc.Driver",
+ "ranger.authentication.method" : "UNIX",
+ "ranger.service.host" : "{{ranger_host}}",
+ "ranger.jpa.audit.jdbc.user" : "{{ranger_audit_db_user}}",
+ "ranger.ldap.referral" : "ignore",
+ "ranger.jpa.audit.jdbc.credential.alias" : "rangeraudit",
+ "ranger.service.https.attrib.keystore.pass" : "SECRET:ranger-admin-site:2:ranger.service.https.attrib.keystore.pass",
+ "ranger.audit.solr.username" : "ranger_solr",
+ "ranger.sso.query.param.originalurl" : "originalUrl",
+ "ranger.service.http.enabled" : "true",
+ "ranger.audit.source.type" : "solr",
+ "ranger.ldap.url" : "{{ranger_ug_ldap_url}}",
+ "ranger.service.https.attrib.clientAuth" : "want",
+ "ranger.ldap.ad.domain" : "",
+ "ranger.ldap.ad.bind.dn" : "{{ranger_ug_ldap_bind_dn}}",
+ "ranger.credential.provider.path" : "/etc/ranger/admin/rangeradmin.jceks",
+ "ranger.jpa.audit.jdbc.driver" : "{{ranger_jdbc_driver}}",
+ "ranger.audit.solr.urls" : "",
+ "ranger.sso.publicKey" : "",
+ "ranger.ldap.bind.dn" : "{{ranger_ug_ldap_bind_dn}}",
+ "ranger.unixauth.service.port" : "5151",
+ "ranger.ldap.group.roleattribute" : "cn",
+ "ranger.jpa.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.sso.cookiename" : "hadoop-jwt",
+ "ranger.service.https.attrib.keystore.keyalias" : "rangeradmin",
+ "ranger.audit.solr.zookeepers" : "NONE",
+ "ranger.jpa.jdbc.user" : "{{ranger_db_user}}",
+ "ranger.jpa.jdbc.credential.alias" : "rangeradmin",
+ "ranger.ldap.ad.user.searchfilter" : "{{ranger_ug_ldap_user_searchfilter}}",
+ "ranger.ldap.user.dnpattern" : "uid={0},ou=users,dc=xasecure,dc=net",
+ "ranger.ldap.base.dn" : "dc=example,dc=com",
+ "ranger.service.http.port" : "6080",
+ "ranger.jpa.audit.jdbc.url" : "{{audit_jdbc_url}}",
+ "ranger.service.https.port" : "6182",
+ "ranger.sso.providerurl" : "",
+ "ranger.ldap.ad.url" : "{{ranger_ug_ldap_url}}",
+ "ranger.jpa.audit.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.unixauth.remote.login.enabled" : "true",
+ "ranger.ldap.ad.base.dn" : "dc=example,dc=com",
+ "ranger.unixauth.service.hostname" : "{{ugsync_host}}"
+ }
+ }
+ },
+ {
+ "dbks-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.ks.jpa.jdbc.url" : "jdbc:mysql://172.17.0.9:3306/rangerkms",
+ "hadoop.kms.blacklist.DECRYPT_EEK" : "hdfs",
+ "ranger.ks.jpa.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.ks.jdbc.sqlconnectorjar" : "{{ews_lib_jar_path}}",
+ "ranger.ks.jpa.jdbc.user" : "{{db_user}}",
+ "ranger.ks.jpa.jdbc.credential.alias" : "ranger.ks.jdbc.password",
+ "ranger.ks.jpa.jdbc.credential.provider.path" : "/etc/ranger/kms/rangerkms.jceks",
+ "ranger.ks.masterkey.credential.alias" : "ranger.ks.masterkey.password",
+ "ranger.ks.jpa.jdbc.driver" : "com.mysql.jdbc.Driver"
+ }
+ }
+ },
+ {
+ "kms-env" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "kms_log_dir" : "/var/log/ranger/kms",
+ "create_db_user" : "true",
+ "kms_group" : "kms",
+ "kms_user" : "kms",
+ "kms_port" : "9292"
+ }
+ }
+ },
+ {
+ "ranger-hdfs-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.hdfs.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+
+ {
+ "ranger-env" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "xml_configurations_supported" : "true",
+ "ranger_user" : "ranger",
+ "xasecure.audit.destination.hdfs.dir" : "hdfs://ambari-agent-1.node.dc1.consul:8020/ranger/audit",
+ "create_db_dbuser" : "true",
+ "ranger-hdfs-plugin-enabled" : "Yes",
+ "ranger_privelege_user_jdbc_url" : "jdbc:mysql://172.17.0.9:3306",
+ "ranger-knox-plugin-enabled" : "No",
+ "is_solrCloud_enabled" : "false",
+ "bind_anonymous" : "false",
+ "ranger-yarn-plugin-enabled" : "Yes",
+ "ranger-kafka-plugin-enabled" : "No",
+ "xasecure.audit.destination.hdfs" : "true",
+ "ranger-hive-plugin-enabled" : "No",
+ "xasecure.audit.destination.solr" : "false",
+ "xasecure.audit.destination.db" : "true",
+ "ranger_group" : "ranger",
+ "ranger_admin_username" : "amb_ranger_admin",
+ "ranger-hbase-plugin-enabled" : "Yes",
+ "admin_username" : "admin"
+ }
+ }
+ },
+
+ {
+ "kms-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "keyadmin",
+ "KMS_MASTER_KEY_PASSWD" : "SECRET:kms-properties:1:KMS_MASTER_KEY_PASSWD",
+ "DB_FLAVOR" : "MYSQL",
+ "db_name" : "rangerkms",
+ "SQL_CONNECTOR_JAR" : "/usr/share/java/mysql-connector-java.jar",
+ "db_user" : "rangerkms",
+ "db_host" : "172.17.0.9:3306",
+ "db_root_user" : "root"
+ }
+ }
+ },
+
+ {
+ "ranger-yarn-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.yarn.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+
+ {
+ "usersync-properties" : {
+ "properties_attributes" : { },
+ "properties" : { }
+ }
+ },
+
+ {
+ "ranger-hbase-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.hbase.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+ {
+ "hdfs-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "dfs.encryption.key.provider.uri" : "kms://http@%HOSTGROUP::host_group_1%:9292/kms",
+ "dfs.namenode.inode.attributes.provider.class" : "org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer"
+ }
+ }
+ },
+ {
+ "ranger-yarn-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "yarn",
+ "common.name.for.certificate" : "",
+ "ranger-yarn-plugin-enabled" : "Yes",
+ "policy_user" : "ambari-qa",
+ "hadoop.rpc.protection" : ""
+ }
+ }
+ },
+ {
+ "ranger-hbase-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "hbase",
+ "common.name.for.certificate" : "",
+ "ranger-hbase-plugin-enabled" : "Yes",
+ "policy_user" : "ambari-qa"
+ }
+ }
+ }
+ ],
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "RANGER_ADMIN"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "HBASE_CLIENT"
+ },
+ {
+ "name" : "HBASE_MASTER"
+ },
+ {
+ "name" : "RANGER_USERSYNC"
+ },
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "RANGER_KMS_SERVER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_2",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "APP_TIMELINE_SERVER"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_3",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "HBASE_CLIENT"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.3"
+ }
+}
+```
+## Deploy Ranger in HA mode
+
+The differences from deploying Ranger in non-HA mode are:
+
+* Deploy the RANGER_ADMIN component to multiple hosts
+* Set up a load balancer and configure it to front all RANGER_ADMIN instances (the URL of a Ranger Admin instance is http://host:port; the default port is 6080)
+* admin-properties
+ - policymgr_external_url - override the value of this configuration property with the URL of the load balancer, as shown in the sketch below. Each component interacting with Ranger uses the value of this property to connect to Ranger, so they will all connect via the load balancer.
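+
+A minimal sketch of the override (the load balancer host name is illustrative):
+
+```json
+{
+  "admin-properties" : {
+    "properties" : {
+      "policymgr_external_url" : "http://ranger-lb.example.com:6080"
+    }
+  }
+}
+```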
diff --git a/versioned_docs/version-2.7.5/ambari-design/blueprints/index.md b/versioned_docs/version-2.7.5/ambari-design/blueprints/index.md
new file mode 100644
index 0000000..4acb444
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/blueprints/index.md
@@ -0,0 +1,1003 @@
+---
+slug: /blueprints
+---
+# Blueprints
+
+## Introduction
+
+Ambari Blueprints are a declarative definition of a cluster. With a Blueprint, you specify a [Stack](../stack-and-services/overview.mdx), the Component layout and the Configurations to materialize a Hadoop cluster instance (via a REST API) **without** having to use the Ambari Cluster Install Wizard.
+
+### **Notable JIRAs**
+JIRA | Description
+------------------------------------------------------------------|---------------------------------------------
+[AMBARI-4467](https://issues.apache.org/jira/browse/AMBARI-4467) |Blueprints REST resource.
+[AMBARI-5077](https://issues.apache.org/jira/browse/AMBARI-5077) |Provision cluster with blueprint.
+[AMBARI-4786](https://issues.apache.org/jira/browse/AMBARI-4786) |Export blueprints from running/existing cluster.
+[AMBARI-5114](https://issues.apache.org/jira/browse/AMBARI-5114) |Configurations with blueprints.
+[AMBARI-6275](https://issues.apache.org/jira/browse/AMBARI-6275) |Add hosts using blueprints.
+[AMBARI-10750](https://issues.apache.org/jira/browse/AMBARI-10750)|2.1 blueprint changes.
+
+## API Resources and Syntax
+
+The following table lists the basic Blueprints API resources; each is covered in more detail later on this page.
+
+URI | Description
+-----------------------------------------------------|---------------------------------------------
+`GET /api/v1/blueprints` | Returns the collection of blueprints registered with the Ambari Server.
+`POST /api/v1/blueprints/:blueprintName` | Registers a blueprint with the Ambari Server.
+`GET /api/v1/clusters/:clusterName?format=blueprint` | Exports a blueprint from an existing cluster.
+`POST /api/v1/clusters/:clusterName` | Creates a cluster instance based on a registered blueprint.
+
+The API calls on this wiki page include the HTTP Method (for example: `GET, PUT, POST`) and a sample URI (for example: `/api/v1/blueprints`) . When actually calling the Ambari REST API, you want to be sure to set the `X-Requested-By` header and provide authentication information as appropriate. For example, calling the API using `curl`:
+
+```bash
+curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://c6401.ambari.apache.org:8080/api/v1/blueprints
+```
+
+## Blueprint Usage Overview
+
+#### Step 0: Prepare Ambari Server and Agents
+
+Install the Ambari Server, run setup and start. Install the Ambari Agents on all hosts and perform manual registration.
+
+#### Step 1: Create Blueprint
+
+A blueprint can be created by hand or can be created by exporting a blueprint from an existing cluster.
+
+To export a blueprint from an existing cluster: `GET /api/v1/clusters/:clusterName?format=blueprint`
+
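+Following the `curl` convention shown above, a sketch of the export call (the cluster name is illustrative):
+
+```bash
+curl -H "X-Requested-By: ambari" -X GET -u admin:admin \
+  "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster?format=blueprint"
+```
+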
+#### Step 2: Register Blueprint with Ambari
+
+`POST /api/v1/blueprints/:blueprintName`
+
+Request body is blueprint created in **Step 1**.
+
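+For example, a sketch with `curl`, assuming the blueprint JSON has been saved locally as `blueprint.json` (an illustrative file name):
+
+```bash
+curl -H "X-Requested-By: ambari" -X POST -u admin:admin \
+  -d @blueprint.json \
+  http://c6401.ambari.apache.org:8080/api/v1/blueprints/my-blueprint
+```
+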
+To disable topology validation and register a blueprint:
+
+`POST /api/v1/blueprints/:blueprintName?validate_topology=false`
+
+Disabling topology validation allows a user to force registration of a blueprint that fails topology validation.
+
+#### Step 3: Create Cluster Creation Template
+
+Map Physical Hosts to Blueprint: Create the mapping between blueprint host groups and physical hosts.
+
+Provide Cluster-Specific Configuration Overrides: Configuration can be applied at the cluster and host group scope, and overrides any configurations specified in the blueprint.
+
+#### Step 4: Setup Stack Repositories (Optional)
+
+There are scenarios where the public Stack repositories may not be accessible during creation of a cluster via blueprint, or where an alternate repository is required for the Stack.
+
+To use a local or alternate repository:
+
+```
+PUT /api/v1/stacks/:stack/versions/:stackVersion/operating_systems/:osType/repositories/:repoId
+
+{
+ "Repositories" : {
+ "base_url" : "",
+ "verify_base_url" : true
+ }
+}
+```
+
+This API may be invoked multiple times to set the Base URL for multiple OS types or Stack versions. If this step is not performed, blueprints will by default use the latest Base URL defined in the Stack.
+
+#### Step 5: Create Cluster
+
+`POST /api/v1/clusters/:clusterName`
+
+Request body includes blueprint name, host mappings and configurations from **Step 3**.
+
+Request is asynchronous and returns a `/requests` URL which can be used to monitor progress.
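+
+A sketch with `curl`, assuming the cluster creation template has been saved locally as `cluster-template.json` (an illustrative file name):
+
+```bash
+curl -H "X-Requested-By: ambari" -X POST -u admin:admin \
+  -d @cluster-template.json \
+  http://c6401.ambari.apache.org:8080/api/v1/clusters/my-cluster
+```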
+
+#### Step 6: Monitor Cluster Creation Progress
+
+Using the `/requests` URL returned in **Step 5**, monitor the progress of the tasks associated with cluster creation.
+
+#### Limitations
+
+Prior to Ambari 2.0, Blueprints did not support creating clusters reflecting a High Availability topology.
+
+Ambari 2.0 adds support for deploying High Availability clusters with Blueprints. Please see [Blueprint Support for HA Clusters](./blueprint-ha.md) for more information on this topic.
+
+## Blueprint Details
+
+### Prepare Ambari Server and Agents
+
+1. Perform your Ambari Server install and setup.
+
+```bash
+yum install ambari-server
+ambari-server setup
+```
+
+2. After setup completes, start your Ambari Server.
+
+```bash
+ambari-server start
+```
+
+3. Install Ambari Agents on all of the hosts you plan to include in your cluster.
+
+```bash
+yum install ambari-agent
+```
+
+4. Set the Ambari Server on the Ambari Agents.
+
+```bash
+vi /etc/ambari-agent/conf/ambari-agent.ini
+```
+
+5. Set hostname= to the Fully Qualified Domain Name for the Ambari Server. Save and exit.
+
+```bash
+hostname=c6401.ambari.apache.org
+```
+
+6. Start the Agents to initiate registration to Server.
+
+```bash
+ambari-agent start
+```
+
+7. Confirm the Agent hosts are registered with the Server.
+[http://your.ambari.server:8080/api/v1/hosts](http://your.ambari.server:8080/api/v1/hosts)
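+
+For example, with `curl` (the host name is illustrative):
+
+```bash
+curl -H "X-Requested-By: ambari" -X GET -u admin:admin \
+  http://your.ambari.server:8080/api/v1/hosts
+```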
+
+
+### Blueprint Structure
+
+A blueprint document is in JSON format and has the following structure:
+
+```json
+{
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value",
+ "property-name2" : "property-value"
+ }
+ },
+ {
+ "configuration-type2" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "host_groups" : [
+ {
+ "name" : "host-group-name",
+ "components" : [
+ {
+ "name" : "component-name"
+ },
+ {
+ "name" : "component-name2",
+ "provision_action" : "(INSTALL_AND_START | INSTALL_ONLY)"
+ }
+ ...
+
+ ],
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "settings" : [
+ "deployment_settings": [
+ {"skip_failure":"true"}
+ ],
+ "repository_settings":[
+ {
+ "override_strategy":"ALWAYS_APPLY",
+ "operating_system":"redhat7",
+ "repo_id":"HDP",
+ "base_url":"http://myserver/hdp"
+ },
+ {
+ "override_strategy":"APPLY_WHEN_MISSING",
+ "operating_system":"redhat7",
+ "repo_id":"HDP-UTIL-1.1",
+ "base_url":"http://myserver/hdp-util"
+ }
+ ],
+ "recovery_settings":[
+ {"recovery_enabled":"true"}
+ ],
+ "service_settings":[
+ {
+ "name":"SERVICE_ONE",
+ "recovery_enabled":"true"
+ },
+ {
+ "name":"SERVICE_TWO",
+ "recovery_enabled":"true"
+ }
+ ],
+ "component_settings":[
+ {
+ "name":"COMPONENT_A_OF_SERVICE_ONE"
+ "recover_enabled":"true"
+ },
+ {
+ "name":"COMPONENT_B_OF_SERVICE_TWO",
+ "recover_enabled":"true"
+ }
+ ]
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.1",
+ "security" : {
+ "type" : "(NONE | KERBEROS)",
+ "kerberos_descriptor" : {
+ ...
+
+ }
+ }
+ }
+}
+```
+
+#### **Blueprint Field Descriptions**
+
+* **configurations** : A list of configuration maps keyed by configuration type. An example of a configuration type is "core-site". When specified at the top level, configurations are applied at cluster scope and override default values for the cluster. When specified within a "host_groups" element, configurations are applied at the host level for all hosts mapped to the host group. Host scoped configurations override cluster scoped configuration for all hosts mapped to the host group. The configurations element is optional at both levels.
+
+* **host_groups** : A list of host groups which define topology (components) and configuration for all hosts which are mapped to the host group. At least one host group must be specified.
+
+ - **name** : The name of the host group. Mandatory field which is referred to in the cluster creation body when mapping physical hosts to host groups.
+
+ - **components** : A list of components which will run on all hosts that are mapped to the host group. At least one component must be specified for each host group.
+
+ - **provision_action** : A cluster-wide provision action can be specified in the Cluster Creation Template (see below), but optionally this can be overridden at the component level by specifying a different provision_action here. The default provision_action is INSTALL_AND_START.
+
+ - **cardinality** : This field is optional and intended to provide a hint to the deployer as to how many instances of a particular host_group can be instantiated; it has no bearing on how the cluster is deployed. When a blueprint is exported for an existing cluster, this field will indicate the number of hosts that correspond to the host group in that cluster.
+
+* **Blueprints** : Blueprint and stack information
+ - **stack_name** : The name of the stack. All stacks currently shipped with Ambari have the name "HDP". This is a required field.
+
+ - **stack_version** : The version of the stack. For example: "1.3.2" or "2.1". This is a required field. When deploying a cluster using a blueprint, the stack definition identified in the blueprint must be available to the Ambari instance in the new cluster.
+
+ - **blueprint_name** : Optional field which specifies the name of the blueprint. Typically the name of the blueprint is specified in the URL of the REST invocation. The only reason to specify the name in the blueprint is when creating multiple blueprints via a single REST invocation. **Be aware that the name specified in this field will override the name specified in the URL.**
+ - **security** : Optional block to specify security settings for the blueprint. Supported security types are **NONE** and **KERBEROS**. In the case of KERBEROS, users have the option to embed a valid kerberos descriptor - to override default values defined for the HDP stack - in the field **kerberos_descriptor**, or as an alternative they may reference a previously saved kerberos descriptor using the **kerberos_descriptor_reference** field.
+
+In the case of selecting **KERBEROS** as the security_type, it is mandatory to add the **kerberos-env** and **krb5-conf** config types. (Check out the configurations section in **Blueprint example with KERBEROS** on this page.)
+Be aware that Kerberos client packages need to be installed on the host running the Ambari server, and krb5.conf needs to be configured properly to contain your realm (admin_server and kdc).
+
+[Automated Kerberization](../kerberos/index.md) page describes structure of kerberos_descriptor.
+
+* **settings**: An optional section to provide additional configuration for cluster behavior during and after the blueprint deployment. You can provide configurations for the following properties:
+ - recovery_settings: A section to specify if all services (globally) should be set to auto restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart). For example:
+ ```json
+ "settings": [
+ "recovery_settings":[
+ {
+ "recovery_enabled":"true"
+ }
+ ]
+ ],
+ ```
+ - service_settings: A section to specify if individual services should be set to auto restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart).
+ For example:
+ ```json
+ "settings": [
+ "service_settings":[
+ {
+ "name":"HDFS",
+ "recovery_enabled":"true"
+ },
+ {
+ "name":"ZOOKEEPER",
+ "recovery_enabled":"true"
+ }
+ ]
+ ],
+ ```
+ - component_settings: A section to specify if individual components should be set to auto restart once the cluster is deployed. To configure it, set the "recover_enabled" property to either "true" (auto restart) or "false" (do not auto restart).
+ For example:
+ ```json
+ "settings": [
+ "component_settings":[
+ {
+ "name":"KAFKA_CLIENT"
+ "recover_enabled":"true"
+ },
+ {
+ "name":"METRICS_MONITOR",
+ "recover_enabled":"true"
+ }
+ ]
+ ],
+ ```
+ - deployment_settings: A section to specify if the blueprint deployment should automatically skip install and start failures. To configure this behavior, set the "skip_failure" property to either "true" (auto skip failures) or "false" (do not auto skip failures). Blueprint deployment will fail on the very first deployment failure if the blueprint file does not contain the "deployment_settings" section.
+
+ For example:
+
+ ```json
+ "settings": [
+ "deployment_settings":[
+ {"skip_failure":"true"}
+ ]
+ ],
+ ```
+
+ - repository_settings: A section to specify custom repository URLs for the blueprint deployment. This section allows you to provide custom URLs to override the default ones. Without this section, you will need to update the repository URLs via REST API before deploying the cluster with the blueprint. "override_strategy" can be "ALWAYS_APPLY" ( always override the default one), or "APPLY_WHEN_MISSING" (only add it if no repository exists with the specific operating system and the repository id information). Repository URLs stored in the Ambari server database will be used if the blueprint does not have the "repository_settings" section.
+ For example:
+ ```json
+ "settings": [
+ "repository_settings":[
+ {
+ "override_strategy":"ALWAYS_APPLY",
+ "operating_system":"redhat7",
+ "repo_id":"HDP",
+ "base_url":"http://myserver/hdp"
+ },
+ {
+ "override_strategy":"APPLY_WHEN_MISSING",
+ "operating_system":"redhat7",
+ "repo_id":"HDP-UTIL-1.1",
+ "base_url":"http://myserver/hdp-util"
+ }
+ ]
+ ]
+ ```
+
+### Cluster Creation Template Structure
+
+A Cluster Creation Template is in JSON format and has the following structure:
+
+```json
+{
+ "blueprint" : "blueprint-name",
+ "default_password" : "super-secret-password",
+ "provision_action" : "(INSTALL_AND_START | INSTALL_ONLY)"
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "host_groups" :[
+ {
+ "name" : "host-group-name",
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ],
+ "hosts" : [
+ {
+ "fqdn" : "host.domain.com"
+ },
+ {
+ "fqdn" : "host2.domain.com"
+ }
+ ...
+
+ ]
+ }
+ ...
+
+ ],
+ "credentials" : [
+ {
+ "alias" : "kdc.admin.credential",
+ "principal" : "{PRINCIPAL}",
+ "key" : "{KEY}",
+ "type" : "(TEMPORARY | PERSISTED)"
+ }
+ ],
+ "security" : {
+ "type" : "(NONE | KERBEROS)",
+ "kerberos_descriptor" : {
+ ...
+
+ }
+ }
+}
+```
+
+Starting in Ambari version 2.1.0, it is possible to specify a host count and a host predicate in the cluster creation template host group section instead of a host name.
+
+```json
+{
+ "name" : "group-using-host-count",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+}
+```
+
+Starting in Ambari version 2.2.0, it is possible to specify configuration recommendation strategy in the cluster creation template.
+
+```json
+{
+ "blueprint" : "blueprint-name",
+ "config_recommendation_strategy" : "ONLY_STACK_DEFAULTS_APPLY",
+ ...
+
+}
+```
+
+Starting in Ambari version 2.2.1, it is possible to specify the host rack info in the cluster creation template ([AMBARI-14600](https://issues.apache.org/jira/browse/AMBARI-14600)).
+
+```json
+"hosts" : [
+ {
+ "fqdn" : "amb2.service.consul",
+ "rack_info": "/dc1/rack1"
+ }
+ ]
+```
+
+**Cluster Creation Template Structure: Host Mappings and Configuration Field Descriptions**
+
+* **blueprint** : Name of the blueprint that defines the cluster to be deployed. Blueprint must already exist. Required field.
+
+* **default_password** : Optional field which specifies a default password for all required passwords which are not specified in the blueprint or cluster creation template configurations.
+
+* **provision_action** : The default provision_action is INSTALL_AND_START; optionally, this can be overridden at the component level by specifying a different provision_action for a given component.
+
+* **configurations** : A list of configuration maps keyed by configuration type. An example of a configuration type is "core-site". When specified at the top level, configurations are applied at cluster scope and override default values for the cluster. When specified within a "host_groups" element, configurations are applied at the host level for all hosts mapped to the host group. Host scoped configurations override cluster scoped configuration for all hosts mapped to the host group. All cluster scoped and host group scoped configurations specified here override configurations specified in the corresponding blueprint. The configurations element is optional at both levels.
+
+* **config_recommendation_strategy** : Optional field which specifies the strategy for applying configuration recommendations to a cluster. Recommended configurations are gathered from the stack advisor response; depending on the selected strategy, they may partly or entirely override user-defined custom configurations. A property value is considered a custom configuration if it has a value other than the stack default. Available from Ambari 2.2.0.
+
+ - **NEVER_APPLY** : Configuration recommendations are ignored with this option. (This is the default.)
+
+ - **ONLY_STACK_DEFAULTS_APPLY** : Configuration recommendations are applied only for properties defined in the HDP stack by default.
+
+ - **ALWAYS_APPLY** : All configuration recommendations are applied; they may override custom configurations provided by the user in the Blueprint and/or Cluster Creation Template.
+
+ - **ALWAYS_APPLY_DONT_OVERRIDE_CUSTOM_VALUES** : All configuration recommendations are applied; however, custom configurations defined by the user in the Blueprint and/or Cluster Creation Template are not overridden by recommended configuration values. Available as of Ambari 2.4.0.
+
+* **host_groups** : A list of host groups being deployed to the cluster. At least one host group must be specified.
+
+ - **name** : Required field which must correspond to a name of a host group in the associated blueprint.
+
+ - **hosts** : A list of host mapping information.
+ + **fqdn** : Fully qualified domain name for each host being mapped to the host group. At least one host must be specified.
+ - **host_count** : The number of hosts that should be mapped to this host group. This can be specified instead of concrete host names. If no host_predicate is specified, any host that isn't explicitly mapped to another host group is available to be mapped to this host group. Available as of Ambari 2.1.0.
+
+ - **host_predicate** : Optional field which is used together with host_count to control which hosts are mapped to the host group. This is useful in supporting host 'flavors' where different host groups require different host types. The default predicate matches all hosts which aren't explicitly mapped to another host group. The syntax of the predicate is the standard Ambari API query syntax applied against the "api/v1/hosts" endpoint. Available as of Ambari 2.1.0.
+
+* **credentials** : Optional block to create credentials. kdc.admin.credential is required when setting up KERBEROS security. The store type can be **PERSISTED**
+or **TEMPORARY**. Temporary admin credentials are valid for 90 minutes or until server restart.
+
+* **security** : Optional block to override security settings defined in the Blueprint. Supported security types are **NONE** and **KERBEROS**. In the case of KERBEROS, users may embed a valid Kerberos descriptor - to override default values defined for the HDP stack - in the **kerberos_descriptor** field, or alternatively reference a previously saved Kerberos descriptor using the **kerberos_descriptor_reference** field. Security settings defined here override Blueprint settings; however, overriding the security type used in the Blueprint to a less secure mode is not possible (e.g. setting security.type=NONE in the cluster template when the Blueprint has security.type=KERBEROS). When selecting **KERBEROS** as the security_type, it is mandatory to add the **kerberos-env** and **krb5-conf** config types. (See the configurations section in **Blueprint Example: Provisioning Multi-Node HDP 2.3 Cluster to use KERBEROS** on this page.)
+The [Automated Kerberization](../kerberos/index.md) page describes the structure of the kerberos_descriptor.
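+
+For illustration, a minimal security block that references a previously saved descriptor might look like the following sketch (the reference name "my-kerberos-descriptor" is a placeholder, not a value from this document):
+
+```json
+"security" : {
+  "type" : "KERBEROS",
+  "kerberos_descriptor_reference" : "my-kerberos-descriptor"
+}
+```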
+
+### Configurations
+
+#### Default Values and Overrides
+
+* **Stack Defaults**: Each Stack provides configurations for all included services which serve as defaults for all clusters deployed via Blueprints.
+
+* **Blueprint Cluster Scoped**: Configurations provided at the top level of a Blueprint override the corresponding default values for the entire cluster.
+
+* **Blueprint Host Group Scoped**: Configurations provided within a host_group element of a Blueprint override both the corresponding default values and blueprint cluster scoped values only for hosts mapped to the host group.
+
+* **Cluster Creation Template Cluster Scoped**: Configurations provided at the top level of the Cluster Creation Template override both the corresponding default and blueprint cluster scoped values for the entire cluster.
+
+* **Cluster Creation Template Host Group Scoped**: Configurations provided within a host_group element of a Cluster Creation Template override all other values for hosts mapped to the host group.
+
+#### Required Configurations
+
+* Not all configuration properties have valid defaults
+* Required properties must be specified by the Blueprint user
+* They come in two categories: passwords and non-passwords
+* Non-password required properties are validated at Blueprint creation time
+* Required password properties are validated at cluster creation time
+* Required password properties may be explicitly set in either the Blueprint or Cluster Creation Template configurations, or a default password may be specified in the Cluster Creation Template, which is applied to all passwords that have not been explicitly set
+ - "default_password" : "super-secret-password"
+* If required configuration validation fails, a 400 response is returned indicating which properties must be specified
+
+## Blueprint Examples
+
+## Blueprint Example: Single-Node HDP 2.4 Cluster
+
+* Single-node cluster (c6401.ambari.apache.org)
+* HDP 2.4 Stack
+* Install Core Hadoop Services (HDFS, YARN, MapReduce2, ZooKeeper)
+
+### Example Blueprint
+
+```json
+{
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "components" : [
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "blueprint_name" : "single-node-hdfs-yarn",
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Register blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "single-node-hdfs-yarn" resource.
+
+```
+POST /api/v1/blueprints/single-node-hdfs-yarn
+
+...
+
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
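+
+Using curl, the registration call above might look like the following (the admin credentials, server host, and `blueprint.json` file name are placeholders to adapt):
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -X POST -d @./blueprint.json http://AMBARI_SERVER:8080/api/v1/blueprints/single-node-hdfs-yarn
+```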
+
+### Example Cluster Creation Template
+
+We are performing a single-node install and the blueprint above has **one** host group. Therefore, for our cluster instance, we define **one** host in **host_group_1** and reference the **single-node-hdfs-yarn** blueprint.
+
+**Explicit Host Name Example**
+
+```json
+{
+ "blueprint" : "single-node-hdfs-yarn",
+ "host_groups" :[
+ {
+ "name" : "host_group_1",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/MySingleNodeCluster
+
+...
+
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+```
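+
+As with blueprint registration, the cluster creation call can be issued with curl (the `cluster-template.json` file name is a placeholder):
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -X POST -d @./cluster-template.json http://AMBARI_SERVER:8080/api/v1/clusters/MySingleNodeCluster
+```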
+
+## Blueprint Example: Multi-Node HDP 2.4 Cluster
+
+* Multi-node cluster (three hosts)
+* Host Groups: "master", "slaves" (one master host, two slave hosts)
+* Use HDP 2.4 Stack
+* Install Core Hadoop Services (HDFS, YARN, MapReduce2, ZooKeeper)
+
+### Example Blueprint
+
+The blueprint below ("multi-node-hdfs-yarn") defines **two** host groups (a "master" and the "slaves") which host the various service components (masters, slaves, and clients).
+
+```json
+{
+ "host_groups" : [
+ {
+ "name" : "master",
+ "components" : [
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "slaves",
+ "components" : [
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ }
+ ],
+ "cardinality" : "1+"
+ }
+ ],
+ "Blueprints" : {
+ "blueprint_name" : "multi-node-hdfs-yarn",
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Register blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "multi-node-hdfs-yarn" resource.
+
+```
+POST /api/v1/blueprints/multi-node-hdfs-yarn
+...
+
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+### Example Cluster Creation Template
+
+We are performing a multi-node install and the blueprint above has **two** host groups. Therefore, for our cluster instance, we define **one** host in **master**, **two** hosts in **slaves**, and reference the **multi-node-hdfs-yarn** blueprint.
+
+The multi-node cluster creation template below uses the "host_count" and "host_predicate" syntax for the "slaves" host group, which is available as of Ambari 2.1.0. For older versions of Ambari, the "hosts/fqdn" syntax must be used.
+
+```
+{
+ "blueprint" : "multi-node-hdfs-yarn",
+ "default_password" : "my-super-secret-password",
+ "host_groups" :[
+ {
+ "name" : "master",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ },
+ {
+ "name" : "slaves",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+ }
+ ]
+}
+```
+
+### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/MyThreeNodeCluster
+...
+
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyThreeNodeCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+```
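+
+The `href` in the 202 response points to the request resource tracking the provisioning work; its progress can be polled with a GET until the request completes:
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -X GET http://c6401.ambari.apache.org:8080/api/v1/clusters/MyThreeNodeCluster/requests/1
+```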
+
+## Adding Hosts to an Existing Cluster
+
+After creating a cluster using the Ambari Blueprint API, you may scale up the cluster using the API.
+
+There are two forms of the API, one for adding a single host and another for adding multiple hosts.
+
+The blueprint add hosts API is available as of Ambari 2.0.
+
+Currently, only clusters originally provisioned via the blueprint API may be scaled using this API.
+
+### Example Add Host Template
+
+#### Single Host Example
+
+The host is specified in the URL:
+
+```
+{
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves"
+}
+```
+
+#### Multiple Host Form
+
+Hosts are specified in the request body:
+
+```
+[
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_name" : "c6403.ambari.apache.org"
+ },
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_name" : "c6404.ambari.apache.org"
+ }
+]
+```
+
+#### Multiple Host Form using host_count
+
+Starting with Ambari 2.1, the fields "host_count" and "host_predicate" can also be used when adding a host.
+
+These fields behave exactly the same as they do when specified in the cluster creation template.
+
+```
+[
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+ }
+]
+```
+
+### Add Host Request
+
+#### Single Host
+
+```
+POST /api/v1/clusters/myExistingCluster/hosts/c6403.ambari.apache.org
+...
+
+[ Request Body is above Single Host Add Host Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/myExistingCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "Pending"
+ }
+}
+```
+
+#### Multiple Hosts
+
+```
+POST /api/v1/clusters/myExistingCluster/hosts
+...
+
+[ Request Body is above Multiple Host Add Host Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/myExistingCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "Pending"
+ }
+}
+```
+
+## Blueprint Example: Provisioning Multi-Node HDP 2.3 Cluster to use KERBEROS
+
+The blueprint below could be used to set up a cluster containing three host groups with KERBEROS security. Overriding the default Kerberos descriptor is not necessary; however, specifying a few Kerberos-specific properties in kerberos-env and krb5-conf is required to set up services to use Kerberos. Note: prior to Ambari 2.4.0, use "kdc_host" instead of "kdc_hosts".
+
+```json
+{
+ "configurations" : [
+ {
+ "kerberos-env": {
+ "properties_attributes" : { },
+ "properties" : {
+ "realm" : "AMBARI.APACHE.ORG",
+ "kdc_type" : "mit-kdc",
+ "kdc_hosts" : "(kerberos_server_name)",
+ "admin_server_host" : "(kerberos_server_name)"
+ }
+ }
+ },
+ {
+ "krb5-conf": {
+ "properties_attributes" : { },
+ "properties" : {
+ "domains" : "AMBARI.APACHE.ORG",
+ "manage_krb5_conf" : "true"
+ }
+ }
+ }
+ ],
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_2",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "KERBEROS_CLIENT"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_3",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "KERBEROS_CLIENT"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.3",
+ "security" : {
+ "type" : "KERBEROS"
+ }
+ }
+}
+```
+
+The **Cluster Creation Template** below could be used to set up a cluster with KERBEROS security using the blueprint above. Overriding the default Kerberos descriptor is not necessary; however, specifying the kdc.admin credentials is required.
+
+```json
+{
+ "blueprint": "kerberosBlueprint",
+ "default_password": "admin",
+ "host_groups": [
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-1" }
+ ],
+ "name": "host_group_1",
+ "configurations" : [ ]
+ },
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-2" }
+ ],
+ "name": "host_group_2",
+ "configurations" : [ ]
+ },
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-3" }
+ ],
+ "name": "host_group_3",
+ "configurations" : [ ]
+ }
+ ],
+ "credentials" : [
+ {
+ "alias" : "kdc.admin.credential",
+ "principal" : "admin/admin",
+ "key" : "admin",
+ "type" : "TEMPORARY"
+ }
+ ],
+ "security" : {
+ "type" : "KERBEROS"
+ },
+ "Clusters" : {"cluster_name":"kerberosCluster"}
+}
+```
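+
+Assuming the blueprint above was registered under the name "kerberosBlueprint", this template can be submitted in the same way as the earlier examples (the `kerberos-cluster-template.json` file name is a placeholder):
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -X POST -d @./kerberos-cluster-template.json http://AMBARI_SERVER:8080/api/v1/clusters/kerberosCluster
+```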
+
+## Blueprint Support for High Availability Clusters
+
+Support for deploying HA clusters for HDFS, YARN, and HBase was added in Ambari 2.0. Please see the following link for more information:
+
+[Blueprint Support for HA Clusters](./blueprint-ha.md)
diff --git a/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/create_theme.png b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/create_theme.png
new file mode 100644
index 0000000..ab504a3
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/create_theme.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png
new file mode 100644
index 0000000..df0bed0
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png
new file mode 100644
index 0000000..1b96243
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/index.md b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/index.md
new file mode 100644
index 0000000..52a2279
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/enhanced-configs/index.md
@@ -0,0 +1,342 @@
+---
+title: Enhanced Configs
+---
+
+Introduced in Ambari 2.1.0, the Enhanced Configs feature makes it possible for service providers to customize their service's configs to a great degree and determine which configs are prominently shown to the user, without making any UI code changes. Customization includes providing a service-friendly layout, better controls (sliders, combos, lists, toggles, spinners, etc.), better validation (minimum, maximum, enums), automatic unit conversion (MB, GB, seconds, milliseconds, etc.), configuration dependencies, and improved dynamic recommendations of default values.
+
+A service provider can accomplish all of the above just by changing their service definition in the _stacks_ folder.
+
+Example: HBase Enhanced Configs
+
+
+
+## Features
+
+* Define theme with custom layout of configs
+ - Tabs
+ - Sections
+ - Sub-sections
+* Place selected configs in the layout defined above
+* Associate UI widget to use for a config
+ - Radio Buttons
+ - Slider
+ - Combo
+ - Time Interval Spinner
+ - Toggle
+ - Directory
+ - Directories
+ - List
+ - Password
+ - Text Field
+ - Checkbox
+ - Text Area
+* Automatic unit conversion for configs which must be shown in units different from those in which they are persisted.
+
+ - Memory - B, KB, MB, GB, TB, PB
+ - Time - milliseconds, seconds, minutes, hours, days, months, years
+ - Percentage - float, percentage
+* Ability to define dependencies between configurations across services (depends-on, depended-by).
+
+* Ability to dynamically update values of other depended-by configs when a config is changed.
+
+## Enable Enhanced Configs - Steps
+
+### Step 1 - Create Theme (UI Metadata)
+
+The first step is to create a theme for your service in the stack definition folder. A theme provides the UI information necessary to construct the enhanced configs: layout (tabs, sections, sub-sections), placement of configs in the sub-sections, and which widgets and units to use for each config.
+
+
+
+1. Modify metainfo.xml to define a theme by including a themes block.
+
+```xml
+<themes-dir>themes</themes-dir>
+<themes>
+  <theme>
+    <fileName>theme.json</fileName>
+    <default>true</default>
+  </theme>
+</themes>
+```
+2. The optional `<themes-dir>` element can be used if the default theme folder of '_themes_' is not desired, or is taken by another service in the same _metainfo.xml_.
+
+3. Multiple themes can be defined; however, only the first _default_ theme will be used for the service.
+
+4. Each theme points to a theme JSON file (via the _fileName_ element) in the _themes-dir_ folder.
+
+5. The _theme.json_ file contains one _configuration_ block containing three main keys:
+ 1. _layouts_ - specify tabs, sections and sub-section layout
+ 2. _placement_ - specify configurations to place in sub-sections
+ 3. _widgets_ - specify which UI widgets to use for each config
+
+```json
+{
+ "configuration": {
+ "layouts": [ ... ],
+ "placement": { ... },
+ "widgets": [ ... ]
+ }
+}
+```
+6. Layouts - Multiple layouts can be defined in a theme. Currently only the first layout will be used while rendering. A _layout_ has the following content:
+ 1. Tabs: Multiple tabs can be defined in a layout. Each tab can have its contents laid out using a simple grid-layout via the _tab-columns_ and _tab-rows_ keys.
+
+In the example below, the _Settings_ tab has a grid of 3 rows and 2 columns in which sections can be placed.
+
+```json
+"layouts": [
+ {
+ "name": "default",
+ "tabs": [
+ {
+ "name": "settings",
+ "display-name": "Settings",
+ "layout": {
+ "tab-columns": "2",
+ "tab-rows": "3",
+ "sections": [ ... ]
+ }
+ }
+ ]
+ }
+]
+```
+ 2. Sections: Each section is defined inside a tab and specifies its location and size inside the tab's grid-layout by using the _row-index_, _column-index_, _row-span_ and _column-span_ keys. Being a container itself, it can further define a grid-layout for the sub-sections it contains using the _section-rows_ and _section-columns_ keys.
+
+In the example below, the _MapReduce_ section occupies the first cell of the _Settings_ tab grid, and itself has a grid-layout of 1 row and 3 columns.
+
+```json
+"sections": [
+  {
+    "name": "section-mr-scheduler",
+    "display-name": "MapReduce",
+    "row-index": "0",
+    "column-index": "0",
+    "row-span": "1",
+    "column-span": "1",
+    "section-columns": "3",
+    "section-rows": "1",
+    "subsections": [ ... ]
+  },
+  ...
+]
+```
+ 3. Sub-sections: Each sub-section is defined inside a section and specifies its location and size inside the section's grid-layout using the _row-index_, _column-index_, _row-span_ and _column-span_ keys. Each sub-section also has an optional _border_ boolean key which tells whether a border should encapsulate its content.
+
+```json
+"subsections": [
+  {
+    "name": "subsection-mr-scheduler-row1-col1",
+    "display-name": "MapReduce Framework",
+    "row-index": "0",
+    "column-index": "0",
+    "row-span": "1",
+    "column-span": "1"
+  },
+  ...
+]
+```
+7. Placement: Specifies the order of configurations that are to be placed into each sub-section. Each placement identifies a config, and which sub-section it should appear in. The placement specifies which layout it applies to using the _configuration-layout_ key.
+
+```json
+"placement": {
+  "configuration-layout": "default",
+  "configs": [
+    {
+      "config": "mapred-site/mapreduce.map.memory.mb",
+      "subsection-name": "subsection-mr-scheduler-row1-col1"
+    },
+    {
+      "config": "mapred-site/mapreduce.reduce.memory.mb",
+      "subsection-name": "subsection-mr-scheduler-row1-col2"
+    },
+    ...
+  ]
+}
+```
+8. Widgets: The widgets array specifies which UI widget should be used to show a specific config. It also contains extra UI-specific metadata required to show the widget.
+
+In the example below, both configs are using the slider widget. However, the unit varies, resulting in one config being shown in bytes and another being shown as a percentage. This unit is purely for displaying a config, and may differ from the unit in which the value is actually persisted in Ambari. For example, the percent unit below may be persisted as a _float_, while the MB config below may be persisted in B (bytes).
+
+```json
+"widgets": [
+  {
+    "config": "yarn-site/yarn.nodemanager.resource.memory-mb",
+    "widget": {
+      "type": "slider",
+      "units": [ { "unit-name": "MB" } ]
+    }
+  },
+  {
+    "config": "yarn-site/yarn.nodemanager.resource.percentage-physical-cpu-limit",
+    "widget": {
+      "type": "slider",
+      "units": [ { "unit-name": "percent" } ]
+    }
+  },
+  {
+    "config": "yarn-site/yarn.node-labels.enabled",
+    "widget": { "type": "toggle" }
+  },
+  ...
+]
+```
+
+For a complete reference to what UI widgets are available and what metadata can be specified per widget, please refer to _Appendix A_.
+
+### Step 2 - Annotate stack configs (Non-UI Metadata)
+
+Each configuration that is used by the service's theme has to provide extra metadata about the configuration. The available metadata are:
+
+* display-name
+* value-attributes
+ - type
+ + string
+ + value-list
+ + float
+ + int
+ + boolean
+ - minimum
+ - maximum
+ - unit
+ - increment-step
+ - entries
+ + entry
+ * value
+ * description
+* depends-on
+ - property
+ + type
+ + name
+
+The value-attributes provide meta information about the value that can be used as hints by the appropriate widget. For example, the slider widget can make use of the minimum and maximum values in its working.
+
+Examples:
+
+```xml
+<property>
+ <name>namenode_heapsize</name>
+ <value>1024</value>
+ <description>NameNode Java heap size</description>
+ <display-name>NameNode Java heap size</display-name>
+ <value-attributes>
+ <type>int</type>
+ <minimum>0</minimum>
+ <maximum>268435456</maximum>
+ <unit>MB</unit>
+ <increment-step>256</increment-step>
+ </value-attributes>
+ <depends-on>
+ <property>
+ <type>hdfs-site</type>
+ <name>dfs.datanode.data.dir</name>
+ </property>
+ </depends-on>
+</property>
+
+```
+
+```xml
+<property>
+ <name>hive.default.fileformat</name>
+ <value>TextFile</value>
+ <description>Default file format for CREATE TABLE statement.</description>
+ <display-name>Default File Format</display-name>
+ <value-attributes>
+ <type>value-list</type>
+ <entries>
+ <entry>
+ <value>ORC</value>
+ <description>The Optimized Row Columnar (ORC) file format provides a highly efficient way to store Hive data. It was designed to overcome limitations of the other Hive file formats. Using ORC files improves performance when Hive is reading, writing, and processing data.</description>
+ </entry>
+ <entry>
+ <value>TextFile</value>
+ <description>Text file format saves Hive data as normal text.</description>
+ </entry>
+ </entries>
+ </value-attributes>
+</property>
+```
+
+The depends-on element is useful in building a dependency graph between different configs in Ambari. Ambari uses these bi-directional relationships (depends-on and depended-by) to automatically update dependent configs via the stack-advisor functionality.
+
+Dependencies between configurations form a directed acyclic graph (DAG). When a configuration is updated, the UI has to determine its effect on the other configs in the graph. To determine this, the _/recommendations_ endpoint should be provided an array of the configurations that have just been changed, in the changed_configurations field. Based on the provided changed configs, only their dependencies are updated in the response.
+
+Example:
+
+The figure below shows some config dependencies - A affects B and C, each of which affects D,E and F,G respectively.
+
+
+
+Now assume the user changes B to B' - a call to _/recommendations_ will only change D and E to D' and E' respectively (AB'CD'E'FG). No other config will be changed. Now assume that C is changed to C' - _/recommendations_ will only change F and G to F' and G', while still keeping the values of B', D', E' intact (AB'C'D'E'F'G'). Now if you change A to A', it will affect all its children (A'B''C''D''E''F''G''). The user will have the chance to pick and choose which changes to apply.
+
+The call to _/recommendations_ happens whenever a configuration with dependencies is changed. The POST call has the action _configuration-dependencies_, which will only change the configurations and their dependencies identified by the changed_configurations field.
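+
+As a rough sketch (the stack version, host name, and changed property below are illustrative placeholders, not values prescribed by this document), such a request could look like this:
+
+```
+POST /api/v1/stacks/HDP/versions/2.3/recommendations
+
+{
+  "recommend" : "configuration-dependencies",
+  "hosts" : [ "c6401.ambari.apache.org" ],
+  "services" : [ "HDFS", "YARN" ],
+  "changed_configurations" : [
+    {
+      "type" : "hadoop-env",
+      "name" : "namenode_heapsize",
+      "old_value" : "1024"
+    }
+  ],
+  ...
+}
+```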
+
+### Step 3 - Restart Ambari server
+
+Restarting ambari-server is required for any changes in the themes or the stack-definition to be loaded.
+
+## Reference
+
+* HDFS HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/HDFS/themes/theme.json)
+* YARN HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/themes/theme.json)
+* HIVE HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/themes/theme.json)
+* RANGER HDP-2.3 [theme_version_2.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/RANGER/themes/theme_version_2.json)
+
+## Appendix
+
+### Appendix A - Widget Non-UI Metadata
+
+<table>
+ <tr>
+ <th>Widget</th>
+ <th>Metadata Used</th>
+ </tr>
+ <tr>
+ <td>Slider</td>
+ <td>
+ <value-attributes><br></br>
+ <type>int</type><br></br>
+ <minimum>1073741824</minimum><br></br>
+ <maximum>17179869184</maximum><br></br>
+ <unit>B</unit><br></br>
+ <increment-step>1073741824</increment-step><br></br>
+ </value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Combo</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>4</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>8</value><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+ </value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Directory, Directories, Password, Text Field, Text Area</td>
+ <td>No value-attributes required</td>
+ </tr>
+ <tr>
+ <td>List</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>4</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>8</value><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>2+</selection-cardinality><br></br>
+</value-attributes><br></br>
+ </td>
+ </tr>
+ <tr>
+ <td>Radio-Buttons</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>1</value><br></br>
+ <label>Radio Option 1</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ <label>Radio Option 2</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>3</value><br></br>
+ <label>Radio Option 3</label><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+</value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Time Interval Spinner</td>
+ <td>
+<value-attributes> <br></br>
+ <type>int</type><br></br>
+ <minimum>0</minimum><br></br>
+ <maximum>2592000000</maximum><br></br>
+ <unit>milliseconds</unit><br></br>
+</value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Toggle, Checkbox</td>
+ <td>
+ <value-attributes>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>true</value><br></br>
+ <label>Native</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>false</value><br></br>
+ <label>Off</label><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+</value-attributes>
+ </td>
+ </tr>
+</table>
diff --git a/versioned_docs/version-2.7.5/ambari-design/index.md b/versioned_docs/version-2.7.5/ambari-design/index.md
new file mode 100644
index 0000000..c0afb15
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/index.md
@@ -0,0 +1,19 @@
+# Ambari Design
+
+Ambari Architecture: https://issues.apache.org/jira/secure/attachment/12559939/Ambari_Architecture.pdf
+
+Ambari Server-Agent Registration Flow: http://www.slideshare.net/hortonworks/ambari-agentregistrationflow-17041261
+
+Ambari Local Repository Setup: http://www.slideshare.net/hortonworks/ambari-using-a-local-repository
+
+API Documentation: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
+
+Technology Stack: [Technology Stack](./technology-stack.md)
+
+Integration: http://developer.teradata.com/viewpoint/articles/viewpoint-integration-with-apache-ambari-for-hadoop-monitoring
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+
+<DocCardList />
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/kerberos/enabling_kerberos.md b/versioned_docs/version-2.7.5/ambari-design/kerberos/enabling_kerberos.md
new file mode 100644
index 0000000..2b11190
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/kerberos/enabling_kerberos.md
@@ -0,0 +1,373 @@
+---
+title: Enabling Kerberos
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](#enabling-kerberos)
+ - [The Enable Kerberos Wizard](#the-enable-kerberos-wizard)
+ - [The REST API](#the-rest-api)
+ - [Miscellaneous Technical Information](#miscellaneous-technical-information)
+ - [Password Generation](#password-generation)
+
+<a name="enabling-kerberos"></a>
+
+## Enabling Kerberos
+
+Enabling Kerberos on the cluster may be done using the _Enable Kerberos Wizard_ within the Ambari UI
+or using the REST API.
+
+<a name="the-enable-kerberos-wizard"></a>
+
+### The Enable Kerberos Wizard
+
+The _Enable Kerberos Wizard_, in the Ambari UI, provides an easy to use wizard interface that walks
+through the process of enabling Kerberos.
+
+<a name="the-rest-api"></a>
+
+### The REST API
+
+It is possible to enable Kerberos using Ambari's REST API using the following API calls:
+
+**_Notes:_**
+
+- Change the authentication credentials as needed
+ - `curl ... -u username:password ...`
+ - The examples below use
+ - username: admin
+ - password: admin
+- Change the Ambari server host name and port as needed
+ - `curl ... http://HOST:PORT/api/v1/...`
+ - The examples below use
+ - HOST: AMBARI_SERVER
+ - PORT: 8080
+- Change the cluster name as needed
+ - `curl ... http://.../CLUSTER/...`
+ - The examples below use
+ - CLUSTER: CLUSTER_NAME
+- @./payload indicates that the payload data is stored in a file rather than declared inline
+ - `curl ... -d @./payload ...`
+ - The examples below use `./payload`, which should be replaced with the actual file path
+ - The contents of the payload file are indicated below the curl statement
+
+#### Add the KERBEROS Service to cluster
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS
+```
+
+#### Add the KERBEROS_CLIENT component to the KERBEROS service
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS/components/KERBEROS_CLIENT
+```
+
+#### Create and set KERBEROS service configurations
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME
+```
+
+Example payload when using an MIT KDC:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "mit-kdc",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.KDC.SERVER",
+ "master_kdc" : "FQDN.MASTER.KDC.SERVER",
+ "admin_server_host" : "FQDN.ADMIN.KDC.SERVER",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false"
+ }
+ }
+ }
+ }
+]
+```
+
+Example payload when using an Active Directory:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "active-directory",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.AD.SERVER",
+ "master_kdc" : "FQDN.MASTER.AD.SERVER",
+ "admin_server_host" : "FQDN.AD.SERVER",
+ "ldap_url" : "LDAPS://AD_HOST:PORT",
+ "container_dn" : "OU=....,....",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "password_length": "20",
+ "password_min_lowercase_letters": "1",
+ "password_min_uppercase_letters": "1",
+ "password_min_digits": "1",
+ "password_min_punctuation": "1",
+ "password_min_whitespace": "0",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false",
+ "create_attributes_template" : "{\n \"objectClass\": [\"top\", \"person\", \"organizationalPerson\", \"user\"],\n \"cn\": \"$principal_name\",\n #if( $is_service )\n \"servicePrincipalName\": \"$principal_name\",\n #end\n \"userPrincipalName\": \"$normalized_principal\",\n \"unicodePwd\": \"$password\",\n \"accountExpires\": \"0\",\n \"userAccountControl\": \"66048\"}"
+ }
+ }
+ }
+ }
+]
+```
+Example payload when using IPA:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "ipa",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.KDC.SERVER",
+ "master_kdc" : "FQDN.MASTER.KDC.SERVER",
+ "admin_server_host" : "FQDN.ADMIN.KDC.SERVER",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false"
+ }
+ }
+ }
+ }
+]
+```
+
+#### Create the KERBEROS_CLIENT host components
+_Once for each host, replace HOST_NAME_
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d '{"host_components" : [{"HostRoles" : {"component_name":"KERBEROS_CLIENT"}}]}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/hosts?Hosts/host_name=HOST_NAME
+```
+
+#### Install the KERBEROS service and components
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS
+```
+
+#### Stop all services
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services
+```
+
+#### Get the default Kerberos Descriptor
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://AMBARI_SERVER:8080/api/v1/stacks/STACK_NAME/versions/STACK_VERSION/artifacts/kerberos_descriptor
+```
+
+#### Get the customized Kerberos Descriptor (if previously set)
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/artifacts/kerberos_descriptor
+```
+
+#### Set the Kerberos Descriptor
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/artifacts/kerberos_descriptor
+```
+
+Payload:
+
+```
+{
+ "artifact_data" : {
+ ...
+ }
+}
+```
+
+**_Note:_** The Kerberos Descriptor payload may be a complete Kerberos Descriptor or just the updates to overlay on top of the default Kerberos Descriptor.
+
+#### Set the KDC administrator credentials
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/credentials/kdc.admin.credential
+```
+
+Payload:
+
+```
+{
+ "Credential" : {
+ "principal" : "admin/admin@EXAMPLE.COM",
+ "key" : "h4d00p&!",
+ "type" : "temporary"
+ }
+}
+```
+
+**_Note:_** the _principal_ and _key_ (password) values should be updated to match the correct credentials
+for the KDC administrator account
+
+**_Note:_** the `type` value may be `temporary` or `persisted`; however the value may only be `persisted`
+if Ambari's credential store has been previously set up.
+
+#### Enable Kerberos
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME
+```
+
+Payload
+
+```
+{
+ "Clusters": {
+ "security_type" : "KERBEROS"
+ }
+}
+```
+
+#### Start all services
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "STARTED"}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services
+```
+
+<a name="miscellaneous-technical-information"></a>
+
+### Miscellaneous Technical Information
+
+<a name="password-generation"></a>
+
+#### Password Generation
+
+When enabling Kerberos using an Active Directory, Ambari must use an internal mechanism to build
+the keytab files. This is because keytab files cannot be requested remotely from an Active Directory.
+In order to create keytab files, Ambari needs to know the password for each relevant Kerberos
+identity. Therefore, Ambari sets or updates the identity's password as needed.
+
+The password for each Ambari-managed account in an Active Directory is randomly generated and
+stored only long enough in memory to set the account's password and generate the keytab file.
+Passwords are generated using the following user-settable parameters:
+
+- Password length (`kerberos-env/password_length`)
+ - Default Value: 20
+- Minimum number of lower-cased letters (`kerberos-env/password_min_lowercase_letters`)
+ - Default Value: 1
+ - Character Set: `abcdefghijklmnopqrstuvwxyz`
+- Minimum number of upper-cased letters (`kerberos-env/password_min_uppercase_letters`)
+ - Default Value: 1
+ - Character Set: `ABCDEFGHIJKLMNOPQRSTUVWXYZ`
+- Minimum number of digits (`kerberos-env/password_min_digits`)
+ - Default Value: 1
+ - Character Set: `1234567890`
+- Minimum number of punctuation characters (`kerberos-env/password_min_punctuation`)
+ - Default Value: 1
+ - Character Set: `?.!$%^*()-_+=~`
+- Minimum number of whitespace characters (`kerberos-env/password_min_whitespace`)
+ - Default Value: 0
+ - Character Set: `(space character)`
+
+The following algorithm is executed:
+
+1. Create an array to store password characters
+2. For each character class (upper-case letter, lower-case letter, digit, ...), randomly select the
+minimum number of characters from the relevant character set and store them in the array
+3. For the number of characters calculated as the difference between the expected password length and
+the number of characters already collected, randomly select a character from a randomly-selected character
+class and store it into the array
+4. For the number of characters expected in the password, randomly pull one from the array and append
+to the password result
+5. Return the generated password
+
+To generate a random integer used to identify an index within a character set, a static instance of
+the `java.security.SecureRandom` class ([http://docs.oracle.com/javase/7/docs/api/java/security/SecureRandom.html](http://docs.oracle.com/javase/7/docs/api/java/security/SecureRandom.html))
+is used.
diff --git a/versioned_docs/version-2.7.5/ambari-design/kerberos/index.md b/versioned_docs/version-2.7.5/ambari-design/kerberos/index.md
new file mode 100644
index 0000000..51c56e1
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/kerberos/index.md
@@ -0,0 +1,125 @@
+---
+slug: /kerberos
+---
+
+# Ambari Kerberos Automation
+
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+
+- [Introduction](#introduction)
+ - [How it Works](#how-it-works)
+ - [Enabling Kerberos](#enabling-kerberos)
+ - [Adding Components](#adding-components)
+ - [Adding Hosts](#adding-hosts)
+ - [Regenerating Keytabs](#regenerating-keytabs)
+ - [Disabling Kerberos](#disabling-kerberos)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="introduction"></a>
+
+## Introduction
+
+Before Ambari 2.0.0, configuring an Ambari cluster to use Kerberos involved setting up the Kerberos
+client infrastructure on each host, creating the required identities, generating and distributing the
+needed keytabs files, and updating the necessary configuration properties. On a small cluster this may
+not seem to be too large of an effort; however as the size of the cluster increases, so does the amount
+of work that is involved.
+
+This is where Ambari’s Kerberos Automation facility can help. It performs all of these steps and
+also helps to maintain the cluster as new services and hosts are added.
+
+Kerberos automation can be invoked using Ambari’s REST API as well as via the _Enable Kerberos Wizard_
+in the Ambari UI.
+
+<a name="how-it-works"></a>
+
+### How it works
+
+Stacks and services that can utilize Kerberos credentials for authentication must have a Kerberos
+Descriptor declaring required Kerberos identities and how to update configurations. The Ambari
+infrastructure uses this data, and any updates applied by an administrator, to perform Kerberos
+related operations such as initially enabling Kerberos, enabling Kerberos on added hosts and
+components, regenerating credentials, and disabling Kerberos.
+
+It should be noted that the Kerberos service is required to be installed on all hosts of the cluster
+before any automated tasks can be performed. If using the Ambari UI, this should happen as part of the
+Enable Kerberos wizard workflow.
+
+<a name="enabling-kerberos"></a>
+
+### Enabling Kerberos
+
+When enabling Kerberos, all of the services in the cluster are expected to be stopped. The main
+reason for this is to avoid state issues as the services are stopped and then started when the cluster
+is transitioning to use Kerberos.
+
+The following steps are taken to enable Kerberos on the cluster en masse:
+
+1. Create or update accounts in the configured KDC (or Active Directory)
+2. Generate keytab files and distribute them to the appropriate hosts
+3. Update relevant configurations
+
+<a name="adding-components"></a>
+
+### Adding Components
+
+If Kerberos is enabled for the Ambari cluster, whenever new components are added, the new components
+will automatically be configured for Kerberos, and any necessary principals and keytab files will be
+created and distributed as needed.
+
+For each new component, the following steps will occur before the component is installed and started:
+
+1. Update relevant configurations
+2. Create or update accounts in the configured KDC (or Active Directory)
+3. Generate keytab files and distribute them to the appropriate hosts
+
+<a name="adding-hosts"></a>
+
+### Adding Hosts
+
+When adding a new host, the Kerberos client must be installed on it. This does not happen automatically;
+however the _Add Host Wizard_ in the Ambari UI will perform this step if Kerberos was enabled for
+the Ambari cluster. Once the host is added, generally one or more components are installed on
+it - see [Adding Components](#adding-components).
+
+<a name="regenerating-keytabs"></a>
+
+### Regenerating Keytabs
+
+Once a cluster has Kerberos enabled, it may be necessary to regenerate keytabs. There are two options
+for this:
+
+- `all` - create any missing principals and unconditionally update the passwords for existing principals, then create and distribute all relevant keytab files
+- `missing` - create any missing principals; then create and distribute keytab files for the newly-created principals
+
+In either case, the affected services should be restarted after the regeneration process is complete.
+
+If performed through the Ambari UI, the user will be asked which keytab regeneration mode to use and
+whether services are to be restarted or not.
+
+<a name="disabling-kerberos"></a>
+
+### Disabling Kerberos
+
+In the event Kerberos needs to be removed from the Ambari cluster, Ambari will remove the managed
+Kerberos identities, keytab files, and Kerberos-specific configurations. The Ambari UI will perform
+the steps of stopping and starting the services as well as removing the Kerberos service; however
+these steps will need to be performed manually when using the Ambari REST API.
diff --git a/versioned_docs/version-2.7.5/ambari-design/kerberos/kerberos_descriptor.md b/versioned_docs/version-2.7.5/ambari-design/kerberos/kerberos_descriptor.md
new file mode 100644
index 0000000..2dd4798
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/kerberos/kerberos_descriptor.md
@@ -0,0 +1,855 @@
+---
+title: The Kerberos Descriptor
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](#the-kerberos-descriptor)
+ - [Components of a Kerberos Descriptor](#components-of-a-kerberos-descriptor)
+ - [Stack-level Properties](#stack-level-properties)
+ - [Stack-level Identities](#stack-level-identities)
+ - [Stack-level Auth-to-local-properties](#stack-level-auth-to-local-properties)
+ - [Stack-level Configurations](#stack-level-configuratons)
+ - [Services](#services)
+ - [Service-level Identities](#service-level-identities)
+ - [Service-level Auth-to-local-properties](#service-level-auth-to-local-properties)
+ - [Service-level Configurations](#service-level-configurations)
+ - [Components](#service-components)
+ - [Component-level Identities](#component-level-identities)
+ - [Component-level Auth-to-local-properties](#component-level-auth-to-local-properties)
+ - [Component-level Configurations](#component-level-configurations)
+ - [Kerberos Descriptor specifications](#kerberos-descriptor-specifications)
+ - [properties](#properties)
+ - [auth-to-local-properties](#auth-to-local-properties)
+ - [configurations](#configurations)
+ - [identities](#identities)
+ - [principal](#principal)
+ - [keytab](#keytab)
+ - [services](#services)
+ - [components](#components)
+ - [Examples](#examples)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="the-kerberos-descriptor"></a>
+
+## The Kerberos Descriptor
+
+The Kerberos Descriptor is a JSON-formatted text file containing information needed by Ambari to enable
+or disable Kerberos for a stack and its services. This file must be named **_kerberos.json_** and should
+be in the root directory of the relevant stack or service definition. Kerberos Descriptors are meant to
+be hierarchical such that details in the stack-level descriptor can be overwritten (or updated) by details
+in the service-level descriptors.
+
+For the services in a stack to be Kerberized, there must be a stack-level Kerberos Descriptor. This
+ensures that even if a common service has a Kerberos Descriptor, it may not be Kerberized unless the
+relevant stack indicates that it supports Kerberos by having a stack-level Kerberos Descriptor.
+
+For a component of a service to be Kerberized, there must be an entry for it in its containing service's
+service-level descriptor. This allows for some of a service's components to be managed and other
+components of that service to be ignored by the automated Kerberos facility.
+
+Kerberos Descriptors are inherited from the base stack or service, but may be overridden as a full
+descriptor - partial descriptors are not allowed.
+
+A complete descriptor (which is built using the stack-level descriptor, the service-level descriptors,
+and any updates from user input) has the following structure:
+
+- Stack-level Properties
+- Stack-level Identities
+- Stack-level Configurations
+- Stack-level Auth-to-local-properties
+- Services
+ - Service-level Identities
+ - Service-level Auth-to-local-properties
+ - Service-level Configurations
+ - Components
+ - Component-level Identities
+ - Component-level Auth-to-local-properties
+ - Component-level Configurations
+
+Each level of the descriptor inherits the data from its parent. This data, however, may be overridden
+if necessary. For example, a component will inherit the configurations and identities of its containing
+service, which in turn inherits the configurations and identities from the stack.
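+
+For reference, the effective descriptors can be inspected through the Ambari REST API, which exposes
+the stack-level default descriptor and any user-supplied descriptor as `kerberos_descriptor` artifacts.
+A hedged sketch (host, credentials, stack name and version, and cluster name are assumptions):
+
+```bash
+# Stack-level default Kerberos Descriptor
+curl -u admin:admin -H "X-Requested-By: ambari" \
+  "http://ambari.server:8080/api/v1/stacks/HDP/versions/2.6/artifacts/kerberos_descriptor"
+
+# User-supplied descriptor stored for a cluster, if one exists
+curl -u admin:admin -H "X-Requested-By: ambari" \
+  "http://ambari.server:8080/api/v1/clusters/MyCluster/artifacts/kerberos_descriptor"
+```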
+
+<a name="components-of-a-kerberos-descriptor"></a>
+
+### Components of a Kerberos Descriptor
+
+<a name="stack-level-properties"></a>
+
+#### Stack-level Properties
+
+Stack-level properties is an optional set of name/value pairs that can be used in variable replacements.
+For example, if a property named "**_property1_**" exists with the value of "**_value1_**", then any instance of
+"**_${property1}_**" within a configuration property name or configuration property value will be replaced
+with "**_value1_**".
+
+This property is only relevant in the stack-level Kerberos Descriptor and may not be overridden by
+lower-level descriptors.
+
+See [properties](#properties).
+
+<a name="stack-level-identities"></a>
+
+#### Stack-level Identities
+
+Stack-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are common among all services in the stack. An example of such an identity is the
+Ambari smoke test user, which is used by all services to perform service check operations. Service-
+and component-level identities may reference (and specialize) stack-level identities using the
+identity’s name with a forward slash (/) preceding it. For example, if there was a stack-level identity
+with the name "smokeuser", then a service or a component may create an identity block that references
+and specializes it by declaring a "**_reference_**" property and setting it to "/smokeuser". Within
+this identity block, details of the identity may be overridden as necessary. This does not alter
+the stack-level identity; it essentially creates a copy of it and updates the copy's properties.
+
+See [identities](#identities).
+
+<a name="stack-level-auth-to-local-properties"></a>
+
+#### Stack-level Auth-to-local-properties
+
+Stack-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="stack-level-configurations"></a>
+
+#### Stack-level Configurations
+
+Stack-level configurations is an optional configurations block containing a list of zero or more
+configuration descriptors that are common among all services in the stack. Configuration descriptors
+are overridable due to the structure of the data. However, overriding configuration properties may
+create undesired behavior since it is not known until after the Kerberization process is complete
+what value a property will have.
+
+See [configurations](#configurations).
+
+<a name="services"></a>
+
+#### Services
+
+Services is a list of zero or more service descriptors. A stack-level Kerberos Descriptor should not
+list any services; however a service-level Kerberos Descriptor should contain at least one.
+
+See [services](#services).
+
+<a name="service-level-identities"></a>
+
+#### Service-level Identities
+
+Service-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are common among all components of the service. Component-level identities may
+reference (and specialize) service-level identities by specifying a relative or an absolute path
+to it.
+
+For example, if there was a service-level identity with the name "service_identity", then a child
+component may create an identity block that references and specializes it by setting its "reference"
+attribute to "../service_identity" or "/service_name/service_identity" and overriding any values as
+necessary. This does not override the service-level identity; it essentially creates a copy of it and
+updates the copy's properties.
+
+##### Examples
+
+```
+{
+ "name" : "relative_path_example",
+ "reference" : "../service_identity",
+ ...
+}
+```
+
+```
+{
+ "name" : "absolute_path_example",
+ "reference" : "/SERVICE/service_identity",
+ ...
+}
+```
+
+**Note**: By using the absolute path to an identity, any service-level identity may be referenced by
+any other service or component.
+
+See [identities](#identities).
+
+<a name="service-level-auth-to-local-properties"></a>
+
+#### Service-level Auth-to-local-properties
+
+Service-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="service-level-configurations"></a>
+
+#### Service-level Configurations
+
+Service-level configurations is an optional configurations block listing zero or more configuration
+descriptors that are common among all components within a service. Configuration descriptors may be
+overridden due to the structure of the data. However, overriding configuration properties may create
+undesired behavior since it is not known until after the Kerberization process is complete what value
+a property will have.
+
+See [configurations](#configurations).
+
+<a name="service-components"></a>
+
+#### Components
+
+Components is a list of zero or more component descriptor blocks.
+
+See [components](#components).
+
+<a name="component-level-identities"></a>
+
+#### Component-level Identities
+
+Component-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are specific to the component. A Component-level identity may be referenced
+(and specialized) by using the absolute path to it (`/service_name/component_name/identity_name`).
+This does not override the component-level identity; it essentially creates a copy of it and updates
+the copy's properties.
+
+See [identities](#identities).
+
+<a name="component-level-auth-to-local-properties"></a>
+
+#### Component-level Auth-to-local-properties
+
+Component-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="component-level-configurations"></a>
+
+#### Component-level Configurations
+
+Component-level configurations is an optional configurations block listing zero or more configuration
+descriptors that are specific to the component.
+
+See [configurations](#configurations).
+
+<a name="kerberos-descriptor-specifications"></a>
+
+### Kerberos Descriptor Specifications
+
+<a name="properties"></a>
+
+#### properties
+
+The `properties` block is only valid in the stack-level Kerberos Descriptor file. This block is
+a set of name/value pairs as follows:
+
+```
+"properties" : {
+ "property_1" : "value_1",
+ "property_2" : "value_2",
+ ...
+}
+```
+
+<a name="auth-to-local-properties"></a>
+
+#### auth-to-local-properties
+
+The `auth-to-local-properties` block is valid in the stack-, service-, and component-level
+descriptors. This block is a list of configuration specifications
+(`config-type/property_name[|concatenation_scheme]`) indicating which properties contain
+auth-to-local rules that should be dynamically updated based on the identities used within the
+Kerberized cluster.
+
+The specification optionally declares the concatenation scheme to use to append
+the rules into a rule set value. If specified, one of the following schemes may be set:
+
+- **`new_lines`** - rules in the rule set are separated by a new line (`\n`)
+- **`new_lines_escaped`** - rules in the rule set are separated by a `\` and a new line (`\n`)
+- **`spaces`** - rules in the rule set are separated by a whitespace character (effectively placing all rules in a single line)
+
+If not specified, the default concatenation scheme is `new_lines`.
+
+```
+"auth-to-local-properties" : [
+ "core-site/hadoop.security.auth_to_local",
+ "service.properties/http.authentication.kerberos.name.rules|new_lines_escaped",
+ ...
+]
+```
+
+<a name="configurations"></a>
+
+#### configurations
+
+A `configurations` block may exist in stack-, service-, and component-level descriptors.
+This block is a list of one or more configuration blocks containing a single structure named using
+the configuration type and containing values for each relevant property.
+
+Each property name and value may be a concrete value or contain variables to be replaced using values
+from the stack-level `properties` block or any available configuration. Properties from the `properties`
+block are referenced by name (`${property_name}`), configuration properties are referenced by
+configuration specification (`${config-type/property_name}`), and Kerberos principals are referenced by the principal path
+(`principals/SERVICE/COMPONENT/principal_name`).
+
+```
+"configurations" : [
+ {
+ "config-type-1" : {
+ "${cluster-env/smokeuser}_property" : "value1",
+ "some_realm_property" : "${realm}",
+ ...
+ }
+ },
+ {
+ "config-type-2" : {
+ "property-2" : "${cluster-env/smokeuser}",
+ ...
+ }
+ },
+ ...
+]
+```
+
+If `cluster-env/smokeuser` was `"ambari-qa"` and `realm` was `"EXAMPLE.COM"`, the above block would
+effectively be translated to
+
+```
+"configurations" : [
+ {
+ "config-type-1" : {
+ "ambari-qa_property" : "value1",
+ "some_realm_property" : "EXAMPLE.COM",
+ ...
+ }
+ },
+ {
+ "config-type-2" : {
+ "property-2" : "ambari-qa",
+ ...
+ }
+ },
+ ...
+]
+```
+
+<a name="identities"></a>
+
+#### identities
+
+An `identities` descriptor may exist in stack-, service-, and component-level descriptors. This block
+is a list of zero or more identity descriptors. Each identity descriptor is a block containing a `name`,
+an optional `reference` identifier, an optional `principal` descriptor, and an optional `keytab`
+descriptor.
+
+The `name` property of an `identity` descriptor should be a concrete name that is unique within its
+local scope (stack, service, or component). However, to maintain backwards-compatibility with
+previous versions of Ambari, it may be a reference identifier to some other identity in the
+Kerberos Descriptor. This feature is deprecated and may not be available in future versions of Ambari.
+
+The `reference` property of an `identity` descriptor is optional. If it exists, it indicates that the
+properties from the referenced identity are to be used as the base for the current identity and any properties
+specified in the local identity block override the base data. In this scenario, the base data is copied
+to the local identity and therefore changes are realized locally, not globally. Referenced identities
+may be hierarchical, so a referenced identity may reference another identity, and so on. Because of
+this, care must be taken not to create cyclic references. Reference values must be in the form of a
+relative or absolute _path_ to the referenced identity descriptor. Relative _paths_ start with a `../`
+and may be specified in component-level identity descriptors to reference an identity descriptor
+in the parent service. Absolute _paths_ start with a `/` and may be specified at any level as follows:
+
+- **Stack-level** identity reference: `/identity_name`
+- **Service-level** identity reference: `/SERVICE_NAME/identity_name`
+- **Component-level** identity reference: `/SERVICE_NAME/COMPONENT_NAME/identity_name`
+
+```
+"identities" : [
+ {
+ "name" : "local_identity",
+ "principal" : {
+ ...
+ },
+ "keytab" : {
+ ...
+ }
+ },
+ {
+ "name" : "/smokeuser",
+ "principal" : {
+ "configuration" : "service-site/principal_property_name"
+ },
+ "keytab" : {
+ "configuration" : "service-site/keytab_property_name"
+ }
+ },
+ ...
+]
+```
+
+<a name="principal"></a>
+
+#### principal
+
+The `principal` block is an optional block inside an `identity` descriptor block. It declares the
+details about the identity’s principal, including the principal’s `value`, the `type` (user or service),
+the relevant `configuration` property, and a local username mapping. All properties are optional; however
+if no base or default value is available (via the parent identity's `reference` value) for all properties,
+the principal may be ignored.
+
+The `value` property of the principal is expected to be the normalized principal name, including the
+principal’s components and realm. In most cases, the realm should be specified using the realm variable
+(`${realm}` or `${kerberos-env/realm}`). Also, in the case of a service principal, "`_HOST`" should be
+used to represent the relevant hostname. This value is typically replaced on the agent side by either
+the agent-side scripts or the services themselves to be the hostname of the current host. However, the
+built-in hostname variable (`${hostname}`) may be used if "`_HOST`" replacement on the agent-side is
+not available for the service. Examples: `smokeuser@${realm}`, `service/_HOST@${realm}`.
+
+The `type` property of the principal may be either `user` or `service`. If not specified, the type is
+assumed to be `user`. This value dictates how the identity is to be created in the KDC or Active Directory.
+It is especially important in the Active Directory case due to how accounts are created. It also
+indicates to Ambari how to handle the principal and relevant keytab file regarding user interface
+behavior and data caching.
+
+The `configuration` property is an optional configuration specification (`config-type/property_name`)
+that is to be set to this principal's `value` (after its variables have been replaced).
+
+The `local_username` property, if supplied, indicates which local user account to use when generating
+auth-to-local rules for this identity. If not specified, no explicit auth-to-local rule will be generated.
+
+```
+"principal" : {
+ "value": "${cluster-env/smokeuser}@${realm}",
+ "type" : "user" ,
+ "configuration": "cluster-env/smokeuser_principal_name",
+ "local_username" : "${cluster-env/smokeuser}"
+}
+```
+
+```
+"principal" : {
+ "value": "component1/_HOST@${realm}",
+ "type" : "service" ,
+ "configuration": "service-site/component1.principal"
+}
+```
+
+<a name="keytab"></a>
+
+#### keytab
+
+The `keytab` block is an optional block inside an `identity` descriptor block. It describes how to
+create and store the relevant keytab file. This block declares the keytab file's path in the local
+filesystem of the destination host, the permissions to assign to that file, and the relevant
+configuration property.
+
+The `file` property declares an absolute path to use to store the keytab file when distributing to
+relevant hosts. If this is not supplied, the keytab file will not be created.
+
+The `owner` property is an optional block indicating the local user account to assign as the owner of
+the file and what access (`"rw"` - read/write; `"r"` - read-only) should
+be granted to that user. By default, the owner will be given read-only access.
+
+The `group` property is an optional block indicating which local group to assign as the group owner
+of the file and what access (`"rw"` - read/write; `"r"` - read-only; `""` - no access) should be granted
+to local user accounts in that group. By default, the group will be given no access.
+
+The `configuration` property is an optional configuration specification (`config-type/property_name`)
+that is to be set to the path of this keytab file (after any variables have been replaced).
+
+```
+"keytab" : {
+ "file": "${keytab_dir}/smokeuser.headless.keytab",
+ "owner": {
+ "name": "${cluster-env/smokeuser}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "${cluster-env/smokeuser_keytab}"
+}
+```
+
+<a name="services"></a>
+
+#### services
+
+A `services` block may exist in the stack-level and the service-level Kerberos Descriptor file.
+This block is a list of zero or more service descriptors to add to the Kerberos Descriptor.
+
+Each service block contains a service `name` and optional `identities`, `auth_to_local_properties`,
+`configurations`, and `components` blocks.
+
+```
+"services": [
+ {
+ "name": "SERVICE1_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ],
+ "components": [
+ ...
+ ]
+ },
+ {
+ "name": "SERVICE2_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ],
+ "components": [
+ ...
+ ]
+ },
+ ...
+]
+```
+
+<a name="components"></a>
+
+#### components
+
+A `components` block may exist within a `service` descriptor block. This block is a list of zero or
+more component descriptors belonging to the containing service descriptor. Each component descriptor
+is a block containing a component `name`, and optional `identities`, `auth_to_local_properties`,
+and `configurations` blocks.
+
+```
+"components": [
+ {
+ "name": "COMPONENT_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ]
+ },
+ ...
+]
+```
+
+<a name="examples"></a>
+
+### Examples
+
+#### Example Stack-level Kerberos Descriptor
+The following example is annotated for descriptive purposes. The annotations are not valid in a real
+JSON-formatted file.
+
+```
+{
+ // Properties that can be used in variable replacement operations.
+ // For example, ${keytab_dir} will resolve to "/etc/security/keytabs".
+ // Since variable replacement is recursive, ${realm} will resolve
+ // to ${kerberos-env/realm}, which in-turn will resolve to the
+ // declared default realm for the cluster
+ "properties": {
+ "realm": "${kerberos-env/realm}",
+ "keytab_dir": "/etc/security/keytabs"
+ },
+ // A list of global Kerberos identities. These may be referenced
+ // using /identity_name. For example the “spnego” identity may be
+ // referenced using “/spnego”
+ "identities": [
+ {
+ "name": "spnego",
+ // Details about this identity's principal. This instance does not
+ // declare any value for configuration or local username. That is
+ // left up to the services and components that wish to reference
+ // this principal and set overrides for those values.
+ "principal": {
+ "value": "HTTP/_HOST@${realm}",
+ "type" : "service"
+ },
+ // Details about this identity’s keytab file. This keytab file
+ // will be created in the configured keytab file directory with
+ // read-only access granted to root and users in the cluster’s
+ // default user group (typically, hadoop). To ensure that only
+ // a single copy exists on the file system, references to this
+ // identity should not override the keytab file details;
+ // however if it is desired that multiple keytab files are
+ // created, these values may be overridden in a reference
+ // within a service or component. Since no configuration
+ // specification is set, the keytab file location will not
+ // be set in any configuration file by default. Services and
+ // components need to reference this identity to update this
+ // value as needed.
+ "keytab": {
+ "file": "${keytab_dir}/spnego.service.keytab",
+ "owner": {
+ "name": "root",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ }
+ }
+ },
+ {
+ "name": "smokeuser",
+ // Details about this identity's principal. This instance declares
+ // a configuration and local username mapping. Services and
+ // components can override this to set additional configurations
+ // that should be set to this principal value. Overriding the
+ // local username may create undesired behavior since there may be
+ // conflicting entries in relevant auth-to-local rule sets.
+ "principal": {
+ "value": "${cluster-env/smokeuser}@${realm}",
+ "type" : "user",
+ "configuration": "cluster-env/smokeuser_principal_name",
+ "local_username" : "${cluster-env/smokeuser}"
+ },
+ // Details about this identity’s keytab file. This keytab file
+ // will be created in the configured keytab file directory with
+ // read-only access granted to the configured smoke user
+ // (typically ambari-qa) and users in the cluster’s default
+ // user group (typically hadoop). To ensure that only a single
+ // copy exists on the file system, references to this identity
+ // should not override the keytab file details; however if it
+ // is desired that multiple keytab files are created, these
+ // values may be overridden in a reference within a service or
+ // component.
+ "keytab": {
+ "file": "${keytab_dir}/smokeuser.headless.keytab",
+ "owner": {
+ "name": "${cluster-env/smokeuser}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "cluster-env/smokeuser_keytab"
+ }
+ }
+ ]
+}
+```
+
+#### Example Service-level Kerberos Descriptor
+The following example is annotated for descriptive purposes. The annotations are not valid in a real
+JSON-formatted file.
+
+```
+{
+ // One or more services may be listed in a service-level Kerberos
+ // Descriptor file
+ "services": [
+ {
+ "name": "SERVICE_1",
+ // Service-level identities to be created if this service is installed.
+ // Any relevant keytab files will be distributed to hosts with at least
+ // one of the components on it.
+ "identities": [
+ // Service-specific identity declaration, declaring all properties
+ // needed to initiate the creation of the principal and keytab files,
+ // as well as setting the service-specific configurations. This may
+ // be referenced by contained components using ../service1_identity.
+ {
+ "name": "service1_identity",
+ "principal": {
+ "value": "service1/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/service1.principal"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/service1.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "service1-site/service1.keytab.file"
+ }
+ },
+ // Service-level identity referencing the stack-level spnego
+ // identity and overriding the principal and keytab configuration
+ // specifications.
+ {
+ "name": "service1_spnego",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/service1.web.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/service1.web.keytab.file"
+ }
+ },
+ // Service-level identity referencing the stack-level smokeuser
+ // identity. No properties are being overridden. This ensures that
+ // the smokeuser principal is created and its keytab file is
+ // distributed to all hosts where components of this service are
+ // installed.
+ {
+ "name": "service1_smokeuser",
+ "reference": "/smokeuser"
+ }
+ ],
+ // Properties related to this service that require the auth-to-local
+ // rules to be dynamically generated based on identities created for
+ // the cluster.
+ "auth_to_local_properties" : [
+ "service1-site/security.auth_to_local"
+ ],
+ // Configuration properties to be set when this service is installed,
+ // no matter which components are installed
+ "configurations": [
+ {
+ "service-site": {
+ "service1.security.authentication": "kerberos",
+ "service1.security.auth_to_local": ""
+ }
+ }
+ ],
+ // A list of components related to this service
+ "components": [
+ {
+ "name": "COMPONENT_1",
+ // Component-specific identities to be created when this component
+ // is installed. Any keytab files specified will be distributed
+ // only to the hosts where this component is installed.
+ "identities": [
+ // An identity "local" to this component
+ {
+ "name": "component1_service_identity",
+ "principal": {
+ "value": "component1/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/comp1.principal",
+ "local_username" : "${service1-env/service_user}"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/s1c1.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": ""
+ },
+ "configuration": "service1-site/comp1.keytab.file"
+ }
+ },
+ // The stack-level spnego identity overridden to set component-specific
+ // configurations
+ {
+ "name": "component1_spnego_1",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/comp1.spnego.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp1.spnego.keytab.file"
+ }
+ },
+ // The stack-level spnego identity overridden to set a different set of component-specific
+ // configurations
+ {
+ "name": "component1_spnego_2",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/comp1.someother.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp1.someother.keytab.file"
+ }
+ }
+ ],
+ // Component-specific configurations to set if this component is installed
+ "configurations": [
+ {
+ "service-site": {
+ "comp1.security.type": "kerberos"
+ }
+ }
+ ]
+ },
+ {
+ "name": "COMPONENT_2",
+ "identities": [
+ {
+ "name": "component2_service_identity",
+ "principal": {
+ "value": "component2/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/comp2.principal",
+ "local_username" : "${service1-env/service_user}"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/s1c2.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": ""
+ },
+ "configuration": "service1-site/comp2.keytab.file"
+ }
+ },
+ // The service-level service1_identity identity overridden to
+ // set component-specific configurations
+ {
+ "name": "component2_service1_identity",
+ "reference": "../service1_identity",
+ "principal": {
+ "configuration": "service1-site/comp2.service.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp2.service.keytab.file"
+ }
+ }
+ ],
+ "configurations" : [
+ {
+ "service-site" : {
+ "comp2.security.type": "kerberos"
+ }
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
diff --git a/versioned_docs/version-2.7.5/ambari-design/kerberos/kerberos_service.md b/versioned_docs/version-2.7.5/ambari-design/kerberos/kerberos_service.md
new file mode 100644
index 0000000..2045aa7
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/kerberos/kerberos_service.md
@@ -0,0 +1,341 @@
+---
+title: The Kerberos Service
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](#the-kerberos-service)
+ - [Configurations](#configurations)
+ - [kerberos-env](#kerberos-env)
+ - [krb5-conf](#krb5-conf)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="the-kerberos-service"></a>
+
+## The Kerberos Service
+
+<a name="configurations"></a>
+
+### Configurations
+
+<a name="kerberos-env"></a>
+
+#### kerberos-env
+
+##### kdc_type
+
+The type of KDC being used.
+
+_Possible Values:_
+- `none`
+ - Ambari is not to integrate with a KDC. In this case, it is expected that the Kerberos identities
+will be created and the keytab files are distributed manually
+- `mit-kdc`
+ - Ambari is to integrate with an MIT KDC
+- `active-directory`
+ - Ambari is to integrate with an Active Directory
+- `ipa`
+ - Ambari is to integrate with a FreeIPA server
+
+##### manage_identities
+
+Indicates whether the Ambari-specified user and service Kerberos identities (principals and keytab files)
+should be managed (created, deleted, updated, etc...) by Ambari (`true`) or managed manually by the
+user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+##### create_ambari_principal
+
+Indicates whether the Ambari Kerberos identity (principal and keytab file used by Ambari, itself, and
+its views) should be managed (created, deleted, updated, etc...) by Ambari (`true`) or managed manually
+by the user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+This property is dependent on the value of `manage_identities`: if `manage_identities` is
+`false`, `create_ambari_principal` will be assumed to be `false` as well.
+
+##### manage_auth_to_local
+
+Indicates whether the Hadoop auth-to-local rules should be managed by Ambari (`true`) or managed
+manually by the user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+##### install_packages
+
+Indicates whether Ambari should install the Kerberos client packages (`true`) or not (`false`).
+If not, it is expected that Kerberos utility programs installed by the user (such as kadmin, kinit,
+klist, and kdestroy) are compatible with MIT Kerberos 5 version 1.10.3 in command line options and
+behaviors.
+
+_Possible Values:_ `true`, `false`
+
+##### ldap_url
+
+The URL to the Active Directory LDAP Interface. This value **must** indicate a secure channel using
+LDAPS since it is required for creating and updating passwords for Active Directory accounts.
+
+_Example:_ `ldaps://ad.example.com:636`
+
+If the `kdc_type` is `active-directory`, this property is mandatory.
+
+##### container_dn
+
+The distinguished name (DN) of the container used to store the Ambari-managed user and service principals
+within the configured Active Directory.
+
+_Example:_ `OU=hadoop,DC=example,DC=com`
+
+If the `kdc_type` is `active-directory`, this property is mandatory.
+
+##### encryption_types
+
+The supported (space-delimited) list of session key encryption types that should be returned by the KDC.
+
+_Default value:_ `aes des3-cbc-sha1 rc4 des-cbc-md5`
+
+##### realm
+
+The default realm to use when creating service principals
+
+_Example:_ `EXAMPLE.COM`
+
+This value is expected to be in all uppercase characters.
+
+##### kdc_hosts
+
+A comma-delimited list of IP addresses or FQDNs for the list of relevant KDC hosts. Optionally a
+port number may be included for each entry.
+
+_Example:_ `kdc.example.com, kdc1.example.com`
+
+_Example:_ `kdc.example.com:88, kdc1.example.com:88`
+
+##### admin_server_host
+
+The IP address or FQDN for the Kerberos administrative host. Optionally a port number may be included.
+
+_Example:_ `kadmin.example.com`
+
+_Example:_ `kadmin.example.com:88`
+
+If the `kdc_type` is `mit-kdc` or `ipa`, the value must be the FQDN of the Kerberos administrative host.
+
+##### master_kdc
+
+The IP address or FQDN of the master KDC host in a master-slave KDC deployment. Optionally a port
+number may be included.
+
+_Example:_ `kadmin.example.com`
+
+_Example:_ `kadmin.example.com:88`
+
+##### executable_search_paths
+
+A comma-delimited list of search paths to use to find Kerberos utilities like kadmin and kinit.
+
+_Default value:_ `/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin`
+
+##### password_length
+
+The required length for generated passwords.
+
+_Default value:_ `20`
+
+##### password_min_lowercase_letters
+
+The minimum number of lowercase letters (a-z) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_uppercase_letters
+
+The minimum number of uppercase letters (A-Z) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_digits
+
+The minimum number of digits (0-9) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_punctuation
+
+The minimum number of punctuation characters (?.!$%^*()-_+=~) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_whitespace
+
+The minimum number of whitespace characters required in generated passwords
+
+_Default value:_ `0`
+
+##### service_check_principal_name
+
+The principal name to use when executing the Kerberos service check
+
+_Example:_ `${cluster_name}-${short_date}`
+
+##### case_insensitive_username_rules
+
+Force principal names to resolve to lowercase local usernames in auth-to-local rules
+
+_Possible values:_ `true`, `false`
+
+_Default value:_ `false`
+
+##### ad_create_attributes_template
+
+A Velocity template to use to generate a JSON-formatted document containing the set of attribute
+names and values needed to create a new Kerberos identity in the relevant Active Directory.
+
+Variables include:
+
+- `principal_name` - the components (primary and instance) portion of the principal
+- `principal_primary` - the _primary component_ of the principal name
+- `principal_instance` - the _instance component_ of the principal name
+- `realm` - the `realm` portion of the principal
+- `realm_lowercase` - the lowercase form of the `realm` of the principal
+- `normalized_principal` - the full principal value, including the component and realm parts
+- `principal_digest` - a binhexed-encoded SHA1 digest of the normalized principal
+- `principal_digest_256` - a binhexed-encoded SHA256 digest of the normalized principal
+- `principal_digest_512` - a binhexed-encoded SHA512 digest of the normalized principal
+- `password` - the generated password
+- `is_service` - `true` if the principal is a _service_ principal, `false` if the principal is a _user_ principal
+- `container_dn` - the `kerberos-env/container_dn` property value
+
+_Note_: A principal is made up of the following parts: primary component, instance component
+(optional), and realm:
+
+* User principal: **_`primary_component`_**@**_`realm`_**
+* Service principal: **_`primary_component`_**/**_`instance_component`_**@**_`realm`_**
+
+_Default value:_
+
+```
+{
+"objectClass": ["top", "person", "organizationalPerson", "user"],
+"cn": "$principal_name",
+#if( $is_service )
+"servicePrincipalName": "$principal_name",
+#end
+"userPrincipalName": "$normalized_principal",
+"unicodePwd": "$password",
+"accountExpires": "0",
+"userAccountControl": "66048"
+}
+```
+
+This property is mandatory and only used if the `kdc_type` is `active-directory`
+
+##### kdc_create_attributes
+
+The set of attributes to use when creating a new Kerberos identity in the relevant (MIT) KDC.
+
+_Example:_ `-requires_preauth max_renew_life=7d`
+
+This property is optional and only used if the `kdc_type` is `mit-kdc`
+
+##### ipa_user_group
+
+The group in IPA that user principals should be a member of.
+
+This property is optional and only used if the `kdc_type` is `ipa`
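+
+The settings above live in the `kerberos-env` configuration type and, once set, can be read back
+through the REST API. A sketch (host, credentials, cluster name, and the `version1` tag are
+assumptions; the tag of the current configuration version is listed in the first response):
+
+```bash
+# List the kerberos-env configuration versions for the cluster...
+curl -u admin:admin -H "X-Requested-By: ambari" \
+  "http://ambari.server:8080/api/v1/clusters/MyCluster/configurations?type=kerberos-env"
+# ...then fetch a specific version by its tag (tag value is an assumption).
+curl -u admin:admin -H "X-Requested-By: ambari" \
+  "http://ambari.server:8080/api/v1/clusters/MyCluster/configurations?type=kerberos-env&tag=version1"
+```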
+
+<a name="krb5-conf"></a>
+
+#### krb5-conf
+
+##### manage_krb5_conf
+
+Indicates whether the krb5.conf file should be managed (created, updated, etc...) by Ambari (`true`)
+or managed manually by the user (`false`).
+
+_Possible values:_ `true`, `false`
+
+_Default value:_ `false`
+
+##### domains
+
+A comma-separated list of domain names used to map server host names to the realm name.
+
+_Example:_ `host.example.com, example.com, .example.com`
+
+This property is optional
+
+##### conf_dir
+
+The krb5.conf configuration directory.
+
+_Default value:_ `/etc`
+
+##### content
+
+Customizable krb5.conf template (Jinja template engine)
+
+_Default value:_
+
+```
+[libdefaults]
+ renew_lifetime = 7d
+ forwardable = true
+ default_realm = {{realm}}
+ ticket_lifetime = 24h
+ dns_lookup_realm = false
+ dns_lookup_kdc = false
+ default_ccache_name = /tmp/krb5cc_%{uid}
+ #default_tgs_enctypes = {{encryption_types}}
+ #default_tkt_enctypes = {{encryption_types}}
+{% if domains %}
+[domain_realm]
+{%- for domain in domains.split(',') %}
+ {{domain|trim()}} = {{realm}}
+{%- endfor %}
+{% endif %}
+[logging]
+ default = FILE:/var/log/krb5kdc.log
+ admin_server = FILE:/var/log/kadmind.log
+ kdc = FILE:/var/log/krb5kdc.log
+
+[realms]
+ {{realm}} = {
+{%- if master_kdc %}
+ master_kdc = {{master_kdc|trim()}}
+{%- endif -%}
+{%- if kdc_hosts > 0 -%}
+{%- set kdc_host_list = kdc_hosts.split(',') -%}
+{%- if kdc_host_list and kdc_host_list|length > 0 %}
+ admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}
+{%- if kdc_host_list -%}
+{%- if master_kdc and (master_kdc not in kdc_host_list) %}
+ kdc = {{master_kdc|trim()}}
+{%- endif -%}
+{% for kdc_host in kdc_host_list %}
+ kdc = {{kdc_host|trim()}}
+{%- endfor -%}
+{% endif %}
+{%- endif %}
+{%- endif %}
+ }
+
+{# Append additional realm declarations below #}
+```
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/ambari-metrics-whitelisting.md b/versioned_docs/version-2.7.5/ambari-design/metrics/ambari-metrics-whitelisting.md
new file mode 100644
index 0000000..6852652
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/ambari-metrics-whitelisting.md
@@ -0,0 +1,60 @@
+# Ambari Metrics - Whitelisting
+
+In large clusters (500+ nodes), sometimes there are performance issues seen in AMS aggregations. In the ambari-metrics-collector log file, we can see log lines that look like
+
+```
+20:51:30,952 INFO 2080712366@qtp-974606690-381 AsyncProcess:1597 - #1, waiting for 13948 actions to finish
+20:51:31,601 INFO 1279097595@qtp-974606690-359 AsyncProcess:1597 - #1, waiting for 19376 actions to finish
+```
+
+In Ambari 3.0.0, we are tackling these performance issues through a complete schema and aggregation logic revamp. Until then, we can use AMS whitelisting to reduce the number of metrics tracked by AMS, thereby solving this scale problem.
+
+## How do we enable whitelisting in AMS?
+
+**Until Ambari 2.4.3**
+A metric whitelist file can be used to limit the set of metrics tracked by AMS. All other metrics will be discarded.
+
+**STEPS**
+
+* The metric whitelist file is present in /etc/ambari-metrics-collector/conf. If it is not present in older Ambari versions, it can be downloaded from https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-timelineservice/conf/unix/metrics_whitelist to the collector host.
+* Add the config ams-site : timeline.metrics.whitelist.file = <path_to_whitelist_file>
+* Restart the AMS collector
+* Verify the whitelisting config was used: in the ambari-metrics-collector log file, look for the line 'Whitelisting # metrics' (see the sketch below).
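+
+As a sketch of these steps on the collector host (the file and log locations below are typical
+defaults and may differ in your installation):
+
+```bash
+# 1. Ensure the whitelist file exists (one metric name or ._p_ pattern per line).
+ls /etc/ambari-metrics-collector/conf/metrics_whitelist
+
+# 2. In ams-site, set:
+#      timeline.metrics.whitelist.file=/etc/ambari-metrics-collector/conf/metrics_whitelist
+#    then restart the Metrics Collector.
+
+# 3. Confirm the whitelist was loaded after the restart.
+grep "Whitelisting" /var/log/ambari-metrics-collector/ambari-metrics-collector.log
+```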
+
+**From Ambari 2.5.0 onwards**
+From Ambari 2.5.0, more refinements for whitelisting were included.
+
+* **App Blacklisting** - Blacklist metrics from one or more services. Other service metrics will be entirely allowed or controlled through a whitelist file.
+
+ ```
+ ams-site : timeline.metrics.apps.blacklist = hbase,namenode
+ ```
+
+* **App Whitelisting** - Whitelist metrics from one or more services.
+
+ ```
+ ams-site:timeline.metrics.apps.whitelist = nimbus,datanode
+ ```
+
+ NOTE : The App name can be found from the metadata URL :
+
+ ```
+ http://<metrics_collector_host>:6188/ws/v1/timeline/metrics/metadata
+ ```
+
+* **Metric Whitelisting** - Same as the whitelisting method in Ambari 2.4.3 (through a whitelist file).
+In addition to supplying metric names in the whitelist file, patterns can also be supplied using the `._p_` prefix. For example, a pattern can be specified as `._p_dfs.FSNamesystem.*` or `._p_jvm.JvmMetrics*`.
+
+An example of a metric whitelisting file that has both metrics and patterns - https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-timelineservice/src/test/resources/test_data/metric_whitelist.dat.
+
+These whitelisting/blacklisting techniques can be used together.
+
+* If you just have timeline.metrics.whitelist.file = <some_file>, only metrics in that file will be allowed (irrespective of whatever apps might be sending them).
+* If you just have timeline.metrics.apps.blacklist = datanode, all datanode metrics will be disallowed. Metrics from all other services will be allowed.
+* If you just have timeline.metrics.apps.whitelist = namenode, it is not useful since there is no blacklisting at all.
+* If you have metric whitelisting enabled (through a file), and have timeline.metrics.apps.blacklist = datanode, all datanode metrics will be disallowed. The whitelisted metrics from other services will be allowed.
+* If you have timeline.metrics.apps.blacklist = datanode, timeline.metrics.apps.whitelist = namenode and metric whitelisting enabled (through a file), datanode metrics will be blacklisted, all namenode metrics will be allowed, and whitelisted metrics from other services will be allowed.
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/ambari-server-metrics.md b/versioned_docs/version-2.7.5/ambari-design/metrics/ambari-server-metrics.md
new file mode 100644
index 0000000..380fa38
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/ambari-server-metrics.md
@@ -0,0 +1,109 @@
+# Ambari Server Metrics
+
+## Outline
+Ambari Server can be used to manage a few tens of nodes to 1000+ nodes. In large clusters, or clusters with sub-optimal infrastructure, capturing Ambari Server performance can be useful for tuning the server as well as guiding future performance optimization efforts. Through this feature, a Metrics Source-Sink framework has been implemented within the Ambari Server which facilitates fine-grained control of the various metric sources as well as easing the addition of future metric sources.
+
+Specifically, Ambari server JVM and database (EclipseLink) metric sources have been wired up to send metrics to AMS, and visualized through Grafana dashboards.
+
+## Metrics System Terminology
+
+## Configuration / Enabling
+* To enable Ambari Server metrics, make sure the following config file exists during Ambari Server start/restart - /etc/ambari-server/conf/metrics.properties.
+* Currently, only 2 metric sources have been implemented - JVM Metric Source and Database Metric Source.
+* To add / remove a metric source to be tracked, the following config needs to be modified in the metrics.properties file.
+ ```
+ metric.sources=jvm,database
+ ```
+* Source specific configs are discussed in the metrics source section.
+
+## Metric Sources
+
+Name|Functionality|Interface|Implementation(s)
+----|-------------|---------|-----------------------
+Metrics Service |Serves as a starting point for the Metrics system.<br></br>Loads metrics configuration.<br></br>Initializes the sink. If the sink is not properly initialized (AMS is not yet deployed), it tries to re-initialize every 5 minutes asynchronously.<br></br>Initializes and starts configured sources. | org.apache.ambari.server.metrics.system.MetricsService | org.apache.ambari.server.metrics.system.impl.MetricsServiceImpl
+Metric Source | Any sub-component of Ambari Server that has metrics of interest.<br></br>Needs subset of metrics configuration corresponding to the source and the Sink to be initialized.<br></br>Periodically publishes metrics to the Sink.<br></br>Example - JVM, database etc. | org.apache.ambari.server.metrics.system.MetricsSource |org.apache.ambari.server.metrics.system.impl.JvmMetricsSource<br></br>org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource
+Metric Sink | Flushes the metrics to an external metrics collection system (Metrics Collector) | org.apache.ambari.server.metrics.system.MetricsSink | org.apache.ambari.server.metrics.system.impl.AmbariMetricSinkImp
+
+### JVM Metrics
+
+**Working**
+
+* Collects and publishes Ambari Server JVM related metrics using the Codahale library.
+* Metrics are collected for GC, buffers, threads, memory and file descriptors.
+* To enable this source, add "jvm" to the metric.sources config in metrics.properties and restart Ambari Server.
+
+**Configs**
+
+Config Name|Default Value|Explanation
+-----------|-------------|---------------------
+source.jvm.class | org.apache.ambari.server.metrics.system.impl.JvmMetricsSource | Class used to collect JVM Metrics.
+source.jvm.interval | 10 | Interval, in seconds, used to denote how often metrics should be collected.
+
+**Grafana dashboard**
+
+* The 'Ambari Server - JVM' dashboard represents the metrics captured from the JvmMetricsSource.
+* Contains memory, GC and thread related graphs that might be of interest on a poorly performing system.
+
+### Database Metrics
+
+**Working**
+
+The EclipseLink PerformanceMonitor has been extended to support a custom Ambari Database Metrics source. It provides monitoring data per entity and per operation on the entity.
+
+The Performance Monitor provides 2 kinds of metrics -
+
+* Counter - Number of occurrences of the operation / query. For such metrics, the metric name will start with Counter.
+* Timer - Total cumulative time spent on the operation / query. For such metrics, the metric name will start with Timer.
+
+For example, some of the metrics collected by the Database Metrics Source:
+
+* Counter.ReadObjectQuery.HostRoleCommandEntity.readHostRoleCommandEntity
+
+* Timer.ReadAllQuery.StackEntity.StackEntity.findByNameAndVersion.ObjectBuilding
+
+
+In addition to the Counter & Timer metrics collected from EclipseLink, a computed metric of Timer/Counter is also sent. This metric provides the average time taken for an operation across time.
+
+For example, if
+
+```
+ Counter Metric : Counter.ReadAllQuery.HostRoleCommandEntity = 50
+ Timer Metric : Timer.ReadAllQuery.HostRoleCommandEntity = 10000
+ Computed Metric (Avg time for the operation) : ReadAllQuery.HostRoleCommandEntity = 200 (10000 div by 50)
+```
+
+As seen above, the computed metric name will be the same as the Timer & Counter metric except without the 'Timer.' / 'Counter.' prefix.
+
+To enable this source, add "**database**" to the **metric.sources** config in metrics.properties and restart Ambari Server.
+
+**Configs**
+
+Config Name|Default Value|Explanation
+-----------|-------------|---------------------
+source.database.class | org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource | Class used to collect Database Metrics from extended Performance Monitor class - org.apache.ambari.server.metrics.system.impl.AmbariPerformanceMonitor.
+source.database.performance.monitor.query.weight | HEAVY | EclipseLink Performance monitor granularity : NONE / NORMAL / HEAVY / ALL
+source.database.monitor.dumptime | 60000 | Collection interval in milliseconds
+source.database.monitor.entities | Cluster(.*)Entity,Host(.*)Entity,ExecutionCommandEntity, ServiceComponentDesiredStateEntity,Alert(.*)Entity,StackEntity,StageEntity | Only these entities' metrics will be collected and tracked. (org.apache.ambari.server.orm.entities).
+source.database.monitor.query.keywords.include | CacheMisses | Include some metrics which have the keyword even if they are not part of requested Entities.
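+
+Putting the two sources together, a sketch of a metrics.properties file that enables both with the
+default values listed in the tables above:
+
+```
+# Enable both implemented metric sources.
+metric.sources=jvm,database
+
+# JVM source: collect JVM metrics every 10 seconds.
+source.jvm.class=org.apache.ambari.server.metrics.system.impl.JvmMetricsSource
+source.jvm.interval=10
+
+# Database source: HEAVY monitor granularity, flush every 60 seconds.
+source.database.class=org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource
+source.database.performance.monitor.query.weight=HEAVY
+source.database.monitor.dumptime=60000
+```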
+
+**Grafana dashboards**
+
+Ambari database metrics have been represented in 2 Grafana dashboards.
+
+* 'Ambari Server - Database' dashboard
+ * An aggregate dashboard that displays Total ReadAllQuery, Cache Hits, Cache Misses, Query Stages, Query Types across all entities.
+ * It also contains an example of how to visualize Timer, Counter and Avg Timing data for a specific entity - HostRoleCommandEntity.
+* 'Ambari Server - Top N Entities' dashboard
+ * Shows Top N entities that have maximum number of ReadAllQuery operations done on them.
+ * Shows Top N entities that the database spent the most time in ReadAllQuery operations.
+ * Shows Top N entities that have maximum Cache Misses
+
+These dashboard graphs are meant to provide an example of how to create graphs to query specific entities or operations in an ad hoc manner.
+
+## Disabling Ambari Server metrics globally
+
+* Add the following config to /etc/ambari-server/conf/ambari.properties (see the sketch below)
+ * ambariserver.metrics.disable=true
+* Restart Ambari Server
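+
+A minimal sketch of these two steps from a shell (assuming a default server layout):
+
+```bash
+echo "ambariserver.metrics.disable=true" >> /etc/ambari-server/conf/ambari.properties
+ambari-server restart
+```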
+
+## Related JIRA
+
+[AMBARI-17589](https://issues.apache.org/jira/browse/AMBARI-17589)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/configuration.mdx b/versioned_docs/version-2.7.5/ambari-design/metrics/configuration.mdx
new file mode 100644
index 0000000..78d05c2
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/configuration.mdx
@@ -0,0 +1,190 @@
+# Configuration
+
+## Metrics Collector
+
+Configuration Type| File Path | Comment
+---------------|-------------------------------------------------|----------------------------------------
+ams-site | /etc/ambari-metrics-collector/conf/ams-site.xml |Settings that control the API daemon and the aggregator threads.
+ams-env | /etc/ambari-metrics-collector/conf/ams-env.sh |Memory / PATH settings for the API daemon
+ams-hbase-site | /etc/ams-hbase/conf/hbase-site.xml<br></br>/etc/ambari-metrics-collector/conf/hbase-site.xml |Settings for the HBase storage used for the metrics data.
+ams-hbase-env | /etc/ams-hbase/conf/hbase-env.sh |Memory / PATH settings for the HBase storage.<br></br>**Note**: In embedded mode, the heap memory setting for master and regionserver is summed up as total memory for single HBase daemon.
+
+## Metrics Monitor
+
+Configuration Type| File Path | Comment
+---------------|-------------------------------------------------|----------------------------------------
+ams-env |/etc/ambari-metrics-monitor/conf/ams-env.sh |Used for log and pid dir modifications, this is the same configuration as above, common to both components.
+metric_groups |/etc/ambari-metrics-monitor/conf/metric_groups.conf |Not available in the UI. Used to control what **HOST/SYSTEM** metrics are reported.
+metric_monitor |/etc/ambari-metrics-monitor/conf/metric_monitor.ini |Not available in the UI. Settings for the monitor daemon.
+
+## Metric Collector - ams-site - Configuration details
+
+* Modifying the retention interval for time-aggregated data. Refer to the Aggregation section of the API spec for more information on aggregation.
+(Note: In Ambari 2.0 and 2.1, the Phoenix version does not support Alter TTL queries, so these can be modified from the UI only at install time. Please refer to the Known Issues section for a workaround.)
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.ttl |86400 |1 minute resolution data purge interval. Default is 1 day.
+timeline.metrics.host.aggregator.minute.ttl |604800 |Host based X minutes resolution data purge interval. Default is 7 days.<br></br>(X = configurable interval, default interval is 2 minutes)
+timeline.metrics.host.aggregator.hourly.ttl |2592000 |Host based hourly resolution data purge interval. Default is 30 days.
+timeline.metrics.host.aggregator.daily.ttl |31536000 |Host based daily resolution data purge interval. Default is 1 year.
+timeline.metrics.cluster.aggregator.minute.ttl |2592000 |Cluster wide minute resolution data purge interval. Default is 30 days.
+timeline.metrics.cluster.aggregator.hourly.ttl |31536000 |Cluster wide hourly resolution data purge interval. Default is 1 year.
+timeline.metrics.cluster.aggregator.daily.ttl |63072000 |Cluster wide daily resolution data purge interval. Default is 2 years.
+**Note**: The precision table at 1 minute resolution stores raw precision data for 1 day; when a user queries for the past 1 hour of data, the AMS API returns raw precision data.
+
+* Modifying the aggregation intervals for HOST and CLUSTER aggregators.
+On wake up the aggregator threads resume from (last run time + interval) as long as last run time is not too old.
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.minute.interval |120 |Time in seconds to sleep for the minute resolution host based aggregator. Default resolution is 2 minutes.
+timeline.metrics.host.aggregator.hourly.interval |3600 |Time in seconds to sleep for the hourly resolution host based aggregator. Default resolution is 1 hour.
+timeline.metrics.host.aggregator.daily.interval |86400 |Time in seconds to sleep for the day resolution host based aggregator. Default resolution is 24 hours.
+timeline.metrics.cluster.aggregator.minute.interval |120 |Time in seconds to sleep for the minute resolution cluster wide aggregator. Default resolution is 2 minutes.
+timeline.metrics.cluster.aggregator.hourly.interval |3600 |Time in seconds to sleep for the hourly resolution cluster wide aggregator. Default is 1 hour.
+timeline.metrics.cluster.aggregator.daily.interval |86400 |Time in seconds to sleep for the day resolution cluster wide aggregator. Default is 24 hours.
+
+* Modifying checkpoint information. The aggregators store the timestamp of the last run time on the local FS.
+After reading the last run time, the aggregator thread decides to aggregate as long as (currentTime - lastRunTime) < multiplier * aggregation_interval.
+The multiplier is configurable for each aggregator; for example, with a 120-second interval and a multiplier of 2, a checkpoint older than 240 seconds is considered too old and is discarded.
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.minute.checkpointCutOffMultiplier |2 |Multiplier value * interval = Max allowed checkpoint lag. Effectively, if the aggregator checkpoint is older than the max allowed checkpoint delay, the checkpoint will be discarded by the aggregator.
+timeline.metrics.host.aggregator.hourly.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.host.aggregator.daily.checkpointCutOffMultiplier |1 |Same as above
+timeline.metrics.cluster.aggregator.minute.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.cluster.aggregator.hourly.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.cluster.aggregator.daily.checkpointCutOffMultiplier |1 |Same as above
+timeline.metrics.aggregator.checkpoint.dir |/var/lib/ambari-metrics-collector/checkpoint |Directory to store aggregator checkpoints. Change to a permanent location so that checkpoints are not lost.
+
+* Other important configuration properties
+
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.*.disabled |false |Disable host based * aggregations. ( * => minute/hourly/daily)
+timeline.metrics.cluster.aggregator.*.disabled |false |Disable cluster based * aggregations. ( * => minute/hourly/daily)
+timeline.metrics.cluster.aggregator.minute.timeslice.interval |30 |Lowest resolution of desired data for cluster level minute aggregates.
+timeline.metrics.hbase.data.block.encoding |FAST_DIFF| Codecs are enabled on a table by setting the DATA_BLOCK_ENCODING property. Default encoding is FAST_DIFF. This can be changed only before creating tables.
+timeline.metrics.hbase.compression.scheme |SNAPPY |Compression codecs need to be installed and available before setting the scheme. Default compression is SNAPPY. Disable by setting to None. This can be changed only before creating tables.
+timeline.metrics.service.default.result.limit |5760 |Max result limit on number of rows returned. Calculated as follows: 4 aggregate metrics/min * 60 * 24: Retrieve aggregate data for 1 day.
+timeline.metrics.service.checkpointDelay |60 |Time in seconds to sleep on the first run or when the checkpoint is too old.
+timeline.metrics.service.resultset.fetchSize |2000 |JDBC resultset prefetch size for aggregator queries.
+timeline.metrics.service.cluster.aggregator.appIds |datanode,nodemanager,hbase |List of application ids to use for aggregating host level metrics for an application. Example: bytes_read across Yarn Nodemanagers.
+
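+Any of the properties in the tables above can be changed from the Ambari UI (Ambari Metrics > Configs) or scripted. A minimal sketch, assuming the configs.py helper shipped with Ambari Server (flag names can vary between Ambari versions; cluster name, credentials and port are placeholders):
+
+```bash
+# Set an ams-site property, e.g. the host minute aggregator interval.
+python /var/lib/ambari-server/resources/scripts/configs.py \
+  -u admin -p admin -l <ambari-server-host> -t 8080 -n <cluster-name> \
+  -a set -c ams-site \
+  -k timeline.metrics.host.aggregator.minute.interval -v 300
+```
+
+Aggregator settings typically take effect after a Metrics Collector restart.
+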
+## Configuring Ambari Metrics service in distributed mode
+
+In distributed mode, Metrics Collector writes go to the HDFS of the cluster. Currently, distributed mode does not support a multi-node Metrics Collector; however, the plan is to allow the Metrics Collector to scale horizontally with a multi-node HBase storage layer.
+
+**Note**: Make sure a local DataNode is hosted with the Collector; it gives AMS HBase the distinct advantage of writes and reads being sharded across the data volumes available to the DataNode.
+
+The following steps need to be performed either at install time or after deployment to configure the Metrics Collector in distributed mode. Note: if configuring after install, existing data will not be automatically copied over to HDFS.
+
+1. Edit ams-site and set timeline.metrics.service.operation.mode = distributed
+2. Edit ams-hbase-site:
+   - Set hbase.rootdir = hdfs://<namenode-host>:8020/user/ams/hbase [if NameNode HA is enabled, use hdfs://<nameservice-id>/user/ams/hbase]
+   (Note: /user/ams/hbase here is the directory where metric data will be stored in HDFS)
+   - Set hbase.cluster.distributed = true
+   - Add dfs.client.read.shortcircuit = true (an optimization when a local DataNode is present)
+3. Restart Metrics Collector
+
+**Note**: In Ambari 2.0.x, there is a bug in deploying AMS in distributed mode if NameNode HA is enabled. Please follow the instructions listed in this JIRA as workaround steps: ([AMBARI-10707](https://issues.apache.org/jira/browse/AMBARI-10707))
+
+**Note**: In Ambari 2.2.1, stack advisor changes the dependent configs for distributed mode automatically through recommendations. Ideally, the only config that needs to be changed is timeline.metrics.service.operation.mode = distributed. The other configs - hbase.rootdir, hbase.cluster.distributed and dfs.client.read.shortcircuit will be changed automatically.
+
+## Migrating data from embedded to distributed mode
+
+Steps to migrate existing metric data to HDFS and start AMS in distributed mode:
+
+* Stop AMS Metric Collector
+* Create an HDFS directory for the ams user. Example:
+
+ ```bash
+ su - hdfs -c 'hdfs dfs -mkdir /user/ams'
+ su - hdfs -c 'hdfs dfs -chown ams:hadoop /user/ams'
+ ```
+* Copy the metric data from the AMS local directory (the existing value of hbase.rootdir in ams-hbase-site) to the HDFS directory. Example:
+
+ ```bash
+ cd /var/lib/ambari-metrics-collector/
+ su - hdfs -c 'hdfs dfs -copyFromLocal hbase hdfs://<namenode-host>:8020/user/ams/'
+ su - hdfs -c 'hdfs dfs -chown -R ams:hadoop /user/ams/hbase'
+ ```
+
+* Start the Metric Collector after making the changes needed for distributed mode.
+
+## Enabling HBase Region, User and Table Metrics
+
+Ambari disables HBase metrics (per region, per user and per table) by default, because HBase metrics can be numerous and can cause performance issues. HBase RegionServer metrics remain available by default.
+
+If you want HBase per-region, per-user and per-table metrics to be collected by Ambari, you can do the following. It is **highly recommended** that you test turning on this option and confirm that your AMS performance is acceptable.
+
+### Step-by-step guide
+1. On the Ambari Server, browse to:
+
+ ```bash
+ /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates
+ ```
+
+2. If your Ambari version is older than 2.7.0, browse instead to:
+
+ ```bash
+ /var/lib/ambari-server/resources/common-services/HBASE/$VERSION/package/templates
+ ```
+
+3. Edit the following template files:
+
+ ```bash
+ hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
+ hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
+ ```
+
+4. Comment out (or remove) the following lines:
+
+ ```bash
+ *.source.filter.class=org.apache.hadoop.metrics2.filter.RegexFilter
+ hbase.*.source.filter.exclude=.*(Regions|Users|Tables).*
+ ```
+
+5. Save the template files and restart Ambari Server and then HBase for the changes to take effect. A scripted version of steps 3-4 is sketched below.
+
+:::tip
+If you upgrade Ambari to a newer version, you will need to re-apply this change to the template file.
+:::
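+
+The scripted version of steps 3-4 referenced above could look like the following (paths assume Ambari 2.7.0+; sed keeps .bak backups of the templates):
+
+```bash
+# Comment out the Region/User/Table metric filters in both templates.
+cd /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates
+sed -i.bak \
+  -e 's|^\(\*\.source\.filter\.class=.*\)|# \1|' \
+  -e 's|^\(hbase\..*\.source\.filter\.exclude=.*\)|# \1|' \
+  hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2 \
+  hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
+```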
+
+## Enabling HDFS per-user Metrics
+
+HDFS per-user metrics are not emitted by default. Exercise caution before enabling them, and make sure to review the client and service port details below.
+
+To be able to use the HDFS - Users dashboard in your Grafana instance as well as to view metrics for HDFS per user, you will need to add these custom properties to your configuration.
+
+### Step-by-step guide
+In Ambari, go to HDFS > Configs > Advanced > Custom hdfs-site and add the following properties.
+
+```
+dfs.namenode.servicerpc-address=<namenodehost>:8021
+ipc.8020.callqueue.impl=org.apache.hadoop.ipc.FairCallQueue
+ipc.8020.backoff.enable=true
+ipc.8020.scheduler.impl=org.apache.hadoop.ipc.DecayRpcScheduler
+ipc.8020.scheduler.priority.levels=3
+ipc.8020.decay-scheduler.backoff.responsetime.enable=true
+ipc.8020.decay-scheduler.backoff.responsetime.thresholds=10,20,30
+```
+
+**Things to consider:**
+
+* client port: 8020 (if different, replace it with the appropriate port in all keys)
+* service port: 8021 (if different, replace it with the appropriate port in the first value)
+* namenodehost: needs to be an FQDN.
+
+Once these properties are added, the configuration should look like this.
+
+
+**Restart HDFS and you should see the metrics being emitted. You should now also be able to use the HDFS - Users dashboard in Grafana.**
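+
+After the restart, you can optionally confirm that the new keys are visible to HDFS clients (key names as configured above):
+
+```bash
+hdfs getconf -confKey dfs.namenode.servicerpc-address
+hdfs getconf -confKey ipc.8020.scheduler.impl
+```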
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/ams-arch.jpg b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/ams-arch.jpg
new file mode 100644
index 0000000..df41aa6
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/ams-arch.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/connect-phoenix.png b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/connect-phoenix.png
new file mode 100644
index 0000000..dac73fd
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/connect-phoenix.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png
new file mode 100644
index 0000000..51eeed6
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/hosts-metadata.png b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/hosts-metadata.png
new file mode 100644
index 0000000..7bd26bd
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/hosts-metadata.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/metrics-datastructure.png b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/metrics-datastructure.png
new file mode 100644
index 0000000..fe56c81
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/metrics-datastructure.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/metrics-metadata.png b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/metrics-metadata.png
new file mode 100644
index 0000000..4d1f8c2
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/metrics-metadata.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/restart-datanode.png b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/restart-datanode.png
new file mode 100644
index 0000000..31b0f0d
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/imgs/restart-datanode.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/index.md b/versioned_docs/version-2.7.5/ambari-design/metrics/index.md
new file mode 100644
index 0000000..63c92e3
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/index.md
@@ -0,0 +1,22 @@
+# Metrics
+
+**Ambari Metrics System** ("AMS") is a system for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+## Terminology
+
+Term | Definition
+--------------------------------|-------------------------------------------------------------
+Ambari Metrics System (“AMS”) | The built-in metrics collection system for Ambari.
+Metrics Collector | The standalone server that collects and aggregates metrics, and serves metrics received from the Hadoop service sinks and the Metrics Monitors.
+Metrics Monitor | Installed on each host in the cluster to collect system-level metrics and forward to the Metrics Collector.
+Metrics Hadoop Sinks | Plug into the various Hadoop components to send Hadoop metrics to the Metrics Collector.
+
+## Architecture
+The following image depicts the high-level conceptual architecture of the Ambari Metrics System:
+
+
+
+The **Metrics Collector** is a daemon that receives data from registered publishers (the Monitors and Sinks). The Collector itself is built using Hadoop technologies such as HBase, Phoenix and ATS. The Collector can store data on the local filesystem (referred to as "embedded mode") or use an external HDFS (referred to as "distributed mode").
+
+## Learn More
+Browse the following to learn more about the [Ambari Metrics REST API](./metrics-api-specification.md) specification and about advanced [Configuration](./configuration.mdx) of AMS.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/metrics-api-specification.md b/versioned_docs/version-2.7.5/ambari-design/metrics/metrics-api-specification.md
new file mode 100644
index 0000000..72ae688
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/metrics-api-specification.md
@@ -0,0 +1,63 @@
+---
+title: Ambari Metrics API specification
+---
+
+The Ambari REST API supports metric queries at CLUSTER, HOST, COMPONENT and HOST COMPONENT levels.
+
+Broadly, the types of metrics queries supported are: **time range** or **point in time**.
+
+The following is an illustration of an API call that fetches metrics from the Metrics backend service using the Ambari API.
+
+## CLUSTER
+
+E.g.: Dashboard metrics : Fetch load average across all nodes of a cluster
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?fields=metrics/load[1430844925,1430848525,15]&_=1430848532904
+```
+The above API call retrieves the load average, aggregated across all hosts in the cluster.
+
+The request part of the API call selects the cluster instance, while the predicate includes the metric with the time range query, followed by the current time in milliseconds.
+
+Time range query:
+
+Field | Value |Comment
+-----------|------------|----------------------------------------
+Start time | 1430844925 |Start time for the time range. (Epoch)
+End time | 1430848525 |End time of the time range. (Epoch)
+Step | 15 |Default step, this is used only for zero padding or null padding if the padding interval cannot be determined from the retrieved data.
+
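+For instance, the epoch bounds for the past hour can be computed in a shell and substituted into the call above (admin credentials and the placeholders are assumptions):
+
+```bash
+end=$(date +%s)         # now, epoch seconds
+start=$((end - 3600))   # one hour ago
+curl -s -u admin:admin \
+  "http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?fields=metrics/load[${start},${end},15]"
+```
+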
+## HOST
+
+E.g.: Host metrics: Get the cpu utilization on a particular host in the cluster
+
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>?fields=metrics/cpu/cpu_user[1430844610,1430848210,15],metrics/cpu/cpu_wio[1430844610,1430848210,15],metrics/cpu/cpu_nice[1430844610,1430848210,15],metrics/cpu/cpu_aidle[1430844610,1430848210,15],metrics/cpu/cpu_system[1430844610,1430848210,15],metrics/cpu/cpu_idle[1430844610,1430848210,15]&_=1430848217591
+```
+
+The above API call retrieves all cpu related metrics required to chart out cpu utilization on a host page.
+
+The request part of the above API call selects the host which is queried while the predicate part includes the metric names with time range query.
+
+## COMPONENT
+
+E.g.: Service metrics: Get the capacity utilization metrics aggregated across all datanodes but only the latest value (point in time)
+
+```
+ http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/HDFS/components/DATANODE?fields=metrics/dfs/datanode/DfsUsed,metrics/dfs/datanode/Capacity&_=1430849798630
+```
+
+The above API call retrieves two metric values, which represent the point-in-time values for the requested metrics, obtained from the Metrics Service backend (non-JMX).
+
+For a call to get JMX metrics directly from a Hadoop daemon, use the metric name that corresponds to the JMX MBean metric, for example metrics/dfs/FSNamesystem/CapacityUsedGB (refer to Stack Defined Metrics for more info).
+
+The request part of the above API call selects the service from the cluster while predicate part includes the metrics names.
+
+## HOST COMPONENT
+E.g.: Daemon metrics: Get the heap memory usage for active Namenode
+
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/NAMENODE?fields=metrics/jvm/memHeapCommittedM[1430847303,1430850903,15],metrics/jvm/memHeapUsedM[1430847303,1430850903,15]&_=1430850903846
+```
+
+The above API call retrieves JVM heap metrics for the active NameNode in the cluster.
+
+The request part of the API selects the NameNode host component, while the predicate part includes metrics with a time range query.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/metrics-collector-api-specification.md b/versioned_docs/version-2.7.5/ambari-design/metrics/metrics-collector-api-specification.md
new file mode 100644
index 0000000..a9e90fa
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/metrics-collector-api-specification.md
@@ -0,0 +1,232 @@
+# Metrics Collector API Specification
+
+## Sending Metrics to AMS (POST)
+
+Sending metrics to Ambari Metrics Service can be achieved through the following API call.
+
+The sink implementations responsible for sending metrics to AMS buffer data for 1 minute before sending. TimelineMetricCache provides a simple cache implementation to achieve this behavior.
+
+Sample sink implementation used by Hadoop daemons: https://github.com/apache/ambari/tree/trunk/ambari-metrics/ambari-metrics-hadoop-sink
+
+```uri
+POST http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics
+```
+
+```json
+{
+ "metrics": [
+ {
+ "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
+ "appid": "amssmoketestfake",
+ "hostname": "ambari20-5.c.pramod-thangali.internal",
+ "timestamp": 1432075898000,
+ "starttime": 1432075898000,
+ "metrics": {
+ "1432075898000": 0.963781711428,
+ "1432075899000": 1432075898000
+ }
+ }
+ ]
+}
+```
+
+```
+Connecting (POST) to <ambari-metrics-collector>:6188/ws/v1/timeline/metrics/
+Http response: 200 OK
+```
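+
+A minimal curl sketch of the same POST (the collector host and metric value are placeholders; timestamps are epoch milliseconds, and `date +%s%3N` assumes GNU date):
+
+```bash
+now=$(date +%s%3N)   # epoch milliseconds
+curl -i -X POST -H "Content-Type: application/json" \
+  "http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics" \
+  -d "{
+        \"metrics\": [{
+          \"metricname\": \"AMBARI_METRICS.SmokeTest.FakeMetric\",
+          \"appid\": \"amssmoketestfake\",
+          \"hostname\": \"$(hostname -f)\",
+          \"timestamp\": ${now},
+          \"starttime\": ${now},
+          \"metrics\": { \"${now}\": 0.5 }
+        }]
+      }"
+```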
+
+## Fetching Metrics from AMS (GET)
+
+**Sample call**
+```
+GET http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric&appId=amssmoketestfake&hostname=<hostname>&precision=seconds&startTime=1432075838000&endTime=1432075959000
+Http response: 200 OK
+Http data:
+{
+ "metrics": [
+ {
+ "timestamp": 1432075898089,
+ "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
+ "appid": "amssmoketestfake",
+ "hostname": "ambari20-5.c.pramod-thangali.internal",
+ "starttime": 1432075898000,
+ "metrics": {
+ "1432075898000": 0.963781711428,
+ "1432075899000": 1432075898000
+ }
+ }
+ ]
+}
+```
+
+**Generic GET call format**
+```uri
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=<>&hostname=<>&appId=<>&startTime=<>&endTime=<>&precision=<>
+```
+
+**Query Parameters Explanation**
+
+Parameter|Optional/Mandatory|Explanation|Values it can take
+---------|------------------|-----------|-------------------
+metricNames | Mandatory | Comma-separated list of metrics that are required. | disk_free,mem_free... etc
+appId | Mandatory |The AppId that corresponds to the metricNames that were requested. Currently, only 1 AppId is required and allowed. | HOST/namenode/datanode/nimbus/hbase/kafka_broker/FLUME_HANDLER etc
+hostname | Optional | Comma-separated list of hostnames. When not specified, cluster aggregates are returned. | h1,h2..etc
+startTime, endTime | Optional | Start and end time values. If not specified, the last data point of the metric is returned. | epoch times in seconds or milliseconds
+precision | Optional | The precision at which the data needs to be returned. If not specified, the precision is calculated based on the time range requested (table below). |SECONDS/MINUTES/HOURS/DAYS
+
+**Precision query parameter (Default resolution)**
+
+Query Time Range | Resolution of returned metrics | Comments
+-----------------|--------------------------------|----------
+Up to 2 hours | SECONDS | 10-second data for host metrics; 30-second data for an aggregated query (no host specified)
+2 hours - 1 day | MINUTES | 5-minute data
+1 day - 30 days | HOURS | 1-hour data
+> 30 days | DAYS | 1-day data
+
+**Specifying Aggregate Functions**
+
+The metricName can have a specific aggregate function qualifier after the metricName (as shown below) to request specific aggregates. Valid values are ._avg, ._max, ._min, ._sum. When an aggregate query is requested without an aggregate function in the metricName, the default is AVG.
+Examples
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._avg,regionserver.Server.writeRequestCount._max&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.readRequestCount,regionserver.Server.writeRequestCount._max&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+
+**Specifying Post processing Functions**
+
+Similar to aggregate functions, post processing functions can also be specified. Currently, we have 2 post processing functions - rate (Rate per second) and diff (difference between consecutive values). Post processing functions can also be applied after aggregate functions.
+Examples
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._rate,regionserver.Server.writeRequestCount._diff&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.readRequestCount._max._diff&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+**Specifying Wild Cards**
+
+Both metricNames and hostname accept wildcard (%) values to select a group of metrics (or hosts). A query can also combine full metric names and names with wildcards.
+
+Examples
+
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=master.AssignmentManger.ritCount,regionserver.Server.%&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&hostname=abc.testdomain12%.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+
+**Downsampling**
+
+As discussed before, AMS downsamples data when higher time ranges are requested. The default "downsampled across time" data returned is AVG. Specific downsamples can be requested by adding the aggregate function qualifiers (._avg, ._max, ._min, ._sum) to the metric names, the same way as when requesting aggregates across the cluster.
+Example
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._max&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000&precision=MINUTES
+```
+The above query returns 5 minute data for the metric, where the data point value is the MAX of the values found in every 5 minute range.
+
+## AMS Metadata API
+
+AMS has 2 metadata endpoints that are useful for finding out the set of metrics it received, as well as the topology of the cluster.
+
+**METRICS METADATA**
+
+Endpoint :
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics/metadata
+```
+
+Data returned : A mapping between the set of APP_IDs to the list of metrics received with that AppId.
+
+Sample data returned
+
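+The endpoint returns JSON and can be fetched directly, for example:
+
+```bash
+curl -s "http://<AMS_HOST>:6188/ws/v1/timeline/metrics/metadata" | python -m json.tool
+```
+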
+
+
+**HOSTS METADATA**
+
+Endpoint :
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics/hosts
+```
+Data returned : A mapping between the hosts in the cluster and the set of APP_IDs on the host.
+
+Sample data returned
+
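+This endpoint can likewise be fetched directly:
+
+```bash
+curl -s "http://<AMS_HOST>:6188/ws/v1/timeline/metrics/hosts" | python -m json.tool
+```
+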
+
+
+## Guide to writing your own Sink
+* Include the ambari-metrics-common artifacts from source or maven-central (when available) into your project
+* Find below helpful info regarding common data-structures to use from the ambari-metrics-common module
+* Extend the org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink class and implement the required methods
+* Use the org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache to store intermediate data until it is time to send (example: collection interval = 10 seconds, send interval = 1 minute). The cache implementation provides the logic needed for buffering and local aggregation.
+* Use org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink#emitMetrics to send metrics to AMS backend.
+
+**METRIC DATA STRUCTURE**
+
+Source location for common data structures module: https://github.com/apache/ambari/tree/trunk/ambari-metrics/ambari-metrics-common/
+
+Example sink implementation: https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-hadoop-sink/
+
+
+
+**INTERNAL PHOENIX KEY STRUCTURE**
+
+The Metric Record Key data structure is described below:
+
+Property|Type|Comment|Optional
+--------|----|--------|---------------
+Metric Name | String | First key part, important consideration while querying from HFile storage | N
+Hostname | String | Second key part | N
+Server time | Long | Timestamp on server when first metric write request was received | N
+Application Id | String | Uniquely identify service | N
+Instance Id | String | Second key part to identify instance/component | Y
+Start time | Long | Start of the timeseries data |
+
+
+**HOW AGGREGATION WORKS**
+
+* The granularity of aggregate data can be controlled by setting wake up interval for each of the aggregator threads.
+* Presently we support 2 types of aggregators, HOST and APPLICATION with 3 time dimensions, per minute, per hour and per day.
+ * The HOST aggregates are just aggregates on precision data across the supported time dimensions.
+ * The APP aggregates are across appId. Note: We ignore instanceId for APP level aggregates. Same time dimensions apply for APP level aggregates.
+ * We also support HOST level metrics for APP, meaning you can expect a system metric example: "cpu_user" to be aggregated across datanodes, effectively calculating system metric for hosted apps.
+* Each aggregator performs checkpointing by storing the last successful time of completion in a file. If the checkpoint is too old, the aggregator will discard the checkpoint and aggregate data for the configured interval, meaning data between (now - interval) and now.
+* Refer to [Phoenix table schema](./operations.md) for details of tables and records.
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/operations.md b/versioned_docs/version-2.7.5/ambari-design/metrics/operations.md
new file mode 100644
index 0000000..5bfd1f6
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/operations.md
@@ -0,0 +1,151 @@
+# Operations
+
+## Metrics Collector
+
+**Pid file locations**
+
+Daemon | Default User | Pid File Path
+---------------|-------------------------------------------------|----------------------------------------
+Metrics Collector API |ams |/var/run/ambari-metrics-collector/ambari-metrics-collector.pid
+Metrics Collector Hbase |ams |/var/run/ambari-metrics-collector/hbase-ams-master.pid
+
+**Log file locations**
+
+Daemon | Log File Path
+---------------|------------------------------------------------
+Metrics Collector API |/var/log/ambari-metrics-collector/ambari-metrics-collector.log<br></br>/var/log/ambari-metrics-collector/ambari-metrics-collector.out
+Metrics Collector HBase |/var/log/ambari-metrics-collector/hbase-ams-master-<hostname>.log<br></br>/var/log/ambari-metrics-collector/hbase-ams-master-<hostname>.out
+
+**Manually restart Metrics Collector**
+
+Stop command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ stop'
+```
+
+Start command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ start'
+```
+
+## Metrics Monitor
+
+**Pid File location**
+
+```
+/var/run/ambari-metrics-monitor/ambari-metrics-monitor.pid
+```
+
+**Log File location**
+
+```
+/var/log/ambari-metrics-monitor/ambari-metrics-monitor.out
+```
+
+**Manually restart Metrics Monitor**
+
+Stop command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf stop'
+```
+
+Start command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf start'
+```
+
+## Build Instructions
+
+The ambari-metrics-assembly package builds the assemblies (rpm/deb/msi) for various platforms.
+
+The following binaries can be found in the ambari-metrics-assembly/target folder after a successful build.
+
+```
+ambari-metrics-collector-<ambari-version>.<arch>
+ambari-metrics-monitor-<ambari-version>.<arch>
+ambari-hadoop-sink-<ambari-version>.<arch>
+```
+
+**Note**: Ambari Metrics needs to be built before Ambari Server
+
+### RPM packages
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-rpm
+```
+
+### Debian packages
+Same instructions as above; change the maven property to build-deb, as shown below.
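+
+A minimal sketch mirroring the RPM instructions:
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-deb
+```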
+
+### Windows msi
+TBU
+
+### Command line parameters
+
+Parameter | Default Value | Comment
+---------------|------------------|------------------------------
+hbase.tar | http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.0.0/tars/hbase-1.1.1.2.3.0.0-2557.tar.gz | HBase tarball. This is default version for Ambari 2.1.2
+hbase.folder | hbase-1.1.1.2.3.0.0-2557 |-
+hadoop.tar | http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.0.0/tars/hadoop-2.7.1.2.3.0.0-2557.tar.gz | Hadoop tarball, used for native libs. This is default version for Ambari 2.1.2
+hadoop.folder | hadoop-2.7.1.2.3.0.0-2557 |-
+
+**Note**
+
+After the change introduced by [AMBARI-18915](https://issues.apache.org/jira/browse/AMBARI-18915) (Update AMS pom to use Apache hbase, hadoop, phoenix tarballs), AMS uses Hadoop tars downloaded from Apache by default. Since that version of libhadoop is not built with libsnappy, the following config change in ams-site is needed to make AMS start up correctly.
+
+**timeline.metrics.hbase.compression.scheme = None**
+
+## Disk space utilization guidance
+
+Num of Nodes | METRIC_RECORD (MB) | METRIC_RECORD_MINUTE (MB) | METRIC_RECORD_HOURLY (MB) | METRIC_RECORD_DAILY (MB) | METRIC_AGGREGATE (MB) | METRIC_AGGREGATE_MINUTE (MB) | METRIC_AGGREGATE_HOURLY (MB) | METRIC_AGGREGATE_DAILY (MB) | TOTAL (GB)
+-------|----------|-----------|----------|----------|---------|--------|----------|-----------------|-------------
+50 | 5120 | 2700 | 245 | 10 | 1500 |305 |28 |1 |10
+100 | 10240 | 5400 | 490 | 20 | 1500 |305 |28 |1 |18
+300 | 30720 | 16200 | 1470 | 60 | 1500 |305 |28 |1 |49
+500 | 51200 | 27000 | 2450 | 100 | 1500 |305 |28 |1 |81
+800 | 81920 | 43200 | 3920 | 160 | 1500 |305 |28 |1 |128
+
+**NOTE**:
+
+The above guidance has been derived from observing AMS disk utilization in actual clusters.
+The actual numbers were obtained by observing a cluster with the basic services (HDFS, YARN, HBase) installed along with Storm, Kafka and Flume.
+Kafka and Flume generate metrics only while a job is running. If those services are used heavily, additional disk space is recommended. We ran sample jobs with Storm and Kafka while deriving these numbers to make sure they contribute to the results.
+
+**Actual disk utilization data**
+
+Num of Nodes | METRIC_RECORD (MB) | METRIC_RECORD_MINUTE (MB) | METRIC_RECORD_HOURLY (MB) | METRIC_RECORD_DAILY (MB) | METRIC_AGGREGATE (MB) | METRIC_AGGREGATE_MINUTE (MB) | METRIC_AGGREGATE_HOURLY (MB) | METRIC_AGGREGATE_DAILY (MB) | TOTAL (GB)
+-------|----------|-----------|----------|----------|---------|--------|----------|-----------------|-------------
+2 | 120 | 175 | 17 | 1 | 545 | 136 | 16 | 1 | 1
+3 | 294 | 51 | 3.4 | 1 | 104 | 26 | 1.8 | 1 | 0.5
+10 | 1024 | 540 | 49 | 2 | 1433.6 | 305 | 28 | 1 | 3.3
+
+## Phoenix Schema
+
+### Phoenix Tables
+
+Table Name | Description | Purge Interval(default)
+------------------------|---------------------------------------------------------------------------|-------------------------
+METRIC_RECORD | Data per metric per host at 10 seconds precision with 1 minute aggregates.| 1 day
+METRIC_RECORD_MINUTE | Data per metric per host at 5 minute precision | 1 week
+METRIC_RECORD_HOURLY | Data per metric per host at 1 hour precision | 30 days
+METRIC_RECORD_DAILY | Data per metric per host at 1 day precision | 1 year
+METRIC_AGGREGATE | Cluster wide aggregates per metric at 30 seconds precision | 1 week
+METRIC_AGGREGATE_MINUTE | Cluster wide aggregates per metric at 5 minute precision | 30 days
+METRIC_AGGREGATE_HOURLY | Cluster wide aggregates per metric at 1 hour precision | 1 year
+METRIC_AGGREGATE_DAILY | Cluster wide aggregates per metric at 1 day precision | 2 years
+
+### Connecting to Phoenix
+* Unpack the Phoenix (4.2.0+) tarball onto the Metrics Collector host
+* Change directory to phoenix-4.*/bin
+* Edit sqlline.py, search for "java" and replace it with the full path to the java executable, for example "/usr/jdk64/jdk1.8.0_40/bin/java"
+* Connect command:
+
+ Ambari versions 2.2.0 and below:
+
+ ```bash
+ ./sqlline.py localhost:61181:/hbase
+ ```
+
+ Ambari versions above 2.2.0:
+
+ ```bash
+ # embedded mode
+ ./sqlline.py localhost:61181:/ams-hbase-unsecure
+ # distributed mode
+ ./sqlline.py <cluster-zookeeper-quorum-host>:<cluster_zookeeper_port>:/ams-hbase-unsecure
+ ```
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/stack-defined-metrics.md b/versioned_docs/version-2.7.5/ambari-design/metrics/stack-defined-metrics.md
new file mode 100644
index 0000000..e535f4b
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/stack-defined-metrics.md
@@ -0,0 +1,79 @@
+# Stack Defined Metrics
+
+The Ambari Stack definition represents the complete declarative description of the Services that comprise a cluster.
+
+The stack definition also contains a definition file for all metrics that are supported by the Service.
+
+Presently the metrics.json describes the mapping between the metrics name requested in the REST API and the metrics name to use for making a call to the Metrics Service.
+
+Location of the **metrics.json** in the stack:
+
+Level|Location|Comment
+-----|--------|-------
+Cluster & Host | ganglia_properties.json | Presently, this file defines metrics for Host Component and Service Components as well but these are only used for older versions of stack < 2.0 and unit tests.<br></br>The Cluster and Host sections of this json file drive the Dashboard graphs.
+Component & Host Component | common-services.<SERVICE_NAME> | This file contains definition of metrics mapping for Ambari Metrics (type = ganglia) and JMX.
+
+**Note**: The individual stacks that override behavior from common-services can redefine the metrics.json file. The inheritance is all-or-nothing, meaning that if a metrics.json file is present in the child stack, it will override the metrics.json from common-services.
+
+**Structure of metrics.json file**
+
+Key|Allowed Values|Comments
+-----|--------|-------------
+Type |"ganglia" / "jmx" |type = ganglia implies Metrics Service request fulfilled by either a Ganglia (up to version 2.0) or Ambari Metrics (2.0 and above) backend service, this decision is taken by Ambari server at runtime.
+Category | "default" / "performance" ... |This is to group metrics into subsets for better navigability
+Metrics |metricKey : { <br></br>"metricName":<br></br>"pointInTime":<br></br>"temporal":<br></br>} | metricKey = Key to be used by the REST API. This is unique for a service and identifies the requested metric as well as which endpoint to use for serving the data (AMS vs JMX)<br></br>metricName = Name to use for the Metrics Service backend<br></br>pointInTime = Get the latest value; no time range query allowed<br></br>temporal = Time range query supported
+
+Example:
+
+```json
+{
+  "NAMENODE": {
+    "Component": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/dfs/FSNamesystem/TotalLoad": {
+              "metric": "dfs.FSNamesystem.TotalLoad",
+              "pointInTime": false,
+              "temporal": true
+            }
+          }
+        }
+      }
+    ],
+    "HostComponent": [
+      { "type": "ganglia", ... },
+      { "type": "jmx", ... }
+    ]
+  }
+}
+```
+
+**Sample API calls to retrieve metric definitions**:
+
+Service metrics:
+```
+Template => http://<ambari-server>:<port>/api/v1/stacks/<stackName>/versions/<stackVersion>/services/<serviceName>/artifacts/metrics_descriptor
+Example => http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/HDFS/artifacts/metrics_descriptor
+```
+Cluster & Host metrics:
+```
+Template => http://<ambari-server>:<port>/api/v1/stacks/<stackName>/versions/<stackVersion>/artifacts/metrics_descriptor
+Example => http://localhost:8080/api/v1/stacks/HDP/versions/2.3/artifacts/metrics_descriptor
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/troubleshooting.md b/versioned_docs/version-2.7.5/ambari-design/metrics/troubleshooting.md
new file mode 100644
index 0000000..cab1ea2
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/troubleshooting.md
@@ -0,0 +1,145 @@
+# Troubleshooting
+
+## Cleaning up Ambari Metrics System Data
+
+The following steps help in cleaning up Ambari Metrics System data in a given cluster.
+
+Important notes:
+
+1. Cleaning up the AMS data removes all the historical AMS data available
+2. The HBase parameters mentioned below are specific to AMS and are different from the cluster HBase parameters
+
+### Step-by-step guide
+
+1. Using Ambari
+ * Set AMS to maintenance mode
+ * Stop AMS from Ambari. Identify the following from the AMS Configs screen:
+ * 'Metrics Service operation mode' (embedded or distributed)
+ * hbase.rootdir
+ * hbase.zookeeper.property.dataDir
+2. AMS data is stored in the 'hbase.rootdir' identified above. Back up and remove the AMS data.
+ * If the Metrics Service operation mode
+ * is 'embedded', the data is stored in OS files. Use regular OS commands to back up and remove the files in hbase.rootdir
+ * is 'distributed', the data is stored in HDFS. Use 'hdfs dfs' commands to back up and remove the files in hbase.rootdir
+3. Remove the AMS ZooKeeper data by backing up and removing the contents of 'hbase.tmp.dir'/zookeeper
+4. Remove any Phoenix spool files from the 'hbase.tmp.dir'/phoenix-spool folder
+5. Restart AMS using Ambari (a scripted sketch for embedded mode follows)
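+
+A hedged sketch of steps 2-4 for **embedded** mode (the paths are AMS defaults; substitute the hbase.rootdir and hbase.tmp.dir values identified in step 1):
+
+```bash
+ROOTDIR=/var/lib/ambari-metrics-collector/hbase       # hbase.rootdir
+TMPDIR=/var/lib/ambari-metrics-collector/hbase-tmp    # hbase.tmp.dir
+BACKUP=/var/backups/ams-$(date +%F)
+
+mkdir -p "$BACKUP"
+mv "$ROOTDIR" "$BACKUP/"            # step 2: back up and remove metric data
+mv "$TMPDIR/zookeeper" "$BACKUP/"   # step 3: AMS zookeeper data
+rm -rf "$TMPDIR/phoenix-spool"      # step 4: spool files
+```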
+
+## Moving Metrics Collector to a new host
+
+1. Stop AMS Service
+
+2. Execute the following API call to delete the Metrics Collector. (Replace server-host, cluster-name and host-name with the Ambari server host, the cluster name and the current Metrics Collector host)
+
+```
+curl -u admin:admin -H "X-Requested-By:ambari" -i -X DELETE http://<server-host>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/METRICS_COLLECTOR
+```
+
+3. Execute the following API call to add the Metrics Collector to a new host. (Replace server-host, cluster-name and host-name)
+
+```
+curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST http://<server-host>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/METRICS_COLLECTOR
+```
+
+4. Install the Metrics Collector component from the Host page of the new host.
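+
+Alternatively, step 4 can be performed with the same REST API used above (a sketch; the payload is Ambari's standard host_component state transition):
+
+```bash
+curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT \
+  -d '{"RequestInfo":{"context":"Install METRICS_COLLECTOR"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' \
+  http://<server-host>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/METRICS_COLLECTOR
+```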
+
+5. If the AMS is in embedded mode, copy the AMS data from old node to new node.
+
+ * For embedded mode (ams-site: timeline.metrics.service.operation.mode), copy over the hbase.rootdir and tmpdir to new host from the old collector host.
+ * For distributed mode, since AMS HBase is writing to HDFS, no change will be necessary.
+ * Ensure that ams:hbase-site:hbase.rootdir and hbase.tmp.dir are pointing to the correct location in the new AMS node
+6. Start the Metrics Service.
+
+7. The service daemons will be pointing to the old metrics collector host. Perform a rolling restart of slave components and a normal restart of Master components for them to pick up the new collector host.
+
+Note: Restart of services is not needed after Ambari 2.5.0, since live collector information is maintained in the cluster ZooKeeper.
+
+
+
+## Troubleshooting Guide
+
+The following page documents common problems discovered with the Ambari Metrics Service and provides a guide for things to look out for, as well as already-solved problems.
+
+**Important facts to collect from the system**:
+
+**Problems with Metrics Collector host**
+
+* Output of "rpm -qa | grep ambari" on the Collector host.
+* Total available system memory, output of "free -g"
+* Total available disk space and available partitions, output of "df -h"
+* Total number of hosts in the cluster
+* Configs: /etc/ams-hbase/conf/hbase-env.sh, /etc/ams-hbase/conf/hbase-site.xml, /etc/ambari-metrics-collector/conf/ams-env.sh, /etc/ambari-metrics-collector/conf/ams-site.xml
+* Collector logs:
+
+```
+/var/log/ambari-metrics-collector/ambari-metrics-collector.log, /var/log/ambari-metrics-collector/hbase-ams-master-<host>.log, /var/log/ambari-metrics-collector/hbase-ams-master-<host>.out
+Note: Additionally, If distributed mode is enabled, /var/log/ambari-metrics-collector/hbase-ams-zookeeper-<host>.log, /var/log/ambari-metrics-collector/hbase-ams-regionserver-<host>.log
+```
+
+* Response to the following URLs -
+
+```
+http://<ams-host>:6188/ws/v1/timeline/metrics/metadata
+http://<ams-host>:6188/ws/v1/timeline/metrics/hosts
+```
+
+* The response will be JSON and can be attached as a file.
+* From AMS HBase Master UI - http://<METRICS_COLLECTOR_HOST>:61310
+
+
+ * Region Count
+ * StoreFile Count
+ * JMX Snapshot - http://<METRICS_COLLECTOR_HOST>:61310/jmx
+
+
+**Problems with Metric Monitor host**
+
+```
+Monitor log file: /var/log/ambari-metrics-monitor/ambari-metrics-monitor.out
+```
+
+**Check out [Configurations - Tuning](https://cwiki.apache.org/confluence/display/AMBARI/Configurations+-+Tuning) for scale issue troubleshooting.**
+
+**Issue 1: AMS HBase process slow disk writes**
+
+The symptoms and resolutions below address the **embedded** mode of AMS only.
+
+_Symptoms_:
+
+Behavior|How to detect
+--------|--------------
+High CPU usage | HBase process on Collector host taking up close to 100% of every core
+HBase Log: Compaction times | Run grep "Finished memstore flush" on hbase-ams-master-<host>.log.<br></br>This yields MB written in X milliseconds; generally 128 MBps and above is average speed unless the disk is contended.<br></br>This search also reveals how many times compaction ran per minute. A value greater than 6 or 8 is a warning that the write volume is far greater than what HBase can hold in memory
+HBase Log: ZK timeout | HBase crashes saying zookeeper session timed out. This happens because in embedded mode the zk session timeout is limited to max of 30 seconds (HBase issue: fix planned for 2.1.3).<br></br>The cause is again slow disk reads.
+Collector Log : "waiting for some tasks to finish" | ambari-metric-collector log shows messages where AsyncProcess writes are queued up
+
+_Resolutions_:
+
+Configuration Change|Description
+--------|-----------------------
+ams-hbase-site :: hbase.rootdir | Change this path to a disk mount that is not heavily contended.
+ams-hbase-site :: hbase.tmp.dir | Change this path to a location different from hbase.rootdir
+ams-hbase-env :: hbase_master_heapsize<br></br>ams-hbase-site :: hbase.hregion.memstore.flush.size | Bump this value up so more data is held in memory to address I/O speeds.<br></br>If heap size is increased and resident memory usage does not go up, this parameter can be changed to address how much data can be stored in a memstore per Region. Default is set to 128 MB. The size is in bytes.<br></br>Be careful with modifying this value, generally limit the setting between 64 MB (small heap with fast disk write), to 512 MB (large heap > 8 GB, and average write speed), since more data held in memory means longer time to write it to disk during a Flush operation.
+
+**Issue 2: Ambari Metrics take a long time to load**
+
+_Symptoms_:
+
+Behavior|How to detect
+--------|--------------
+Graphs: Loading time too long<br></br>Graphs: No data available | Check out service pages / host pages for metric graphs
+Socket read timeouts | ambari-server.log shows: Error message saying socket timeout for metrics
+Ambari UI slowing down | Host page loading time is high, heatmaps do not show data<br></br>Dashboard loading time is too high<br></br>Multiple sessions result in slowness
+
+_Resolutions_:
+
+Upgrade to 2.1.2+ is highly recommended.
+
+The following is a list of fixes in the 2.1.2 release that should greatly help alleviate the slow loading and timeouts:
+
+https://issues.apache.org/jira/browse/AMBARI-12654
+
+https://issues.apache.org/jira/browse/AMBARI-12983
+
+https://issues.apache.org/jira/browse/AMBARI-13108
+
+## [Known Issues](https://cwiki.apache.org/confluence/display/AMBARI/Known+Issues)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/metrics/upgrading-ambari-metrics-system.md b/versioned_docs/version-2.7.5/ambari-design/metrics/upgrading-ambari-metrics-system.md
new file mode 100644
index 0000000..e61a8d7
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/metrics/upgrading-ambari-metrics-system.md
@@ -0,0 +1,21 @@
+# Upgrading Ambari Metrics System
+
+**Upgrading from Ambari 2.0 or 2.0.1 to 2.1**
+
+1. Upgrade Ambari Server and perform the needed post-upgrade checks (make sure all services are up and running)
+2. Stop Ambari Metrics service
+3. Execute the following command on all hosts.
+
+ ```bash
+ yum upgrade -y ambari-metrics-monitor ambari-metrics-hadoop-sink
+ ```
+ (Use the appropriate package manager on Ubuntu and Windows)
+
+4. Execute the following command on the host running the Metrics Collector
+
+ ```bash
+ yum upgrade -y ambari-metrics-collector
+ ```
+
+5. Start Ambari Metrics Service
+6. The sink jars will be deployed on every host; the daemons will pick up the changes to the sink implementations when they are restarted (e.g. HDFS NameNode / DataNode)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/quick-links.md b/versioned_docs/version-2.7.5/ambari-design/quick-links.md
new file mode 100644
index 0000000..0fea1e4
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/quick-links.md
@@ -0,0 +1,225 @@
+# Quick Links
+
+## Introduction
+
+A service can add a list of quick links to the Ambari web UI by adding meta info to a file following a predefined JSON format. Ambari Server parses the quick link JSON file and provides its content to the UI, so that the Ambari web UI can calculate quick link URLs based on the information and populate the quick links drop-down list accordingly.
+
+## Design
+
+By default, the JSON file is called quicklinks.json and is located in the quicklinks directory under the service root directory. For example, for Oozie, the file is OOZIE/quicklinks/quicklinks.json. You can also name the file differently as well as put it in a custom directory under the service root directory.
+
+
+Using YARN as an example, the following is what the metainfo.xml looks like with the quick links configuration:
+
+```xml
+<services>
+ <service>
+ <name>YARN</name>
+ <version>2.7.1.2.3</version>
+ <quickLinksConfigurations>
+ <quickLinksConfiguration>
+ <fileName>quicklinks.json</fileName>
+ <default>true</default>
+ </quickLinksConfiguration>
+ </quickLinksConfigurations>
+ </service>
+</services>
+```
+
+The metainfo.xml can have different quick links configuration as shown here for MapReduce2.
+
+The _quickLinksConfigurations-dir_ is an optional field that tells Ambari Server where to load the quicklinks.json file. We can skip it if we want the service to use the default _quicklinks_ directory.
+
+```xml
+<service>
+ <name>MAPREDUCE2</name>
+ <version>2.7.1.2.3</version>
+ <quickLinksConfigurations-dir>quicklinks-mapred</quickLinksConfigurations-dir>
+ <quickLinksConfigurations>
+ <quickLinksConfiguration>
+ <fileName>quicklinks.json</fileName>
+ <default>true</default>
+ </quickLinksConfiguration>
+ </quickLinksConfigurations>
+</service>
+```
+
+A quick link JSON file has two major sections: the "configuration" section for determining the protocol (HTTP vs HTTPS), and the "links" section for the meta information of each quick link to be displayed on the Ambari web UI. The JSON file also includes a "name" section at the top that defines the name of the quick links JSON file that the server uses for identification.
+
+Ambari web UI uses information provided in the "configuration" section to determine if the service is running against HTTP or HTTPS. The result is used to construct all quick link URLs defined in the "links" section.
+
+Using YARN as an example, the following is what the quicklinks.json looks like:
+
+```json
+{
+ "name": "default",
+ "description": "default quick links configuration",
+ "configuration": {
+ "protocol": {
+ # type tells the UI which protocol to use if all checks pass.
+
+ # Use https_only or http_only with an empty checks section to explicitly specify the type
+ "type":"https",
+ "checks":[ # There can be more than one check needed.
+ {
+ "property":"yarn.http.policy",
+ # The desired section is either a specific value for the property specified,
+ # or whether the property value should exist or not_exist, be blank or not_blank
+ "desired":"HTTPS_ONLY",
+ "site":"yarn-site"
+ }
+ ]
+ },
+ #configuration for individual links
+ "links": [
+ {
+ "name": "resourcemanager_ui",
+ "label": "ResourceManager UI",
+ "requires_user_name": "false", #set this to true if UI should attach log in user name to the end of the quick link url
+ "url": "%@://%@:%@",
+
+ #section to calculate the port number.
+ "port":{
+ #use a property for the whole url if the service does not have a property for the port.
+ #Specify the regex so the url can be parsed for the port value.
+ "http_property": "yarn.timeline-service.webapp.address",
+ "http_default_port": "8080",
+ "https_property": "yarn.timeline-service.webapp.https.address",
+ "https_default_port": "8090",
+ "regex": "\\w*:(\\d+)",
+ "site": "yarn-site"
+ }
+ },
+ {
+ "name": "resourcemanager_logs",
+ "label": "ResourceManager logs",
+ "requires_user_name": "false",
+ "url": "%@://%@:%@/logs",
+ "port":{
+ "http_property": "yarn.timeline-service.webapp.address",
+ "http_default_port": "8088",
+ "https_property": "yarn.timeline-service.webapp.https.address",
+ "https_default_port": "8090",
+ "regex": "\\w*:(\\d+)",
+ "site": "yarn-site"
+ }
+ }
+ ]
+ }
+}
+```
+
+## REST API
+
+You can examine the quick link information made available to the Ambari web UI by running the following REST API as an HTTP GET request.
+
+REST API
+
+```
+/api/v1/stacks/[stack_name]/versions/[stack_version]/services/[service_name]/quicklinks?QuickLinkInfo/default=true&fields=*
+```
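+
+For example, against a local Ambari server (admin credentials assumed):
+
+```bash
+curl -u admin:admin \
+  "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks?QuickLinkInfo/default=true&fields=*"
+```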
+
+Response sent to the Ambari web UI.
+
+```json
+{
+ "href" : "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks?QuickLinkInfo/default=true&fields=*",
+ "items" : [
+ {
+ "href" : "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks/quicklinks.json",
+ "QuickLinkInfo" : {
+ "default" : true,
+ "file_name" : "quicklinks.json",
+ "service_name" : "YARN",
+ "stack_name" : "HDP",
+ "stack_version" : "2.3",
+ "quicklink_data" : {
+ "QuickLinksConfiguration" : {
+ "description" : "default quick links configuration",
+ "name" : "default",
+ "configuration" : {
+ "protocol" : {
+ "type" : "https",
+ "checks" : [
+ {
+ "property" : "yarn.http.policy",
+ "desired" : "HTTPS_ONLY",
+ "site" : "yarn-site"
+ }
+ ]
+ },
+ "links" : [
+ {
+ "name" : "resourcemanager_jmx",
+ "label" : "ResourceManager JMX",
+ "url" : "%@://%@:%@/jmx",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "resourcemanager_logs",
+ "label" : "ResourceManager logs",
+ "url" : "%@://%@:%@/logs",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "resourcemanager_ui",
+ "label" : "ResourceManager UI",
+ "url" : "%@://%@:%@",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.resourcemanager.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.resourcemanager.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "thread_stacks",
+ "label" : "Thread Stacks",
+ "url" : "%@://%@:%@/stacks",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ }
+ ]
+ }
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+## Ambari Web UI
+
+The changes for the stack-driven quick links are hidden from the UI presentation. The quick links drop-down list behavior remains unchanged.
diff --git a/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/create-widget.png b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/create-widget.png
new file mode 100644
index 0000000..87f901e
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/create-widget.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/gauge.png b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/gauge.png
new file mode 100644
index 0000000..13ae128
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/gauge.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/graphs.png b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/graphs.png
new file mode 100644
index 0000000..a9bf723
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/graphs.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/number.png b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/number.png
new file mode 100644
index 0000000..adae257
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/number.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/widget-browser.png b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/widget-browser.png
new file mode 100644
index 0000000..c205590
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/imgs/widget-browser.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/service-dashboard/index.mdx b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/index.mdx
new file mode 100644
index 0000000..1c48434
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/service-dashboard/index.mdx
@@ -0,0 +1,188 @@
+# Enhanced Service Dashboard
+
+This feature was first introduced in the Ambari 2.1.0 release; releases before 2.1.0 do not support it. The cluster must be upgraded to Ambari 2.1.0 or above to use this feature.
+
+:::caution
+This document assumes that the service metrics are being exposed via Ambari. If this is not the case, please refer to the [Metrics](https://cwiki.apache.org/confluence/display/AMBARI/Metrics) document for more related information.
+:::
+
+## Introduction
+
+The term Enhanced Service Dashboard refers to being able to seamlessly add new widgets to the service summary page and heatmap page. This feature enables a stack service to be packaged with widget definitions in JSON format. These widget definitions appear as default widgets on the service summary page and heatmap page upon service installation. In addition, new widgets for the service can be created at any time on the deployed cluster.
+
+Displaying default service dashboard widgets on service installation is a 3 step process:
+
+1. Push service metrics to Ambari Metric Collector.
+
+2. Declare the service metrics in the service's metrics.json file in Ambari. This step is required to expose metrics via the Ambari REST API.
+
+3. Define the service widgets in the widgets.json file.
+
+:::tip
+A widget gets the data to be charted from the service metrics. It is important to validate that the required service metrics are being exposed from the Ambari metrics endpoint before defining a widget.
+:::
+
+## Service Dashboard Widgets
+
+Ambari supports 4 widget types:
+
+1. Graph
+2. Gauge
+3. Number
+4. Template
+
+### Graph
+
+A widget to display line or area graphs that are derived from one or more service metric values over a range of time.
+
+
+### Graph Widget Definition
+
+```json
+{
+ "widget_name": "Memory Utilization",
+ "description": "Percentage of total memory allocated to containers running in the cluster.",
+ "widget_type": "GRAPH",
+ "is_visible": true,
+ "metrics": [
+ {
+ "name": "yarn.QueueMetrics.Queue=root.AllocatedMB",
+ "metric_path": "metrics/yarn/Queue/root/AllocatedMB",
+ "service_name": "YARN",
+ "component_name": "RESOURCEMANAGER",
+ "host_component_criteria": "host_components/HostRoles/ha_state=ACTIVE"
+ },
+ {
+ "name": "yarn.QueueMetrics.Queue=root.AvailableMB",
+ "metric_path": "metrics/yarn/Queue/root/AvailableMB",
+ "service_name": "YARN",
+ "component_name": "RESOURCEMANAGER",
+ "host_component_criteria": "host_components/HostRoles/ha_state=ACTIVE"
+ }
+ ],
+ "values": [
+ {
+ "name": "Memory Utilization",
+ "value": "${(yarn.QueueMetrics.Queue=root.AllocatedMB / (yarn.QueueMetrics.Queue=root.AllocatedMB + yarn.QueueMetrics.Queue=root.AvailableMB)) * 100}"
+ }
+ ],
+ "properties": {
+ "display_unit": "%",
+ "graph_type": "LINE",
+ "time_range": "1"
+ }
+}
+```
+
+1. **widget_name:** This is the name that will be displayed in the UI for the widget.
+
+2. **description:** Description for the widget that will be displayed in the UI.
+
+3. **widget_type:** This information is used by the widget to create the widget from the metric data.
+
+4. **is_visible:** This boolean decides if the widget is shown on the service summary page by default or not.
+
+5. **metrics:** This is an array that includes all metric definitions comprising the widget.
+
+6. **metrics/name:** Actual name of the metric as being pushed to the sink or emitted as a JMX property by the service.
+
+7. **metrics/metric_path:** This is the path to which the above-mentioned metrics/name is mapped in the metrics.json file for the service. The metric value will be exposed in the metrics attribute of the service component or host component endpoint of the Ambari API at the same path.
+
+8. **metrics/service_name:** Name of the service containing the component emitting the metric.
+
+9. **metrics/component_name:** Name of the component emitting the metric.
+
+10. **metrics/host_component_criteria:** This is an optional field. Presence of this field means that the metric is a host component metric and not a service component metric. If a metric is intended to be queried on the host component endpoint, then the criteria for choosing the host component need to be specified here. If this is left as a single-space string, then the first found host component will be queried for the metric.
+
+11. **values:** This is an array of datasets for the Graph widget. For other widget types this array always has one element.
+
+12. **values/name:** This field is used only for the Graph widget type. This shows up as a label name in the legend for the dataset shown in a Graph widget.
+
+13. **values/value:** This is the expression from which the value for the dataset is calculated. The expression contains references to the declared metric names and constant numbers, which act as valid operands, along with a valid set of operators {+,-,*,/}. Parentheses are also permitted in the expression.
+
+14. **properties:** These contain a set of properties specific to the widget type. For the Graph widget type it contains display_unit, graph_type and time_range. The time_range field is currently not honored in the UI.
+
+### Gauge
+
+A widget that displays a percentage calculated from the current value of one or more metrics.
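+
+A Gauge definition follows the same shape as the Graph definition above; Number and Template widgets likewise differ mainly in their `widget_type` and `properties`. The sketch below is hypothetical: the threshold property names are assumptions modeled on stack-shipped widget files, not taken from this document. Gauge values are expected as fractions between 0 and 1:
+
+```json
+{
+  "widget_name": "Memory Utilization Gauge",
+  "description": "Percentage of total memory allocated to containers.",
+  "widget_type": "GAUGE",
+  "is_visible": true,
+  "metrics": [
+    {
+      "name": "yarn.QueueMetrics.Queue=root.AllocatedMB",
+      "metric_path": "metrics/yarn/Queue/root/AllocatedMB",
+      "service_name": "YARN",
+      "component_name": "RESOURCEMANAGER"
+    },
+    {
+      "name": "yarn.QueueMetrics.Queue=root.AvailableMB",
+      "metric_path": "metrics/yarn/Queue/root/AvailableMB",
+      "service_name": "YARN",
+      "component_name": "RESOURCEMANAGER"
+    }
+  ],
+  "values": [
+    {
+      "name": "Memory Utilization",
+      "value": "${yarn.QueueMetrics.Queue=root.AllocatedMB / (yarn.QueueMetrics.Queue=root.AllocatedMB + yarn.QueueMetrics.Queue=root.AvailableMB)}"
+    }
+  ],
+  "properties": {
+    "warning_threshold": "0.9",
+    "error_threshold": "0.95"
+  }
+}
+```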
+
+
+### Number
+
+A widget that displays a number, optionally with a unit, calculated from the current value of one or more metrics.
+
+
+
+### Template
+
+A widget that displays one or more numbers calculated from the current values of one or more metrics, along with an embedded string.
+
+**Aggregator Functions for Metrics**
+
+Aggregator functions apply only to service component level metrics; they are not supported for host component level metrics.
+
+Ambari Metrics System supports 4 types of aggregation:
+
+1. **max**: Maximum value of the metric across all host components
+2. **min**: Minimum value of the metric across all host components
+3. **avg**: Average value of the metric across all host components
+4. **sum**: Sum of the metric values recorded for each host component
+
+By default, Ambari Metrics System uses the average aggregator function when computing the value of a service component metric, but this behavior can be overridden by suffixing the metric name with an aggregator function name (`._max`, `._min`, `._avg` or `._sum`), as in the following widget definition:
+
+```json
+{
+ "widget_name": "Blocked Updates",
+ "description": "Number of milliseconds updates have been blocked so the memstore can be flushed.",
+ "default_section_name": "HBASE_SUMMARY",
+ "widget_type": "GRAPH",
+ "is_visible": true,
+ "metrics": [
+ {
+ "name": "regionserver.Server.updatesBlockedTime._sum",
+ "metric_path": "metrics/hbase/regionserver/Server/updatesBlockedTime._sum",
+ "service_name": "HBASE",
+ "component_name": "HBASE_REGIONSERVER"
+ }
+ ],
+ "values": [
+ {
+ "name": "Updates Blocked Time",
+ "value": "${regionserver.Server.updatesBlockedTime._sum}"
+ }
+ ],
+ "properties": {
+ "display_unit": "ms",
+ "graph_type": "LINE",
+ "time_range": "1"
+ }
+}
+```
+
+## Widget Operations
+
+A service that has widgets.json and metrics.json files is also provided with the following abilities:
+
+1. **Widget Browser** for performing widget related operations.
+
+2. **Create widget wizard** for creating new desired service widgets in a cluster.
+
+3. **Edit widget wizard** for editing service widgets post creation.
+
+## Widget Browser
+
+The Widget Browser is the place from which actions can be performed on widgets, such as adding/removing a widget from the dashboard, sharing a widget with other users, and deleting a widget. While creating a new widget, the user can choose whether to share it with other users. All widgets created by the user, shared with the user, or defined in the stack as default service widgets will be available in that user's widget browser.
+
+
+### Create Widget Wizard
+
+A custom widget can be created from the exposed service metrics using the 3-step Create Widget wizard.
+
+
+## Using Enhanced Service Dashboard feature
+
+If an existing service in Ambari is already pushing its metrics to the Ambari Metrics Collector, then minimal work is needed. This includes adding metrics.json and widgets.json files for the service, and might include making changes to the metainfo.xml file.
+
+:::tip
+[AMBARI-9910](https://issues.apache.org/jira/browse/AMBARI-9910) added widgets to the Accumulo service page. This work can be used as a reference example for the Enhanced Service Dashboard feature.
+:::
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/custom-services.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/custom-services.md
new file mode 100644
index 0000000..561da65
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/custom-services.md
@@ -0,0 +1,1059 @@
+# Custom Services
+
+There are many aspects to creating custom services. At its most basic, a service must include its metainfo.xml and command script. It must also be packaged so that it can be added to a cluster. The sub-sections that follow describe optional elements of the service definition which can be included.
+
+## Defining a Custom Service
+
+
+
+### Service Metainfo and Component Category
+
+#### metainfo.xml
+
+The `metainfo.xml` file in a Service describes the service, the components of the service and the management scripts to use for executing commands. A component of a service must be either a **MASTER**, **SLAVE** or **CLIENT** category.
+
+For each Component you must specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support depending on the component's category.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The type is used to know how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands your component needs to support.
+
+For example, the YARN service describes its ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and its command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. That command script is **PYTHON**, and it implements the default lifecycle commands as Python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command **DECOMMISSION** is defined, which means there is also a **decommission** method in that Python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+### Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV". This service includes MASTER, SLAVE and CLIENT components.
+
+#### Create a Custom Service
+
+1. Create a directory named `SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```bash
+mkdir SAMPLESRV
+cd SAMPLESRV
+```
+2. Within the `SAMPLESRV` directory, create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily>
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+3. In the above, the service name is " **SAMPLESRV**", and it contains:
+
+ - one **MASTER** component " **SAMPLESRV_MASTER**"
+ - one **SLAVE** component " **SAMPLESRV_SLAVE**"
+ - one **CLIENT** component " **SAMPLESRV_CLIENT**"
+4. Next, let's create the command scripts. Create the `SAMPLESRV/package/scripts` directory that we designated in the service metainfo:
+
+```bash
+mkdir -p package/scripts
+cd package/scripts
+```
+5. Within the scripts directory, create the `.py` command script files mentioned in the metainfo. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+
+#### Implementing a Custom Command
+
+1. Browse to the `SAMPLESRV` directory, and edit the `metainfo.xml` file that describes the service. For example, adding a custom command to the SAMPLESRV_CLIENT:
+
+```xml
+
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+```
+2. Next, let's implement that custom command by editing the `package/scripts/sample_client.py` file that we designated in the service metainfo, adding a `somethingcustom` method:
+
+
+```python
+import sys
+from resource_management import *
+
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+ def somethingcustom(self, env):
+ print 'Something custom';
+
+if __name__ == "__main__":
+ SampleClient().execute()
+```
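+
+Once the service is deployed, the custom command can be invoked through the Ambari REST API with a request like the following. This is a sketch; the cluster and host names are illustrative:
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+  "RequestInfo" : {
+    "command" : "SOMETHINGCUSTOM",
+    "context" : "Run Something Custom"
+  },
+  "Requests/resource_filters": [{
+    "service_name" : "SAMPLESRV",
+    "component_name" : "SAMPLESRV_CLIENT",
+    "hosts" : "c6402.ambari.apache.org"
+  }]
+}
+```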
+
+#### Adding Configs to the Custom Service
+
+In this example, we will add a configuration type "test-config" to our SAMPLESRV.
+
+1. Modify the metainfo.xml. Adding the configuration files to the CLIENT component makes them available in the client tarball downloaded from Ambari.
+
+```xml
+<component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>test-config.xml</fileName>
+ <dictionaryName>test-config</dictionaryName>
+ </configFile>
+ </configFiles>
+</component>
+```
+2. Create the `SAMPLESRV/configuration` directory for the configuration dictionary file.
+
+```bash
+mkdir -p configuration
+cd configuration
+```
+3. Create the `test-config.xml` file. For example:
+
+```xml
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<configuration>
+ <property>
+ <name>some.test.property</name>
+ <value>this.is.the.default.value</value>
+ <description>This is a test description.</description>
+ </property>
+ <property>
+ <name>another.test.property</name>
+ <value>5</value>
+ <description>This is a second test description.</description>
+ </property>
+</configuration>
+
+```
+4. There is an optional setting "configuration-dir". Custom services should either omit this setting or leave it at the default value "configuration".
+
+```xml
+<configuration-dir>configuration</configuration-dir>
+```
+5. Configuration dependencies can be included in the metainfo.xml in a `configuration-dependencies` section. This section can be added to the service as a whole or to a particular component. One of the implications of this dependency is that whenever the config-type is updated, Ambari automatically marks the component or service as requiring restart.
+
+For example, HIVE defines component level configuration dependencies for the HIVE_METASTORE component:
+
+```xml
+ <component>
+ <name>HIVE_METASTORE</name>
+ <displayName>Hive Metastore</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <reassignAllowed>true</reassignAllowed>
+ <clientsToUpdateConfigs></clientsToUpdateConfigs>
+... ...
+ <configuration-dependencies>
+ <config-type>hive-site</config-type>
+ </configuration-dependencies>
+ </component>
+```
+
+HIVE also defines service level configuration dependencies.
+
+```xml
+<configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hive-log4j</config-type>
+ <config-type>hive-exec-log4j</config-type>
+ <config-type>hive-env</config-type>
+ <config-type>hivemetastore-site.xml</config-type>
+ <config-type>webhcat-site</config-type>
+ <config-type>webhcat-env</config-type>
+ <config-type>parquet-logging</config-type>
+ <config-type>ranger-hive-plugin-properties</config-type>
+ <config-type>ranger-hive-audit</config-type>
+ <config-type>ranger-hive-policymgr-ssl</config-type>
+ <config-type>ranger-hive-security</config-type>
+ <config-type>mapred-site</config-type>
+ <config-type>application.properties</config-type>
+ <config-type>druid-common</config-type>
+ </configuration-dependencies>
+```
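+
+Once a configuration type such as the `test-config` defined above exists, a command script can read its values through `Script.get_config()`. The following is a minimal, illustrative sketch using the property names from the example above:
+
+```python
+from resource_management import *
+
+# Typically placed in a params.py module imported by the command scripts.
+config = Script.get_config()
+
+# Read properties from the test-config configuration type.
+some_test_property = config['configurations']['test-config']['some.test.property']
+another_test_property = config['configurations']['test-config']['another.test.property']
+```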
+
+## Packaging and Installing Custom Services
+
+### Introduction
+
+Custom services in Apache Ambari can be packaged and installed in many ways. Ideally, they should all be packaged and installed in the same manner. This document describes how to package and install custom services using Extensions and Management Packs. Using this approach, the custom service definitions do not get inserted under the stack versions services directory. This keeps the stack clean and allows users to easily see which services were installed by which package (stack or extension).
+
+### Management Packs
+
+A [management pack](./management-packs.md) is a mechanism for installing stacks, extensions and custom services. A management pack is packaged as a tar.gz file which expands into a directory that includes an mpack.json file and the stack, extension and custom service definitions that it contains.
+
+#### Example Structure
+
+```
+myext-mpack-1.0.0.0
+├── mpack.json
+└── extensions
+```
+
+#### mpack.json Format
+
+The mpack.json file allows you to specify the name, version and description of the management pack, along with the prerequisites for installing it. For extension management packs, the only important prerequisite is min_ambari_version. The most important part is the artifacts section. For the purpose here, the artifact type will always be "extension-definitions". You can provide any name for the artifact, and you can change the source_dir if you wish to package your extensions under a different directory than "extensions". For consistency, it is recommended that you use the default source_dir "extensions".
+
+```json
+{
+  "type": "full-release",
+  "name": "myextension-mpack",
+  "version": "1.0.0.0",
+  "description": "MyExtension Management Pack",
+  "prerequisites": {
+    "min_ambari_version": "2.4.0.0"
+  },
+  "artifacts": [
+    {
+      "name": "myextension-extension-definitions",
+      "type": "extension-definitions",
+      "source_dir": "extensions"
+    }
+  ]
+}
+```
+
+### Extensions
+
+An [extension](./extensions.md) is a collection of one or more custom services which are packaged together. Much like stacks, each extension has a name, which must be unique in the cluster, and a version folder to distinguish different releases of the extension. Extensions are installed under the resources/extensions folder.
+
+An extension version is similar to a stack version, but it only includes the metainfo.xml and the services directory. This means that the alerts, kerberos, metrics, role command order and widgets files are not supported at the extension level and should be included at the service level. In addition, the repositories, hooks, configurations, and upgrades directories are not supported, although upgrade support can be added at the service level.
+
+#### Extension Structure
+
+```
+MY_EXT
+└── 1.0
+    ├── metainfo.xml
+    └── services
+        ├── SERVICEA
+        ├── ...
+```
+
+#### Extension metainfo.xml Format
+
+The extension metainfo.xml is very simple; it just specifies the minimum stack versions which are supported:
+
+```xml
+<metainfo>
+  <prerequisites>
+    <min-stack-versions>
+      <stack>
+        <name>BIGTOP</name>
+        <version>1.0.*</version>
+      </stack>
+    </min-stack-versions>
+  </prerequisites>
+</metainfo>
+```
+
+#### Extension Inheritance
+
+Extension versions can _extend_ other Extension versions in order to share command scripts and configurations. This reduces duplication of code across Extensions with the following:
+
+* add new Services in the child Extension version (not in the parent Extension version)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, **MyExtension 2.0** could extend **MyExtension 1.0** so that only the changes applicable to the **MyExtension 2.0** extension are present in that extension definition. This extension is declared in the metainfo.xml for **MyExtension 2.0**:
+
+```xml
+<metainfo>
+  <extends>1.0</extends>
+</metainfo>
+```
+
+### Extension Management Packs Structure
+
+```
+myext-mpack-1.0.0.0
+├── mpack.json
+└── extensions
+    └── MY_EXT
+        ├── 1.0
+        │   ├── metainfo.xml
+        │   └── services
+        │       └── SERVICEA
+        └── 2.0
+            ├── metainfo.xml
+            └── services
+                ├── SERVICEA
+                └── …
+```
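+
+Given a layout like the one above, the management pack archive is simply a gzipped tarball of that directory, for example (the file and directory names are illustrative):
+
+```bash
+tar -czf myext-mpack-1.0.0.0.tar.gz myext-mpack-1.0.0.0/
+```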
+
+### Installing Management Packs
+
+In order to install an extension management pack, run the following command, with or without the "-v" (verbose) option:
+
+```bash
+ambari-server install-mpack --mpack=/dir/to/myext-mpack-1.0.0.0.tar.gz -v
+```
+
+This will check whether the management pack's prerequisites are met (min_ambari_version). In addition, it will check whether there are any errors in the management pack format. Assuming everything is correct, the management pack will be extracted in `/var/lib/ambari-server/resources/mpacks`.
+
+It will then create symlinks from /var/lib/ambari-server/resources/extensions to each extension version contained in the management pack under /var/lib/ambari-server/resources/mpacks:
+
+Extension Directory (symlink) | Management Pack Target
+------------------------------|---------------------------------------------------------
+resources/extensions/MY_EXT/1.0 | resources/mpacks/myext-mpack-1.0.0.0/extensions/MY_EXT/1.0
+resources/extensions/MY_EXT/2.0 | resources/mpacks/myext-mpack-1.0.0.0/extensions/MY_EXT/2.0
+
+### Verifying the Extension Installation
+
+Once you have installed the extension management pack, you can restart ambari-server.
+
+```bash
+ambari-server restart
+```
+
+After ambari-server has been restarted, you will see your extension listed in the extension table of the Ambari DB:
+
+```
+ambari=> select * from extension;
+
+ extension_id | extension_name | extension_version
+--------------+----------------+-------------------
+            1 | EXT            | 1.0
+(1 row)
+```
+
+You can also query for extensions by calling REST APIs.
+
+```
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://
+
+{
+  "href" : "http://
+  "items" : [{
+    "href" : "http://
+    "Extensions" : {
+      "extension_name" : "EXT"
+    }
+  }]
+}
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://
+
+{
+  "href" : "http://
+  "Extensions" : {
+    "extension_name" : "EXT"
+  },
+  "versions" : [{
+    "href" : "http://
+    "Versions" : {
+      "extension_name" : "EXT",
+      "extension_version" : "1.0"
+    }
+  }]
+}
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://
+
+{
+  "href" : "http://
+  "Versions" : {
+    "extension-errors" : [ ],
+    "extension_name" : "EXT",
+    "extension_version" : "1.0",
+    "parent_extension_version" : null,
+    "valid" : true
+  }
+}
+```
+
+### Linking Extensions to the Stack
+
+Once you have verified that Ambari knows about your extension, the next step is linking the extension version to the current stack version. Linking adds the extension version's services to the list of stack version services. This allows you to install the extension services on the cluster. Linking an extension version to a stack version will first verify whether the extension supports the given stack version. This is determined by the stack versions listed in the extension version's metainfo.xml.
+
+The following REST API call will link an extension version to a stack version. In this example it links EXT/1.0 with the BIGTOP/1.0 stack version.
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"ExtensionLink": {"stack_name": "BIGTOP", "stack_version": "1.0", "extension_name": "EXT", "extension_version": "1.0"}}' http://
+```
+
+You can examine links (or extension links) either in the Ambari DB or with REST API calls.
+
+```
+ambari=> select * from extensionlink;
+
+ link_id | stack_id | extension_id
+---------+----------+--------------
+       1 |        2 |            1
+(1 row)
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://
+
+{
+  "href" : "http://
+  "items" : [{
+    "href" : "http://
+    "ExtensionLink" : {
+      "extension_name" : "EXT",
+      "extension_version" : "1.0",
+      "link_id" : 1,
+      "stack_name" : "BIGTOP",
+      "stack_version" : "1.0"
+    }
+  }]
+}
+```
+
+## Role Command Order
+
+Each service can define its own role command order by including a role_command_order.json file in its service folder. The service should only specify the relationships of its own components to other components. In other words, if a service only includes COMP_X, it should only list dependencies related to COMP_X. If COMP_X must start after the NameNode has started, and the NameNode must wait for COMP_X to stop before stopping itself, the following would be included in the role command order:
+
+```json
+{
+ "_comment" : "Record format:",
+ "_comment" : "blockedRole-blockedCommand: [blockerRole1-blockerCommand1, blockerRole2-blockerCommand2, ...]",
+ "general_deps" : {
+ "_comment" : "dependencies for all cases"
+ },
+ "_comment" : "Dependencies that are used when GLUSTERFS is not present in cluster",
+ "optional_no_glusterfs": {
+ "COMP_X-START": ["NAMENODE-START"],
+ "NAMENODE-STOP": ["COMP_X-STOP"]
+ }
+}
+```
+
+The entries in the service's role command order will be merged with the role command order defined in the stack. For example, since the stack already has a dependency for NAMENODE-STOP, in the example above COMP_X-STOP would be added to the rest of the NAMENODE-STOP dependencies and the COMP_X-START dependency on NAMENODE-START would be added as a new dependency.
+
+**Sections**
+Ambari uses only the following sections:
+
+Section Name | When Used
+-------------|------------
+general_deps | Command orders are applied in all situations
+optional_glusterfs | Command orders are applied when cluster has instance of GLUSTERFS service
+optional_no_glusterfs | Command orders are applied when cluster does not have instance of GLUSTERFS service
+namenode_optional_ha | Command orders are applied when HDFS service is installed and JOURNALNODE component exists (HDFS HA is enabled)
+resourcemanager_optional_ha | Command orders are applied when YARN service is installed and multiple RESOURCEMANAGER host-components exist (YARN HA is enabled)
+
+**Commands**
+The commands currently supported by Ambari are:
+
+* INSTALL
+* UNINSTALL
+* START
+* RESTART
+* STOP
+* EXECUTE
+* ABORT
+* UPGRADE
+* SERVICE_CHECK
+* CUSTOM_COMMAND
+* ACTIONEXECUTE
+
+## Service Advisor
+
+Each custom service can provide a service advisor as a Python script named service-advisor.py in its service folder. A service advisor allows a custom service to integrate into the stack advisor behavior, which otherwise only applies to the services within the stack.
+
+### Service Advisor Inheritance
+
+Unlike stack-advisor scripts, service-advisor scripts do not automatically extend the parent service's service-advisor script. A service-advisor script needs to explicitly extend its parent service's service-advisor script. The following code sample shows how you would refer to a parent's service_advisor.py. In this case it extends the root service_advisor.py file in the resources/stacks directory.
+
+**Sample service-advisor.py file inheritance**
+
+```python
+import imp
+import os
+import traceback
+
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+STACKS_DIR = os.path.join(SCRIPT_DIR, '../../../stacks/')
+PARENT_FILE = os.path.join(STACKS_DIR, 'service_advisor.py')
+
+# Load the parent service_advisor.py module so this advisor can extend it.
+try:
+  with open(PARENT_FILE, 'rb') as fp:
+    service_advisor = imp.load_module('service_advisor', fp, PARENT_FILE, ('.py', 'rb', imp.PY_SOURCE))
+except Exception as e:
+  traceback.print_exc()
+  print "Failed to load parent"
+
+class HAWQ200ServiceAdvisor(service_advisor.ServiceAdvisor):
+  pass  # service-specific overrides go here
+```
+
+### Service Advisor Behavior
+
+Like the stack advisors, service advisors provide information on 4 important aspects for the service:
+
+1. Recommend layout of the service on cluster
+2. Recommend service configurations
+3. Validate layout of the service on cluster
+4. Validate service configurations
+
+By providing the service-advisor.py file, one can dynamically control each of the above for the service.
+
+The [main interface for the service-advisor](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51) scripts contains documentation on how each of the above are called, and what data is provided.
+
+**Base service_advisor.py from resources/stacks**
+
+```python
+
+class ServiceAdvisor(DefaultStackAdvisor):
+
+ def colocateService(self, hostsComponentsMap, serviceComponents):
+ pass
+
+ def getServiceConfigurationRecommendations(self, configurations, clusterSummary, services, hosts):
+ pass
+
+ def getServiceComponentLayoutValidations(self, services, hosts):
+ return []
+
+ def getServiceConfigurationsValidationItems(self, configurations, recommendedDefaults, services, hosts):
+ return []
+```
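+
+As an illustration, a service advisor for the SAMPLESRV example used elsewhere in this document might override one of these hooks. This is a hedged sketch: the component name and the exact shape of the validation items are assumptions modeled on the stack advisor code, not taken from this page.
+
+```python
+class SAMPLESRVServiceAdvisor(service_advisor.ServiceAdvisor):
+
+  def getServiceComponentLayoutValidations(self, services, hosts):
+    items = []
+    for service in services["services"]:
+      for component in service["components"]:
+        stackComponent = component["StackServiceComponents"]
+        # Warn when the hypothetical SAMPLESRV_SLAVE has no host assignments.
+        if stackComponent["component_name"] == "SAMPLESRV_SLAVE" and not stackComponent["hostnames"]:
+          items.append({
+            "type": "host-component",
+            "level": "WARN",
+            "message": "SAMPLESRV_SLAVE is not assigned to any host",
+            "component-name": "SAMPLESRV_SLAVE"
+          })
+    return items
+```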
+
+**Examples**
+[Service Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51)
+[HAWQ 2.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0/service_advisor.py)
+[PXF 3.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/PXF/3.0.0/service_advisor.py)
+
+## Service Inheritance
+
+A service can inherit through the stack but may also inherit directly from common-services. This is declared in the metainfo.xml:
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <extends>common-services/HDFS/2.1.0.2.0</extends>
+```
+
+When a service inherits from another service version, how its defining files and directories are inherited follows a number of different patterns.
+
+The following files if defined in the current service version replace the definitions from the parent service version:
+
+* alerts.json
+* kerberos.json
+* metrics.json
+* role_command_order.json
+* service_advisor.py
+* widgets.json
+
+Note: All the services' role command orders will be merged with the stack's role command order to provide a master list.
+
+The following files if defined in the current service version are merged with the parent service version (supports removing/deleting parent entries):
+
+* quicklinks/quicklinks.json
+* themes/theme.json
+
+The following directories if defined in the current service version replace those from the parent service version:
+
+* packages
+* upgrades
+
+This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The configurations directory in the current service version merges its configuration files with those from the parent service version. Configuration files defined at any level can be omitted from the service's configurations by specifying the config-type in the excluded-config-types list:
+
+```xml
+ <excluded-config-types>
+ <config-type>storm-site</config-type>
+ </excluded-config-types>
+```
+
+For an individual configuration file (or configuration type) like core-site.xml, it will by default merge with the configuration type from the parent. If the `supports_do_not_extend` attribute is specified as `true`, the configuration type will **not** be merged.
+
+```xml
+<configuration supports_do_not_extend="true">
+```
+
+### Inheritance and the Service MetaInfo
+
+By default all attributes of the service and components if defined in the metainfo.xml of the current service version will replace those of the parent service version unless specified in the sections that follow.
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <displayName>HDFS</displayName>
+ <comment>Apache Hadoop Distributed File System</comment>
+ <version>2.1.0.2.0</version>
+ <components>
+ <component>
+ <name>NAMENODE</name>
+ <displayName>NameNode</displayName>
+ <category>MASTER</category>
+ <cardinality>1-2</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <reassignAllowed>true</reassignAllowed>
+ <commandScript>
+ <script>scripts/namenode.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>1800</timeout>
+ </commandScript>
+ ...
+```
+
+The custom commands defined in the metainfo.xml of the current service version are merged with those of the parent service version.
+
+```xml
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/namenode.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+```
+
+The configuration dependencies defined in the metainfo.xml of the current service version are merged with those of the parent service version.
+
+```xml
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ ...
+ </configuration-dependencies>
+
+```
+
+The components defined in the metainfo.xml of the current service are merged with those of the parent (supports delete).
+
+```xml
+ <component>
+ <name>ZKFC</name>
+ <displayName>ZKFailoverController</displayName>
+ <category>SLAVE</category>
+```
+
+## Service Upgrade
+
+Each custom service can define its upgrade within its service definition. This allows the custom service to be integrated within the [stack's upgrade](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-StackUpgrades).
+
+### Service Upgrade Packs
+
+Each service can define _upgrade-packs_, which are XML files describing the upgrade process of that particular service and how the upgrade pack relates to the overall stack upgrade-packs. These _upgrade-pack_ XML files are placed in the service's _upgrades/_ folder in separate sub-folders specific to the stack-version they are meant to extend. Some examples of this can be seen in the testing code.
+
+**Examples**
+
+- [Upgrades folder](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/)
+- [Upgrade-pack XML](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml)
+
+### Matching Upgrade Packs
+
+Each upgrade-pack that the service defines should match the file name of the service defined by a particular stack version. For example in the testing code, HDP 2.2.0 had an [upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.2.0/upgrades/upgrade_test_15388.xml) upgrade-pack. The HDFS service defined an extension to that upgrade pack [HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml). In this case the upgrade-pack was defined in the HDP/2.0.5 stack. The upgrade-pack is an extension to HDP/2.2.0 because it is defined in upgrade/HDP/2.2.0 directory. Finally the name of the service's extension to the upgrade-pack upgrade_test_15388.xml matches the name of the upgrade-pack in HDP/2.2.0/upgrades.
+
+**Upgrade XML Format**
+
+The file format for the service is much the same as that of the stack. The target, target-stack and type attributes should all be the same as the stack's upgrade-pack.
+
+**Prerequisite Checks**
+
+The service is able to add its own prerequisite checks.
+
+**General Attributes and Prerequisite Checks**
+```xml
+<upgrade xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <target>2.4.*</target>
+ <target-stack>HDP-2.4.0</target-stack>
+ <type>ROLLING</type>
+ <prerequisite-checks>
+ <check>org.apache.ambari.server.checks.FooCheck</check>
+ </prerequisite-checks>
+```
+
+**Order Section**
+
+The order section of the upgrade-pack consists of group elements, just like the stack's upgrade-pack. The key difference is defining how these groups relate to groups in the stack's upgrade-pack or other service upgrade-packs. In the first example we are referencing the PRE_CLUSTER group and adding a new execute-stage for the service FOO. The entry is added after the execute-stage for HDFS, based on the add-after-group-entry tag.
+
+**Order Section - Add After Group Entry**
+```xml
+
+<order>
+ <group xsi:type="cluster" name="PRE_CLUSTER" title="Pre {{direction.text.proper}}">
+ <add-after-group-entry>HDFS</add-after-group-entry>
+ <execute-stage service="FOO" component="BAR" title="Backup FOO">
+ <task xsi:type="manual">
+ <message>Back FOO up.</message>
+ </task>
+ </execute-stage>
+ </group>
+
+```
+
+The same syntax can be used to order other sections, such as service check priorities and the services within groups.
+
+**Order Section - Further Add After Group Entry Examples**
+```xml
+<group name="SERVICE_CHECK1" title="All Service Checks" xsi:type="service-check">
+ <add-after-group-entry>ZOOKEEPER</add-after-group-entry>
+ <priority>
+ <service>HBASE</service>
+ </priority>
+</group>
+
+<group name="CORE_MASTER" title="Core Masters">
+ <add-after-group-entry>YARN</add-after-group-entry>
+ <service name="HBASE">
+ <component>HBASE_MASTER</component>
+ </service>
+</group>
+```
+
+It is also possible to add new groups and order them after other groups in the stack's upgrade-packs. In the following example, we are adding the FOO group after the HIVE group using the add-after-group tag.
+
+**Order Section - Add After Group**
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO">
+ <component>BAR</component>
+ </service>
+</group>
+```
+
+You could also include both the add-after-group and the add-after-group-entry tags in the same group. This will create a new group if it doesn't already exist and will order it after the add-after-group's group name. The add-after-group-entry will determine the internal ordering of that group's services, priorities or execute stages.
+
+**Order Section - Add After Group and Add After Group Entry**
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <add-after-group-entry>FOO</add-after-group-entry>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO2">
+ <component>BAR2</component>
+ </service>
+</group>
+```
+
+**Processing Section**
+
+The processing section of the upgrade-pack remains the same as what it would be in the stack's upgrade-pack.
+
+**Processing Section**
+```xml
+ <processing>
+ <service name="FOO">
+ <component name="BAR">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ <component name="BAR2">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ </service>
+ </processing>
+```
+## Custom Service Repo
+
+Each service can define its own repo info by adding repos/repoinfo.xml in its service folder. The service specific repo will be included in the list of repos defined for the stack.
+
+**Example**: https://github.com/apache/ambari/blob/trunk/contrib/management-packs/microsoft-r_mpack/src/main/resources/custom-services/MICROSOFT_R_SERVER/8.0.5/repos/repoinfo.xml
+
+```xml
+
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://cust.service.lab.com/Services/centos6/1.1/myservices</baseurl>
+ <repoid>CUSTOM-1.1</repoid>
+ <reponame>CUSTOM</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+## Custom Services - Additional Configuration
+
+### Alerts
+
+Each service is capable of defining which alerts Ambari should track by providing an [alerts.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/alerts.json) file.
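+
+As a rough illustration only, an alerts.json for the SAMPLESRV example above might define a PORT alert for its master component. The alert name, port property and reporting strings below are hypothetical, modeled on stack-shipped alert definitions:
+
+```json
+{
+  "SAMPLESRV": {
+    "SAMPLESRV_MASTER": [
+      {
+        "name": "samplesrv_master_process",
+        "label": "Sample Srv Master Process",
+        "description": "This alert is triggered if the Sample Srv Master process cannot be reached.",
+        "interval": 1,
+        "scope": "ANY",
+        "source": {
+          "type": "PORT",
+          "uri": "{{test-config/some.test.property}}",
+          "default_port": 8675,
+          "reporting": {
+            "ok": { "text": "TCP OK - {0:.3f}s response on port {1}" },
+            "warning": { "text": "TCP OK - {0:.3f}s response on port {1}" },
+            "critical": { "text": "Connection failed: {0} to {1}:{2}" }
+          }
+        }
+      }
+    ]
+  }
+}
+```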
+
+### Kerberos
+
+Ambari is capable of enabling and disabling Kerberos for a cluster. To inform Ambari of the identities and configurations to be used for the service and its components, each service can provide a _kerberos.json_ file.
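+
+The sketch below hints at the general shape of a kerberos.json file; it is a heavily simplified, hypothetical fragment whose field values are assumptions modeled on stack-shipped kerberos.json files:
+
+```json
+{
+  "services": [
+    {
+      "name": "SAMPLESRV",
+      "identities": [
+        {
+          "name": "samplesrv_master",
+          "principal": {
+            "value": "samplesrv/_HOST@${realm}",
+            "type": "service"
+          },
+          "keytab": {
+            "file": "${keytab_dir}/samplesrv.service.keytab"
+          }
+        }
+      ]
+    }
+  ]
+}
+```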
+
+### Metrics
+
+Ambari provides the [Ambari Metrics System ("AMS")](../metrics/index.md) service for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+Each service can define which metrics AMS should collect and provide by defining a [metrics.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metrics.json) file.
+
+Read more about the metrics.json file format in the [Stack Defined Metrics](../metrics/stack-defined-metrics.md) page.
+
+### Quick Links
+
+A service can add a list of quick links to the Ambari web UI by adding a quick links JSON file. Ambari server parses the quick links JSON file and provides its content to the Ambari web UI. The UI can calculate quick link URLs based on that information and populate the quick links drop-down list accordingly.
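+
+The sketch below illustrates the general shape of such a quick links file. It is a simplified, hypothetical example, modeled on the quicklinks.json files shipped with stack services; the link name, property and port values are assumptions:
+
+```json
+{
+  "name": "default",
+  "description": "default quick links configuration",
+  "configuration": {
+    "protocol": {
+      "type": "http"
+    },
+    "links": [
+      {
+        "name": "samplesrv_ui",
+        "label": "Sample Srv UI",
+        "url": "%@://%@:%@",
+        "port": {
+          "http_property": "some.test.property",
+          "http_default_port": "8675",
+          "site": "test-config"
+        }
+      }
+    ]
+  }
+}
+```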
+
+### Widgets
+
+Each service can define which widgets and heat maps show up by default on the service summary page by defining a [widgets.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/widgets.json) file.
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md
new file mode 100644
index 0000000..3b76f47
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md
@@ -0,0 +1,644 @@
+# Defining a Custom Stack and Services
+
+## Background
+
+The Stack definitions can be found in the source tree at [/ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks). After you install the Ambari Server, the Stack definitions can be found at `/var/lib/ambari-server/resources/stacks`
+
+## Stack Properties
+
+The stack must contain or inherit a properties directory which contains two files: [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) and [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json). The properties directory must be at the root stack version level and must not be included in the other stack versions. This [directory](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) is new in Ambari 2.4.
+
+The stack_features.json contains a list of features that are included in Ambari and allows the stack to specify which versions of the stack include those features. The list of features is determined by the particular Ambari release. The reference list for a particular Ambari version can be found in [HDP/2.0.6/properties/stack_features.json](https://github.com/apache/ambari/blob/branch-2.4/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) in the branch for that Ambari release. Each feature has a name and description, and the stack can provide the minimum and maximum version where that feature is supported.
+
+```json
+{
+  "stack_features": [
+    {
+      "name": "snappy",
+      "description": "Snappy compressor/decompressor support",
+      "min_version": "2.0.0.0",
+      "max_version": "2.2.0.0"
+    },
+    ...
+  ]
+}
+```
+
+The stack_tools.json includes the name and location where the stack_selector and conf_selector tools are installed.
+
+```json
+{
+  "stack_selector": ["hdp-select", "/usr/bin/hdp-select", "hdp-select"],
+  "conf_selector": ["conf-select", "/usr/bin/conf-select", "conf-select"]
+}
+```
+
+For more information see the [Stack Properties](./stack-properties.md) wiki page.
+
+## Structure
+
+The structure of a Stack definition is as follows:
+
+```
+|_ stacks
+   |_ <stack_name>
+      |_ <stack_version>
+         metainfo.xml
+         |_ hooks
+         |_ repos
+            repoinfo.xml
+         |_ services
+            |_ <service_name>
+               metainfo.xml
+               metrics.json
+               |_ configuration
+                  {configuration files}
+               |_ package
+                  {files, scripts, templates}
+```
+
+## Defining a Service and Components
+
+**metainfo.xml**
+
+The `metainfo.xml` file in a Service describes the service, the components of the service and the management scripts to use for executing commands. A component of a service can be either a **MASTER**, **SLAVE** or **CLIENT** category.
+
+For each Component you specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The type is used to know how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands your component needs to support.
+
+For example, the YARN service describes its ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and its command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. That command script is **PYTHON**, and it implements the default lifecycle commands as Python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command **DECOMMISSION** is defined, which means there is also a **decommission** method in that Python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+## Using Stack Inheritance
+
+Stacks can _extend_ other Stacks in order to share command scripts and configurations. This reduces duplication of code across Stacks with the following:
+
+* define repositories for the child Stack
+* add new Services in the child Stack (not in the parent Stack)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, the **HDP 2.1 Stack _extends_ HDP 2.0.6 Stack** so only the changes applicable to **HDP 2.1 Stack** are present in that Stack definition. This extension is defined in the [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.1/metainfo.xml) for HDP 2.1 Stack:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.0.6</extends>
+</metainfo>
+```
+
+## Example: Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV" and add it to an existing Stack definition. This service includes MASTER, SLAVE and CLIENT components.
+
+### Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+```
+3. Browse to the newly created `SAMPLESRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is " **SAMPLESRV**", and it contains:
+
+ - one **MASTER** component " **SAMPLESRV_MASTER**"
+ - one **SLAVE** component " **SAMPLESRV_SLAVE**"
+ - one **CLIENT** component " **SAMPLESRV_CLIENT**"
+5. Next, let's create that command script. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts` that we designated in the service metainfo.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+```
+6. Browse to the scripts directory and create the `.py` command script files. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+### Install the Service (via Ambari Web "Add Services")
+
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Sample Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Sample Service" and click Next.
+
+4. Assign the "Sample Srv Master" and click Next.
+
+5. Select the hosts to install the "Sample Srv Client" and click Next.
+
+6. Once complete, the "My Sample Service" will be available Service navigation area.
+
+7. If you want to add the "Sample Srv Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+## Example: Implementing a Custom Client-only Service
+
+In this example, we will create a custom service called "TESTSRV", add it to an existing Stack definition and use the Ambari APIs to install/configure the service. This service is a CLIENT so it has two commands: install and configure.
+
+### Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV` that will contain the service definition for **TESTSRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+```
+3. Browse to the newly created `TESTSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTSRV</name>
+ <displayName>New Test Service</displayName>
+ <comment>A New Test Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TEST_CLIENT</name>
+ <displayName>New Test Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is " **TESTSRV**", and it contains one component, " **TEST_CLIENT**", of component category " **CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts` that we designated in the service metainfo.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+  def install(self, env):
+    print 'Install the client'
+  def configure(self, env):
+    print 'Configure the client'
+  def somethingcustom(self, env):
+    print 'Something custom'
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+### Adding Repository details in repoinfo.xml
+
+When adding a custom service, you may need to add repository details for the stack, especially if the service binaries are available through a separate repository. Additional `<repo>` entries can be added to the stack's `repoinfo.xml` file, as shown below:
+
+```xml
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://cust.service.lab.com/Services/centos6/1.1/myservices</baseurl>
+ <repoid>CUSTOM-1.1</repoid>
+ <reponame>CUSTOM</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.1</baseurl>
+ <repoid>HDP-2.0.6</repoid>
+ <reponame>HDP</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.17/repos/centos6</baseurl>
+ <repoid>HDP-UTILS-1.1.0.17</repoid>
+ <reponame>HDP-UTILS</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+### Install the Service (via the Ambari REST API)
+
+1. Add the Service to the Cluster.
+
+
+```
+POST
+/api/v1/clusters/MyCluster/services
+
+{
+  "ServiceInfo": {
+    "service_name": "TESTSRV"
+  }
+}
+```
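+
+The same call expressed as a curl command (the server address and admin credentials here are placeholders for your environment):
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
+  -d '{"ServiceInfo": {"service_name": "TESTSRV"}}' \
+  'http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/services'
+```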
+2. Add the Components to the Service. In this case, add TEST_CLIENT to TESTSRV.
+
+```
+POST
+/api/v1/clusters/MyCluster/services/TESTSRV/components/TEST_CLIENT
+```
+3. Install the component on all target hosts. For example, to install on `c6402.ambari.apache.org` and `c6403.ambari.apache.org`, first create the host_component resource on the hosts using POST.
+
+```
+POST
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+POST
+/api/v1/clusters/MyCluster/hosts/c6403.ambari.apache.org/host_components/TEST_CLIENT
+```
+4. Now have Ambari install the components on all hosts. In this single command, you are instructing Ambari to install all components related to the service. This calls the `install()` method in the command script on each host.
+
+
+```
+PUT
+/api/v1/clusters/MyCluster/services/TESTSRV
+
+{
+ "RequestInfo": {
+ "context": "Install Test Srv Client"
+ },
+ "Body": {
+ "ServiceInfo": {
+ "state": "INSTALLED"
+ }
+ }
+}
+```
+5. Alternatively, instead of installing all components at the same time, you can explicitly install each host component. In this example, we will explicitly install the TEST_CLIENT on `c6402.ambari.apache.org`:
+
+```
+PUT
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+{
+ "RequestInfo": {
+ "context":"Install Test Srv Client"
+ },
+ "Body": {
+ "HostRoles": {
+ "state":"INSTALLED"
+ }
+ }
+}
+```
+6. Use the following to configure the client on the host. This will end up calling the `configure()` method in the command script.
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+ "RequestInfo" : {
+ "command" : "CONFIGURE",
+ "context" : "Config Test Srv Client"
+ },
+ "Requests/resource_filters": [{
+ "service_name" : "TESTSRV",
+ "component_name" : "TEST_CLIENT",
+ "hosts" : "c6403.ambari.apache.org"
+ }]
+}
+```
+7. To see on which hosts the component is installed, use the following:
+
+```
+GET
+/api/v1/clusters/MyCluster/components/TEST_CLIENT
+```
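+
+As a concrete curl command (the server address and admin credentials are placeholders for your environment):
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X GET \
+  'http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/components/TEST_CLIENT?fields=host_components'
+```
+
+Similarly, the `SOMETHINGCUSTOM` command declared in the service's `metainfo.xml` can be invoked through the requests resource, in the same way as the CONFIGURE call above; a hypothetical sketch:
+
+```bash
+# Hypothetical: run the custom command on one host via the requests API
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{
+  "RequestInfo": {"command": "SOMETHINGCUSTOM", "context": "Run Something Custom"},
+  "Requests/resource_filters": [{
+    "service_name": "TESTSRV",
+    "component_name": "TEST_CLIENT",
+    "hosts": "c6402.ambari.apache.org"
+  }]
+}' 'http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/requests'
+```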
+
+### Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Test Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Test Service" and click Next.
+
+4. Select the hosts to install the "New Test Client" and click Next.
+
+5. Once complete, the "My Test Service" will be available in the Service navigation area.
+
+6. If you want to add the "New Test Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+
+## Example: Implementing a Custom Client-only Service (with Configs)
+
+In this example, we will create a custom service called "TESTCONFIGSRV" and add it to an existing Stack definition. This service is a CLIENT, so it has two commands: install and configure. The service also includes a configuration type, "test-config".
+
+### Create and Add the Service to the Stack
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV` that will contain the service definition for **TESTCONFIGSRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+```
+3. Browse to the newly created `TESTCONFIGSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTCONFIGSRV</name>
+ <displayName>New Test Config Service</displayName>
+ <comment>A New Test Config Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TESTCONFIG_CLIENT</name>
+ <displayName>New Test Config Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTCONFIGSRV**", and it contains one component "**TESTCONFIG_CLIENT**" that is of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts` that we designated in the service metainfo's `<commandScript>` element.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+  def install(self, env):
+    print 'Install the config client'
+  def configure(self, env):
+    print 'Configure the config client'
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now let's define a config type for this service. Create a directory for the configuration dictionary file `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration`.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+```
+8. Browse to the configuration directory and create the `test-config.xml` file. For example:
+
+```xml
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<configuration>
+ <property>
+ <name>some.test.property</name>
+ <value>this.is.the.default.value</value>
+ <description>This is a kool description.</description>
+ </property>
+</configuration>
+```
+9. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
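+
+Once the service is installed, values in the new `test-config` type can be changed like any other configuration. A hedged sketch using the cluster's desired_config API (the server address, credentials and the version tag are placeholders):
+
+```bash
+# Hypothetical: set a new value for some.test.property in a new test-config version
+curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{
+  "Clusters": {
+    "desired_config": {
+      "type": "test-config",
+      "tag": "version2",
+      "properties": {"some.test.property": "some.new.value"}
+    }
+  }
+}' 'http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster'
+```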
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/extensions.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/extensions.md
new file mode 100644
index 0000000..b89bc96
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/extensions.md
@@ -0,0 +1,208 @@
+# Extensions
+
+## Background
+Added in Ambari 2.4.
+
+An Extension is a collection of one or more custom services which are packaged together. Much like stacks, each extension has a name which needs to be unique in the cluster. It also has a version directory to distinguish different releases of the extension. Much like stack versions, which go in `/var/lib/ambari-server/resources/stacks` with `<stack_name>/<stack_version>` sub-directories, extension versions go in `/var/lib/ambari-server/resources/extensions` with `<extension_name>/<extension_version>` sub-directories.
+
+An extension can be linked to supported stack versions. Once an extension version has been linked to the currently installed stack version, the custom services contained in the extension version may be added to the cluster in the same manner as if they were actually contained in the stack version.
+
+Third party developers can release Extensions which can be added to a cluster.
+
+## Structure
+
+The structure of an Extension definition is as follows:
+
+```
+|_ extensions
+ |_ <extension_name>
+ |_ <extension_version>
+ |_ metainfo.xml
+ |_ services
+ |_ <service_name>
+ |_ metainfo.xml
+ |_ metrics.json
+ |_ configuration
+ |_ {configuration files}
+ |_ package
+ |_ {files, scripts, templates}
+```
+
+An extension version is similar to a stack version but it only includes the metainfo.xml and the services directory. This means that the alerts, kerberos, metrics, role command order, and widgets files are not supported at the extension version level and should instead be included at the service level. In addition, the repositories, hooks, configurations, and upgrades directories are not supported, although upgrade support can be added at the service level.
+
+## Extension Inheritance
+
+Extension versions can extend other Extension versions in order to share command scripts and configurations. This reduces code duplication across Extensions, allowing a child Extension version to:
+
+- add new Services in the child Extension version (not in the parent Extension version)
+- override command scripts of the parent Services
+- override configurations of the parent Services
+
+For example, **MyExtension 2.0** could extend **MyExtension 1.0** so only the changes applicable to the **MyExtension 2.0** extension are present in that Extension definition. This extension is defined in the metainfo.xml for **MyExtension 2.0**:
+
+```xml
+<metainfo>
+  <extends>1.0</extends>
+</metainfo>
+```
+
+## Supported Stack Versions
+
+**Each Extension Version must support one or more Stack Versions.** The Extension Version specifies the minimum Stack Version which it supports. This is included in the extension's metainfo.xml in the prerequisites section like so:
+
+```xml
+<metainfo>
+ <prerequisites>
+ <min-stack-versions>
+ <stack>
+ <name>HDP</name>
+ <version>2.4</version>
+ </stack>
+ <stack>
+ <name>OTHER</name>
+ <version>1.0</version>
+ </stack>
+ </min-stack-versions>
+ </prerequisites>
+</metainfo>
+
+```
+
+## Installing Extensions
+
+It is recommended to install extensions using [management packs](./management-packs.md). For more details see the [instructions on packaging custom services using extensions and management packs](custom-services.md).
+
+Once the extension version directory has been created under the resources/extensions directory with the required metainfo.xml file, you can restart ambari-server.
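+
+For example, to stage a hypothetical extension EXT/1.0 by hand and have Ambari pick it up:
+
+```bash
+# Hypothetical sketch: place the extension under the resources/extensions directory
+mkdir -p /var/lib/ambari-server/resources/extensions/EXT/1.0/services
+# copy the extension's metainfo.xml and service folders into EXT/1.0, then:
+ambari-server restart
+```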
+
+## Extension REST APIs
+
+You can query for extensions by calling REST APIs.
+
+### Get all extensions
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions'
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/",
+ "items" : [
+ {
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+ "Extensions" : {
+ "extension_name" : "EXT"
+ }
+ }
+ ]
+}
+```
+
+### Get extension
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT'
+
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+ "Extensions" : {
+ "extension_name" : "EXT"
+ },
+ "versions" : [
+ {
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0",
+ "Versions" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0"
+ }
+ }
+ ]
+}
+```
+
+### Get extension version
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT/versions/1.0'
+
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0/",
+ "Versions" : {
+ "extension-errors" : [],
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "parent_extension_version" : null,
+ "valid" : true
+ }
+}
+```
+
+## Extension Links
+
+An Extension Link is a link between a stack version and an extension version. Once an extension version has been linked to the currently installed stack version, the custom services contained in the extension version may be added to the cluster in the same manner as if they were actually contained in the stack version.
+
+It is only possible to link an extension version to a stack version if the stack version is supported by the extension version. The stack name must be specified in the prerequisites section of the extension's metainfo.xml and the stack version must be greater than or equal to the minimum version number specified.
+
+## Extension Link REST APIs
+
+You can retrieve, create, update and delete extension links by calling REST APIs.
+
+### Create Extension Link
+
+The following curl command will link an extension EXT/1.0 to the stack HDP/2.4:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"ExtensionLink": {"stack_name": "HDP", "stack_version": "2.4", "extension_name": "EXT", "extension_version": "1.0"}}' http://<server>:<port>/api/v1/links/
+```
+
+### Get All Extension Links
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/links'
+
+{
+ "href" : "http://<server>:<port>/api/v1/links/",
+ "items" : [
+ {
+ "href" : "http://<server>:<port>:8080/api/v1/links/1",
+ "ExtensionLink" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "link_id" : 1,
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+ }
+ ]
+}
+```
+
+### Get Extension Link
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/links/1'
+{
+ "href" : "http://<server>:<port>/api/v1/links/1",
+ "ExtensionLink" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "link_id" : 1,
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Delete Extension Link
+
+You must specify the ID of the Extension Link to be deleted:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://<server>:<port>/api/v1/links/<link_id>
+```
+
+### Update All Extension Links
+
+This will reread the stacks, extensions and services in order to make sure the state of the stack is up to date in memory:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT http://<server>:<port>/api/v1/links/
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/faq.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/faq.md
new file mode 100644
index 0000000..d19f838
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/faq.md
@@ -0,0 +1,29 @@
+# FAQ
+
+## **[STACK]/[SERVICE]/metainfo.xml**
+
+**If a component exists in the parent stack and is defined again in the child stack with just a few attributes, are these values just to override the parent's values, or is the whole component definition replaced? What about other elements in metainfo.xml -- which rules apply?**
+
+Ambari goes property by property and merges them from parent to child. If you remove a category from the child, for example, it will be inherited from the parent; that goes for pretty much all properties.
+
+So the question is how a property that exists in both parent and child is handled. In most cases the child value takes precedence over the parent's, and every property in the parent that is not explicitly deleted by the child is inherited. More specifically:
+
+
+* For config-dependencies, we take an all-or-nothing approach: if the property exists in the child, it and all of its children are used; otherwise they are taken from the parent.
+
+* The custom commands are merged based on names, such that the merged definition is a union of commands, with child commands of the same name overriding those from the parent.
+
+* Cardinality is overwritten by the child, or taken from the parent if the child has not provided one.
+
+You could look at this method for more details: `org.apache.ambari.server.api.util.StackExtensionHelper#mergeServices`
+
+For more information see the [Service Inheritance](./custom-services.md#Service%20Inheritance) wiki page.
+
+**If a component is missing in the new definition but is present in the parent, does it get inherited?**
+
+Generally yes.
+
+**Configuration dependencies for the service -- are they overwritten or merged?**
+
+Overwritten.
+
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/hooks.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/hooks.md
new file mode 100644
index 0000000..aa83f2a
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/hooks.md
@@ -0,0 +1,48 @@
+# Hooks
+
+A stack can add hooks that run at the following points during Ambari actions:
+
+* before ANY
+* before and after INSTALL
+* before RESTART
+* before START
+
+As mentioned in [Stack Inheritance](./stack-inheritance.md), the hooks directory, if defined in the current stack version, replaces the one from the parent stack version. This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The hooks directory should have 5 sub-directories:
+
+* after-INSTALL
+* before-ANY
+* before-INSTALL
+* before-RESTART
+* before-START
+
+Each hook directory can have 3 sub-directories:
+
+* files
+* scripts
+* templates
+
+The scripts directory is the main directory and must be supplied. It must contain a hook.py file. It may contain other Python scripts which initialize variables (like a params.py file) or other utility files containing functions used in the hook.py file.
+
+The files directory can contain any non-Python files, such as scripts, jar files or properties files.
+
+The templates folder can contain any required j2 files that are used to set up properties files.
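+
+For example, a before-START hook for a hypothetical stack MYSTACK/1.0 could be laid out like this:
+
+```bash
+# Hypothetical sketch: skeleton for a before-START hook
+mkdir -p /var/lib/ambari-server/resources/stacks/MYSTACK/1.0/hooks/before-START/{files,scripts,templates}
+touch /var/lib/ambari-server/resources/stacks/MYSTACK/1.0/hooks/before-START/scripts/hook.py
+```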
+
+The hook.py file should extend the Hook class defined in resource_management/libraries/script/hook.py. The naming convention is to name the hook class after the hook folder such as AfterInstallHook if the class is in the after-INSTALL folder. The hook.py file must define the hook(self, env) function. Here is an example hook:
+
+```python
+from resource_management.libraries.script.hook import Hook
+
+class AfterInstallHook(Hook):
+
+ def hook(self, env):
+ import params
+ env.set_params(params)
+ # Call any functions to set up the stack after install
+
+if __name__ == "__main__":
+ AfterInstallHook().execute()
+```
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/how-to-define-stacks-and-services.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/how-to-define-stacks-and-services.md
new file mode 100644
index 0000000..cf07b79
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/how-to-define-stacks-and-services.md
@@ -0,0 +1,1216 @@
+# How-To Define Stacks and Services
+
+Services managed by Ambari are defined in its _stacks_ folder.
+
+To define your own services and stacks to be managed by Ambari, follow the steps below.
+
+There is also an example you can follow on how to [create your custom stack and service](./defining-a-custom-stack-and-services.md).
+
+A stack is a collection of services. Multiple versions of a stack can be defined, each with its own set of services. Stacks in Ambari are defined in the [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folder, which after install can be found at **/var/lib/ambari-server/resources/stacks**.
+
+Services managed by a stack can be defined either in the [ambari-server/src/main/resources/common-services](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services) or [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folders. After install, these can be found at _/var/lib/ambari-server/resources/common-services_ and _/var/lib/ambari-server/resources/stacks_ respectively.
+
+> **Question: When do I define a service in the _common-services_ vs. _stacks_ folders?**
+One would define services in the [common-services](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services) folder if there is a possibility of the service being used in multiple stacks. For example, almost all stacks need the HDFS service - so instead of redefining HDFS in each stack, the one defined in common-services is referenced. Likewise, if a service is never going to be shared, it can be defined in the [stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folder. Basically, services defined in the stacks folder are used by containment, whereas the ones defined in common-services are used by reference.
+
+## Define Service
+
+Shown below is how to define a service in _common-services_ folder. The same approach can be taken when defining services in the _stacks_ folder, which will be discussed in the _Define Stack_ section.
+
+
+
+Services **MUST** provide the main _metainfo.xml_ file which provides important metadata about the service.
+
+Apart from that, other files can be provided to give more information about the service. More details about these files are provided below.
+
+A service may also inherit from either a previous stack version or common services. For more information see the [Service Inheritance](./stack-inheritance.md) page.
+
+### _metainfo.xml_
+
+In the _metainfo.xml_ service descriptor, one can first define the service and its components.
+
+Complete reference can be found in the [Writing metainfo.xml](./writing-metainfo.md) page.
+
+A good reference implementation is the [HDFS metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L27).
+
+> **Question: Is it possible to define multiple services in the same metainfo.xml?**
+Yes. Though it is possible, it is discouraged to define multiple services in the same service folder.
+
+YARN and MapReduce2 are services that are defined together in the [YARN folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0). Its [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml) defines both services.
+
+#### Scripts
+
+With the components defined, we need to provide scripts which can handle the various stages of the service and component's lifecycle.
+
+The scripts necessary to manage the service and its components are specified in the _metainfo.xml_ ([HDFS](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L35)).
+Each of these scripts should extend the [Script](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-common/src/main/python/resource_management/libraries/script/script.py) class which provides useful methods. Example: [NameNode script](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L68)
+
+These scripts should be provided in the _package/scripts_ folder, described in the table below.
+
+Folder | Purpose
+-------|--------
+**package/scripts** | Contains scripts invoked by Ambari. These scripts are loaded into the execution path with the correct environment.<br></br>Example: [HDFS](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts)
+**package/files** | Contains files used by above scripts. Generally these are other scripts (bash, python, etc.) invoked as a separate process.<br></br>Example: [checkWebUI.py](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/files/checkWebUI.py) is run in HDFS service-check to determine if Journal Nodes are available
+**package/templates** | Template files used by above scripts to generate files on managed hosts. These are generally configuration files required by the service to operate.<br></br>Example: [exclude_hosts_list.j2](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/templates/exclude_hosts_list.j2) which is used by scripts to generate _/etc/hadoop/conf/dfs.exclude_
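+
+As a minimal sketch, the folder layout above for a hypothetical service MYSERVICE could be created like so:
+
+```bash
+# Hypothetical sketch: package folder layout for a service named MYSERVICE
+mkdir -p MYSERVICE/package/{scripts,files,templates}
+```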
+
+#### Python
+
+Ambari by default supports Python scripts for management of service and components.
+
+Component scripts should extend `resource_management.Script` class and provide methods required for that component's lifecycle.
+
+Taken from the page on [how to create custom stack](./defining-a-custom-stack-and-services.md), the following methods are needed for MASTER, SLAVE and CLIENT components to go through their lifecycle.
+
+```python
+import sys
+from resource_management import Script
+class Master(Script):
+  def install(self, env):
+    print 'Install the Sample Srv Master'
+  def stop(self, env):
+    print 'Stop the Sample Srv Master'
+  def start(self, env):
+    print 'Start the Sample Srv Master'
+  def status(self, env):
+    print 'Status of the Sample Srv Master'
+  def configure(self, env):
+    print 'Configure the Sample Srv Master'
+
+if __name__ == "__main__":
+  Master().execute()
+```
+
+```python
+import sys
+from resource_management import Script
+class Slave(Script):
+  def install(self, env):
+    print 'Install the Sample Srv Slave'
+  def stop(self, env):
+    print 'Stop the Sample Srv Slave'
+  def start(self, env):
+    print 'Start the Sample Srv Slave'
+  def status(self, env):
+    print 'Status of the Sample Srv Slave'
+  def configure(self, env):
+    print 'Configure the Sample Srv Slave'
+
+if __name__ == "__main__":
+  Slave().execute()
+```
+
+```python
+import sys
+from resource_management import Script
+class SampleClient(Script):
+  def install(self, env):
+    print 'Install the Sample Srv Client'
+  def configure(self, env):
+    print 'Configure the Sample Srv Client'
+
+if __name__ == "__main__":
+  SampleClient().execute()
+```
+
+Ambari provides helpful Python libraries below which are useful in writing service scripts. For complete reference on these libraries visit the [Ambari Python Libraries](https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Python+Libraries) page.
+
+* resource_management
+* ambari_commons
+* ambari_simplejson
+
+##### OS Variant Scripts
+
+If the service is supported on multiple OSes which require separate scripts, the base _resource_management.Script_ class can be extended with different _@OsFamilyImpl()_ annotations.
+
+This allows for the separation of only OS specific methods of the component.
+
+Example: [NameNode default script](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L126), [NameNode Windows script](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L346).
+
+> **Examples**
+NameNode [Start](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py#L93), [Stop](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py#L208).
+
+DataNode [Start and Stop](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py#L68).
+
+HDFS [configurations persistence](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py#L31)
+
+#### Custom Actions
+
+Sometimes services need to perform actions unique to that service which go beyond the default actions provided by Ambari (like _install_ , _start, stop, configure,_ etc.).
+
+Services can define such actions and expose them to the user in UI so that they can be easily invoked.
+
+As an example, we show the _Rebalance HDFS_ custom action implemented by HDFS.
+
+##### Stack Changes
+
+1. [Define custom command inside the _customCommands_ section](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L49) of the component in _metainfo.xml_.
+
+2. [Implement method with same name as custom command](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L273) in script referenced from _metainfo.xml_.
+
+ 1. If custom command does not have OS variants, it can be implemented in the same class that extends _resource_management.Script_
+ 2. If there are OS variants, different methods can be implemented in each class annotated by _@OsFamilyImpl(os_family=...)_. [Default rebalancehdfs](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L273),[Windows rebalancehdfs](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L354).
+
+This gives the backend the ability to run the script on all managed hosts where the service is installed.
+
+##### UI Changes
+
+No UI changes are necessary to see the custom action on the host page.
+
+The action should show up in the host-component's list of actions. Any master-component actions will automatically show up on the service's action menu.
+
+When the action is clicked in UI, the POST call is made automatically to trigger the script defined above.
+
+> **Question: How do I provide my own label and icon for the custom action in UI?**
+In Ambari UI, add your component action to the _App.HostComponentActionMap_ object with custom icon and name. Ex: [REBALANCEHDFS](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-web/app/models/host_component.js#L351).
+
+### Configuration
+
+Configuration files for a service should be placed by default in the [_configuration_](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration) folder.
+
+If a differently named folder has to be used, the [_<configuration-dir>_](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml#L249) element can be used in _metainfo.xml_ to point to that folder.
+
+The important sections of the metainfo.xml with regards to configurations are:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <displayName>HDFS</displayName>
+ <comment>Apache Hadoop Distributed File System</comment>
+ <version>2.1.0.2.0</version>
+ <components>
+ ...
+ <component>
+ <name>HDFS_CLIENT</name>
+ ...
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>hdfs-site.xml</fileName>
+ <dictionaryName>hdfs-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>xml</type>
+ <fileName>core-site.xml</fileName>
+ <dictionaryName>core-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>log4j.properties</fileName>
+ <dictionaryName>hdfs-log4j,yarn-log4j</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>hadoop-env.sh</fileName>
+ <dictionaryName>hadoop-env</dictionaryName>
+ </configFile>
+ </configFiles>
+ ...
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ </configuration-dependencies>
+ </component>
+ ...
+ </components>
+
+ <configuration-dir>configuration</configuration-dir>
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ <config-type>hadoop-env</config-type>
+ <config-type>hadoop-policy</config-type>
+ <config-type>hdfs-log4j</config-type>
+ <config-type>ranger-hdfs-plugin-properties</config-type>
+ <config-type>ssl-client</config-type>
+ <config-type>ssl-server</config-type>
+ <config-type>ranger-hdfs-audit</config-type>
+ <config-type>ranger-hdfs-policymgr-ssl</config-type>
+ <config-type>ranger-hdfs-security</config-type>
+ <config-type>ams-ssl-client</config-type>
+ </configuration-dependencies>
+ </service>
+ </services>
+</metainfo>
+```
+
+* **config-type** - String representing a group of configurations. Example: _core-site, hdfs-site, yarn-site_, etc. When configurations are saved in Ambari, they are persisted within a version of config-type which is immutable. If you change and save HDFS core-site configs 4 times, you will have 4 versions of config-type core-site. Also, when a service's configs are saved, only the changed config-types are updated (see the example after this list).
+
+* **configFiles** - lists the config-files handled by the enclosing component
+* **configFile** - represents one config-file of a certain type
+
+ - **type** - type of file based on which contents are generated differently
+
+ + **xml** - XML file generated in Hadoop friendly format. Ex:[hdfs-site.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml)
+ + **env** - Generally used for scripts where the content value is used as a template. The template has config-tags whose values are populated at runtime during file generation. Ex:[hadoop-env.sh](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml)
+ + **properties** - Generates property files where entries are in key=value format. Ex:[falcon-runtime.properties](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-runtime.properties.xml)
+ - **dictionaryName** - Name of the config-type as which key/values of this config file will be stored
+* **configuration-dependencies** - Lists the config-types that this component or service depends on. One of the implications of this dependency is that whenever the config-type is updated, Ambari automatically marks the component or service as requiring restart. From the code section above, whenever _core-site_ is updated, both the HDFS service as well as the HDFS_CLIENT component will be marked as requiring restart.
+
+* **configuration-dir** - Directory containing the files listed in _configFiles_. Optional. Default value is _configuration_.
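+
+The versioning of config-types described above can be observed directly through the API; a hedged example (the cluster name, server address and credentials are placeholders):
+
+```bash
+# List all persisted versions (tags) of the core-site config-type
+curl -u admin:admin -H 'X-Requested-By: ambari' -X GET \
+  'http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/configurations?type=core-site'
+```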
+
+#### Adding new configs in a config-type
+
+There are a number of different parameters that can be specified to a config item when it is added to a config-type. These have been covered [here](https://cwiki.apache.org/confluence/display/AMBARI/Configuration+support+in+Ambari).
+
+#### UI - Categories
+
+Configurations defined above show up in the service's _Configs_ page.
+
+To customize categories and ordering of configurations in UI, the following files have to be updated.
+
+**Create Category** - Update the _ [ambari-web/app/models/stack_service.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/models/stack_service.js#L226)_ file to add your own service, along with your new categories.
+
+**Use Category** - To place configs inside a defined category, and specify an order in which configs are placed, add configs to [ambari-web/app/data/HDP2/site_properties.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2/site_properties.js) file. In this file one can specify the category to use, and the index where a config should be placed. The stack folders in [ambari-web/app/data](https://github.com/apache/ambari/tree/trunk/ambari-web/app/data) are hierarchical and inherit from previous versions. The mapping of configurations into sections is defined here. Example [Hive Categories](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2.2/hive_properties.js), [Tez Categories](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2.2/tez_properties.js).
+
+#### UI - Enhanced Configs
+
+The _Enhanced Configs_ feature makes it possible for service providers to customize their service's configs to a great degree and determine which configs are prominently shown to the user, without making any UI code changes. Customization includes providing a service friendly layout, better controls (sliders, combos, lists, toggles, spinners, etc.), better validation (minimum, maximum, enums), automatic unit conversion (MB, GB, seconds, milliseconds, etc.), configuration dependencies and improved dynamic recommendations of default values.
+
+A service provider can accomplish all the above just by changing their service definition in the _stacks_ folder.
+
+Read more on the [Enhanced Configs](https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Configs) page.
+
+### Alerts
+
+Each service is capable of defining which alerts Ambari should track by providing an [alerts.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/alerts.json) file.
+
+Read more about Ambari Alerts framework [in the Alerts wiki page](https://cwiki.apache.org/confluence/display/AMBARI/Alerts) and the alerts.json format in the [Alerts definition documentation](https://github.com/apache/ambari/blob/branch-2.1/ambari-server/docs/api/v1/alert-definitions.md).
+
+### Kerberos
+
+Ambari is capable of enabling and disabling Kerberos for a cluster. To inform Ambari of the identities and configurations to be used for the service and its components, each service can provide a _kerberos.json_ file.
+
+Read more about Kerberos support in the _ [Automated Kerberization](https://cwiki.apache.org/confluence/display/AMBARI/Automated+Kerberizaton)_ wiki page and the Kerberos descriptor in the [Kerberos Descriptor documentation](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/security/kerberos/kerberos_descriptor.md).
+
+### Metrics
+
+Ambari provides the [Ambari Metrics System ("AMS")](https://cwiki.apache.org/confluence/display/AMBARI/Metrics) service for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+Each service can define which metrics AMS should collect and provide by defining a [metrics.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metrics.json) file.
+
+You can read about the metrics.json file format in the [Stack Defined Metrics](https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics) page.
+
+### Quick Links
+
+A service can add a list of quick links to the Ambari web UI by adding metainfo to a text file following a predefined JSON format. Ambari server parses the quick link JSON file and provides its content to the UI, so that the Ambari web UI can calculate quick link URLs based on that information and populate the quick links drop-down list accordingly.
+
+Read more about quick links JSON file design in the [Quick Links](../quick-links.md) page.
+
+### Widgets
+
+Each service can define which widgets and heatmaps show up by default on the service summary page by defining a [widgets.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/widgets.json) file.
+
+You can read more about the widgets descriptor in the [Enhanced Service Dashboard](https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard) page.
+
+### Role Command Order
+
+From Ambari 2.2, each service can define its own role command order by including the [role_command_order.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json) file in its service folder. The service should only specify the relationship of its components to other components. In other words, if a service only includes COMP_X, it should only list dependencies related to COMP_X. If when COMP_X starts it is dependent on the NameNode start and when the NameNode stops it should wait for COMP_X to stop, the following would be included in the role command order:
+
+**Example service role_command_order.json**
+```json
+"COMP_X-START": ["NAMENODE-START"],
+ "NAMENODE-STOP": ["COMP_X-STOP"]
+```
+
+The entries in the service's role command order will be merged with the role command order defined in the stack. For example, since the stack already has a dependency for NAMENODE-STOP, in the example above COMP_X-STOP would be added to the rest of the NAMENODE-STOP dependencies and in addition the COMP_X-START dependency on NAMENODE-START would just be added as a new dependency.
+
+For more details on role command order, see the [Role Command Order](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-RoleCommandOrder) section below.
+
+### Service Advisor
+
+From Ambari 2.4, each service can choose to define its own service advisor rather than define the details of its configuration and layout in the stack advisor. This is particularly useful for custom services which are not defined in the stack. Ambari provides the _Service Advisor_ capability where a service can write a Python script named _service-advisor.py_ in their service folder. This folder can be in the stack's services directory where the service is defined or can be inherited from the service definition in common-services or elsewhere. Example: [common-services/HAWQ/2.0.0](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0).
+
+Unlike the stack-advisor scripts, the service-advisor scripts do not automatically extend the parent service's service-advisor scripts. The service-advisor script needs to explicitly extend its parent service's service-advisor script. The following code sample shows how you would refer to a parent's service_advisor.py. In this case it is extending the root service_advisor.py file in the resources/stacks directory.
+
+**Sample service-advisor.py file inheritance**
+```python
+import imp
+import os
+import traceback
+
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+STACKS_DIR = os.path.join(SCRIPT_DIR, '../../../stacks/')
+PARENT_FILE = os.path.join(STACKS_DIR, 'service_advisor.py')
+
+try:
+  with open(PARENT_FILE, 'rb') as fp:
+    service_advisor = imp.load_module('service_advisor', fp, PARENT_FILE, ('.py', 'rb', imp.PY_SOURCE))
+except Exception as e:
+  traceback.print_exc()
+  print "Failed to load parent"
+
+class HAWQ200ServiceAdvisor(service_advisor.ServiceAdvisor):
+  pass  # service-specific recommendations and validations are implemented here
+```
+
+Like the stack advisors, service advisors provide information on 4 important aspects:
+
+1. Recommend layout of the service on cluster
+2. Recommend service configurations
+3. Validate layout of the service on cluster
+4. Validate service configurations
+
+By providing the service-advisor.py file, one can control dynamically each of the above for the service.
+
+The main interface for the service-advisor scripts contains documentation on how each of the above are called, and what data is provided.
+
+```python
+class ServiceAdvisor(DefaultStackAdvisor):
+"""
+ Abstract class implemented by all service advisors.
+
+"""
+
+"""
+ If any components of the service should be colocated with other services,
+ this is where you should set up that layout. Example:
+
+ # colocate HAWQSEGMENT with DATANODE, if no hosts have been allocated for HAWQSEGMENT
+ hawqSegment = [component for component in serviceComponents if component["StackServiceComponents"]["component_name"] == "HAWQSEGMENT"][0]
+ if not self.isComponentHostsPopulated(hawqSegment):
+ for hostName in hostsComponentsMap.keys():
+ hostComponents = hostsComponentsMap[hostName]
+ if {"name": "DATANODE"} in hostComponents and {"name": "HAWQSEGMENT"} not in hostComponents:
+ hostsComponentsMap[hostName].append( { "name": "HAWQSEGMENT" } )
+ if {"name": "DATANODE"} not in hostComponents and {"name": "HAWQSEGMENT"} in hostComponents:
+ hostComponents.remove({"name": "HAWQSEGMENT"})
+"""
+ def colocateService(self, hostsComponentsMap, serviceComponents):
+ pass
+
+"""
+ Any configuration recommendations for the service should be defined in this function.
+
+ This should be similar to any of the recommendXXXXConfigurations functions in the stack_advisor.py
+ such as recommendYARNConfigurations().
+
+"""
+ def getServiceConfigurationRecommendations(self, configurations, clusterSummary, services, hosts):
+ pass
+
+"""
+ Returns an array of Validation objects about issues with the hostnames to which components are assigned.
+
+ This should detect validation issues which are different than those the stack_advisor.py detects.
+
+ The default validations are in stack_advisor.py getComponentLayoutValidations function.
+
+"""
+ def getServiceComponentLayoutValidations(self, services, hosts):
+ return []
+
+"""
+ Any configuration validations for the service should be defined in this function.
+
+ This should be similar to any of the validateXXXXConfigurations functions in the stack_advisor.py
+ such as validateHDFSConfigurations.
+
+"""
+ def getServiceConfigurationsValidationItems(self, configurations, recommendedDefaults, services, hosts):
+ return []
+```
+
+#### **Examples**
+
+* [Service Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51)
+* [HAWQ 2.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0/service_advisor.py)
+* [PXF 3.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/PXF/3.0.0/service_advisor.py)
+
+### Service Upgrade
+
+From Ambari 2.4, each service can now define its upgrade within its service definition. This is particularly useful for custom services which no longer need to modify the stack's upgrade-packs in order to integrate themselves into the [cluster upgrade](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-StackUpgrades).
+
+
+Each service can define _upgrade-packs_, which are XML files describing the upgrade process of that particular service and how the upgrade pack relates to the overall stack upgrade-packs. These _upgrade-pack_ XML files are placed in the service's _upgrades/_ folder in separate sub-folders specific to the stack-version they are meant to extend. Some examples of this can be seen in the testing code.
+
+#### Examples
+
+* [Upgrades folder](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/)
+* [Upgrade-pack XML](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml)
+
+Each upgrade-pack that the service defines should match the file name of the upgrade-pack defined by a particular stack version. For example, in the testing code, HDP 2.2.0 had an [upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.2.0/upgrades/upgrade_test_15388.xml) upgrade-pack. The HDFS service defined an extension to that upgrade pack in [HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml). In this case the upgrade-pack was defined in the HDP/2.0.5 stack. The upgrade-pack is an extension to HDP/2.2.0 because it is defined in the upgrades/HDP/2.2.0 directory. Finally, the name of the service's extension to the upgrade-pack, upgrade_test_15388.xml, matches the name of the upgrade-pack in HDP/2.2.0/upgrades.
+
+The file format for the service is much the same as that of the stack. The target, target-stack and type attributes should all be the same as the stack's upgrade-pack. The service is able to add its own prerequisite checks.
+
+**General Attributes and Prerequisite Checks**
+```xml
+<upgrade xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <target>2.4.*</target>
+ <target-stack>HDP-2.4.0</target-stack>
+ <type>ROLLING</type>
+ <prerequisite-checks>
+ <check>org.apache.ambari.server.checks.FooCheck</check>
+ </prerequisite-checks>
+```
+
+The order section of the upgrade-pack, consists of group elements just like the stack's upgrade-pack. The key difference is defining how these groups relate to groups in the stack's upgrade pack or other service upgrade-packs. In the first example we are referencing the PRE_CLUSTER group and adding a new execute-stage for the service FOO. The entry is supposed to be added after the execute-stage for HDFS based on the `<add-after-group-entry>` tag.
+
+```xml
+<order>
+ <group xsi:type="cluster" name="PRE_CLUSTER" title="Pre {{direction.text.proper}}">
+ <add-after-group-entry>HDFS</add-after-group-entry>
+ <execute-stage service="FOO" component="BAR" title="Backup FOO">
+ <task xsi:type="manual">
+ <message>Back FOO up.</message>
+ </task>
+ </execute-stage>
+ </group>
+
+```
+
+The same syntax can be used to order other sections like service check priorities and group services.
+
+```xml
+<group name="SERVICE_CHECK1" title="All Service Checks" xsi:type="service-check">
+ <add-after-group-entry>ZOOKEEPER</add-after-group-entry>
+ <priority>
+ <service>HBASE</service>
+ </priority>
+</group>
+
+<group name="CORE_MASTER" title="Core Masters">
+ <add-after-group-entry>YARN</add-after-group-entry>
+ <service name="HBASE">
+ <component>HBASE_MASTER</component>
+ </service>
+</group>
+```
+
+It is also possible to add new groups and order them after other groups in the stack's upgrade-packs. In the following example, we are adding the FOO group after the HIVE group using the add-after-group tag.
+
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO">
+ <component>BAR</component>
+ </service>
+</group>
+```
+
+You could also include both the add-after-group and the add-after-group-entry tags in the same group. This will create a new group if it doesn't already exist and will order it after the add-after-group's group name. The add-after-group-entry will determine the internal ordering of that group's services, priorities or execute stages.
+
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <add-after-group-entry>FOO</add-after-group-entry>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO2">
+ <component>BAR2</component>
+ </service>
+</group>
+```
+
+The processing section of the upgrade-pack remains the same as what it would be in the stack's upgrade-pack.
+
+```xml
+ <processing>
+ <service name="FOO">
+ <component name="BAR">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ <component name="BAR2">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ </service>
+ </processing>
+```
+
+## Define Stack
+
+A stack is a versioned collection of services. Each stack is defined in a folder in the [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) source folder. Once installed, these stack definitions are available on the ambari-server machine at _/var/lib/ambari-server/resources/stacks_.
+
+Each stack folder contains one sub-folder per version of the stack. Some of these stack-versions are active while some are not. Each stack-version includes services which are either referenced from _common-services_, or defined inside the stack-version's _services_ folder.
+
+
+
+Example: [HDP stack](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP). [HDP-2.4 stack version](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.4).
+
+### Stack-Version Descriptor
+
+Each stack-version should provide a _metainfo.xml_ (Example: [HDP-2.3](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/metainfo.xml), [HDP-2.4](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.4/metainfo.xml)) descriptor file which describes the following about this stack-version:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.3</extends>
+ <minJdk>1.7</minJdk>
+ <maxJdk>1.8</maxJdk>
+</metainfo>
+```
+
+* **versions/active** - Whether this stack-version is still available for install. If not available, this version will not show up in UI during install.
+
+* **extends** - The stack-version in this stack that is being extended. Extended stack-versions inherit services along with almost all aspects of the parent stack-version.
+
+* **minJdk** - Minimum JDK with which this stack-version is supported. Users are warned during installer wizard if the JDK used by Ambari is lower than this version.
+
+* **maxJdk** - Maximum JDK with which this stack-version is supported. Users are warned during installer wizard if the JDK used by Ambari is greater than this version.
+
+### Stack Properties
+
+The stack must contain or inherit a properties directory which contains two files: [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) and [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json). This [directory](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) is new in Ambari 2.4.
+
+The stack_features.json file contains a list of features that are included in Ambari and allows the stack to specify which versions of the stack include those features. The list of features is determined by the particular Ambari release. The reference list for a particular Ambari version should be found in the [HDP/2.0.6/properties/stack_features.json](https://github.com/apache/ambari/blob/branch-2.4/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) in the branch for that Ambari release. Each feature has a name and description, and the stack can provide the minimum and maximum version where that feature is supported.
+
+```json
+{
+  "stack_features": [
+    {
+      "name": "snappy",
+      "description": "Snappy compressor/decompressor support",
+      "min_version": "2.0.0.0",
+      "max_version": "2.2.0.0"
+    },
+    ...
+  ]
+}
+```
+
+The stack_tools.json includes the name and location where the stack_selector and conf_selector tools are installed.
+
+```json
+{
+  "stack_selector": ["hdp-select", "/usr/bin/hdp-select", "hdp-select"],
+  "conf_selector": ["conf-select", "/usr/bin/conf-select", "conf-select"]
+}
+```
+
+
+Any custom stack must include these two JSON files. For further information see the [Stack Properties](./stack-properties.md) wiki page.
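+
+A quick way to confirm a custom stack provides (or inherits) them, using a hypothetical stack path:
+
+```bash
+# Both files must be present in the stack's properties directory
+ls /var/lib/ambari-server/resources/stacks/MYSTACK/1.0/properties
+# expected: stack_features.json  stack_tools.json
+```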
+
+### Services
+
+Each stack-version includes services which are either referenced from _common-services_, or defined inside the stack-version's _services_ folder.
+
+Services are defined in _common-services_ if they will be shared across multiple stacks. If they will never be shared, then they can be defined inside the stack-version.
+
+#### Reference _common-services_
+
+To reference a service from common-services, the service descriptor file should use the _<extends>_ element. (Example: [HDFS in HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/metainfo.xml))
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <extends>common-services/HDFS/2.1.0.2.0</extends>
+ </service>
+ </services>
+</metainfo>
+```
+
+#### Define Service
+
+In exactly the same format as services defined in _common-services_, a new service can be defined inside the _services_ folder.
+
+Examples:
+
+* [HDFS in BIGTOP-0.8](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/BIGTOP/0.8/services/HDFS)
+* [GlusterFS in HDP-2.3.GlusterFS](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS)
+
+#### Extend Service
+
+When a stack-version extends another stack-version, its services inherit all details of the corresponding parent services. A child stack is also free to override or remove any portion of an inherited service definition.
+
+Examples:
+
+* HDP-2.3 / HDFS - [Adding NFS_GATEWAY component, updating service version and OS specific packages](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/HDFS/metainfo.xml)
+* HDP-2.2 / Storm - [Deleting STORM_REST_API component, updating service version and OS specific packages](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.2/services/STORM/metainfo.xml)
+* HDP-2.3 / YARN - [Deleting YARN node-label configuration from capacity-scheduler.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/YARN/configuration/capacity-scheduler.xml)
+* HDP-2.3 / Kafka - [Add Kafka Broker Process alert](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/KAFKA/alerts.json)
+
+### Role Command Order
+
+_**Role**_ is another name for **Component** (Ex: NAMENODE, DATANODE, RESOURCEMANAGER, HBASE_MASTER, etc.)
+
+As the name implies, it is possible to tell Ambari about the order in which commands should be run for the components defined in your stack.
+
+For example: "_ZooKeeper Server_ should be started before starting _NameNode_". Or "_HBase Master_ should be started only after _NameNode_ and _DataNodes_ are started".
+
+This can be specified by including the [role_command_order.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json) file in the stack-version folder.
+
+#### Format
+
+Specified in JSON format, the file contains a JSON object whose top-level keys are either section names or comments. Ex: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json).
+
+Inside each section object, the key describes the dependent component-action, and the value lists the component-actions which should be done before it.
+
+```json
+{
+  "_comment": "Section 1 comment",
+  "section_name_1": {
+    "_comment": "Section containing role command orders",
+    "<COMPONENT-COMMAND>": ["<COMPONENT-COMMAND>", "<COMPONENT-COMMAND>"],
+    "<COMPONENT-COMMAND>": ["<COMPONENT-COMMAND>"],
+    ...
+  },
+  "_comment": "Next section comment",
+  ...
+}
+```
+
+#### Sections
+
+Ambari uses only the following sections:
+
+Section Name | When Used
+-------------|---------------
+general_deps | Command orders are applied in all situations
+optional_glusterfs | Command orders are applied when the cluster has an instance of the GLUSTERFS service
+optional_no_glusterfs | Command orders are applied when the cluster does not have an instance of the GLUSTERFS service
+namenode_optional_ha | Command orders are applied when the HDFS service is installed and a JOURNALNODE component exists (HDFS HA is enabled)
+resourcemanager_optional_ha | Command orders are applied when the YARN service is installed and multiple RESOURCEMANAGER host-components exist (YARN HA is enabled)
+
+#### Commands
+
+The commands currently supported by Ambari are:
+
+* INSTALL
+* UNINSTALL
+* START
+* RESTART
+* STOP
+* EXECUTE
+* ABORT
+* UPGRADE
+* SERVICE_CHECK
+* CUSTOM_COMMAND
+* ACTIONEXECUTE
+
+#### Examples
+
+Role Command Order | Explanation
+-------------------|---------------------
+`"HIVE_METASTORE-START": ["MYSQL_SERVER-START", "NAMENODE-START"]` | Start the MySQL and NameNode components before starting Hive Metastore
+`"MAPREDUCE_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START"]` | MapReduce service check needs ResourceManager and NodeManagers started
+`"ZOOKEEPER_SERVER-STOP" : ["HBASE_MASTER-STOP", "HBASE_REGIONSERVER-STOP", "METRICS_COLLECTOR-STOP"]` | Before stopping ZooKeeper servers, make sure HBase Masters, HBase RegionServers and the AMS Metrics Collector are stopped
+
+### Repositories
+
+Each stack-version can provide the location of package repositories to use by including a _repos/repoinfo.xml_ (Ex: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/repos/repoinfo.xml)).
+The _repoinfo.xml_ file contains repositories grouped by operating system. Each OS specifies a list of repositories that are shown to the user when the stack-version is selected for install.
+
+These repositories are used in conjunction with the [_packages_ defined in a service's metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L161) to install appropriate bits on the system.
+
+```xml
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.1</baseurl>
+ <repoid>HDP-2.0.6</repoid>
+ <reponame>HDP</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.17/repos/centos6</baseurl>
+ <repoid>HDP-UTILS-1.1.0.17</repoid>
+ <reponame>HDP-UTILS</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+**baseurl** - URL of the RPM repository where the provided _repoid_ can be found
+**repoid** - ID of the repo hosted at _baseurl_
+**reponame** - Display name for the repo being used
+
+#### Latest Builds
+
+Though a repository base URL is capable of providing updates to a particular repo, it has to be defined at build time. This can become an issue later if the repository changes location or update builds are hosted at a different site.
+
+For such scenarios, a stack-version can provide the location of a JSON file which can provide details of other repo URLs to use.
+
+Example: [HDP-2.3's repoinfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/repos/repoinfo.xml) references such a JSON file, which then points to alternate repository URLs where the latest builds can be found:
+
+```json
+{
+ ...
+
+ "HDP-2.3":{
+ "latest":{
+ "centos6":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos6/2.x/BUILDS/2.3.6.0-3586/",
+ "centos7":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/2.x/BUILDS/2.3.6.0-3586/",
+ "debian6":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/debian6/2.x/BUILDS/2.3.6.0-3586/",
+ "debian7":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/debian7/2.x/BUILDS/2.3.6.0-3586/",
+ "suse11":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/suse11sp3/2.x/BUILDS/2.3.6.0-3586/",
+ "ubuntu12":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu12/2.x/BUILDS/2.3.6.0-3586/",
+ "ubuntu14":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu14/2.x/BUILDS/2.3.6.0-3586/"
+ }
+ },
+ ...
+
+}
+```
+
+### Hooks
+
+A stack-version could have very basic and common instructions that need to be run before or after certain Ambari commands, across all services.
+
+Instead of duplicating this code across all service scripts and asking users to worry about it, Ambari provides the _Hooks_ ability, where common before and after code can be pulled into the _hooks_ folder. (Ex: [HDP-2.0.6](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks))
+
+
+
+#### Command Sub-Folders
+
+The general naming pattern for hooks sub-folders is `"<before|after>-<ANY|<CommandName>>"`.
+This means that the _scripts/hook.py_ file under the sub-folder is run either before or after the command.
+
+**Examples:**
+
+Sub-Folder | Purpose | Example
+-----------|---------|------------
+before-START | Hook script called before START command is run on any component of the stack-version. | [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py#L30)<br></br>sets up Hadoop log and pid directories<br></br>creates Java Home symlink<br></br>Creates /etc/hadoop/conf/topology_script.py<br></br>etc.
+before-INSTALL | Hook script called before installing any component of the stack-version | [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py#L33)<br></br>Creates repo files in /etc/yum.repos.d<br></br>Installs basic packages like curl, unzip, etc.
+
+Based on the commands currently supported by Ambari, the following sub-folders can be created as needed:
+
+Prefix | Command
+-------|---------------------
+before, after | INSTALL, UNINSTALL, START, RESTART, STOP, EXECUTE, ABORT, UPGRADE, SERVICE_CHECK, `<custom_command>` (custom commands specified by the user, like the DECOMMISSION or REBALANCEHDFS commands specified by HDFS)
+
+The _scripts/hook.py_ script should import the [resource_management.libraries.script.hook](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/script/hook.py) module and extend the Hook class:
+
+```python
+from resource_management.libraries.script.hook import Hook
+
+class CustomHook(Hook):
+  def hook(self, env):
+    # Do custom work
+    pass
+
+if __name__ == "__main__":
+  CustomHook().execute()
+```
+
+### Configurations
+
+Though most configurations are set at the service level, there can be configurations which apply across all services to indicate the state of the cluster installed with this stack.
+
+For example, things like ["is security enabled?"](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml#L25), ["what user runs smoke tests?"](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml#L46) etc.
+
+Such configurations can be defined in the [configuration folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration) of the stack. They are available for access just like the service-level configs.
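+
+For instance, a service's params.py can read a stack-level config exactly the way it reads its own configs. The sketch below follows the common pattern seen in Ambari command scripts; treat the exact keys as illustrative:
+
+```python
+# Runs inside an Ambari command script, where the resource_management
+# library is available on the agent.
+from resource_management.libraries.script.script import Script
+from resource_management.libraries.functions.default import default
+
+config = Script.get_config()
+
+# Stack-level configs such as cluster-env are accessed like any other config type:
+security_enabled = config['configurations']['cluster-env']['security_enabled']
+smoke_user = config['configurations']['cluster-env']['smokeuser']
+
+# default() guards against keys that may be absent on some stack versions
+ignore_groups = default('/configurations/cluster-env/ignore_groupsusers_create', False)
+```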
+
+#### Stack Advisor
+
+With each stack containing multiple complex services, it becomes necessary to dynamically determine how the services are laid out on the cluster and to determine the values of certain configurations.
+
+Ambari provides the _Stack Advisor_ capability where stacks can write a Python script named _stack_advisor.py_ in the _services/_ folder. Example: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py).
+
+Stack-advisor scripts automatically extend the parent stack-version's stack-advisor scripts. This allows newer stack-versions to change behavior without affecting earlier behavior.
+
+Stack advisors provide information on 4 important aspects:
+
+1. Recommend layout of services on cluster
+2. Recommend service configurations
+3. Validate layout of services on cluster
+4. Validate service configurations
+
+By providing the stack_advisor.py file, one can dynamically control each of the above.
+
+The main interface for the stack-advisor scripts contains documentation on how each of the above is called and what data is provided:
+
+```python
+class StackAdvisor(object):
+"""
+ Abstract class implemented by all stack advisors. Stack advisors advise on stack specific questions.
+
+ Currently stack advisors provide following abilities:
+ - Recommend where services should be installed in cluster
+ - Recommend configurations based on host hardware
+ - Validate user selection of where services are installed on cluster
+ - Validate user configuration values
+
+ Each of the above methods is passed in parameters about services and hosts involved as described below.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services selected by the user.
+
+ Example: {
+ "services": [
+ {
+ "StackServices": {
+ "service_name" : "HDFS",
+ "service_version" : "2.6.0.2.2",
+ },
+ "components" : [
+ {
+ "StackServiceComponents" : {
+ "cardinality" : "1+",
+ "component_category" : "SLAVE",
+ "component_name" : "DATANODE",
+ "display_name" : "DataNode",
+ "service_name" : "HDFS",
+ "hostnames" : []
+ },
+ "dependencies" : []
+ }, {
+ "StackServiceComponents" : {
+ "cardinality" : "1-2",
+ "component_category" : "MASTER",
+ "component_name" : "NAMENODE",
+ "display_name" : "NameNode",
+ "service_name" : "HDFS",
+ "hostnames" : []
+ },
+ "dependencies" : []
+ },
+ ...
+
+ ]
+ },
+ ...
+
+ ]
+ }
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ Example: {
+ "items": [
+ {
+ "Hosts" : {
+ "host_name": "c6401.ambari.apache.org",
+ "public_host_name" : "c6401.ambari.apache.org",
+ "ip": "192.168.1.101",
+ "cpu_count" : 1,
+ "disk_info" : [
+ {
+ "available" : "4564632",
+ "used" : "5230344",
+ "percent" : "54%",
+ "size" : "10319160",
+ "type" : "ext4",
+ "mountpoint" : "/"
+ },
+ {
+ "available" : "1832436",
+ "used" : "0",
+ "percent" : "0%",
+ "size" : "1832436",
+ "type" : "tmpfs",
+ "mountpoint" : "/dev/shm"
+ }
+ ],
+ "host_state" : "HEALTHY",
+ "os_arch" : "x86_64",
+ "os_type" : "centos6",
+ "total_mem" : 3664872
+ }
+ },
+ ...
+
+ ]
+ }
+
+ Each of the methods can either return recommendations or validations.
+
+ Recommendations are made in an Ambari Blueprints friendly format.
+
+ Validations are an array of validation objects.
+
+"""
+
+ def recommendComponentLayout(self, services, hosts):
+"""
+ Returns recommendation of which hosts various service components should be installed on.
+
+ This function takes as input all details about services being installed, and hosts
+ they are being installed into, to generate hostname assignments to various components
+ of each service.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Layout recommendation of service components on cluster hosts in Ambari Blueprints friendly format.
+
+ Example: {
+ "resources" : [
+ {
+ "hosts" : [
+ "c6402.ambari.apache.org",
+ "c6401.ambari.apache.org"
+ ],
+ "services" : [
+ "HDFS"
+ ],
+ "recommendations" : {
+ "blueprint" : {
+ "host_groups" : [
+ {
+ "name" : "host-group-2",
+ "components" : [
+ { "name" : "JOURNALNODE" },
+ { "name" : "ZKFC" },
+ { "name" : "DATANODE" },
+ { "name" : "SECONDARY_NAMENODE" }
+ ]
+ },
+ {
+ "name" : "host-group-1",
+ "components" :
+ { "name" : "HDFS_CLIENT" },
+ { "name" : "NAMENODE" },
+ { "name" : "JOURNALNODE" },
+ { "name" : "ZKFC" },
+ { "name" : "DATANODE" }
+ ]
+ }
+ ]
+ },
+ "blueprint_cluster_binding" : {
+ "host_groups" : [
+ {
+ "name" : "host-group-1",
+ "hosts" : [ { "fqdn" : "c6401.ambari.apache.org" } ]
+ },
+ {
+ "name" : "host-group-2",
+ "hosts" : [ { "fqdn" : "c6402.ambari.apache.org" } ]
+ }
+ ]
+ }
+ }
+ }
+ ]
+ }
+"""
+ pass
+
+ def validateComponentLayout(self, services, hosts):
+"""
+ Returns array of Validation issues with service component layout on hosts
+
+ This function takes as input all details about services being installed along with
+ hosts the components are being installed on (hostnames property is populated for
+ each component).
+
+ @type services: dictionary
+ @param services: Dictionary containing information about services and host layout selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Dictionary containing array of validation items
+ Example: {
+ "items": [
+ {
+ "type" : "host-group",
+ "level" : "ERROR",
+ "message" : "NameNode and Secondary NameNode should not be hosted on the same machine",
+ "component-name" : "NAMENODE",
+ "host" : "c6401.ambari.apache.org"
+ },
+ ...
+
+ ]
+ }
+"""
+ pass
+
+ def recommendConfigurations(self, services, hosts):
+"""
+ Returns recommendation of service configurations based on host-specific layout of components.
+
+ This function takes as input all details about services being installed, and hosts
+ they are being installed into, to recommend host-specific configurations.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services and component layout selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Layout recommendation of service components on cluster hosts in Ambari Blueprints friendly format.
+
+ Example: {
+ "services": [
+ "HIVE",
+ "TEZ",
+ "YARN"
+ ],
+ "recommendations": {
+ "blueprint": {
+ "host_groups": [],
+ "configurations": {
+ "yarn-site": {
+ "properties": {
+ "yarn.scheduler.minimum-allocation-mb": "682",
+ "yarn.scheduler.maximum-allocation-mb": "2048",
+ "yarn.nodemanager.resource.memory-mb": "2048"
+ }
+ },
+ "tez-site": {
+ "properties": {
+ "tez.am.java.opts": "-server -Xmx546m -Djava.net.preferIPv4Stack=true -XX:+UseNUMA -XX:+UseParallelGC",
+ "tez.am.resource.memory.mb": "682"
+ }
+ },
+ "hive-site": {
+ "properties": {
+ "hive.tez.container.size": "682",
+ "hive.tez.java.opts": "-server -Xmx546m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC",
+ "hive.auto.convert.join.noconditionaltask.size": "238026752"
+ }
+ }
+ }
+ },
+ "blueprint_cluster_binding": {
+ "host_groups": []
+ }
+ },
+ "hosts": [
+ "c6401.ambari.apache.org",
+ "c6402.ambari.apache.org",
+ "c6403.ambari.apache.org"
+ ]
+ }
+"""
+ pass
+
+ def validateConfigurations(self, services, hosts):
+""""
+ Returns array of Validation issues with configurations provided by user
+ This function takes as input all details about services being installed along with
+ configuration values entered by the user. These configurations can be validated against
+ service requirements, or host hardware to generate validation issues.
+
+ @type services: dictionary
+ @param services: Dictionary containing information about services and user configurations.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Dictionary containing array of validation items
+ Example: {
+ "items": [
+ {
+ "config-type": "yarn-site",
+ "message": "Value is less than the recommended default of 682",
+ "type": "configuration",
+ "config-name": "yarn.scheduler.minimum-allocation-mb",
+ "level": "WARN"
+ }
+ ]
+ }
+"""
+ pass
+```
+
+#### Examples
+
+* [Stack Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py#L23)
+* [Default Stack Advisor implementation - for all stacks](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py#L303)
+* [HDP (2.0.6) Default Stack Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L28)
+* [YARN container size calculated](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L807)
+* Recommended configurations - [HDFS](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L222), [YARN](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L133), [MapReduce2](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L148), [HBase](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L245) (HDP-2.0.6), [HBase](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L148) (HDP-2.3)
+* [Delete HBase Bucket Cache configs on smaller machines](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py#L272)
+* [Specify maximum value for Tez config](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py#L184)
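+
+Putting this together, a child stack's advisor typically subclasses its parent's and overrides only what changes. The sketch below uses hypothetical MYSTACK class names; the method signature and the `putProperty` helper mirror the linked HDP examples, but treat the details as illustrative:
+
+```python
+# Hypothetical services/stack_advisor.py for a child stack "MYSTACK 1.1".
+# The parent's script is auto-imported by Ambari, so MYSTACK10StackAdvisor
+# is already defined when this file is executed.
+
+class MYSTACK11StackAdvisor(MYSTACK10StackAdvisor):
+
+  def recommendYARNConfigurations(self, configurations, clusterData, services, hosts):
+    # Keep all parent recommendations, then adjust a single property
+    super(MYSTACK11StackAdvisor, self).recommendYARNConfigurations(
+        configurations, clusterData, services, hosts)
+    put_yarn_property = self.putProperty(configurations, "yarn-site", services)
+    put_yarn_property("yarn.scheduler.minimum-allocation-mb",
+                      str(clusterData["ramPerContainer"]))
+```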
+
+### Properties
+
+Similar to stack configurations, most properties are defined at the service level; however, there are global properties, defined at the stack-version level, that affect all services.
+
+Some examples are the [stack-selector and conf-selector](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json#L2) specific names, or which [stack versions support certain stack features](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json#L5). Most of these properties were introduced in Ambari 2.4 as part of an effort to parameterize stack information and facilitate the reuse of common-services code by other distributions.
+
+Such properties can be defined in .json format in the [properties folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) of the stack.
+
+More details about stack properties can be found in the [Stack Properties section](https://cwiki.apache.org/confluence/x/pgPiAw).
+
+### Widgets
+
+At the stack-version level one can contribute heatmap entries to the main dashboard of the cluster.
+
+Generally these are heatmaps which apply to all services, like host-level heatmaps.
+
+Example: [HDP-2.0.6 contributes host level heatmaps](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/widgets.json)
+
+### Kerberos
+
+We have previously seen the Kerberos descriptor at the service level.
+
+One can also be defined at the stack-version level to describe identities across all services.
+
+Read more about Kerberos support and the Kerberos descriptor on the [Automated Kerberization](https://cwiki.apache.org/confluence/display/AMBARI/Automated+Kerberizaton) page.
+
+Example: [Smoke tests user and SPNEGO user defined in HDP-2.0.6](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/kerberos.json)
+
+### Stack Upgrades
+
+Ambari provides the ability to upgrade your cluster from a lower stack-version to a higher stack-version.
+
+Each stack-version can define _upgrade-packs_, which are XML files describing the upgrade process. These _upgrade-pack_ XML files are placed in the stack-version's _upgrades/_ folder.
+
+Example: [HDP-2.3](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades), [HDP-2.4](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.4/upgrades)
+
+Each stack-version should have an upgrade-pack for the next stack-version a cluster can **upgrade to**.
+
+Ex: [Upgrade-pack from HDP-2.3 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.4.xml)
+
+There are two types of upgrades:
+
+Upgrade Type | Pros | Cons
+-------------|------|----------
+Express Upgrade (EU) | Much faster - clusters can be upgraded in a couple of hours | Cluster unavailable - services are stopped during the upgrade process
+Rolling Upgrade (RU) | Minimal cluster downtime - services remain available throughout the upgrade process | Takes time (sometimes days, depending on cluster size) due to the incremental upgrade approach
+
+Each component which has to be upgraded by Ambari should specify the **versionAdvertised** flag in its metainfo.xml.
+
+This tells Ambari to track the component's version and perform the upgrade. Not specifying this flag will result in Ambari not upgrading the component.
+
+Example: [HDFS NameNode](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L33) (versionAdvertised=true), [AMS Metrics Collector](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml#L33) (versionAdvertised=false).
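+
+If you need to audit a service definition, the flag is easy to inspect with standard XML tooling. A small sketch (the file path is whatever metainfo.xml you point it at):
+
+```python
+import xml.etree.ElementTree as ET
+
+def version_advertised_components(metainfo_path):
+    """Map each component name to its versionAdvertised flag.
+
+    Components without the flag are treated here as not advertising
+    a version.
+    """
+    result = {}
+    for component in ET.parse(metainfo_path).getroot().iter("component"):
+        flag = component.findtext("versionAdvertised", default="false")
+        result[component.findtext("name")] = (flag.strip().lower() == "true")
+    return result
+
+print(version_advertised_components("metainfo.xml"))
+# e.g. {'NAMENODE': True, 'METRICS_COLLECTOR': False}
+```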
+
+#### Rolling Upgrades
+
+In Rolling Upgrade each service is upgraded with minimal downtime in mind. The general approach is to quickly upgrade the master components, followed by upgrading the workers in batches.
+
+The service will not be available while masters are restarting. However, when master components are configured for High Availability (HA), the service remains available while each master restarts.
+
+You can read more about the Rolling Upgrade process via this [blog post](http://hortonworks.com/blog/introducing-automated-rolling-upgrades-with-apache-ambari-2-0/) and [documentation](https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_upgrading_Ambari/content/_upgrading_HDP_perform_rolling_upgrade.html).
+
+Examples
+
+* [HDP-2.2.x to HDP-2.2.y](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.2.xml)
+* [HDP-2.2 to HDP-2.3](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.3.xml)
+* [HDP-2.2 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.4.xml)
+* [HDP-2.3 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.4.xml)
+
+#### Express Upgrades
+
+In Express Upgrade the goal is to upgrade the entire cluster as fast as possible - even if it means cluster downtime. It is generally much faster than a Rolling Upgrade.
+
+For each service the components are first stopped, upgraded and then started.
+
+You can read about Express Upgrade steps in this [documentation](https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_upgrading_Ambari/content/_upgrading_HDP_perform_express_upgrade.html).
+
+Examples
+
+* [HDP-2.1 to HDP-2.3](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.1/upgrades/nonrolling-upgrade-2.3.xml)
+* [HDP-2.2 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/nonrolling-upgrade-2.4.xml)
+
+## Configuration support in Ambari
+[Configuration support in Ambari](https://cwiki.apache.org/confluence/display/AMBARI/Configuration+support+in+Ambari)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/define-service.png b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/define-service.png
new file mode 100644
index 0000000..40868dd
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/define-service.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/define-stack.png b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/define-stack.png
new file mode 100644
index 0000000..7ebfdb9
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/define-stack.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/hooks.png b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/hooks.png
new file mode 100644
index 0000000..7ebfdb9
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/hooks.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/scripts-folder.png b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/scripts-folder.png
new file mode 100644
index 0000000..f072a4c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/scripts-folder.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/scripts.png b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/scripts.png
new file mode 100644
index 0000000..30d1e3a
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/scripts.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/stacks-properties.png b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/stacks-properties.png
new file mode 100644
index 0000000..d526a8c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/imgs/stacks-properties.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/index.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/index.md
new file mode 100644
index 0000000..6ed5dfd
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/index.md
@@ -0,0 +1,16 @@
+# Stacks and Services
+
+**Introduction**
+
+Ambari supports the concept of Stacks and associated Services in a Stack Definition. By leveraging the Stack Definition, Ambari has a consistent and defined interface to install, manage and monitor a set of Services, and it provides an extensibility model for new Stacks and Services to be introduced.
+
+From Ambari 2.4 onwards, there is also support for the concept of Extensions and their associated custom Services in an Extension Definition.
+
+**Terminology**
+
+Term | Description
+-----|------------
+Stack | Defines a set of Services and where to obtain the software packages for those Services. A Stack can have one or more versions, and each version can be active/inactive. For example, Stack = "HDP-1.3.3".
+Extension | Defines a set of custom Services which can be added to a stack version. An Extension can have one or more versions.
+Service | Defines the Components (MASTER, SLAVE, CLIENT) that make up the Service. For example, Service = "HDFS"
+Component | The individual Components that adhere to a certain defined lifecycle (start, stop, install, etc). For example, Service = "HDFS" has Components = "NameNode (MASTER)", "Secondary NameNode (MASTER)", "DataNode (SLAVE)" and "HDFS Client (CLIENT)"
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/management-packs.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/management-packs.md
new file mode 100644
index 0000000..aaf9118
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/management-packs.md
@@ -0,0 +1,327 @@
+# Management Packs
+
+## **Background**
+
+At present, stack definitions are bundled with Ambari core and are part of Apache Ambari releases. This forces an Ambari release with updated stack definitions whenever a new version of a stack is released. Also, to add an "add-on" service (custom service) to the stack definition, one has to manually add the add-on service to the stack definition on the Ambari Server. There is no release vehicle that can be used to ship add-on services.
+
+Apache Ambari Management Packs address this issue by decoupling Ambari's core functionality (cluster management and monitoring) from stack management and definition. An Apache Ambari Management Pack (Mpack) can bundle multiple service definitions, stack definitions, stack add-on service definitions, and view definitions, so that releasing these artifacts does not require an Apache Ambari release. Apache Ambari Management Packs are released as separate release artifacts and follow their own release cadence instead of being tightly coupled with Apache Ambari releases.
+
+Management packs are released as tarballs; each contains a metadata file (mpack.json) that specifies the contents of the management pack and the actions to perform when installing it.
+
+## **Apache JIRA**
+
+[AMBARI-14854](https://issues.apache.org/jira/browse/AMBARI-14854)
+
+## **Release Timelines**
+
+* Short Term Goals (Apache Ambari 2.4.0.0 release)
+  1. Provide a release vehicle for stack definitions (example: HDP management pack, IOP management pack).
+  2. Provide a release vehicle for add-on/custom services (example: Microsoft-R management pack).
+  3. Retrofit into the existing stack processing infrastructure.
+  4. Provide a command line to update stack definitions and service definitions.
+
+* Long Term Goals (Ambari 2.4+)
+  1. Release HDP stacks as mpacks.
+  2. Build a management pack processing infrastructure that will replace the stack processing infrastructure.
+  3. Dynamically create stack definitions by processing management packs.
+  4. Provide a REST API for adding/removing/upgrading management packs.
+
+## **Management Pack Metadata (Mpack.json)**
+
+A management pack should contain the following metadata in mpack.json (a sketch of a prerequisite check follows the list below).
+
+* **Name**: Unique management pack name
+* **Version**: Management pack version
+* **Description**: Friendly description of the management pack
+* **Prerequisites**:
+  - Minimum Ambari version on which the management pack can be installed.
+    + **Example**: To install the stackXYZ-ambari-mpack-1.0.0.0 management pack, Ambari should be at least at version 2.4.0.0.
+  - Minimum management pack version that should be installed before upgrading to this management pack.
+    + **Example**: To upgrade to the stackXYZ-ambari-mpack-2.0.0.0 management pack, the stackXYZ-ambari-mpack-1.8.0.0 management pack or higher should be installed.
+  - Minimum stack version that should already be present in the stack definitions for this management pack to be applicable.
+    + **Example**: To install the add-on service management pack myservice-ambari-mpack-1.0.0.0, the stackXYZ-2.1 stack definition should be present.
+* **Artifacts**:
+  - List of release artifacts (service definitions, stack definitions, stack-addon-service-definitions, view-definitions) bundled in the management pack.
+  - Metadata for each artifact, such as its source directory and any additional applicability for that artifact.
+  - Supported artifact types:
+    + **service-definitions**: Contains service definitions similar to common-services/serviceA/1.0.0
+    + **stack-definitions**: Contains stack definitions similar to stacks/stackXYZ/1.0
+    + **extension-definitions**: Contains dynamic stack extensions (Refer: [Extensions](./extensions.md))
+    + **stack-addon-service-definitions**: Defines add-on service applicability for stacks and how to merge the add-on service into the stack definition.
+    + **view-definitions** (Not supported in Apache Ambari 2.4.0.0)
+  - A management pack can have more than one release artifact.
+    + **Example**: It should be possible to create a management pack that bundles together
+      * **stack-definitions**: stackXYZ-1.0, stackXYZ-1.1, stackXYZ-2.0
+      * **service-definitions**: HAWQ, HDFS, ZOOKEEPER
+      * **stack-addon-service-definitions**: HAWQ/2.0.0 is applicable to stackXYZ-2.0, stackABC-1.0
+      * **view-definitions**: Hive, Jobs, Slider (not supported in Apache Ambari 2.4.0.0)
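+
+As a sketch of the prerequisite check referenced above, the following hypothetical Python snippet loads an mpack.json and verifies the minimum Ambari version before an install would proceed. The real check is performed by ambari-server itself during install-mpack; this only illustrates the metadata's intent.
+
+```python
+import json
+
+def meets_min_ambari_version(mpack_json_path, ambari_version):
+    """Check an mpack's minimum Ambari version prerequisite.
+
+    Accepts both key spellings that appear in the examples below
+    (min_ambari_version / min-ambari-version).
+    """
+    with open(mpack_json_path) as f:
+        prereqs = json.load(f).get("prerequisites", {})
+    required = prereqs.get("min_ambari_version") or prereqs.get("min-ambari-version")
+    if required is None:
+        return True
+    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
+    return as_tuple(ambari_version) >= as_tuple(required)
+
+print(meets_min_ambari_version("mpack.json", "2.4.0.0"))  # True for the examples below
+```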
+
+## **Management Pack Structure**
+
+### StackXYZ Management Pack Structure
+
+_stackXYZ-ambari-mpack-1.0.0.0_
+
+```
+├── mpack.json
+├── common-services
+│   └── HDFS
+│       └── 2.1.0.2.0
+│           └── configuration
+└── stacks
+    └── stackXYZ
+        └── 1.0
+            ├── metainfo.xml
+            ├── repos
+            │   └── repoinfo.xml
+            ├── role_command_order.json
+            └── services
+                ├── HDFS
+                │   ├── configuration
+                │   │   └── hdfs-site.xml
+                │   └── metainfo.xml
+                ├── stack_advisor.py
+                └── ZOOKEEPER
+                    └── metainfo.xml
+```
+
+### StackXYZ Management Pack Mpack.json
+
+_stackXYZ-ambari-mpack-1.0.0.0/mpack.json_
+
+```json
+{
+  "type" : "full-release",
+  "name" : "stackXYZ-ambari-mpack",
+  "version": "1.0.0.0",
+  "description" : "StackXYZ Management Pack",
+  "prerequisites": {
+    "min_ambari_version" : "2.4.0.0"
+  },
+  "artifacts": [
+    {
+      "name" : "stackXYZ-service-definitions",
+      "type" : "service-definitions",
+      "source_dir": "common-services"
+    },
+    {
+      "name" : "stackXYZ-stack-definitions",
+      "type" : "stack-definitions",
+      "source_dir": "stacks"
+    }
+  ]
+}
+```
+
+### Add-On Service Management Pack Structure
+
+_myservice-ambari-mpack-1.0.0.0_
+
+```
+├── common-services
+│   └── MYSERVICE
+│       └── 1.0.0
+│           ├── configuration
+│           │   └── myserviceconfig.xml
+│           ├── metainfo.xml
+│           ├── package
+│           │   └── scripts
+│           │       ├── client.py
+│           │       ├── master.py
+│           │       └── slave.py
+│           └── role_command_order.json
+├── custom-services
+│   └── MYSERVICE
+│       ├── 1.0.0
+│       │   └── metainfo.xml
+│       └── 2.0.0
+│           └── metainfo.xml
+└── mpack.json
+```
+
+### Add-On Service Management Pack Mpack.json
+
+_myservice-ambari-mpack-1.0.0.0/mpack.json_
+
+```json
+{
+  "type" : "full-release",
+  "name" : "myservice-ambari-mpack",
+  "version": "1.0.0.0",
+  "description" : "MyService Management Pack",
+  "prerequisites": {
+    "min-ambari-version" : "2.4.0.0",
+    "min-stack-versions" : [
+      {
+        "stack_name" : "stackXYZ",
+        "stack_version" : "2.2"
+      }
+    ]
+  },
+  "artifacts": [
+    {
+      "name" : "MYSERVICE-service-definition",
+      "type" : "service-definition",
+      "source_dir" : "common-services/MYSERVICE/1.0.0",
+      "service_name" : "MYSERVICE",
+      "service_version" : "1.0.0"
+    },
+    {
+      "name" : "MYSERVICE-1.0.0",
+      "type" : "stack-addon-service-definition",
+      "source_dir": "addon-services/MYSERVICE/1.0.0",
+      "service_name" : "MYSERVICE",
+      "service_version" : "1.0.0",
+      "applicable_stacks" : [
+        {
+          "stack_name" : "stackXYZ",
+          "stack_version" : "2.2"
+        }
+      ]
+    },
+    {
+      "name" : "MYSERVICE-2.0.0",
+      "type" : "stack-addon-service-definition",
+      "source_dir": "custom-services/MYSERVICE/2.0.0",
+      "service_name" : "MYSERVICE",
+      "service_version" : "2.0.0",
+      "applicable_stacks" : [
+        {
+          "stack_name" : "stackXYZ",
+          "stack_version" : "2.4"
+        }
+      ]
+    }
+  ]
+}
+```
+
+## **Installing Management Pack**
+
+```bash
+ambari-server install-mpack --mpack=/path/to/mpack.tar.gz --purge --verbose
+```
+
+**Note**: Do not pass the "--purge" command line parameter when installing an add-on service management pack. The "--purge" flag is used to purge any existing stack definitions (the HDP stack definition is bundled with the Ambari release) and should be included only when installing a Stack management pack.
+
+## **Upgrading Management Pack**
+
+```bash
+ambari-server upgrade-mpack --mpack=/path/to/mpack.tar.gz --verbose
+```
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/overview.mdx b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/overview.mdx
new file mode 100644
index 0000000..b47e75e
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/overview.mdx
@@ -0,0 +1,554 @@
+# Overview
+
+## Background
+
+The Stack definitions can be found in the source tree at [/ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks). After you install the Ambari Server, the Stack definitions can be found at `/var/lib/ambari-server/resources/stacks`.
+
+## Structure
+
+The structure of a Stack definition is as follows:
+
+```
+|_ stacks
+   |_ <stack_name>
+      |_ <stack_version>
+         metainfo.xml
+         |_ hooks
+         |_ repos
+            repoinfo.xml
+         |_ services
+            |_ <service_name>
+               metainfo.xml
+               metrics.json
+               |_ configuration
+                  {configuration files}
+               |_ package
+                  {files, scripts, templates}
+```
+
+## Defining a Service and Components
+
+The `metainfo.xml` file in a Service describes the service, the components of the service and the management scripts to use for executing commands. A component of a service can be either a **MASTER**, **SLAVE** or **CLIENT** category.
+
+For each Component you specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The script type is used to know how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands that your component needs to support.
+
+For example, the YARN Service describes the ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and the command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. That command script is **PYTHON** and implements the default lifecycle commands as python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command, **DECOMMISSION**, is defined, which means there is also a **decommission** method in that python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+#### Using Stack Inheritance
+
+Stacks can _extend_ other Stacks in order to share command scripts and configurations. This reduces duplication of code across Stacks, allowing you to:
+
+* define repositories for the child Stack
+* add new Services in the child Stack (not in the parent Stack)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, the **HDP 2.1 Stack _extends_ HDP 2.0.6 Stack** so only the changes applicable to **HDP 2.1 Stack** are present in that Stack definition. This extension is defined in the [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.1/metainfo.xml) for HDP 2.1 Stack:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.0.6</extends>
+</metainfo>
+```
+
+## Example: Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV", add it to an existing Stack definition. This service includes MASTER, SLAVE and CLIENT components.
+
+## Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+```
+3. Browse to the newly created `SAMPLESRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**SAMPLESRV**", and it contains:
+
+  - one **MASTER** component "**SAMPLESRV_MASTER**"
+  - one **SLAVE** component "**SAMPLESRV_SLAVE**"
+  - one **CLIENT** component "**SAMPLESRV_CLIENT**"
+5. Next, let's create that command script. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts` that we designated in the service metainfo.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+```
+6. Browse to the scripts directory and create the `.py` command script files. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+## Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Sample Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Sample Service" and click Next.
+
+4. Assign the "Sample Srv Master" and click Next.
+
+5. Select the hosts to install the "Sample Srv Client" and click Next.
+
+6. Once complete, the "My Sample Service" will be available Service navigation area.
+
+7. If you want to add the "Sample Srv Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+## Example: Implementing a Custom Client-only Service
+
+In this example, we will create a custom service called "TESTSRV", add it to an existing Stack definition and use the Ambari APIs to install/configure the service. This service is a CLIENT so it has two commands: install and configure.
+
+## Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV` that will contain the service definition for **TESTSRV**.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+```
+3. Browse to the newly created `TESTSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTSRV</name>
+ <displayName>New Test Service</displayName>
+ <comment>A New Test Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TEST_CLIENT</name>
+ <displayName>New Test Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTSRV**", and it contains one component "**TEST_CLIENT**" of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts` that we designated in the service metainfo.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+ def install(self, env):
+ print 'Install the client';
+ def configure(self, env):
+ print 'Configure the client';
+ def somethingcustom(self, env):
+ print 'Something custom';
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+## Install the Service (via the Ambari REST API)
+
+1. Add the Service to the Cluster.
+
+
+```
+POST
+/api/v1/clusters/MyCluster/services
+
+{
+"ServiceInfo": {
+ "service_name":"TESTSRV"
+ }
+}
+```
+2. Add the Components to the Service. In this case, add TEST_CLIENT to TESTSRV.
+
+```
+POST
+/api/v1/clusters/MyCluster/services/TESTSRV/components/TEST_CLIENT
+```
+3. Install the component on all target hosts. For example, to install on `c6402.ambari.apache.org` and `c6403.ambari.apache.org`, first create the host_component resource on the hosts using POST.
+
+```
+POST
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+POST
+/api/v1/clusters/MyCluster/hosts/c6403.ambari.apache.org/host_components/TEST_CLIENT
+```
+4. Now have Ambari install the components on all hosts. In this single command, you are instructing Ambari to install all components related to the service. This calls the `install()` method in the command script on each host.
+
+
+```
+PUT
+/api/v1/clusters/MyCluster/services/TESTSRV
+
+{
+ "RequestInfo": {
+ "context": "Install Test Srv Client"
+ },
+ "Body": {
+ "ServiceInfo": {
+ "state": "INSTALLED"
+ }
+ }
+}
+```
+5. Alternatively, instead of installing all components at the same time, you can explicitly install each host component. In this example, we will explicitly install the TEST_CLIENT on `c6402.ambari.apache.org`:
+
+```
+PUT
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+{
+ "RequestInfo": {
+ "context":"Install Test Srv Client"
+ },
+ "Body": {
+ "HostRoles": {
+ "state":"INSTALLED"
+ }
+ }
+}
+```
+6. Use the following to configure the client on the host. This will end up calling the `configure()` method in the command script.
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+ "RequestInfo" : {
+ "command" : "CONFIGURE",
+ "context" : "Config Test Srv Client"
+ },
+ "Requests/resource_filters": [{
+ "service_name" : "TESTSRV",
+ "component_name" : "TEST_CLIENT",
+ "hosts" : "c6403.ambari.apache.org"
+ }]
+}
+```
+7. To see on which hosts the component is installed, use the following:
+
+```
+GET
+/api/v1/clusters/MyCluster/components/TEST_CLIENT
+```
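+
+The same sequence can be scripted. Below is a hypothetical sketch using Python's `requests` library; the server address and credentials are placeholders, and the `X-Requested-By` header is required by the Ambari API:
+
+```python
+import requests
+
+AMBARI = "http://ambari-server:8080/api/v1"   # placeholder server address
+AUTH = ("admin", "admin")                     # placeholder credentials
+HEADERS = {"X-Requested-By": "ambari"}        # required by the Ambari API
+
+def api(method, path, json_body=None):
+    response = requests.request(method, AMBARI + path, auth=AUTH,
+                                headers=HEADERS, json=json_body)
+    response.raise_for_status()
+    return response
+
+# 1. Add the service, 2. add its component
+api("POST", "/clusters/MyCluster/services",
+    {"ServiceInfo": {"service_name": "TESTSRV"}})
+api("POST", "/clusters/MyCluster/services/TESTSRV/components/TEST_CLIENT")
+
+# 3. Create the host_component on each target host
+for host in ("c6402.ambari.apache.org", "c6403.ambari.apache.org"):
+    api("POST", "/clusters/MyCluster/hosts/%s/host_components/TEST_CLIENT" % host)
+
+# 4. Ask Ambari to install all components of the service
+api("PUT", "/clusters/MyCluster/services/TESTSRV",
+    {"RequestInfo": {"context": "Install Test Srv Client"},
+     "Body": {"ServiceInfo": {"state": "INSTALLED"}}})
+```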
+
+## Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Test Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Test Service" and click Next.
+
+4. Select the hosts to install the "New Test Client" and click Next.
+
+5. Once complete, the "My Test Service" will be available Service navigation area.
+
+6. If you want to add the "New Test Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+
+## Example: Implementing a Custom Client-only Service (with Configs)
+
+In this example, we will create a custom service called "TESTCONFIGSRV" and add it to an existing Stack definition. This service is a CLIENT so it has two commands: install and configure. And the service also includes a configuration type "test-config".
+
+## Create and Add the Service to the Stack
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV` that will contain the service definition for TESTCONFIGSRV.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+```
+3. Browse to the newly created `TESTCONFIGSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <services>
+    <service>
+      <name>TESTCONFIGSRV</name>
+      <displayName>New Test Config Service</displayName>
+      <comment>A New Test Config Service</comment>
+      <version>0.1.0</version>
+      <components>
+        <component>
+          <name>TESTCONFIG_CLIENT</name>
+          <displayName>New Test Config Client</displayName>
+          <category>CLIENT</category>
+          <cardinality>1+</cardinality>
+          <commandScript>
+            <script>scripts/test_client.py</script>
+            <scriptType>PYTHON</scriptType>
+            <timeout>600</timeout>
+          </commandScript>
+        </component>
+      </components>
+      <osSpecifics>
+        <osSpecific>
+          <osFamily>any</osFamily>
+        </osSpecific>
+      </osSpecifics>
+    </service>
+  </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTCONFIGSRV**", and it contains one component "**TESTCONFIG_CLIENT**" of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts` that we designated in the service metainfo `<commandScript>`.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+
+class TestClient(Script):
+  def install(self, env):
+    # Called when the client component is installed on a host
+    print 'Install the config client'
+
+  def configure(self, env):
+    # Called when the CONFIGURE command is run against the component
+    print 'Configure the config client'
+
+if __name__ == "__main__":
+  TestClient().execute()
+```
+7. Now let's define a config type for this service. Create the directory `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration` for the configuration dictionary file.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+```
+8. Browse to the configuration directory and create the `test-config.xml` file. For example (a sketch after these steps shows how a command script can read this property at runtime):
+
+```xml
+<?xml version="1.0"?>
+<configuration>
+  <property>
+    <name>some.test.property</name>
+    <value>this.is.the.default.value</value>
+    <description>This is a cool description.</description>
+  </property>
+</configuration>
+```
+9. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
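+
+As referenced in step 8, here is a minimal sketch of how a command script can read the new config type at runtime; it assumes the standard `Script.get_config()` call, which returns the command JSON sent by the server:
+
+```python
+# Sketch: reading the "test-config" type from within a command script,
+# e.g. inside TestClient.configure().
+from resource_management import Script
+
+config = Script.get_config()
+some_test_property = config['configurations']['test-config']['some.test.property']
+```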
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/stack-inheritance.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/stack-inheritance.md
new file mode 100644
index 0000000..8d5184d
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/stack-inheritance.md
@@ -0,0 +1,68 @@
+
+# Stack Inheritance
+
+Each stack version must provide a metainfo.xml descriptor file which can declare whether the stack inherits from another stack version:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.3</extends>
+ ...
+</metainfo>
+```
+
+When a stack inherits from another stack version, how its defining files and directories are inherited follows a number of different patterns.
+
+The following files should not be redefined at the child stack version level:
+
+* properties/stack_features.json
+* properties/stack_tools.json
+
+Note: These files should only exist at the base stack level.
+
+The following files, if defined in the current stack version, replace the definitions from the parent stack version:
+
+* kerberos.json
+* widgets.json
+
+The following files, if defined in the current stack version, are merged with the parent stack version:
+
+* configuration/cluster-env.xml
+
+* role_command_order.json
+
+Note: All the services' role command orders will be merged with the stack's role command order to provide a master list.
+
+All attributes of the current stack version's metainfo.xml will replace those defined in the parent stack version.
+
+The following directories, if defined in the current stack version, replace those from the parent stack version:
+
+* hooks
+
+This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The following directories are not inherited:
+
+* repos
+* upgrades
+
+The repos/repoinfo.xml file should be defined in every stack version. The upgrades directory and its corresponding XML files should be defined in all stack versions that support upgrade.
+
+## Services Folder
+
+The services folder is a special case. There are two inheritance mechanisms at work here. First, the stack_advisor.py will automatically import the parent stack version's stack_advisor.py script, but the rest of the inheritance behavior is up to the script's author. There are several examples of [stack_advisor.py](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py) files in the Ambari server source.
+
+```python
+class HDP23StackAdvisor(HDP22StackAdvisor):
+  def __init__(self):
+    super(HDP23StackAdvisor, self).__init__()
+    Logger.initialize_logger()
+
+  def getComponentLayoutValidations(self, services, hosts):
+    parentItems = super(HDP23StackAdvisor, self).getComponentLayoutValidations(services, hosts)
+    ...
+```
+
+Services defined within the services folder follow the rules for [service inheritance](./custom-services.md#service-inheritance). By default, if a service does not declare an explicit inheritance (via the **extends** tag), the service will inherit from the service defined at the parent stack version.
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/stack-properties.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/stack-properties.md
new file mode 100644
index 0000000..af33f0c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/stack-properties.md
@@ -0,0 +1,119 @@
+# Stack Properties
+
+Similar to stack configurations, most properties are defined at the service level; however, there are global properties defined at the stack-version level that affect all services.
+
+Some examples are: [stack-selector and conf-selector](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json#L2) specific names, or which [stack versions support certain stack features](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json#L5). Most of these properties were introduced in Ambari 2.4 as part of the effort to parameterize stack information and facilitate the reuse of common-services code by other distributions.
+
+
+Such properties can be defined in .json format in the [properties folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) of the stack.
+
+
+
+# Stack Features
+
+Stacks can support different features depending on their version; for example: upgrade support, NFS support, or support for specific new components (such as Ranger or Phoenix).
+
+
+Stack featurization was added as part of the HDP stack configurations on [HDP/2.0.6/configuration/cluster-env.xml](http://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml), introducing a new stack_features property whose value is processed in the stack engine from an external property file.
+
+```xml
+<!-- Define stack_features property in the base stack. DO NOT override this property for each stack version -->
+<property>
+ <name>stack_features</name>
+ <value/>
+ <description>List of features supported by the stack</description>
+ <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+ <value-attributes>
+ <property-file-name>stack_features.json</property-file-name>
+ <property-file-type>json</property-file-type>
+ <read-only>true</read-only>
+ <overridable>false</overridable>
+ <visible>false</visible>
+ </value-attributes>
+ <on-ambari-upgrade add="true"/>
+</property>
+```
+
+Stack Features properties are defined in the [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) file under /HDP/2.0.6/properties. Feature support is now accessible from service-level code to change certain service behaviors or configurations. This is an example of features described in the stack_features.json file:
+
+
+```json
+{
+"stack_features": [
+ {
+ "name": "snappy",
+ "description": "Snappy compressor/decompressor support",
+ "min_version": "2.0.0.0",
+ "max_version": "2.2.0.0"
+ },
+ {
+ "name": "lzo",
+ "description": "LZO libraries support",
+ "min_version": "2.2.1.0"
+ },
+ {
+ "name": "express_upgrade",
+ "description": "Express upgrade support",
+ "min_version": "2.1.0.0"
+ },
+ {
+ "name": "rolling_upgrade",
+ "description": "Rolling upgrade support",
+ "min_version": "2.2.0.0"
+ }
+ ]
+}
+```
+
+where `min_version` and `max_version` are optional constraints.
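+
+A hedged sketch of the range check these constraints imply (the real logic lives in `stack_features.py`; `compare_versions` is assumed from `resource_management.libraries.functions.version`):
+
+```python
+# Illustrative only: min_version is an inclusive lower bound and
+# max_version an exclusive upper bound for a given stack version.
+from resource_management.libraries.functions.version import compare_versions
+
+def feature_supported(feature, version):
+    if "min_version" in feature and compare_versions(version, feature["min_version"]) < 0:
+        return False
+    if "max_version" in feature and compare_versions(version, feature["max_version"]) >= 0:
+        return False
+    return True
+```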
+
+Feature constants matching the feature names, such as ROLLING_UPGRADE = "rolling_upgrade", have been added to a new StackFeature class in [resource_management/libraries/functions/constants.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/constants.py#L38):
+
+
+```python
+class StackFeature:
+  """
+  Stack Feature supported
+  """
+  SNAPPY = "snappy"
+  LZO = "lzo"
+  EXPRESS_UPGRADE = "express_upgrade"
+  ROLLING_UPGRADE = "rolling_upgrade"
+```
+
+Additionally, corresponding helper functions have been introduced in [resource_management/libraries/functions/stack_features.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/stack_features.py) to parse the .json file content; they are called from service code to check if the stack supports a specific feature.
+
+This is an example where the new stack featurization design is used in service code:
+
+```python
+if params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version):
+ conf_select.select(params.stack_name, "hive", params.version)
+ stack_select.select("hive-server2", params.version)
+```
+
+# Stack Tools
+
+
+Similar to stack features, the stack-selector and conf-selector tools are now stack-driven instead of hardcoded as hdp-select and conf-select. They are defined in the [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json) file under /HDP/2.0.6/properties.
+
+They are declared as part of the HDP stack configurations as a new property in [/HDP/2.0.6/configuration/cluster-env.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml):
+
+
+```xml
+<!-- Define stack_tools property in the base stack. DO NOT override this property for each stack version -->
+<property>
+ <name>stack_tools</name>
+ <value/>
+ <description>Stack specific tools</description>
+ <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+ <value-attributes>
+ <property-file-name>stack_tools.json</property-file-name>
+ <property-file-type>json</property-file-type>
+ <read-only>true</read-only>
+ <overridable>false</overridable>
+ <visible>false</visible>
+ </value-attributes>
+ <on-ambari-upgrade add="true"/>
+</property>
+```
+
+Corresponding helper functions have been added in [resource_management/libraries/functions/stack_tools.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py). These helper functions are used to remove hardcoded values from the resource_management library.
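+
+As a hedged illustration (the helper names are assumptions based on `stack_tools.py` and may differ between releases), service code can resolve the stack selector instead of hardcoding "hdp-select":
+
+```python
+# Sketch: look up the stack selector tool from the stack_tools helpers.
+from resource_management.libraries.functions import stack_tools
+
+(stack_selector_name, stack_selector_path, stack_selector_package) = \
+    stack_tools.get_stack_tool(stack_tools.STACK_SELECTOR_NAME)
+```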
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md
new file mode 100644
index 0000000..95023bf
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md
@@ -0,0 +1,67 @@
+# Version functions: conf-select and stack-select
+
+Especially during upgrade, it is important to be able to set the current stack and configuration versions. For non-custom services, this is handled by the conf-select and stack-select functions, which can be imported in a service's scripts with the following imports:
+
+```py
+from resource_management.libraries.functions import conf_select
+
+from resource_management.libraries.functions import stack_select
+```
+
+Typically the select functions, which are used to set the stack and configuration versions, are called in the `pre_upgrade_restart` function during a rolling upgrade:
+
+```py
+ def pre_upgrade_restart(self, env, upgrade_type=None):
+ import params
+ env.set_params(params)
+
+ # this function should not execute if the version can't be determined or
+ # the stack does not support rolling upgrade
+ if not (params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version)):
+ return
+
+ Logger.info("Executing <My Service> Stack Upgrade pre-restart")
+ conf_select.select(params.stack_name, "<my_service>", params.version)
+ stack_select.select("<my_service>", params.version)
+```
+
+The select functions will set up symlinks for the current stack or configuration version. For the stack, this will set up the links from the stack root current directory to the particular stack version. For example:
+
+```
+/usr/hdp/current/hadoop-client -> /usr/hdp/2.5.0.0/hadoop
+```
+
+For the configuration version, this will set up the links for all the configuration directories, as follows:
+
+```
+/etc/hadoop/conf -> /usr/hdp/current/hadoop-client/conf
+
+/usr/hdp/current/hadoop-client/conf -> /etc/hadoop/2.5.0.0/0
+```
+
+The stack_select and conf_select functions can also be used to return the hadoop directories:
+
+```py
+hadoop_prefix = stack_select.get_hadoop_dir("home")
+
+hadoop_bin_dir = stack_select.get_hadoop_dir("bin")
+
+hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+```
+
+The conf_select API is as follows:
+
+```py
+def select(stack_name, package, version, try_create=True, ignore_errors=False)
+
+def get_hadoop_conf_dir(force_latest_on_upgrade=False)
+```
+
+The stack_select API is as follows:
+```py
+def select(component, version)
+
+def get_hadoop_dir(target, force_latest_on_upgrade=False)
+```
+
+Unfortunately, these functions are not available to custom services for setting up their configuration or stack versions. A custom service could implement its own functions to set up the proper links.
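+
+A minimal sketch of what such a custom function might look like; every path and name here is a hypothetical example, not an Ambari API:
+
+```python
+# Sketch: a custom service maintaining its own "current" symlink,
+# analogous to what stack_select.select does for built-in packages.
+import os
+
+def select_my_service(stack_root, package, version):
+    current = os.path.join(stack_root, "current", package)  # e.g. /usr/hdp/current/myservice
+    target = os.path.join(stack_root, version, package)     # e.g. /usr/hdp/2.5.0.0/myservice
+    if os.path.islink(current):
+        os.unlink(current)
+    os.symlink(target, current)
+```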
diff --git a/versioned_docs/version-2.7.5/ambari-design/stack-and-services/writing-metainfo.md b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/writing-metainfo.md
new file mode 100644
index 0000000..2645fe9
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/stack-and-services/writing-metainfo.md
@@ -0,0 +1,247 @@
+# Writing metainfo.xml
+
+metainfo.xml is a declarative definition of an Ambari-managed service describing its content. It is the most critical file for any service definition. This section describes various key sub-sections within a metainfo.xml file.
+
+_Non-mandatory fields are described in italics._
+
+The top level fields to describe a service are as follows:
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+name | the name of the service. A name has to be unique among all the services that are included in the stack definition containing the service. | HDFS
+displayName | the display name of the service | HDFS
+version | the version of the service. name and version together uniquely identify a service. Usually, the version is the version of the service binary itself. | 2.1.0.2.0
+components | the list of components that the service is comprised of | `<check out HDFS metainfo>`
+osSpecifics | OS specific package information for the service | `<check out HDFS metainfo>`
+commandScript | service level commands may also be defined. The command is executed on a component instance that is a client | `<check out HDFS metainfo>`
+comment | a short description describing the service | Apache Hadoop Distributed File System
+requiredServices | other services that should be present on the cluster | `<check out HDFS metainfo>`
+configuration-dependencies | configuration files that are expected by the service (config files owned by other services are specified in this list) | `<check out HDFS metainfo>`
+restartRequiredAfterRackChange | Restart Required After Rack Change | true / false
+configuration-dir | Use this to specify a different config directory if not 'configuration' | -
+
+**service/components - A service contains several components. The fields associated with a component are**:
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+name | name of the component | HDFS
+displayName | display name of the component. | HDFS
+category | type of the component - MASTER, SLAVE, and CLIENT | MASTER
+commandScript | application wide commands may also be defined. The command is executed on a component instance that is a client | `<check out HDFS metainfo>`
+cardinality | allowed/expected number of instances | For example, 1-2 for MASTER, 1+ for Slave
+reassignAllowed | whether the component can be reassigned / moved to a different host. | true / false
+versionAdvertised | does the component advertise its version - used during rolling/express upgrade | true / false
+timelineAppid | This will be the component name under which the metrics from this component will be collected. | `<check out HDFS metainfo>`
+dependencies | the list of components that this component depends on | `<check out HDFS metainfo>`
+customCommands | a set of custom commands associated with the component in addition to standard commands. | RESTART_LLAP (Check out HIVE metainfo)
+
+**service/osSpecifics - OS specific package names (rpm or deb packages)**
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+osFamily | the os family for which the package is applicable | any => all<br></br>amazon2015,redhat6,debian7,ubuntu12,ubuntu14,ubuntu16
+packages | list of packages that are needed to deploy the service | `<check out HDFS metainfo>`
+package/name | name of the package (will be used by the yum/zypper/apt commands) | e.g., hadoop-lzo
+
+**service/commandScript - the script that implements service check**
+
+Field | What is it used for
+------|---------------------
+script | the relative path to the script
+scriptType | the type of the script, currently only supported type is PYTHON
+timeout | custom timeout for the command - this supersedes ambari default
+
+sample values:
+
+```xml
+<commandScript>
+ <script>scripts/service_check.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>300</timeout>
+</commandScript>
+```
+**service/component/dependencies/dependency**
+
+Field | What is it used for
+------|---------------------
+name | name of the component it depends on
+scope | cluster / host. specifies whether the dependent component<br></br>should be present in the same cluster or the same host.
+auto-deploy | whether the dependent component should be automatically deployed (see the `<auto-deploy>` element in the samples below)
+conditions | Conditions in which this dependency exists. For example, the presence of a property in a config.
+
+sample values:
+
+```xml
+<dependency>
+ <name>HDFS/ZKFC</name>
+ <scope>cluster</scope>
+ <auto-deploy>
+ <enabled>false</enabled>
+ </auto-deploy>
+ <conditions>
+ <condition xsi:type="propertyExists">
+ <configType>hdfs-site</configType>
+ <property>dfs.nameservices</property>
+ </condition>
+ </conditions>
+</dependency>
+```
+
+**service/component/commandScript - the script that implements components specific default commands (Similar to service/commandScript )**
+
+**service/component/logs - provides log search integration.**
+
+Field | What is it used for
+------|---------------------
+logId | logid of the component
+primary | whether this is the primary log id.
+
+sample values:
+
+```xml
+<log>
+ <logId>hdfs_namenode</logId>
+ <primary>true</primary>
+</log>
+```
+
+**service/component/customCommand - custom commands can be added to components.**
+
+- **name**: name of the custom command
+- **commandScript**: the details of the script that implements the custom command
+- commandScript/script: the relative path to the script
+- commandScript/scriptType: the type of the script, currently only supported type is PYTHON
+- commandScript/timeout: custom timeout for the command - this supersedes ambari default
+
+**service/component/configFiles - list of config files to be available when client config is to be downloaded (used to configure service clients that are not managed by Ambari)**
+
+- **type**: the type of file to be generated: xml, env (sh), yaml, etc.
+- **fileName**: name of the generated file
+- **dictionary**: data dictionary that contains the config properties (relevant to how ambari manages config bags internally)
+
+## Sample metainfo.xml
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HBASE</name>
+ <displayName>HBase</displayName>
+ <comment>Non-relational distributed database and centralized service for configuration management &amp;
+ synchronization
+ </comment>
+ <version>0.96.0.2.0</version>
+ <components>
+ <component>
+ <name>HBASE_MASTER</name>
+ <displayName>HBase Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <timelineAppid>HBASE</timelineAppid>
+ <dependencies>
+ <dependency>
+ <name>HDFS/HDFS_CLIENT</name>
+ <scope>host</scope>
+ <auto-deploy>
+ <enabled>true</enabled>
+ </auto-deploy>
+ </dependency>
+ <dependency>
+ <name>ZOOKEEPER/ZOOKEEPER_SERVER</name>
+ <scope>cluster</scope>
+ <auto-deploy>
+ <enabled>true</enabled>
+ <co-locate>HBASE/HBASE_MASTER</co-locate>
+ </auto-deploy>
+ </dependency>
+ </dependencies>
+ <commandScript>
+ <script>scripts/hbase_master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>1200</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/hbase_master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+
+ <component>
+ <name>HBASE_REGIONSERVER</name>
+ <displayName>RegionServer</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <timelineAppid>HBASE</timelineAppid>
+ <commandScript>
+ <script>scripts/hbase_regionserver.py</script>
+ <scriptType>PYTHON</scriptType>
+ </commandScript>
+ </component>
+
+ <component>
+ <name>HBASE_CLIENT</name>
+ <displayName>HBase Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <commandScript>
+ <script>scripts/hbase_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ </commandScript>
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>hbase-site.xml</fileName>
+ <dictionaryName>hbase-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>hbase-env.sh</fileName>
+ <dictionaryName>hbase-env</dictionaryName>
+ </configFile>
+ </configFiles>
+ </component>
+ </components>
+
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily>
+ <packages>
+ <package>
+ <name>hbase</name>
+ </package>
+ </packages>
+ </osSpecific>
+ </osSpecifics>
+
+ <commandScript>
+ <script>scripts/service_check.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>300</timeout>
+ </commandScript>
+
+ <requiredServices>
+ <service>ZOOKEEPER</service>
+ <service>HDFS</service>
+ </requiredServices>
+
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hbase-site</config-type>
+ <config-type>ranger-hbase-policymgr-ssl</config-type>
+ <config-type>ranger-hbase-security</config-type>
+ </configuration-dependencies>
+
+ </service>
+ </services>
+</metainfo>
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-design/technology-stack.md b/versioned_docs/version-2.7.5/ambari-design/technology-stack.md
new file mode 100644
index 0000000..92de946
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/technology-stack.md
@@ -0,0 +1,30 @@
+---
+sidebar_position: 1
+---
+
+# Technology Stack
+
+## Ambari Server
+
+- Server code: Java 1.7 / 1.8
+- Agent scripts: Python
+- Database: Postgres, Oracle, MySQL
+- ORM: EclipseLink
+- Security: Spring Security with remote LDAP integration and local database
+- REST server: Jersey (JAX-RS)
+- Dependency Injection: Guice
+- Unit Testing: JUnit
+- Mocks: EasyMock
+- Configuration management: Python
+
+## Ambari Web
+
+- Frontend code: JavaScript
+- Client-side MVC framework: Ember.js / AngularJS
+- Templating: Handlebars.js (integrated with Ember.js)
+- DOM manipulation: jQuery
+- Look and feel: Bootstrap 2
+- CSS preprocessor: LESS
+- Unit Testing: Mocha
+- Mocks: Sinon.js
+- Application assembler/tester: Brunch / Grunt / Gulp
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/framework-services.md b/versioned_docs/version-2.7.5/ambari-design/views/framework-services.md
new file mode 100644
index 0000000..47a639e
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/framework-services.md
@@ -0,0 +1,110 @@
+# Framework Services
+
+This section describes the framework services that are available for views.
+
+
+## ViewContext
+
+The view server-side resources have access to a [ViewContext](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewContext.java) object. The view context provides information about the current authenticated user, the view definition, the instance configuration properties, instance data and the view controller.
+
+```java
+/**
+ * The view context.
+ */
+@Inject
+ViewContext context;
+```
+
+## Instance Data
+
+The view framework exposes a way to store key/value pair "instance data". This data is scoped to a given view instance and user. Instance data is meant to be used for information such as "user prefs" or other lightweight information that supports the experience of your view application. You can access the instance data get and put methods from the [ViewContext](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewContext.java) object.
+
+Check out the **Favorite View** for an example usage of the instance data API.
+
+[https://github.com/apache/ambari/tree/trunk/ambari-views/examples/favorite-view](https://github.com/apache/ambari/tree/trunk/ambari-views/examples/favorite-view)
+
+```java
+/**
+ * Context object available to the view components to provide access to
+ * the view and instance attributes as well as run time information about
+ * the current execution context.
+ */
+public interface ViewContext {
+
+  /**
+   * Save an instance data value for the given key.
+   *
+   * @param key the key
+   * @param value the value
+   *
+   * @throws IllegalStateException if no instance is associated
+   */
+  public void putInstanceData(String key, String value);
+
+  /**
+   * Get the instance data value for the given key.
+   *
+   * @param key the key
+   *
+   * @return the instance data value; null if no instance is associated
+   */
+  public String getInstanceData(String key);
+
+}
+```
+
+## Instance Configuration Properties
+
+The instance configuration properties (set when you created your view instance) are accessible from the view context:
+
+```java
+viewContext.getProperties();
+```
+
+Configuration properties also support a set of pre-defined **variables** that are replaced when you read the property from the view context. For example, if your view requires a configuration parameter "hdfs.file.path" and that path is going to be set based on the username, when you configure the view instance, set the configuration property like so:
+
+```
+"hdfs.file.path" : "/this/is/some/path/${username}"
+```
+
+When you get this property from the view context, the `${username}` variable will be replaced automatically.
+
+```java
+viewContext.getProperties().get("hdfs.file.path"); // returns "/this/is/some/path/pramod"
+```
+
+Instance parameters support the following pre-defined variables: `${username}`, `${viewName}`, and `${instanceName}`.
+
+## Events
+
+Events are an important component of the views framework. Events allow the view to interact with the framework on lifecycle changes (i.e. "Framework Events") such as deploy, create, and destroy. Likewise, once a user has a collection of views available, eventing allows the views to communicate with other views (i.e. "View Events").
+
+### Framework Events
+
+To register to receive framework events, in the `view.xml`, specify a `<view-class>this.is.my.view-clazz</view-class>` which is a class that implements the [View](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/View.java) interface.
+
+
+
+Event | Description
+---------|-------
+onDeploy() | Called when a view is deployed.
+onCreate() | Called when a view instance is created.
+onDestroy() | Called when a view instance is destroyed.
+
+### View Events
+
+Views can pass events between views. Obtain the [ViewController](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewController.java) object that allows you to **register listeners** for view events and to **fire events** for other listeners. A view can register an event [Listener](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/events/Listener.java) (via the [ViewController](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewController.java)) for other views by **view name**, or by **view name + version**. When an [Event](http://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/events/Event.java) is fired from the source view, all registered listeners will receive the event.
+
+
+
+1. Obtain the view controller and register a listener.
+
+```java
+viewContext.getViewController().registerListener(...);
+```
+2. Fire the event. `viewContext.getViewController().fireEvent(...);`
+
+3. The framework will notify all registered listeners. The listener implementation can process the event as appropriate. `listener.notify(...)`
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/imgs/fmwk-events.jpg b/versioned_docs/version-2.7.5/ambari-design/views/imgs/fmwk-events.jpg
new file mode 100644
index 0000000..a1b4881
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/imgs/fmwk-events.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-components.jpg b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-components.jpg
new file mode 100644
index 0000000..1f95d12
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-components.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-events.jpg b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-events.jpg
new file mode 100644
index 0000000..62f0532
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-events.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-lifecycle.png b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-lifecycle.png
new file mode 100644
index 0000000..1cd7cc1
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-lifecycle.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-versions.jpg b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-versions.jpg
new file mode 100644
index 0000000..4dcd45d
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/imgs/view-versions.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/index.md b/versioned_docs/version-2.7.5/ambari-design/views/index.md
new file mode 100644
index 0000000..6e8a74a
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/index.md
@@ -0,0 +1,90 @@
+# Views
+
+:::info
+This capability is currently under development.
+:::
+
+**Ambari Views** offer a systematic way to plug-in UI capabilities to surface custom visualization, management and monitoring features in Ambari Web. A " **view**" is a way of extending Ambari that allows 3rd parties to plug in new resource types along with the APIs, providers and UI to support them. In other words, a view is an application that is deployed into the Ambari container.
+
+
+## Useful Resources
+
+Resource | Link
+---------|-------
+Views Overview | http://www.slideshare.net/hortonworks/ambari-views-overview
+Views Framework API Docs | https://github.com/apache/ambari/blob/trunk/ambari-views/docs/index.md
+Views Framework Examples | https://github.com/apache/ambari/tree/trunk/ambari-views/examples
+
+## Terminology
+
+The following section describes the basic terminology associated with views.
+
+Term | Description
+---------|-------
+View Name | The name of the view. The view name identifies the view to Ambari.
+View Version | The version of the view. A unique view name can have multiple versions deployed in Ambari.
+View Package | This is the JAR package that contains the **view definition** and all view resources (server-side resources and client-side assets) for a given view version. See [View Package](#view-package) for more information on the contents and structure of the package.
+View Definition | This defines the view name, version, resources and required/optional configuration parameters for a view. The view definition file is included in the view package. See View Definition for more information on the view definition file syntax and features.
+View Instance | A unique, configured instance of a view, based on a view definition and a specific version. See Versions and Instances for more information.
+View API | The REST API for viewing the list of deployed views and creating view instances. See View API for more information.
+Framework Services | The server-side of the view framework exposes certain services for use with your views. This includes persistence of view instance data and view eventing. See Framework Services for more information.
+
+## Components of a View
+
+A view can consist of **client-side assets** (i.e. the UI that is exposed in Ambari Web) and **server-side resources** (i.e. the classes that expose REST end points). When the view loads into Ambari Web, the view UI can use the view server-side resources as necessary to deliver the view functionality.
+
+
+
+### Client-side Assets
+
+The framework does not limit or restrict which client-side technologies a view uses. You can package client-side dependencies (such as JavaScript and CSS frameworks) with your view.
+
+### Server-side Resources
+
+A view can expose resources as REST end points to be used in conjunction with the client-side to deliver the functionality of your view application. These resources are written in Java and can be anything from a servlet to a regular REST service to an Ambari ResourceProvider (i.e. a special type of REST service that handles some REST capabilities such as partial response and pagination, provided you adhere to the Ambari ResourceProvider interface). See [Framework Services](./framework-services.md) for more information on capabilities that the framework exposes on the server-side for views.
+
+:::info
+Check out the **Weather View** as an example of a view that exposes servlet and REST endpoints.
+
+[https://github.com/apache/ambari/tree/trunk/ambari-views/examples/weather-view](https://github.com/apache/ambari/tree/trunk/ambari-views/examples/weather-view)
+:::
+
+## View Package
+
+The assets associated with a view are delivered as a JAR package. The **view definition file** must be at the root of the package. UI assets and server-side classes are served from the root. Dependent Java libraries are placed in the `WEB-INF/lib` directory.
+
+```
+view.jar
+|
+|- view.xml
+|
+|- index.html
+|
+|- <other UI assets and server-side classes>
+|
+|_ WEB-INF
+   |
+   |_ lib/*.jar
+```
+
+## Versions and Instances
+
+Multiple versions of a given view can be deployed into Ambari and multiple instances of each view can be created for each version. For example, I can have a view named FILES and deploy versions 0.1.0 and 0.2.0. I can then create instances of each version FILES{0.1.0} and FILES{0.2.0} allowing some Ambari users to have an older version of FILES (0.1.0), and other users to have the newer FILES version (0.2.0). I can also create multiple instances for each version, configuring each differently.
+
+
+
+### Instance Configuration Parameters
+
+As part of a view definition, the instance configuration parameters are specified (i.e. "these parameters are needed to configure an instance of this view"). When you create a view instance, you specify the configuration parameters specific to that instance. Since parameters are scoped to a particular view instance, you can have multiple instances of a view, each instance configured differently.
+
+Using the example above, I can create two instances of the FILES{0.2.0} version, one instance that is configured a certain way and the second that is configured differently. This allows some Ambari users to use FILES one way, and other users a different way.
+
+See [Framework Services](./framework-services.md) for more information on instance configuration properties.
+
+## View Lifecycle
+
+The lifecycle of a view is shown below. As you deploy a view and create instances of a view, server-side framework events are invoked. See [Framework Services](./framework-services.md) for more information on capabilities that the framework exposes on the server-side for views.
+
+
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/view-api.md b/versioned_docs/version-2.7.5/ambari-design/views/view-api.md
new file mode 100644
index 0000000..023db84
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/view-api.md
@@ -0,0 +1,195 @@
+# View API
+
+This section describes basic usage of the View REST API. Browse https://github.com/apache/ambari/blob/trunk/ambari-views/docs/index.md for detailed usage information and examples.
+
+## Get List of Deployed Views
+
+1. Get the list of all deployed views.
+
+```
+GET /api/v1/views
+
+200 - OK
+```
+
+2. Once you have a list of views, you can drill into a view and see the available versions.
+
+```
+GET /api/v1/views/FILES
+
+200 - OK
+```
+
+3. You can go a level deeper and see more information about that specific version for the view, such as the parameters and the archive name, and a list of all instances of the view for that specific view version.
+
+```
+GET /api/v1/views/FILES/versions/0.1.0
+
+200 - OK
+```
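+
+As a hedged illustration (credentials and hostname are examples), the same drill-down can be scripted with Python's `requests` library:
+
+```python
+# Sketch: walk all deployed views and print their versions.
+import requests
+
+AMBARI = "http://c6401.ambari.apache.org:8080"
+AUTH = ("admin", "admin")
+HEADERS = {"X-Requested-By": "ambari"}
+
+views = requests.get(AMBARI + "/api/v1/views", auth=AUTH, headers=HEADERS).json()
+for item in views.get("items", []):
+    name = item["ViewInfo"]["view_name"]
+    detail = requests.get(AMBARI + "/api/v1/views/" + name, auth=AUTH, headers=HEADERS).json()
+    for version in detail.get("versions", []):
+        print(name, version["ViewVersionInfo"]["version"])
+```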
+
+## Creating a View Instance: Files View
+
+The following example shows creating an instance of the [Files View](https://github.com/apache/ambari/tree/trunk/contrib/views/files) (view name FILES, version 0.1.0) called "MyFiles".
+
+1. Create the view instance.
+
+```
+POST /api/v1/views/FILES/versions/0.1.0/instances/MyFiles
+
+[ {
+"ViewInstanceInfo" : {
+ "properties" : {
+ "dataworker.defaultFs" : "webhdfs://your.namenode.host:50070"
+ }
+ }
+} ]
+
+201 - CREATED
+```
+
+:::info
+When creating your view instance, be sure to provide all required view instance properties, otherwise you will receive a 500 with a message explaining the properties that are required.
+:::
+
+2. Restart Ambari Server to pick up the view instance and UI resources.
+
+```bash
+ambari-server restart
+```
+
+3. Confirm the newly created view instance is available.
+
+```
+GET /api/v1/views/FILES/versions/0.1.0
+
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/FILES/versions/0.1.0/",
+ "ViewVersionInfo" : {
+ "archive" : "/var/lib/ambari-server/resources/views/work/FILES{0.1.0}",
+ "label" : "Files",
+ "masker_class" : null,
+ "parameters" : [
+ {
+ "name" : "dataworker.defaultFs",
+ "description" : "FileSystem URI",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "dataworker.username",
+ "description" : "The username (defaults to ViewContext username)",
+ "required" : false,
+ "masked" : false
+ }
+ ],
+ "version" : "0.1.0",
+ "view_name" : "FILES"
+ },
+ "permissions" : [ ],
+ "instances" : [
+ {
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/FILES/versions/0.1.0/instances/MyFiles",
+ "ViewInstanceInfo" : {
+ "instance_name" : "MyFiles",
+ "version" : "0.1.0",
+ "view_name" : "FILES"
+ }
+ }
+ ]
+}
+```
+
+Browse to the view instance directly.
+
+```
+http://c6401.ambari.apache.org:8080/views/FILES/0.1.0/MyFiles/
+
+or
+
+http://c6401.ambari.apache.org:8080/#/main/views/FILES/0.1.0/MyFiles
+```
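+
+As a hedged companion to the steps above (the payload mirrors the POST body from step 1; credentials are examples), the instance can also be created with Python's `requests` library:
+
+```python
+# Sketch: create the MyFiles instance of FILES 0.1.0 via the REST API.
+# Expects 201 - CREATED on success.
+import json
+import requests
+
+AMBARI = "http://c6401.ambari.apache.org:8080"
+body = [{
+    "ViewInstanceInfo": {
+        "properties": {
+            "dataworker.defaultFs": "webhdfs://your.namenode.host:50070"
+        }
+    }
+}]
+resp = requests.post(
+    AMBARI + "/api/v1/views/FILES/versions/0.1.0/instances/MyFiles",
+    auth=("admin", "admin"),
+    headers={"X-Requested-By": "ambari"},
+    data=json.dumps(body),
+)
+print(resp.status_code)
+```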
+
+## Creating a View Instance: Capacity Scheduler View
+
+The following example shows creating an instance of the [Capacity Scheduler View](https://github.com/apache/ambari/tree/trunk/contrib/views/capacity-scheduler) (view name CAPACITY-SCHEDULER, version 0.1.0) called "CS_1", using the label "Capacity Scheduler".
+
+* Create the view instance.
+
+```
+POST /api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/instances/CS_1
+
+[ {
+"ViewInstanceInfo" : {
+ "label" : "Capacity Scheduler",
+ "properties" : {
+ "ambari.server.url" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster",
+ "ambari.server.username" : "admin",
+ "ambari.server.password" : "admin"
+ }
+ }
+} ]
+
+201 - CREATED
+```
+
+:::info
+When creating your view instance, be sure to provide all **required** view instance properties, otherwise you will receive a 500 with a message explaining the properties that are required.
+:::
+
+* Confirm the newly created view instance is available.
+
+```
+GET /api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0
+
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/",
+ "ViewVersionInfo" : {
+ "archive" : "/var/lib/ambari-server/resources/views/work/CAPACITY-SCHEDULER{0.1.0}",
+ "label" : "Capacity Scheduler",
+ "masker_class" : null,
+ "parameters" : [
+ {
+ "name" : "ambari.server.url",
+ "description" : "Target Ambari Server REST API cluster URL (for example: http://ambari.server:8080/api/v1/clusters/c1)",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "ambari.server.username",
+ "description" : "Target Ambari administrator username (for example: admin)",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "ambari.server.password",
+ "description" : "Target Ambari administrator password (for example: admin)",
+ "required" : true,
+ "masked" : false
+ }
+ ],
+ "version" : "0.1.0",
+ "view_name" : "CAPACITY-SCHEDULER"
+ },
+ "permissions" : [ ],
+ "instances" : [
+ {
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/instances/CS_1",
+ "ViewInstanceInfo" : {
+ "instance_name" : "CS_1",
+ "version" : "0.1.0",
+ "view_name" : "CAPACITY-SCHEDULER"
+ }
+ }
+ ]
+}
+```
+* Browse to the view instance directly.
+
+```
+http://c6401.ambari.apache.org:8080/views/CAPACITY-SCHEDULER/0.1.0/CS_1/
+
+or
+
+http://c6401.ambari.apache.org:8080/#/main/views/CAPACITY-SCHEDULER/0.1.0/CS_1/
+```
diff --git a/versioned_docs/version-2.7.5/ambari-design/views/view-definition.md b/versioned_docs/version-2.7.5/ambari-design/views/view-definition.md
new file mode 100644
index 0000000..b32cc8f
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-design/views/view-definition.md
@@ -0,0 +1,210 @@
+# View Definition
+
+The following describes the syntax of the View Definition File (`view.xml`) as part of the View Package.
+
+An XML Schema Definition is available [here](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/resources/view.xsd).
+
+## `<view>`
+
+The `<view>` element is the enclosing element in the Definition File. The following table describes the elements you can include in the `<view>` element:
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes | The unique name of the view. See `<name>` for more information.
+label | Yes | The display label of the view. See `<label>` for more information.
+version | Yes | The version of the view. See `<version>` for more information.
+min-ambari-version<br></br>max-ambari-version | No | The minimum and maximum Ambari version this view can be deployed with. See `<min-ambari-version>` for more information.
+description | No | The description of the view. See `<description>` for more information.
+icon | No | The icon to display for this view. Suggested size is 32x32; it will be displayed as 8x8 and 16x16 as necessary. If this property is not set, a default view framework icon is used.
+icon64 | No | The 64x64 icon to display for this view. If this property is not set, the 32x32 sized icon will be used.
+permission | No | Defines a custom permission for this view. See `<permission>` for more information.
+parameter | No | Defines a configuration parameter that is used when creating a view instance. See `<parameter>` for more information.
+resource | No | Defines a resource that is exposed by the view. See `<resource>` for more information.
+instance | No | Defines a static instance of the view. See `<instance>` for more information.
+view-class | No| Registers a view class to receive framework events. See `<view-class>` for more information.
+validator-class | No | Registers a validator class to receive framework events. See `<validator-class>` for more information.
+
+## `<name>`
+
+The unique name of the view. Example:
+
+```xml
+<name>MY_COOL_VIEW</name>
+```
+
+## `<label>`
+
+The label of the view. Example:
+
+```xml
+<label>My Cool View</label>
+```
+
+## `<version>`
+
+The version of the view. Example:
+
+```xml
+<version>0.1.0</version>
+```
+
+## `<min-ambari-version> <max-ambari-version>`
+
+The minimum and maximum version of Ambari server that can run this view. Example:
+
+```xml
+<min-ambari-version>1.7.0</min-ambari-version>
+<min-ambari-version>1.7.*</min-ambari-version>
+<max-ambari-version>2.0</max-ambari-version>
+```
+
+## `<description>`
+
+The description of the view. Example:
+
+```xml
+<description>This view is used to display information.</description>
+
+```
+
+## `<parameter>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes | The name of the configuration parameter.
+description | Yes | The description of the configuration parameter.
+label | No | The user friendly name of the configuration parameter (used in the Ambari Administration Interface UI).
+placeholder| No | The placeholder value for the configuration parameter (used in the Ambari Administration Interface UI).
+default-value | No| The default value for the configuration parameter (used in the Ambari Administration Interface UI).
+required | Yes |If true, the configuration parameter is required in order to create a view instance.
+masked | No | Indicates this parameter value is to be "masked" in the Ambari Web UI (i.e. not shown in the clear). Omitting this element defaults to not-masked. Otherwise, if true, the parameter value will be "masked" in the Web UI.
+
+```xml
+<parameter>
+ <name>someParameter</name>
+ <description>Some parameter this is used to configure an instance of this view</description>
+ <required>false</required>
+</parameter>
+```
+
+```xml
+<parameter>
+ <name>name.label.descr.default.place</name>
+ <description>Name, label, description, default and placeholder</description>
+ <label>NameLabelDescDefaultPlace</label>
+ <placeholder>this is placeholder text but you should see default</placeholder>
+ <default-value>youshouldseethisdefault</default-value>
+ <required>true</required>
+</parameter>
+```
+
+See the [Property View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/property-view/docs/index.md) to see the different parameter options in use.
+
+## `<permission>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes| The unique name of the permission.
+description| Yes| The description of the permission.
+
+```xml
+<permission>
+ <name>SOME_CUSTOM_PERM</name>
+ <description>A custom permission for this view</description>
+</permission>
+<permission>
+ <name>SOME_OTHER_PERM</name>
+ <description>Another custom permission for this view</description>
+</permission>
+```
+
+## `<resource>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes| The name of the resource. This will be the resource endpoint name of the view instance.
+plural-name | No | The plural name of the resource.
+service-class | No | The JAX-RS annotated resource service class.
+id-property | No | The resource identifier.
+provider-class | No | The Ambari ResourceProvider resource class.
+resource-class | No | The JavaBean resource class.
+
+```xml
+<resource>
+ <name>calculator</name>
+ <service-class>org.apache.ambari.view.proxy.CalculatorResource</service-class>
+</resource>
+```
+
+See the [Calculator View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/calculator-view/docs/index.md) to see a REST service endpoint view implementation.
+
+```xml
+<resource>
+ <name>city</name>
+ <plural-name>cities</plural-name>
+ <id-property>id</id-property>
+ <resource-class>org.apache.ambari.view.weather.CityResource</resource-class>
+ <provider-class>org.apache.ambari.view.weather.CityResourceProvider</provider-class>
+ <service-class>org.apache.ambari.view.weather.CityService</service-class>
+</resource>
+```
+
+See the [Weather View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/weather-view/docs/index.md) to see an Ambari ResourceProvider view implementation.
+
+## `<instance>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes |The unique name of the view instance.
+label | No |The display label of the view instance. If not set, the view definition `<label>` is used.
+description| No |The description of the view instance. If not set, the view definition `<description>` is used.
+visible | No |If true, the view instance shows up in the user's view instance list.
+icon | No |Overrides the view icon for this specific view instance.
+icon64 | No |Overrides the view icon64 for this specific view instance.
+property | No |Specifies any necessary configuration parameters for the view instance. See `<property>` for more information.
+
+```xml
+<instance>
+ <name>US_WEST</name>
+ <property>
+ <key>cities</key>
+ <value>Palo Alto, US;Los Angeles, US;Portland, US;Seattle, US</value>
+ </property>
+ <property>
+ <key>units</key>
+ <value>imperial</value>
+ </property>
+</instance>
+```
+
+## `<property>`
+
+Element | Required | Description
+--------|---------|---------------
+key |Yes |The property key (for the configuration parameter to set).
+value |Yes |The property value (for the configuration parameter to set).
+
+```xml
+<property>
+ <key>units</key>
+ <value>imperial</value>
+</property>
+```
+
+## `<view-class>`
+
+Registers a view class to receive framework events. The view class must implement the [View](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/View.java) interface.
+
+```xml
+<view-class>this.is.my.viewclazz</view-class>
+```
+
+## `<validator-class>`
+
+Registers a validator class to receive property and instance validation requests. The validator class must implement the [Validator](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/validation/Validator.java) interface.
+
+```xml
+<validator-class>org.apache.ambari.view.property.MyValidator</validator-class>
+```
+
+See [Property Validator View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/property-validator-view/docs/index.md) to see view property and instance validation in use.
diff --git a/versioned_docs/version-2.7.5/ambari-dev/admin-view-ambari-admin-development.md b/versioned_docs/version-2.7.5/ambari-dev/admin-view-ambari-admin-development.md
new file mode 100644
index 0000000..e8b230c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/admin-view-ambari-admin-development.md
@@ -0,0 +1,39 @@
+# Admin View (ambari-admin) Development
+
+## Frontend Development
+
+Follow the instructions here to ease frontend development for Admin View (ambari-admin module):
+
+1. Follow the Quick Start Guide to install and start Ambari Server (cluster need not be deployed).
+2. Follow the "Frontend Development" section in Quick Start Guide to check out the Ambari source using git. This makes the entire Ambari source available via /vagrant/ambari from the Vagrant VM.
+3. From the Ambari Server host:
+
+ ```bash
+ cd /var/lib/ambari-server/resources/views/work  # if this directory does not exist, ambari-server has not been started; run "ambari-server start" first
+ mv ADMIN_VIEW\{2.5.0.0\} /tmp
+ ln -s /vagrant/ambari/ambari-admin/src/main/resources/ui/admin-web/dist ADMIN_VIEW\{2.5.0.0\}
+ cp /tmp/ADMIN_VIEW\{2.5.0.0\}/view.xml ADMIN_VIEW\{2.5.0.0\}/
+ ambari-server restart
+ ```
+
+4. Now you can change the source code for Admin View and run gulp locally, and the changes are automatically reflected on the server.
+
+
+## Functional Tests
+
+To run end-to-end functional tests in the browser, execute:
+
+```bash
+npm run update-webdriver
+npm start    # starts an HTTP server on port 8000
+```
+
+Open another terminal at the same path and execute `npm run protractor` (runs the e2e tests in the browser; this library works on top of the Selenium jar).
+
+## Unit Tests
+
+To run unit tests:
+
+```bash
+cd /ambari/ambari-admin/src/main/resources/ui/admin-web
+npm run test-single-run    # uses the PhantomJS headless browser (same as the ambari-web unit tests)
+```
+
+Note:
+"npm test" command starts karma server at [http://localhost:9876/](http://localhost:9876/) and runs unit tests. This server remain up, autoreloads any changes in the test code and reruns the tests. This is useful while developing unit tests.
diff --git a/versioned_docs/version-2.7.5/ambari-dev/ambari-code-layout.md b/versioned_docs/version-2.7.5/ambari-dev/ambari-code-layout.md
new file mode 100644
index 0000000..6adf4ed
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/ambari-code-layout.md
@@ -0,0 +1,102 @@
+# Ambari Code Layout
+
+_Ambari code checkout and build instructions are available in Ambari Development page._
+_Ambari design and architecture is detailed in [Ambari Design](../ambari-design/index.md) page._
+_Understanding the architecture of Ambari is helpful in navigating code easily._
+
+Ambari's source has the following layout
+
+```
+ambari/
+ ambari-agent/
+ ambari-common/
+ ambari-project/
+ ambari-server/
+ ambari-views/
+ ambari-web/
+ contrib/
+ docs/
+```
+
+Major components of Ambari reside in their own sub-folders under the root folder, to maintain clean separation of code.
+
+Folder | Components or Purpose
+------|---------------------
+ambari-server | Code for the main Ambari server which manages Hadoop through the agents installed on each node.
+ambari-agent | Code for the Ambari agents which run on each node that the server above manages.
+ambari-web | Code for Ambari Web UI which interacts with the Ambari server above.
+ambari-views | Code for Ambari Views, the framework for extending the Ambari Web UI.
+ambari-common | Any common code between Ambari Server and Agents.
+contrib | Code for any custom contributions Ambari makes to other third party software or libraries.
+docs | Basic Ambari documentation, including the Ambari REST API.
+
+Ambari Server and Agents interact with each other via an internal JSON protocol.
+Ambari Web UI interacts with Ambari Server through the documented Ambari REST APIs.
+
+## Ambari-Server
+
+## Ambari-Agent
+
+## Ambari-Views
+
+## Ambari-Web
+
+The Ambari Web UI is a purely browser side JavaScript application based on the [Ember](http://emberjs.com/) JavaScript framework. A good understanding of [Ember](http://emberjs.com/) is necessary to easily understand the code and its layout.
+
+Being a pure JavaScript application, all UI is rendered locally in the browser, with data coming from the Ambari REST APIs provided by the Ambari Server.
+
+```
+ambari-web/
+ app/
+ config.coffee
+ package.json
+ pom.xml
+ test/
+ vendor/
+```
+
+Folder | Description
+------|---------------------
+app/ |The main application code. This contains Ember's views, templates, controllers, models, routes, etc. for rendering Ambari UI
+config.coffee |[Brunch](http://brunch.io/) application builder configuration file
+package.json |[npm](https://npmjs.org/) package manager configuration file
+test/ |Javascript test files testing functionality written in app/ folder
+vendor/ |Third party javascript libraries and stylesheets used. Full list of third party libraries is documented in /ambari/ambari-web/app/assets/licenses/NOTICE.txt
+
+Developers mainly work on JavaScript and other files in the app/ folder. Once that is done, the final JavaScript is built using Brunch (an HTML5 application assembler based on node.js) into the /ambari/ambari-web/public/ folder. This folder contains the index.html that bootstraps the Ambari web application.
+
+While working, developers should use the
+
+```bash
+brunch w
+```
+
+command to launch Brunch in watch mode, where it re-generates the final application on any change. Similarly,
+
+```bash
+brunch watch --server   # or use the shorthand: brunch w -s
+```
+
+launches an HTTP server at http://localhost:3333 serving the final application. This is helpful for viewing the UI with mock data, without deploying the entire Ambari Server.
+
+Note: see "[Coding Guidelines for Ambari](./coding-guidelines-for-ambari.md)" for more details on building and running Ambari Web locally.
+
+**ambari-web/app**
+
+Since the ambari-web/app/ folder is where developers spend the majority of their time, the major files and folders and their purposes are listed below.
+
+Folder or File | Description
+------|---------------------
+assets/ | Mock data under assets/data. Static files served via assets/font and assets/img.
+controllers/ | The C in MVC. Ember controllers for the main application controllers/main, installer controllers/wizard, and common controllers controllers/global
+data/ | Meta data for the application (UI metadata, server data metadata, etc.)
+mappers/ | Classes which map server side JSON data structures into client side Ember models.
+models/ | The M in MVC. [Ember Data](http://emberjs.com/guides/models/) models used. Clusters, Services, Hosts, Alerts, etc. models are defined here
+routes/ | [Ember routes](http://emberjs.com/guides/routing/) defining the various page redirections in the application. main.js contains the main application routes. installer.js contains installer routes. Others are routings in various wizards etc.
+styles/ | CSS stylesheets represented in the [less](http://lesscss.org/) format. This is compiled by Brunch into the ambari-web/public/stylesheets/app.css
+views/ | The V in MVC. Contains all the Ember views of the application. Main application views under views/main, installer views under views/installer, and common views under views/commons
+templates/ | The HTML templates used by the views above. Generally a view will have a template file. Sometimes views define the template content in themselves as strings
+app.js | The main Ember application
+config.js | Main configuration file for the JavaScript application. Developers can keep the application in test mode using the App.testMode property, etc.
+
+If a developer adds, removes, or renames a model, view, controller, template, or route, they should update the corresponding entry in the models.js, views.js, controllers.js, templates.js, or routes.js file.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-dev/apache-ambari-jira.md b/versioned_docs/version-2.7.5/ambari-dev/apache-ambari-jira.md
new file mode 100644
index 0000000..8097868
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/apache-ambari-jira.md
@@ -0,0 +1,36 @@
+# Apache Ambari JIRA
+
+The following page describes the [Apache Ambari JIRA](https://issues.apache.org/jira/browse/AMBARI) components for tasks, bugs, and improvements across the core project and contributions.
+
+## Components
+
+Component | Description
+------|---------------------
+alerts | JIRAs related to Ambari Alerts system.
+ambari-admin | New component specifically for Ambari Admin.
+ambari-agent | JIRAs related to the Ambari Agent.
+ambari-client | JIRAs related to the Ambari Client.
+ambari-metrics| JIRAs related to Ambari Metrics system.
+ambari-server | JIRAs related to the Ambari Server.
+ambari-shell | New component specifically for Ambari Shell.
+ambari-views | JIRAs related to the [Ambari Views framework](../ambari-design/views/index.md). Specific Views that are built on the framework will be handled with labels.
+ambari-web | New component specifically for Ambari Web.
+blueprints | JIRAs related to [Ambari Blueprints](../ambari-design/blueprints/index.md).
+contrib | JIRAs related to contributions under "contrib", such as Ambari SCOM
+documentation | JIRAs related to project documentation including the wiki.
+infra | JIRAs related to project infrastructure, including builds, release mechanics, and automation
+security | JIRAs related to Ambari security features, including Kerberos.
+site | JIRAs related to the project site http://ambari.apache.org/
+stacks | JIRAs related to Ambari Stacks.
+test | JIRAs related to unit tests and test automation
+
+## Use of Labels
+
+In certain cases, the component listed above might be "too broad" and you want to designate JIRAs to a specific area of that component. To handle these scenarios, use a combination of component + labels. Some examples:
+
+Feature Area | Description|Component|Label
+-------------|------------|----------|---------
+HDP Stack | These are specific Stack implementations for HDP. |stacks | HDP
+BigTop | This is a specific Stack implementation for BigTop. | stacks | BigTop
+Files View | This is a specific view implementation for Files. | ambari-views | Files
+Ambari SCOM | This is a specific contribution of a Management Pack for Microsoft System Center. | contrib |Ambari-SCOM
diff --git a/versioned_docs/version-2.7.5/ambari-dev/code-review-guidelines.md b/versioned_docs/version-2.7.5/ambari-dev/code-review-guidelines.md
new file mode 100644
index 0000000..8f079a6
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/code-review-guidelines.md
@@ -0,0 +1,20 @@
+# Code Review Guidelines
+
+Please refer to [How to Contribute](./how-to-contribute.md) for instructions on how to submit a code review to Github.
+
+**What makes a good code review?**
+
+- Authors should annotate source code before the review. This makes it easier for devs reviewing your code and may even help you spot bugs before they do.
+- Send small code-reviews if possible. Reviewing more than 400 lines per hour diminishes our ability to find defects.
+- Reviewing code for more than one hour also reduces our ability to find bugs.
+- If possible, try to break up large reviews into separate but functional stages. If you need to temporarily comment out unit tests, do so. Sending gigantic patches means your review will take longer since reviewers need to block out more time to go through it, and you may spend more time revving iterations and rebasing.
+
+We have a global community of committers, so please be mindful that you should **wait at least 24 hours** before merging your pull request even though you may already have the necessary +1.
+
+This encourages others to take an interest in your pull request and helps us find more bugs (it's ok to slow down in order to speed up).
+
+**Always include at least two committers who are familiar with that code area.**
+
+If you want to subscribe to code reviews for a particular area, [feel free to edit this section](https://cwiki.apache.org/confluence/display/AMBARI/Code+Review+Guidelines).
+
+
diff --git a/versioned_docs/version-2.7.5/ambari-dev/coding-guidelines-for-ambari.md b/versioned_docs/version-2.7.5/ambari-dev/coding-guidelines-for-ambari.md
new file mode 100644
index 0000000..48798ec
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/coding-guidelines-for-ambari.md
@@ -0,0 +1,189 @@
+# Coding Guidelines for Ambari
+
+## Ambari Web Frontend Development Environment
+
+### Application Assembler: Brunch
+
+* Brunch was used to create the application skeleton for Ambari Web.
+
+* Brunch builds and deploys code automatically in the background as you modify the source files. This lets you break up the application into a number of JS files for code organization and reuse without worrying about development turnaround or runtime load performance.
+
+* Brunch lets you run a Node.js-based web server with a single command so that you can easily run Ambari Web without setting up Ambari Server (you still need to run Ambari Server for true end-to-end testing).
+
+To check out Ambari Web from the Github repository and run it:
+
+* Install Node.js from [http://nodejs.org](http://nodejs.org)
+* Execute the following:
+
+```bash
+git clone https://git-wip-us.apache.org/repos/asf/ambari.git
+cd ambari/ambari-web
+sudo npm install -g brunch@1.7.20
+rm -rf node_modules public
+npm install
+brunch build
+```
+
+_Note: if you receive a "gyp + xcodebuild" error when running "npm install", confirm you have Xcode CLI tools installed (Xcode > Preferences > Downloads)_
+_Note: if you receive "Error: EMFILE, open" errors when running "brunch build", increase the ulimit for file descriptors (for example, "ulimit -n 10000")_
+
+To run the web server in isolation, without Ambari Server:
+
+```
+brunch watch --server   # or use the shorthand: brunch w -s
+```
+
+The above runs Ambari Web with a local test server at localhost:3333. The login/password is admin/admin.
+
+All Ambari front-end developers are highly encouraged to use PhpStorm by JetBrains. JetBrains has kindly granted Apache Ambari an open-source license for PhpStorm and IntelliJ. These products are available to Ambari committers (if you are an Ambari committer, email [private@ambari.apache.org](mailto:private@ambari.apache.org) to request license keys). You can also use Eclipse if that is your preference.
+
+* IDE Plugins
+
+Go to Preferences -> Plugins -> Browse repositories and install the "Node.js" and "Handlebars" plugins.
+
+### Coding Conventions
+
+JavaScript, Handlebars, and LESS files should all be formatted with the IDE to maintain consistency.
+
+Also, the IDE will give warnings in the editor about implicit globals, etc. Fix these warnings before submitting patches.
+
+We will use all default settings for Code Style in the IDE, except for the following:
+
+```
+Go to Preferences
+Code Style->General
+Line separator (for new files): Unix
+Make sure "Use tab character" is NOT checked
+Tab size: 2
+Indent: 2
+Continuation indent: 2
+Code Style->JavaScript:
+Tabs and Indents
+Make sure "use tab character" is NOT checked
+Set Tab size, Indent, and Continuation indent to "2".
+
+Spaces->Other
+Turn on "After name-value property separator ':'
+```
+
+In general, the following conventions should be followed for all JavaScript code: http://javascript.crockford.com/code.html
+
+Exceptions to the rule from the above:
+
+* We use 2 spaces instead of 4.
+
+* Variable Declarations:
+"It is preferred that each variable be given its own line and comment. They should be listed in alphabetical order."
+Comment only where it makes sense; there is no need to sort alphabetically.
+
+* "JavaScript does not have block scope, so defining variables in blocks can confuse programmers who are experienced with other C family languages. Define all variables at the top of the function." - This does not need to be followed.
+
+### Java Import Order
+
+Some IDEs define their default import order differently and this can cause a lot of problems when creating patches and merging commits to different branches. The following are the checkstyle rules which are applied while executing the test phase of the build. Your IDE of choice should be updated to match these settings:
+
+* The use of the wild card character, '*', should be avoided and all imports should be explicitly stated.
+
+* The following order should be used for all import statements:
+ - java
+ - javax
+ - org
+ - com
+ - other
+
+### UI Unit Tests
+
+All patches must be accompanied by unit tests ensuring good coverage. When unit tests are not applicable (e.g., stylistic or layout changes), you must explicitly state in the JIRA that unit tests are not applicable.
+
+Unit tests are written using Mocha and run with the PhantomJS headless browser.
+
+To run unit tests for ambari-web, run:
+
+```bash
+cd ambari-web
+mvn test
+```
+
+## Ambari Backend Development
+
+**The following points are borrowed from the Hadoop wiki:**
+
+* All public classes and methods should have informative Javadoc comments.
+
+* Do not use @author tags.
+
+* Code must be formatted according to Sun's conventions, with one exception:
+  - Indent two spaces per level, not four.
+
+* Contributions must pass existing unit tests.
+
+* The code changes must be accompanied by unit tests. In cases where unit tests are not possible or don't make sense, an explanation should be provided on the JIRA.
+
+* New unit tests should be provided to demonstrate bugs and fixes. JUnit (junit4) is our test framework:
+  - You must implement a class that uses @Test annotations for all test methods.
+
+  - Define methods within your class whose names begin with test, and call JUnit's many assert methods to verify conditions. Please add meaningful messages to the assert statements to facilitate diagnostics.
+
+* By default, do not let tests write any temporary files to /tmp. Instead, the tests should write to the location specified by the test.build.data system property.
+
+* Logging levels should conform to Log4j levels
+* Use slf4j instead of commons logging as the logging facade.
+
+* Logger name should be the class name as far as possible.
+
+
+**Unit tests**
+
+* Developers should always run the full unit test suite before submitting their patch for code review and before committing to Apache. From the top-level directory, run:
+
+```bash
+mvn clean test
+```
+
+Sometimes it is useful to run unit tests just for the feature you are working on (e.g., Kerberos, Rolling/Express Upgrade, Stack Inheritance, Alerts, etc.). For this purpose, you can run unit tests with a given profile.
+
+The profiles run all test classes/cases annotated with a given Category. For example:
+
+```java
+@Category({ category.AlertTest.class})
+```
+
+To run one of the profiles, look at the available profile names in the top-level pom.xml. For example:
+
+```bash
+mvn clean test -P AlertTests # Other options are AmbariUpgradeTests, BlueprintTests, KerberosTests, MetricsTests, StackUpgradeTests
+```
+
+
+After you're done testing just that suite, **you should run a full unit test using "mvn clean test".**
+
+* [http://wiki.apache.org/hadoop/HowToDevelopUnitTests](http://wiki.apache.org/hadoop/HowToDevelopUnitTests)
+* The tests should be named `*Test.java`
+* **Unit testing with databases**
+  - We should use JavaDB as the in-memory database for unit tests. The database layer/ORM should be configurable to use an in-memory database. Two things are important for the database in testing:
+
+ - Ability to bootstrap the db with any initial data dynamically.
+
+  - Ability to modify the database state out of band to simulate certain test cases. One way to achieve this could be to implement a database access layer used only for testing purposes, but that might cause inconsistency in ORM objects, which needs to be figured out.
+
+* **Stub Heartbeat handler**
+  - For testing purposes it may be a good idea to implement a stub heartbeat handler that only simulates interaction with the agents and doesn't interact with any real agent. It would expose an action queue similar to the real heartbeat handler, but would not send anything anywhere; it would just periodically remove actions from the queue. It would also expose an interface to inject artificial responses for each of the actions, which can be used in tests to simulate agent responses, and an interface to inject node state to simulate node failures or lost heartbeats. The Guice framework can be used to inject the stub heartbeat handler in testing.
+
+* **EasyMock**
+  - EasyMock is our mocking framework of choice. It has been successfully used in Hadoop. An example of a scenario where EasyMock is apt: suppose we are testing deployment of a service but want to bypass a service dependency, or want to inject an artificial component dependency; the dependency tracker object can be mocked to simulate the desired dependency scenario. Ambari Server is by and large a state-driven system. EasyMock can be used to bypass state changes and test components narrowly. However, it is usually better to use the in-memory database to simulate state changes, and to use EasyMock only when certain behavior cannot easily be simulated. For example, consider testing the API implementation that gets the status of a transaction: it can be tested by mocking the action manager object, or alternatively by setting the state in the in-memory database. In this case, the latter is the more comprehensive test.
+  Avoid static methods and objects; EasyMock cannot mock these. Use configuration or dependency injection to initialize static objects if they are likely to be mocked.
+  EasyMock also cannot mock final classes, so those should be avoided for classes likely to be mocked. See [http://www.easymock.org/EasyMock3_1_Documentation.html](http://www.easymock.org/EasyMock3_1_Documentation.html) for documentation.
+
+**Guice**
+
+Guice is a dependency injection framework and can be used to dynamically inject pluggable components.
+Please refer to http://code.google.com/p/google-guice/wiki/Motivation. We can use Guice in the following scenarios:
+
+* Pluggable manifest generator: it may be desirable to have different implementations of the manifest generator for non-puppet setups or for testing.
+
+* Injecting in-memory database (if possible) instead of a real persistent database for testing. It needs to be investigated how Guice fits with the ORM tools.
+
+* Injecting a stub implementation of heartbeat handler.
+
+* It may be a good idea to bind API implementations for management or monitoring via Guice. This will allow testing of the APIs and the server independent of the implementation via mock implementations. For example, the management API implementation in the coordinator can be mocked so that API definitions and URIs can be independently tested.
+
+* Injecting mock objects for dependency tracker, or stage planner for testing.
diff --git a/versioned_docs/version-2.7.5/ambari-dev/developer-tools.md b/versioned_docs/version-2.7.5/ambari-dev/developer-tools.md
new file mode 100644
index 0000000..e97b47a
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/developer-tools.md
@@ -0,0 +1,79 @@
+# Developer Tools
+
+## Diff and Merge tools
+
+Araxis has been kind enough to give us free licenses for Araxis Merge if you work on open source. Just submit a request at http://www.araxis.com/buy/open-source
+
+Download from http://www.araxis.com/url/merge/download.uri
+
+You will be prompted for your serial number when you run the application for the first time. To enter a new serial number into an existing installation, click the Re-Register... button in the About window.
+
+### Integrating Araxis to Git as your Diff and Merge tool
+
+After installing Araxis Merge,
+
+On Mac OS X,
+
+- Drag Araxis across to your ~/Applications folder as normal
+- Copy the contents of the Utilities folder to (e.g.) /usr/local/araxis/bin
+- Add the path to your startup script: export PATH="$PATH:/usr/local/araxis/bin"
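+
+For example, a sketch of the shell setup (assuming the Utilities were copied to /usr/local/araxis/bin as above):
+
+```bash
+# add the Araxis command-line tools to the PATH, e.g. in ~/.bashrc or ~/.profile
+export PATH="$PATH:/usr/local/araxis/bin"
+```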
+
+In your .gitconfig file (tested on Mac OS X),
+
+```
+[diff]
+ tool = araxis
+[difftool]
+ prompt = false
+[merge]
+ tool = araxis_merge
+[mergetool "araxis_merge"]
+ cmd = araxisgitmerge "$PWD/$REMOTE" "$PWD/$BASE" "$PWD/$LOCAL" "$PWD/$MERGED"
+```
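+
+With this configuration in place, a sketch of how the tools are invoked (the file path is hypothetical):
+
+```bash
+git difftool HEAD~1 -- ambari-web/app/app.js  # open a diff against the previous commit in Araxis
+git mergetool                                 # resolve merge conflicts with a 3-way merge in Araxis
+```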
+
+## Git Best Practices
+
+This is just a personal preference, but it may be easier to create one Git branch per Jira/feature. For example:
+
+```bash
+git checkout trunk
+git checkout -b AMBARI12345 # create the branch and switch to it
+git branch --set-upstream-to=origin/trunk AMBARI12345 # set the upstream so that git pull --rebase will get the HEAD from trunk
+# Do work,
+git commit -m "AMBARI-12345. Foo (username)"
+# Do more work
+git commit --amend # edit the last commit
+git pull --rebase
+
+# If conflicts are detected, then run
+git mergetool # should be easy if you have Araxis Merge setup to do a 3-way merge
+git rebase --continue
+git push origin HEAD:trunk
+```
+
+## Useful Git Commands
+
+In your .gitconfig file,
+
+```
+[alias]
+ st = status
+ ci = commit
+ br = branch
+ co = checkout
+ dc = diff --cached
+ dtc = difftool --cached
+ lg = log -p
+ lsd = log --graph --decorate --pretty=oneline --abbrev-commit --all
+ slast = show --stat --oneline HEAD
+ pshow = show --no-prefix --format=format:%H --full-index
+ pconfig = config --list
+```
+
+Also, in your ~/.bashrc or ~/.profile file,
+
+```bash
+alias branchshow='for k in `git branch|perl -pe s/^..//`;do echo -e `git show --pretty=format:"%Cgreen%ci %Cblue%cr%Creset" $k|head -n 1`\\t$k;done|sort'
+```
+
+This command will show all of your branches sorted by the last commit times, which is useful if you develop one feature per branch.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-dev/development-in-docker.md b/versioned_docs/version-2.7.5/ambari-dev/development-in-docker.md
new file mode 100644
index 0000000..1c8c14e
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/development-in-docker.md
@@ -0,0 +1,92 @@
+# Development in Docker
+
+## Overview
+
+This page describes how to develop, build and test Ambari on Docker.
+
+In order to build Ambari there are quite a few steps to execute, and it can be a bit cumbersome. You can build an environment in Docker and be good to go!
+
+This is NOT meant for running production-level Ambari in Docker (though you can run Ambari and deploy Hadoop in a single Docker container for testing purposes).
+
+
+
+
+
+(This applies not only to Jenkins slaves; think of it as your laptop, too.)
+
+First, we will make a Docker image that has all third party libraries Ambari requires.
+
+Second, prepare your code on the Docker host machine. It can be trunk or a branch, your in-development code, or code with a patch applied. Note that your code does not reside inside the Docker container, but on the Docker host; we link it into the container with a Docker volume (like a mount).
+
+And you are ready to go!
+
+### Source code
+
+This code has been migrated to Ambari trunk.
+
+https://github.com/apache/ambari/tree/trunk/dev-support/docker
+
+## Requirements
+
+There are a few system requirements if you want to play with this document.
+
+- Docker https://docs.docker.com/#installation-guides
+
+## Create Docker Image
+
+First things first: we have to build a Docker image for this solution. This will set up libraries, including ones from yum, as well as Maven dependencies. In my environment (a CentOS 6.5 VM with 8GB of RAM and 4 CPUs) this takes about 30 minutes. The good news is that this is a one-time step.
+
+```bash
+git clone https://github.com/apache/ambari.git
+cd ambari
+docker build -t ambari/build ./dev-support/docker/docker
+```
+
+This is going to build an image named "ambari/build" from the configuration files under ./dev-support/docker/docker.
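+
+To confirm that the image was created, a quick optional check:
+
+```bash
+docker images ambari/build
+```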
+
+## Unit Test
+
+For example, our unit test Jenkins job on trunk runs on Docker. If you want to replicate the environment, read this section.
+
+The basic command is:
+
+```bash
+cd {ambari_root}
+docker run --privileged -h node1.mydomain.com -v $(pwd):/tmp/ambari ambari/build /tmp/ambari/dev-support/docker/docker/bin/ambaribuild.py test -b
+```
+
+- 'docker run' is the command to run a container from an image; here, the image is 'ambari/build'.
+- -h sets a host name in the container.
+- -v mounts your Ambari code on the host into the container under /tmp/ambari. Make sure you are at the Ambari root directory.
+- ambaribuild.py runs some scripts to eventually run 'mvn test' for Ambari.
+- The -b option rebuilds the entire source tree; if omitted, the tests run against the tree as it is on your host.
+
+## Deploy Hadoop
+
+You want to run Ambari and Hadoop to test the improvements you have just coded on your host. Here is the way!
+
+
+```bash
+cd {ambari_root}
+docker run --privileged -t -p 80:80 -p 5005:5005 -p 8080:8080 -h node1.mydomain.com --name ambari1 -v $(pwd):/tmp/ambari ambari/build /tmp/ambari-build-docker/bin/ambaribuild.py deploy -b
+
+# once you are done
+docker kill ambari1 && docker rm ambari1
+```
+
+- --privileged is important, as ambari-server accesses /proc/??/exe
+- -p 80:80 ensures you can access the web UI from your host.
+- -p 5005:5005 exposes the Java debug port.
+- 'deploy' builds, installs the RPMs, starts ambari-server and ambari-agent, and deploys Hadoop through a Blueprint.
+
+You can take a look at https://github.com/apache/ambari/tree/trunk/dev-support/docker/docker/blueprints to see what is actually deployed.
+
+There are a few other parameters you can play with:
+
+```bash
+cd {ambari_root}
+docker run --privileged -t -p 80:80 -p 5005:5005 -p 8080:8080 -h node1.mydomain.com --name ambari1 -v ${AMBARI_SRC:-$(pwd)}:/tmp/ambari ambari/build /tmp/ambari-build-docker/bin/ambaribuild.py [test|server|agent|deploy] [-b] [-s [HDP|BIGTOP|PHD]]
+```
+
+- test: mvn test
+- server: install and run ambari-server
+- agent: install and run ambari-server and ambari-agent
+- deploy: install and run ambari-server and ambari-agent, and deploy a Hadoop cluster
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-dev/development-process-for-new-major-features.md b/versioned_docs/version-2.7.5/ambari-dev/development-process-for-new-major-features.md
new file mode 100644
index 0000000..1bf5e5c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/development-process-for-new-major-features.md
@@ -0,0 +1,163 @@
+# Development Process for New Major Features
+
+
+## Goals
+
+* Make it clear to the community what new feature development is happening at a high level
+* Make it easier to correlate features with JIRAs
+* Make it easier to track progress for features in development
+* Make it easier to understand estimated release schedule for features in development
+
+## Process
+
+* Create a JIRA of type "Epic" for the new feature in [Apache Ambari JIRA](https://issues.apache.org/jira/browse/AMBARI)
+* Add the feature to the [Features + Roadmap](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30755705) wiki and link it to the Epic created
+* The Epic should contain a high-level description that is easy to understand
+* The Epic should also contain the initial, detailed design (this can be in the form of a shared Google Doc for ease of collaboration, a Word doc, a PDF, etc.)
+* Once the initial design is posted, announce it to the dev mailing list to elicit feedback (Subject: [DISCUSS] _Epic Name_. Be sure to include a link to the Epic JIRA in the body). It is recommended to ask for review feedback to be given by a certain date so that the review process does not drag on.
+
+* Iterate on the design based on community feedback. Incorporate multiple review cycles as needed.
+
+* Once the design is finalized, break it down into Tasks that are linked to the Epic
+* (Nice to have) Once the Tasks are defined, schedule them into sprints using the Agile Board so that it's easy to see who is working on what/when, what tasks remain but unassigned so the community can pick up work from the backlog, etc.
+
+## Feature Branches
+
+The use of feature branches allows large, potentially destabilizing changes to be made without affecting the stability of the trunk.
+
+## Feature Flags
+
+* Sometimes, we want to give users the ability to experiment with a new feature, but not expose it as a general feature since it has not gone through rigorous testing. In other cases, we want to provide an escape hatch for certain edge-case scenarios that we may not want to expose in general, because using the escape hatch is potentially dangerous and should be reserved for special occasions. For these purposes, Ambari has a notion of **feature flags**. Make use of feature flags when adding new features that fall under these categories. [Feature Flags](./feature-flags.md) has more details on this.
+
+## Contribution Flow
+
+[https://docs.google.com/document/d/1hz7qjGKkNeckMibEs67ZmAa2kxjie0zkG6H_IiC2RgA/edit?pli=1](https://docs.google.com/document/d/1hz7qjGKkNeckMibEs67ZmAa2kxjie0zkG6H_IiC2RgA/edit?pli=1)
+
+## Git Feature Branches
+
+
+The Git feature branch workflow is a simple, yet powerful way to develop new features in an encapsulated environment while at the same time fostering collaboration within the community. The idea is to create short-lived branches where new development takes place, eventually merging the completed feature branch back into `trunk`. A short-lived branch could mean anywhere from several days to several months depending on the extent of the feature and how often the branch is merged back into `trunk`.
+
+Feature branches are also useful for changes which are not necessarily considered to be new features. They can be for proof-of-concept changes or architectural changes which have the likelihood of destabilizing `trunk`.
+
+### Benefits
+
+* Allows incremental work to proceed without destabilizing the main trunk of source control.
+
+* Smaller commits means smaller and clearer code reviews.
+
+* Each code review is not required to be fully functional allowing a more agile approach to gathering feedback on the progress of the feature.
+
+* Maintains Git history and allows for code to be backed out easily after merging.
+
+### Drawbacks
+
+* Requires frequent merges from `trunk` into your feature branch to keep merge conflicts to a minimum.
+
+* May require periodic merges of the feature branch back into trunk during development to help mitigate frequent merge conflicts.
+
+* No continuous integration coverage on feature branches, although this is not really a drawback, since most feature branches will break some aspects of CI in the early stages of the feature.
+
+### Guidelines to Follow
+
+The following simple rules can help in keeping Ambari's approach to feature branch development simple and consistent.
+
+* When creating a feature branch, it should be given a meaningful name. Acceptable names include either the name of the feature or the name of the Ambari JIRA. The branch should also always start with the text `branch-feature-`. Some examples of properly named feature branches include:
+ - `branch-feature-patch-upgrades`
+ - `branch-feature-AMBARI-12345`
+
+* Every commit in your feature branch should have an associated `AMBARI-XXXXX` JIRA. This way, when your branch is merged back into trunk, the commit history follows Ambari's conventions.
+
+* Merge frequently from trunk into your branch to keep your branch up-to-date and lessen the number of potential merge conflicts.
+
+* Do **NOT** squash commits. Every commit in your feature branch must have an `AMBARI-XXXXX` association with it.
+
+
+* Once a feature has been completed and the branch has been merged into trunk, the branch can be safely removed. Feature branches should only exist while the work is still in progress.
+
+### Approach
+
+The following steps outline the lifecycle of a feature branch. You'll notice that once the feature has been completed and merged back into trunk, the feature branch is deleted. This is an important step to keep the git branch listing as clean as possible.
+
+```
+$ git checkout -b branch-feature-AMBARI-12345 trunk
+Switched to a new branch 'branch-feature-AMBARI-12345'
+
+$ git push -u origin branch-feature-AMBARI-12345
+Total 0 (delta 0), reused 0 (delta 0)
+To https://git-wip-us.apache.org/repos/asf/ambari.git
+ * [new branch] branch-feature-AMBARI-12345 -> branch-feature-AMBARI-12345
+Branch branch-feature-AMBARI-12345 set up to track remote branch branch-feature-AMBARI-12345 from origin by rebasing.
+
+```
+
+* Branch is correctly named
+* Branch is pushed to Apache so it can be visible to other developers
+
+```bash
+$ git checkout branch-feature-AMBARI-12345
+Switched to branch 'branch-feature-AMBARI-12345'
+
+$ git add <changed files>
+$ git commit -m 'AMBARI-28375 - Some Change (me)'
+
+$ git add <changed files>
+$ git commit -m 'AMBARI-28499 - Another Change (me)'
+
+$ git push
+```
+
+* Each commit to the feature branch has its own AMBARI-XXXXX JIRA
+* Multiple commits are allowed before pushing the changes to the feature branch
+
+```bash
+$ git checkout branch-feature-AMBARI-12345
+Switched to branch 'branch-feature-AMBARI-12345'
+
+$ git merge trunk
+Updating ed28ff4..3ab2a7c
+Fast-forward
+ ambari-server/include.xml | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 ambari-server/include.xml
+```
+
+* Merging trunk into the feature branch often (daily, hourly) allows for faster and easier merge conflict resolution
+* Fast-forwards are OK here since trunk is always the source of truth and we don't need extra "merge" commits in the feature branch
+
+```bash
+$ git checkout trunk
+Switched to branch 'trunk'
+
+$ git merge --no-ff branch-feature-AMBARI-12345
+Merge made by the 'recursive' strategy.
+ ambari-server/include.xml | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 ambari-server/include.xml
+```
+
+Notice that the `--no-ff` option was provided when merging back into `trunk`. This is to ensure that an additional "merge" commit is created which references all feature branch commits. With this single merge commit, the entire merge can be easily backed out if a problem was discovered which destabilized trunk.
+
+* The feature is merged successfully with a "merge" commit back to trunk
+* This can be done multiple times during the course of the feature development as long as the code merged back to trunk is stable
+
+```bash
+$ git checkout trunk
+Switched to branch 'trunk'
+
+$ git branch -d branch-feature-AMBARI-12345
+Deleted branch branch-feature-AMBARI-12345 (was ed28ff4).
+
+$ git push origin --delete branch-feature-AMBARI-12345
+To https://git-wip-us.apache.org/repos/asf/ambari.git
+ - [deleted] branch-feature-AMBARI-12345
+
+$ git remote update origin --prune
+Fetching origin
+From https://git-wip-us.apache.org/repos/asf/ambari
+ x [deleted] (none) -> branch-feature-AMBARI-12345
+```
+
+* Cleanup the branch when done, both locally and remotely
+* Prune your local branches which no longer track remote branches
+
diff --git a/versioned_docs/version-2.7.5/ambari-dev/feature-flags.md b/versioned_docs/version-2.7.5/ambari-dev/feature-flags.md
new file mode 100644
index 0000000..ed13903
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/feature-flags.md
@@ -0,0 +1,13 @@
+# Feature Flags
+
+* Sometimes, we want to give end users the ability to experiment with a new feature, but not expose it as a general feature since it has not gone through rigorous testing and its use could result in problems. In other cases, we want to provide an escape hatch for certain edge-case scenarios that we may not want to expose in general, because using the escape hatch is potentially dangerous and should be reserved for special occasions. For these purposes, Ambari has a notion of **feature flags**.
+
+* Feature flags can be created as an attribute of App.supports map under [https://github.com/apache/ambari/blob/trunk/ambari-web/app/config.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/config.js)
+* Those boolean flags are exposed in the Ambari Web UI via `<ambari-server-protocol>://<ambari-server-host>:<ambari-server-port>/#/experimental`
+ * The end user can go to the above URL to turn certain experimental features on.
+
+ 
+
+* In Ambari Web code, we should toggle experimental features on/off via the App.supports object.
+
+* You will see sample usage if you recursively grep for "App.supports" under the ambari-web project, as sketched below.
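+
+A minimal sketch of that search, run from the repository root:
+
+```bash
+cd ambari-web
+grep -rn "App.supports" app/
+```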
diff --git a/versioned_docs/version-2.7.5/ambari-dev/how-to-commit.md b/versioned_docs/version-2.7.5/ambari-dev/how-to-commit.md
new file mode 100644
index 0000000..1c6bb2f
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/how-to-commit.md
@@ -0,0 +1,14 @@
+# How to Commit
+
+This document describes how to commit changes to Ambari. It assumes a knowledge of Git. While it is for committers to use as a guide, it also provides contributors an idea of how the commit process actually works.
+
+In general we are very conservative about changing the Apache Ambari code base. It is ground truth for systems that use it, so we need to make sure that it is reliable. For this reason we use the Review Then Commit (RTC) change policy (http://www.apache.org/foundation/glossary.html#ReviewThenCommit).
+
+Except for some very rare cases, any change to the Apache Ambari code base will start off as a Jira. (In some cases a change may relate to more than one Jira. Also, there are cases when a Jira results in multiple commits.) Generally, the process of getting ready to commit begins when the Jira has a patch associated with it and the contributor decides that it is ready for review, marking it Patch Available.
+
+A committer must sign off on a patch. It is very helpful if the community also reviews the patch, but in the end a committer must take responsibility for the correctness of the patch. If the patch is simple enough and the committer feels confident in the review, a single +1 from a committer is sufficient to commit the patch. (Remember committers cannot review their own patch. If a committer submits a patch, they should make sure that another committer reviews it.)
+
+Follow the instructions in [How to Contribute](./how-to-contribute.md) guide to commit changes to Ambari.
+
+If the Jira is a bug fix, you may also need to commit the patch to the latest branch in git (trunk).
+
diff --git a/versioned_docs/version-2.7.5/ambari-dev/how-to-contribute.md b/versioned_docs/version-2.7.5/ambari-dev/how-to-contribute.md
new file mode 100644
index 0000000..1e44e8c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/how-to-contribute.md
@@ -0,0 +1,127 @@
+# How to Contribute
+
+## Contributing code changes
+
+### Checkout source code
+
+* Fork the project from Github at https://github.com/apache/ambari if you haven't already
+* Clone this fork:
+
+```bash
+# Replace [forked-repository-url] with your git clone url
+git clone [forked-repository-url] ambari
+```
+
+* Set upstream remote:
+
+```bash
+cd ambari
+git remote add upstream https://github.com/apache/ambari.git
+```
+
+### Keep your Fork Up to Date
+
+```bash
+# Fetch from upstream remote
+git fetch upstream
+# Checkout the branch that needs to sync
+git checkout trunk
+# Merge with remote
+git merge upstream/trunk
+```
+
+Repeat these steps for all the branches that need to be synced with the remote.
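+
+A hedged sketch of syncing several branches in one pass (assuming trunk and branch-2.6 are the branches you track):
+
+```bash
+git fetch upstream
+for b in trunk branch-2.6; do
+  git checkout "$b"
+  git merge "upstream/$b"
+done
+```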
+
+### JIRA
+
+Apache Ambari uses JIRA to track issues including bugs and improvements, and uses Github pull requests to manage code reviews and code merges. Major design changes are discussed in JIRA and implementation changes are discussed in pull requests after a pull request is created.
+
+* Find an existing Apache JIRA that the change pertains to
+ * Do not create a new JIRA if the change is minor and relates to an existing JIRA; add to the existing discussion and work instead
+ * Look for existing pull requests that are linked from the JIRA, to understand if someone is already working on the JIRA
+
+* If the change is new, then create a new JIRA:
+ * Provide a descriptive Title
+ * Write a detailed Description. For bug reports, this should ideally include a short reproduction of the problem. For new features, it may include a design document.
+ * Fill the required fields:
+ * Issue Type. Bug and Task are the most frequently used issue types in Ambari.
+ * Priority. Their meaning is roughly:
+ * Blocker: pointless to release without this change as the release would be unusable to a large minority of users
+ * Critical: a large minority of users are missing important functionality without this, and/or a workaround is difficult
+ * Major: a small minority of users are missing important functionality without this, and there is a workaround
+ * Minor: a niche use case is missing some support, but it does not affect usage or is easily worked around
+ * Trivial: a nice-to-have change but unlikely to be any problem in practice otherwise
+ * Component. Choose the components that are affected by this change. Choose from Ambari Components
+ * Affects Version. For Bugs, assign at least one version that is known to exhibit the problem or need the change
+ * Do not include a patch file; pull requests are used to propose the actual change.
+
+### Pull Request
+
+Apache Ambari uses [Github pull requests](https://github.com/apache/ambari/pulls) to review and merge changes to the source code. Before creating a pull request, one must have a fork of apache/ambari checked out. Follow instructions in step 1 to create a fork if you haven't already.
+
+#### Commit and Push changes
+
+- Create a branch AMBARI-XXXXX-branchName before starting to make any code changes. For example, if the Fix Version of the JIRA you are working on is 2.6.2, then create a branch based on branch-2.6:
+
+ ```bash
+ git checkout branch-2.6
+ git pull upstream branch-2.6
+ git checkout -b AMBARI-XXXXX-branch-2.6
+ ```
+
+- Mark the status of the related JIRA as "In Progress" to let others know that you have started working on the JIRA.
+- Make changes to the code and commit them to the newly created branch.
+- Run all the tests that are applicable and make sure that all unit tests pass
+- Push your changes. Provide your Github user id and [personal access token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/) when asked for user name and password
+
+ ```bash
+ git push origin AMBARI-XXXXX-branch-2.6
+ ```
+
+#### Create Pull Request
+
+Navigate to your fork in Github and [create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/). The pull request needs to be opened against the branch you want the patch to land.
+
+The pull request title should be of the form **[AMBARI-xxxx] Title**, where AMBARI-xxxx is the relevant JIRA number
+
+- If the pull request is still a work in progress, and so is not ready to be merged, but needs to be pushed to Github to facilitate review, then add **[WIP]** after the **AMBARI-XXXX**
+- Consider identifying committers or other contributors who have worked on the code being changed. Find the file(s) in Github and click “Blame” to see a line-by-line annotation of who changed the code last. You can add @username in the PR description or as a comment to request review from a developer.
+- Note: Contributors do not have access to edit or add reviewers in the "Reviewers" widget. Contributors can only @mention to get the attention of committers.
+- The related JIRA will automatically have a link to the PR as shown below. Mark the status of JIRA as "Patch Available" manually.
+
+
+
+#### Jenkins Job
+
+* A Jenkins job is configured to be triggered every time a new pull request is created. The job is configured to perform the following tasks:
+ * Validate the merge
+ * Build Ambari
+ * Run unit tests
+* It reports the outcome of the build as an integrated check in the pull request as shown below.
+
+
+
+* It is the responsibility of the contributor of the pull request to make sure that the build passes. Pull requests should not be merged if the Jenkins job fails to validate the merge.
+* To re-trigger the build job, just comment "retest this please" in the PR. Visit this page to check the latest build jobs.
+
+#### Repeat
+
+Repeat the above steps for patches that need to land in multiple branches. For example, if a patch needs to be committed to both branch-2.6 and trunk, then you need to create two branches and open two pull requests by following the above steps, as sketched below.
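+
+A hedged sketch of landing the same fix on trunk after it has been merged to branch-2.6 (the commit SHA is a placeholder):
+
+```bash
+git checkout trunk
+git pull upstream trunk
+git checkout -b AMBARI-XXXXX-trunk
+git cherry-pick <sha-of-the-fix-on-branch-2.6>  # assumes the change applies cleanly
+git push origin AMBARI-XXXXX-trunk
+```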
+
+## Review Process
+
+Ambari uses Github for code reviews. All committers are required to follow the instructions in this [page](https://gitbox.apache.org/setup/) and link their github accounts with gitbox to gain Merge access to [apache/ambari](https://github.com/apache/ambari) in github.
+
+To try out the changes locally, you can checkout the pull request locally by following the instructions in this [guide](https://help.github.com/articles/checking-out-pull-requests-locally/).
+
+* Other reviewers, including committers can try out the changes locally and either approve or give their comments as suggestions on the pull request by submitting a review on the pull request. More help can be found here.
+* If more changes are required, reviewers are encouraged to leave their comments on the lines of code that require changes. The author of the pull request can then update the code and push another commit to the same branch to update the pull request and notify the committers.
+* The pull request can be merged if at least one committer has approved it or commented "LGTM", which means "Looks Good To Me", and the Jenkins job validated the merge successfully. If you comment LGTM you will be expected to help with bugs or follow-up issues on the patch. (Remember committers cannot review their own patch. If a committer opens a PR, they should make sure that another committer reviews it.)
+* Sometimes, other changes might be merged which conflict with the pull request's changes. The PR can't be merged until the conflict is resolved. This can be resolved by running `git fetch upstream` followed by `git rebase upstream/[branch-name]`, resolving the conflicts by hand, and then pushing the result to your branch (see the sketch after this list).
+* If a PR is merged, promptly close the PR and resolve the JIRA as "Fixed".
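+
+A hedged sketch of that conflict-resolution flow (branch names are placeholders):
+
+```bash
+git fetch upstream
+git rebase upstream/branch-2.6  # rebase onto the branch the PR targets
+# fix any conflicts, git add the resolved files, then:
+git rebase --continue
+git push --force-with-lease origin AMBARI-XXXXX-branch-2.6  # update the PR branch
+```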
+
+## Apache Ambari Committers
+
+Please read more on Apache Committers at: http://www.apache.org/dev/committers.html
+
+In general, a contributor who makes sustained, welcome contributions to the project may be invited to become a committer, though the exact timing of such invitations depends on many factors. Sustained contributions over six months are a welcome sign of a contributor's interest in the project. A contributor who is receptive to feedback and follows the development guidelines stated above is a good candidate for committership. We have seen contributors become committers after contributing 20-30 patches, but again this is very subjective and can vary based on the patches submitted to the project. Ultimately it is the Ambari PMC that suggests and votes for committers in the project.
diff --git a/versioned_docs/version-2.7.5/ambari-dev/imgs/experimental-features .png b/versioned_docs/version-2.7.5/ambari-dev/imgs/experimental-features .png
new file mode 100644
index 0000000..211a596
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/imgs/experimental-features .png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-dev/imgs/jenkins-job.png b/versioned_docs/version-2.7.5/ambari-dev/imgs/jenkins-job.png
new file mode 100644
index 0000000..0e6476c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/imgs/jenkins-job.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-dev/imgs/pull-request.png b/versioned_docs/version-2.7.5/ambari-dev/imgs/pull-request.png
new file mode 100644
index 0000000..2a8b408
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/imgs/pull-request.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-dev/imgs/reviewers.png b/versioned_docs/version-2.7.5/ambari-dev/imgs/reviewers.png
new file mode 100644
index 0000000..c19354d
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/imgs/reviewers.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-dev/imgs/with-without-docker.png b/versioned_docs/version-2.7.5/ambari-dev/imgs/with-without-docker.png
new file mode 100644
index 0000000..2d6e38c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/imgs/with-without-docker.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-dev/index.md b/versioned_docs/version-2.7.5/ambari-dev/index.md
new file mode 100644
index 0000000..3dba30d
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/index.md
@@ -0,0 +1,311 @@
+# Ambari Development
+
+## Checking out Ambari source
+
+Follow the instructions in the [Checkout source code](./how-to-contribute.md) section of the "How to Contribute" guide.
+
+We'll refer to the top-level "ambari" directory as `AMBARI_DIR` in this document.
+
+## Tools needed to build Ambari
+
+The following tools are needed to build Ambari from source.
+
+Alternatively, you can easily launch a VM that is preconfigured with all the tools that you need. See the **Pre-Configured Development Environment** section in the [Quick Start Guide](../quick-start/quick-start-guide.md).
+
+* Xcode (if using a Mac; a free download from the App Store)
+* JDK 8 (Ambari 2.6 and below can be compiled with JDK 7; from Ambari 2.7 on, at least JDK 8 is required)
+* [Apache Maven](http://maven.apache.org/download.html) 3.3.9 or later
+Tip: In order to persist your changes to the JAVA_HOME environment variable and add Maven to your path, create the following files:
+
+File: ~/.profile
+
+```bash
+source ~/.bashrc
+```
+
+File: ~/.bashrc
+
+```bash
+export PATH=/usr/local/apache-maven-3.3.9/bin:$PATH
+export JAVA_HOME=$(/usr/libexec/java_home)
+export _JAVA_OPTIONS="-Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true"
+```
+
+
+* Python 2.6 (Ambari 2.7 or later requires Python 2.7 as the minimum supported version)
+* Python setuptools:
+for Python 2.6: [download](http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c11-py2.6.egg#md5=bfa92100bd772d5a213eedd356d64086) setuptools and run:
+
+```bash
+sh setuptools-0.6c11-py2.6.egg
+```
+
+for Python 2.7: [download](https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg#md5=fe1f997bc722265116870bc7919059ea) setuptools and run:
+
+```bash
+sh setuptools-0.6c11-py2.7.egg
+```
+
+
+* rpmbuild (rpm-build package)
+* g++ (gcc-c++ package)
+
+## Running Unit Tests
+
+* `mvn clean test`
+* Run unit tests in a single module:
+
+```bash
+mvn -pl ambari-server test
+```
+
+
+* Run only Java tests:
+
+```bash
+mvn -pl ambari-server -DskipPythonTests test
+```
+
+
+* Run only specific Java tests:
+
+```bash
+mvn -pl ambari-server -DskipPythonTests -Dtest=AgentHostInfoTest test
+```
+
+
+* Run only Python tests:
+
+```bash
+mvn -pl ambari-server -DskipSurefireTests test
+```
+
+
+* Run only specific Python tests:
+
+```bash
+mvn -pl ambari-server -DskipSurefireTests -Dpython.test.mask=TestUtils.py test
+```
+
+
+* Run only Checkstyle and RAT checks:
+
+```bash
+mvn -pl ambari-server -DskipTests test
+```
+
+
+
+NOTE: Please make sure you have npm in the path before running the unit tests.
+
+## Generating Findbugs Report
+
+* `mvn clean install`
+
+This will generate XML and HTML reports under target/findbugs. You can also add flags to skip unit tests to generate the report faster, as sketched below.
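+
+For example, a minimal sketch (assuming the Findbugs plugin still runs when unit tests are skipped):
+
+```bash
+mvn clean install -DskipTests  # generate target/findbugs reports without waiting on unit tests
+```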
+
+## Building Ambari
+
+Note: if you get an error that too many files are open while building, then run: ulimit -n 10000 (for example)
+
+To build Ambari RPMs, run the following.
+
+Note: Replace ${AMBARI_VERSION} with a 4-digit version you want the artifacts to be (e.g., -DnewVersion=1.6.1.1)
+
+**Note**: If you run into errors while compiling the ambari-metrics package due to missing jms, jmxri, and jmxtools artifacts:
+
+```
+[ERROR] Failed to execute goal on project ambari-metrics-kafka-sink: Could not resolve dependencies for project org.apache.ambari:ambari-metrics-kafka-sink:jar:2.0.0-0: The following artifacts could not be resolved: javax.jms:jms:jar:1.1, com.sun.jdmk:jmxtools:jar:1.2.1, com.sun.jmx:jmxri:jar:1.2.1: Could not transfer artifact javax.jms:jms:jar:1.1 from/to java.net (https://maven-repository.dev.java.net/nonav/repository): No connector available to access repository java.net (https://maven-repository.dev.java.net/nonav/repository) of type legacy using the available factories WagonRepositoryConnectorFactory
+```
+
+The workaround is to manually install the three missing artifacts:
+
+```
+mvn install:install-file -Dfile=jms-1.1.pom -DgroupId=javax.jms -DartifactId=jms -Dversion=1.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxtools-1.2.1.pom -DgroupId=com.sun.jdmk -DartifactId=jmxtools -Dversion=1.2.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxri-1.2.1.pom -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2.1 -Dpackaging=jar
+```
+
+If compilation seems stuck, and you have already increased the Java and Maven heap sizes, it could be that Ambari Views has a lot of artifacts and the RAT check is choking on them. In this case, try running:
+
+```bash
+git clean -df  # removes untracked files and directories
+mvn clean package -DskipTests -Drat.ignoreErrors=true
+# or
+mvn clean package -DskipTests -Drat.skip
+```
+
+## Setting the Version Using Maven
+
+Ambari 2.8+ uses a newer method to update the version when building Ambari.
+
+**RHEL/CentOS 6**:
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package rpm:rpm -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+**SUSE/SLES 11**
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package rpm:rpm -DskipTests -Psuse11 -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+**Ubuntu 12**:
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package jdeb:jdeb -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+Ambari Server will create the following packages:
+
+* RPM will be created under `AMBARI_DIR`/ambari-server/target/rpm/ambari-server/RPMS/noarch.
+
+* DEB will be created under `AMBARI_DIR`/ambari-server/target/
+
+Ambari Agent will create the following packages:
+
+* RPM will be created under `AMBARI_DIR`/ambari-agent/target/rpm/ambari-agent/RPMS/x86_64.
+
+* DEB will be created under `AMBARI_DIR`/ambari-agent/target
+
+Optional parameters:
+
+* `-X -e`: add these options for more verbose output from Maven. Useful when debugging Maven issues.
+* `-DdefaultStackVersion=STACK-VERSION`: sets the default stack and version to be used for installation (e.g., `-DdefaultStackVersion=HDP-1.3.0`)
+* `-DenableExperimental=true`: enables experimental features to be available via Ambari Web (default is false)
+* All views can be packaged in the RPM by adding the `-Dviews` parameter:
+  - `mvn -B clean install package rpm:rpm -Dviews -DskipTests`
+* Specific views can be built by adding the `--projects` parameter together with `-Dviews`:
+  - `mvn -B clean install package rpm:rpm --projects ambari-web,ambari-project,ambari-views,ambari-admin,contrib/views/files,contrib/views/pig,ambari-server,ambari-agent,ambari-client,ambari-shell -Dviews -DskipTests`
+
+
+_NOTE: Run everything as `root` below._
+
+## Building Ambari Metrics
+
+If you plan on installing the Ambari Metrics service, you will also need to build the Ambari Metrics project.
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-rpm -DskipTests
+```
+
+For Ubuntu:
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-deb -DskipTests
+```
+
+**Note:**
+
+The metrics RPMs will be found at ambari-metrics-assembly/target/. These are needed for installing the Ambari Metrics service.
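+
+A quick check that the packages were produced:
+
+```bash
+ls ambari-metrics-assembly/target/
+```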
+
+## Running the Ambari Server
+
+First, install the Ambari Server RPM.
+
+**On RHEL/CentOS:**
+
+```bash
+yum install ambari-server/target/rpm/ambari-server/RPMS/noarch/ambari-server-*.noarch.rpm
+```
+
+**On SUSE/SLES:**
+
+```bash
+zypper install ambari-server/target/rpm/ambari-server/RPMS/noarch/ambari-server-*.noarch.rpm
+```
+
+**On Ubuntu 12:**
+
+```bash
+dpkg --install ambari-server/target/ambari-server-*.deb # Will fail with missing dependencies errors
+apt-get update # Update locations of dependencies
+apt-get install -f # Install all failed dependencies
+dpkg --install ambari-server/target/ambari-server-*.deb # Will succeed
+```
+
+Initialize Ambari Server:
+
+```bash
+ambari-server setup
+```
+
+Start up Ambari Server:
+
+```
+ambari-server start
+```
+
+See Ambari Server log:
+
+```bash
+tail -f /var/log/ambari-server/ambari-server.log
+```
+
+To access Ambari, go to
+
+```
+http://{ambari-server-hostname}:8080
+```
+
+from your web browser and log in with username _admin_ and password _admin_.
+
+## Install and Start the Ambari Agent Manually on Each Host in the Cluster
+
+Install the Ambari Agent RPM.
+
+On RHEL/CentOS:
+
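+```bash
+# mirrors the zypper command below; the RPM path matches the agent build output noted above
+yum install ambari-agent/target/rpm/ambari-agent/RPMS/x86_64/ambari-agent-*.rpm
+```
+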
+On SUSE/SLES:
+
+```bash
+zypper install ambari-agent/target/rpm/ambari-agent/RPMS/x86_64/ambari-agent-*.rpm
+```
+
+On Ubuntu 12:
+
+```bash
+dpkg --install ambari-agent/target/ambari-agent-*.deb
+```
+
+Edit the location of Ambari Server in /etc/ambari-agent/conf/ambari-agent.ini by editing the _hostname_ line.
+
+Start Ambari Agent:
+
+```
+ambari-agent start
+```
+
+See Ambari Agent log:
+
+```bash
+tail -f /var/log/ambari-agent/ambari-agent.log
+```
+
+## Setting up Ambari in Eclipse
+
+```
+$ mvn clean eclipse:eclipse
+```
+
+After doing the above you should be able to import the project via Eclipse "Import > Maven > Existing Maven Project". Choose the root directory where you cloned the git repository. You should be able to see the following projects on eclipse:
+
+```
+ambari
+|
+|- ambari-project
+|- ambari-server
+|- ambari-agent
+|- ambari-web
+```
+
+Select the top-level "ambari pom.xml" and click Finish.
diff --git a/versioned_docs/version-2.7.5/ambari-dev/releasing-ambari.md b/versioned_docs/version-2.7.5/ambari-dev/releasing-ambari.md
new file mode 100644
index 0000000..d041946
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/releasing-ambari.md
@@ -0,0 +1,401 @@
+# Releasing Ambari
+
+## Useful Links
+
+### [Publishing Maven Artifacts](http://apache.org/dev/publishing-maven-artifacts.html)
+
+* Setting up release signing keys
+* Uploading artifacts to staging and release repositories
+
+### [Apache Release Guidelines](http://www.apache.org/legal/release-policy.html)
+
+* Release requirements
+* Process for staging
+
+## Preparing for release
+
+Setup for first time release managers
+
+If you are serving as release manager for the first time, you will need to run the following additional steps so that you are not blocked during the actual release process.
+
+**Configure SSH/SFTP to home.apache.org**
+
+SFTP to home.apache.org supports only Key-Based SSH Logins
+
+```bash
+# Generate RSA keys
+mkdir ~/.ssh
+chmod 700 ~/.ssh
+ssh-keygen -t rsa -b 4096
+# Note: this generates the ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub files
+
+# Upload the public RSA key:
+# - Log in at https://id.apache.org
+# - Add the public SSH key from ~/.ssh/id_rsa.pub to your profile
+#   ("SSH Key (authorized_keys line)" field)
+# - Submit the changes
+
+# Verify that SSH to minotaur.apache.org works
+ssh -i ~/.ssh/id_rsa {username}@minotaur.apache.org
+
+# SFTP to home.apache.org
+sftp {username}@home.apache.org
+mkdir public_html
+cd public_html
+put test   # "test" is a sample empty file in the local directory from which you ran sftp
+
+# Verify the URL http://home.apache.org/{username}/test
+```
+
+**Generate OpenGPG Key**
+
+You should get a signing key, keep it in a safe place, upload the public key to Apache, and build a web of trust.
+
+Ref: http://zacharyvoase.com/2009/08/20/openpgp/
+
+```bash
+gpg2 --gen-key
+gpg2 --keyserver pgp.mit.edu --send-key {key}
+gpg2 --armor --export {username}@apache.org > {username}.asc
+
+# Copy {username}.asc over to {username}@home.apache.org:public_html/{username}.asc
+# Verify the URL http://home.apache.org/~{username}/{username}.asc
+# Query the PGP keyserver: http://pgp.mit.edu:11371/pks/lookup?search=0x{key}&op=vindex
+
+# Web of trust: ask other committers to sign your PGP key
+
+# Log in at https://id.apache.org
+# - Add the OpenPGP fingerprint to your profile
+#   (OpenPGP Public Key Primary Fingerprint: XXXX YYYY ZZZZ ...)
+# - Submit the changes
+# Verify that the public PGP key is exported to http://home.apache.org/keys/committer/{username}.asc
+```
+
+**At least one week in advance, email the dev@ambari.apache.org mailing list to announce that you will be creating the release branch**
+
+```
+Subject: Preparing Ambari X.Y.Z branch
+
+Hi developers and PMCs,
+
+I am proposing cutting a new branch branch-X.Y for Ambari X.Y.Z on __________ as per the outlined tasks in the Ambari Feature + Roadmap page (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30755705).
+
+After making the branch, we (i.e., development community) should only accept blocker or critical bug fixes into the branch and harden it until it meets a high enough quality bar.
+
+If you have a bug fix, it should first be committed to trunk and, after ensuring that it does not break any tests (including smoke tests), it should then be integrated into the Ambari branch-X.Y.
+If you have any doubts about whether a fix should be committed into branch-X.Y, please email me for input at ____________
+Stay tuned for updates on the release process.
+
+Thanks
+```
+
+**Create the release branch**
+
+Create a branch for a release using branch-X.Y (ex: branch-2.1) as the name of the branch.
+
+Note: Going forward, we should be creating branch-{majorVersion}.{minorVersion}, so that the same branch can be used for maintenance releases.
+
+**Checkout the release branch**
+
+```bash
+git checkout branch-X.Y
+```
+
+**Update Ambari REST API docs**
+
+Starting with Ambari's `trunk` branch as of Ambari 2.8, the release manager should generate documentation from the existing source code. The documentation should be checked back into the branch before performing the release.
+
+```bash
+# Generate the following artifacts:
+# - Configuration markdown at docs/configuration/index.md
+# - swagger.json and index.html at docs/api/generated/
+cd ambari-server/
+mvn clean compile exec:java@configuration-markdown test -Drat.skip -Dcheckstyle.skip -DskipTests -Dgenerate.swagger.resources
+
+# Review and Commit the changes to branch-X.Y
+git commit
+```
+
+**Update release version**
+
+Once the branch has been created, the release version must be set and committed. The changes should be committed to the release branch.
+
+**Ambari 2.8+**
+
+Starting with Ambari 2.8, the build process relies on [Maven 3.5+, which allows the use of the `${revision}` property](https://maven.apache.org/maven-ci-friendly.html). This means that the release version can be defined once in the root `pom.xml` and then inherited by all submodules. To build Ambari with a specific build number, there are two methods:
+
+1. Pass the version on the Maven command line:
+
+```bash
+mvn -Drevision=2.8.0.0.0 ...
+```
+
+2. Edit the root `pom.xml` to set the new build number:
+
+```
+<revision>2.8.0.0-SNAPSHOT</revision>
+```
+
+To be consistent with prior releases, the `pom.xml` should be updated in order to contain the new version.
+
+**Steps followed for 2.8.0 release as a reference**
+
+```bash
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+# Remove .versionsBackup files
+git clean -f -x
+
+# Review and commit the changes to branch-X.Y
+git commit
+```
+
+:::danger
+Ambari 2.7 and Earlier Releases (Deprecated)
+:::
+
+Older Ambari branches require you to update every `pom.xml` manually using the process below:
+
+**Steps followed for 2.2.0 release as a reference**
+
+```bash
+# Update the release version
+mvn versions:set -DnewVersion=2.2.0.0.0
+pushd ambari-metrics
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd contrib/ambari-log4j
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd contrib/ambari-scom
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd docs
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+
+# Update the ambari.version properties in all pom.xml
+$ find . -name "pom.xml" | xargs grep "ambari\.version"
+
+./contrib/ambari-scom/ambari-scom-server/pom.xml: 2.1.0-SNAPSHOT
+./contrib/ambari-scom/ambari-scom-server/pom.xml: ${ambari.version}
+./contrib/views/hive/pom.xml: 2.1.0.0.0
+./contrib/views/jobs/pom.xml: ${ambari.version}
+./contrib/views/pig/pom.xml: 2.1.0.0.0
+./contrib/views/pom.xml: 2.1.0.0.0
+./contrib/views/storm/pom.xml: ${ambari.version}
+./contrib/views/tez/pom.xml: ${ambari.version}
+./docs/pom.xml: 2.1.0
+./docs/pom.xml: ${project.artifactId}-${ambari.version}
+
+# Update any 2.1.0-SNAPSHOT references in pom.xml
+$ grep -r --include "pom.xml" "2.1.0-SNAPSHOT" .
+
+# Remove .versionsBackup files
+git clean -f -x -d
+
+# Review and commit the changes to branch-X.Y
+git commit
+```
+
+**Update KEYS**
+
+If this is the first time you have taken on release management responsibilities, make sure to update the KEYS file and commit the updated KEYS file to both the Ambari trunk branch and the release branch. In addition to updating the KEYS file in the tree, you also need to push the KEYS file to [https://dist.apache.org/repos/dist/release/ambari/](https://dist.apache.org/repos/dist/release/ambari/).
+
+```bash
+gpg2 --list-keys jluniya@apache.org >> KEYS
+gpg2 --armor --export jluniya@apache.org >> KEYS
+
+# commit the changes to both trunk and new release branch
+git commit
+
+# push the updated KEYS file to https://dist.apache.org/repos/dist/release/ambari/.
+
+# Only PMC members can do this 'svn' step.
+
+svn co https://dist.apache.org/repos/dist/release/ambari ambari_svn
+cp {path_to_keys_file}/KEYS ambari_svn/KEYS
+cd ambari_svn
+svn update KEYS
+svn commit -m "Updating KEYS for Ambari"
+```
+
+**Setup Build**
+
+Set up a Jenkins job for the new branch on http://builds.apache.org
+
+## Creating Release Candidate
+
+:::info
+The first release candidate is rc0. The following documented process assumes rc0; replace it with the appropriate rc number as required.
+:::
+
+**Checkout the release branch**
+
+```
+git checkout branch-X.Y
+```
+
+**Create a Release Tag from the release branch**
+
+```bash
+git tag -a release-X.Y.Z-rc0 -m 'Ambari X.Y.Z RC0'
+git push origin release-X.Y.Z-rc0
+```
+
+**Create a tarball**
+
+```bash
+# create a clean copy of the source
+cd ambari-git-X.Y.Z
+git clean -f -x -d
+cd ..
+
+cp -R ambari-git-X.Y.Z apache-ambari-X.Y.Z-src
+
+# create ambari-web/public by running the build instructions per https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Development
+# once ambari-web/public is created, copy it as ambari-web/public-static
+cp -R ambari-git-X.Y.Z/ambari-web/public apache-ambari-X.Y.Z-src/ambari-web/public-static
+
+# make sure the Apache Rat tool runs successfully
+cp -R apache-ambari-X.Y.Z-src apache-ambari-X.Y.Z-ratcheck
+cd apache-ambari-X.Y.Z-ratcheck
+mvn clean apache-rat:check
+cd ..
+
+# if the rat check fails, file JIRAs and fix them before proceeding
+
+# tar it up, but exclude git artifacts
+tar --exclude=.git --exclude=.gitignore --exclude=.gitattributes -zcvf apache-ambari-X.Y.Z-src.tar.gz apache-ambari-X.Y.Z-src
+```
+
+**Sign the tarball**
+
+```bash
+gpg2 --armor --output apache-ambari-X.Y.Z-src.tar.gz.asc --detach-sig apache-ambari-X.Y.Z-src.tar.gz
+```
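+
+To sanity-check the signature locally before uploading:
+
+```bash
+gpg2 --verify apache-ambari-X.Y.Z-src.tar.gz.asc apache-ambari-X.Y.Z-src.tar.gz
+```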
+
+**Generate SHA512 checksums:**
+
+```
+sha512sum apache-ambari-X.Y.Z-src.tar.gz > apache-ambari-X.Y.Z-src.tar.gz.sha512
+```
+
+or
+
+```
+openssl sha512 apache-ambari-X.Y.Z-src.tar.gz > apache-ambari-X.Y.Z-src.tar.gz.sha512
+```
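+
+If you used the `sha512sum` form above, you can sanity-check the checksum file against the tarball:
+
+```bash
+sha512sum -c apache-ambari-X.Y.Z-src.tar.gz.sha512
+```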
+
+**Upload the artifacts to your apache home:**
+
+The artifacts then need to be copied over (SFTP) to
+
+```
+public_html/apache-ambari-X.Y.Z-rc0
+```
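+
+For example, an upload session using `sftp` (file names as produced in the previous steps):
+
+```bash
+sftp {username}@home.apache.org
+mkdir public_html/apache-ambari-X.Y.Z-rc0
+cd public_html/apache-ambari-X.Y.Z-rc0
+put apache-ambari-X.Y.Z-src.tar.gz
+put apache-ambari-X.Y.Z-src.tar.gz.asc
+put apache-ambari-X.Y.Z-src.tar.gz.sha512
+```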
+
+## Voting on Release Candidate
+
+**Call for a vote on the dev@ambari.apache.org mailing list with something like this:**
+
+I have created an ambari-** release candidate.
+
+GIT source tag (r***)
+
+```
+https://git-wip-us.apache.org/repos/asf/ambari/repo?p=ambari.git;a=log;h=refs/tags/release-x.y.z-rc0
+```
+
+Staging site: http://home.apache.org/user_name/apache-ambari-X.Y.Z-rc0
+
+Vote will be open for 72 hours.
+
+```
+[ ] +1 approve
+[ ] +0 no opinion
+[ ] -1 disapprove (and reason why)
+```
+
+Once the vote passes or fails, send out an email with a subject like "[RESULT] [VOTE] Apache Ambari x.y.z rc0" to dev@ambari.apache.org. For the vote to pass, 3 +1 votes are required. If the vote does not pass, another release candidate will need to be created after addressing the feedback from the community.
+
+## Publishing and Announcement
+
+* Log in to [https://id.apache.org](https://id.apache.org) and verify that the fingerprint of the PGP key used for signing above is provided (`gpg --fingerprint`).
+* Upload your PGP public key only to _/home/_
+
+Publish the release as below:
+
+```bash
+svn co https://dist.apache.org/repos/dist/release/ambari ambari
+
+# Note : Only PMCs members can do this 'svn' step.
+
+cd ambari
+mkdir ambari-X.Y.Z
+scp ~/public_html/apache-ambari-X.Y.Z-rc0/* ambari-X.Y.Z
+svn add ambari-X.Y.Z
+svn rm ambari-A.B.C # Remove the older release from the mirror. Only the latest version should appear in dist.
+
+svn commit -m "Committing Release X.Y.Z"
+```
+
+Create the release tag:
+
+```bash
+git tag -a release-X.Y.Z -m 'Ambari X.Y.Z'
+git push origin release-X.Y.Z
+```
+
+Note that it takes up to 24 hours for the changes to propagate to the mirrors. Wait 24 hours and verify that the bits are available on the mirrors before sending an announcement.
+
+**Update Ambari Website and Wiki**
+
+The http://ambari.apache.org website is checked into Git under the `/ambari/docs/src/site` folder.
+
+```bash
+cd docs
+mvn versions:set -DnewVersion=X.Y.Z
+
+# Make necessary changes, typically to pom.xml, site.xml, index.apt, and whats-new.apt
+mvn clean site
+```
+
+Examine the changes under the _/ambari/docs/target_ folder.
+
+Update the wiki to add pages for installation of the new version. _Usually you can copy the pages for the last release and make the URL changes to point to new repo/tarball location._
+
+**Send out Announcement to dev@ambari.apache.org and user@ambari.apache.org.**
+
+Subject: [ANNOUNCE] Apache Ambari X.Y.Z.
+
+The Apache Ambari team is proud to announce Apache Ambari version X.Y.Z
+
+Apache Ambari is a tool for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari consists of a set of RESTful APIs and a browser-based management console UI.
+
+The release bits are at: http://www.apache.org/dyn/closer.cgi/ambari/ambari-X.Y.Z.
+
+To use the released bits please use the following documentation:
+
+https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+X.Y.Z
+
+We would like to thank all the contributors that made the release possible.
+
+Regards,
+
+The Ambari Team
+
+**Submit release data to Apache reporter database.**
+
+This step can be performed only by a project PMC member. If the release manager is not an Ambari PMC member, please reach out to an existing Ambari PMC member or contact the Ambari PMC chair to complete this step.
+
+- Login to https://reporter.apache.org/addrelease.html?ambari with apache credentials.
+- Fill out the fields:
+ - Committee: ambari
+ - Full version name: 2.2.0
+ - Date of release (YYYY-MM-DD): 2015-12-19
+- Submit the data
+- Verify that the submitted data is reflected at https://reporter.apache.org/?ambari
+
+Performing this step keeps [https://reporter.apache.org/?ambari](https://reporter.apache.org/?ambari) site updated and people using the Apache Reporter Service will be able to see the latest release data for Ambari.
+
+## Publish Ambari artifacts to Maven central
+
+Please use the following [document](https://docs.google.com/document/d/1RjWQOaTUne6t8DPJorPhOMWAfOb6Xou6sAdHk96CHDw/edit) to publish Ambari artifacts to Maven central.
diff --git a/versioned_docs/version-2.7.5/ambari-dev/unit-test-reports.md b/versioned_docs/version-2.7.5/ambari-dev/unit-test-reports.md
new file mode 100644
index 0000000..9fadf3f
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/unit-test-reports.md
@@ -0,0 +1,6 @@
+# Unit Test Reports
+
+Branch | Unit Test Report URL
+-------|-------------
+trunk | https://builds.apache.org/job/Ambari-trunk-Commit/
+branch-2.2 | https://builds.apache.org/job/Ambari-branch-2.2/
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-dev/verifying-release-candidate.md b/versioned_docs/version-2.7.5/ambari-dev/verifying-release-candidate.md
new file mode 100644
index 0000000..707db13
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-dev/verifying-release-candidate.md
@@ -0,0 +1,107 @@
+# Verifying Release Candidate
+
+[Apache Release Process](http://www.apache.org/dev/release-publishing)
+
+The steps below are based on what is needed on a fresh CentOS 6 VM created per the [Quick Start Guide](../quick-start/quick-start-guide.md).
+
+## Verify hashes and signature
+
+```bash
+mkdir -p /usr/work/ambari
+pushd /usr/work/ambari
+```
+
+_Download the src tarball, asc signature, and md5/sha1 hashes._
+
+Verify the hashes
+
+```
+openssl md5 apache-ambari-2.4.1-src.tar.gz | diff apache-ambari-2.4.1-src.tar.gz.md5 -
+openssl sha1 apache-ambari-2.4.1-src.tar.gz | diff apache-ambari-2.4.1-src.tar.gz.sha1 -
+```
+
+Verify the signature
+
+```bash
+gpg --keyserver pgpkeys.mit.edu --recv-key <key ID>
+gpg apache-ambari-2.4.1-src.tar.gz.asc
+```
+
+## Compiling the code
+
+If you are verifying the release on a clean machine (e.g., a freshly installed VM), you need to run several preparatory steps.
+
+### Install mvn
+
+```bash
+mkdir /usr/local/apache-maven
+cd /usr/local/apache-maven
+wget http://mirror.olnevhost.net/pub/apache/maven/binaries/apache-maven-3.2.1-bin.tar.gz
+tar -xvf apache-maven-3.2.1-bin.tar.gz
+export M2_HOME=/usr/local/apache-maven/apache-maven-3.2.1
+export M2=$M2_HOME/bin
+export PATH=$M2:$PATH
+```
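+
+Verify that Maven is now on the `PATH`:
+
+```bash
+mvn -version
+```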
+
+### Install java
+
+```bash
+mkdir /usr/jdk
+cd /usr/jdk
+cp "FROM SOURCE"/jdk-7u67-linux-x64.tar.gz . (or download the latest)
+tar -xvf jdk-7u67-linux-x64.tar.gz
+export PATH=$PATH:/usr/jdk/jdk1.7.0_67/bin
+export JAVA_HOME=/usr/jdk/jdk1.7.0_67
+export _JAVA_OPTIONS="-Xmx2048m -XX:MaxPermSize=1024m -Djava.awt.headless=true"
+```
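+
+Verify the Java installation:
+
+```bash
+java -version
+```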
+
+### Install packages
+
+```bash
+yum install -y git
+curl --silent --location https://rpm.nodesource.com/setup | bash -
+yum install -y nodejs
+yum install -y gcc-c++ make
+npm install -g brunch@1.7.20
+yum install -y libfreetype.so.6
+yum install -y freetype
+yum install -y fontconfig
+yum install -y python-devel
+yum install -y rpm-build
+```
+
+### Install python tools
+
+```bash
+wget http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c11-py2.6.egg --no-check-certificate
+
+sh setuptools-0.6c11-py2.6.egg
+```
+
+### Additional steps
+
+These steps may not be needed in every environment. You can perform them before the build, or afterwards if you encounter specific errors.
+
+_Install pom files needed by ambari-metrics-kafka-sink_
+
+
+```bash
+mkdir /tmp/pom-files
+pushd /tmp/pom-files
+cp "FROM SOURCE"/jms-1.1.pom .
+cp "FROM SOURCE"/jmxri-1.2.1.pom .
+cp "FROM SOURCE"/jmxtools-1.2.1.pom .
+mvn install:install-file -Dfile=jmxri-1.2.1.pom -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jms-1.1.pom -DgroupId=javax.jms -DartifactId=jms -Dversion=1.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxtools-1.2.1.pom -DgroupId=com.sun.jdmk -DartifactId=jmxtools -Dversion=1.2.1 -Dpackaging=jar
+popd
+```
+
+### Compile the code
+
+```bash
+pushd /usr/work/ambari
+tar -xvf apache-ambari-2.4.1-src.tar.gz
+cd apache-ambari-2.4.1-src
+mvn clean install -DskipTests
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step1.png b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step1.png
new file mode 100644
index 0000000..814b402
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step1.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step2.png b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step2.png
new file mode 100644
index 0000000..c903003
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step2.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step3.png b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step3.png
new file mode 100644
index 0000000..a0548b5
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step3.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step4.png b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step4.png
new file mode 100644
index 0000000..5f1eb47
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step4.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step5.png b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step5.png
new file mode 100644
index 0000000..68e4ca6
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step5.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step6.png b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step6.png
new file mode 100644
index 0000000..65a83b3
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/imgs/step6.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/index.md b/versioned_docs/version-2.7.5/ambari-plugin-contribution/index.md
new file mode 100644
index 0000000..fe9d0b3
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/index.md
@@ -0,0 +1,9 @@
+# Ambari Plugin Contributions
+
+These are independent extensions that are contributed to the Ambari codebase.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+
+<DocCardList />
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg
new file mode 100644
index 0000000..0414dd4
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png
new file mode 100644
index 0000000..365b9f3
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg
new file mode 100644
index 0000000..5cd861d
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/index.md b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/index.md
new file mode 100644
index 0000000..efe224f
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/index.md
@@ -0,0 +1,63 @@
+# Ambari SCOM Management Pack
+
+This information is intended for **Apache Hadoop** and **Microsoft System Center Operations Manager** users who install the **Ambari SCOM Management Pack**.
+
+## Introduction
+
+### Versions
+
+Ambari SCOM Version | Ambari Server Version | Full Version
+--------------------|------------------------|---------
+2.0.0 | 1.5.1 | 1.5.1.2.0.0.0-673
+1.0.0 | 1.4.4 | 1.4.4.1.0.0.1-472
+0.9.0 | 1.2.5 | 1.2.5.0.9.0.0-60
+
+The Ambari SCOM contribution can be found in the Apache Ambari project:
+
+- https://github.com/apache/ambari/tree/trunk/contrib/ambari-scom
+
+### Useful Resources
+
+The following links connect you to information about common tasks that are associated with System Center Management Packs:
+
+- [Administering the Management Pack Life Cycle](http://go.microsoft.com/fwlink/?LinkId=211463)
+- [How to Import a Management Pack in Operations Manager 2007](http://go.microsoft.com/fwlink/?LinkID=142351)
+- [How to Monitor Using Overrides](http://go.microsoft.com/fwlink/?LinkID=117777)
+- [How to Create a Run As Account in Operations Manager 2007](http://technet.microsoft.com/en-us/library/hh321655.aspx)
+- [How to Modify an Existing Run As Profile](http://go.microsoft.com/fwlink/?LinkID=165412)
+- [How to Export Management Pack Customizations](http://go.microsoft.com/fwlink/?LinkId=209940)
+- [How to Remove a Management Pack](http://go.microsoft.com/fwlink/?LinkId=209941)
+
+For questions about Operations Manager and monitoring packs, see the [System Center Operations Manager community forum](http://social.technet.microsoft.com/Forums/systemcenter/en-US/home?category=systemcenteroperationsmanager).
+
+A useful resource is the [System Center Operations Manager Unleashed blog](http://opsmgrunleashed.wordpress.com/), which contains "By Example" posts for specific monitoring packs.
+
+## Get Started
+
+### Overview
+
+**Ambari SCOM** extends the functionality of **Microsoft System Center Operations Manager** to monitor Apache Hadoop clusters, and leverages Ambari (and the Ambari REST API) to obtain Hadoop metrics. The Ambari SCOM Management Pack will:
+
+- Automatically discover all nodes within a Hadoop cluster(s).
+- Proactively monitor the availability and capacity of the cluster.
+- Proactively notify when the cluster health is critical.
+- Intuitively and efficiently visualize the health of the Hadoop cluster via dashboards.
+
+
+
+### Architecture
+
+Ambari SCOM is made up of the following components:
+
+Component | Description
+----------|------------
+Ambari SCOM Management Pack | The Ambari SCOM Management Pack extends the functionality of Microsoft System Center Operations Manager to monitor Hadoop clusters.
+Ambari SCOM Server | The Ambari SCOM Server component connects to the Hadoop cluster components and exposes a REST API for the Ambari SCOM Management Pack.
+ResourceProvider | An Ambari ResourceProvider is a pluggable interface in Ambari that enables the customization of the Ambari SCOM Server.
+ClusterLayout ResourceProvider | An Ambari ResourceProvider implementation that supplies information on cluster layout (i.e. where Hadoop master and slave components are installed) to the Ambari SCOM Server. This allows Ambari to know how and where to access components of the Hadoop cluster.
+Property ResourceProvider | An Ambari ResourceProvider implementation that integrates with the SQL Server database instance for retrieving stored Hadoop metrics.
+SQL Server | A SQL Server instance that stores the metrics emitted from Hadoop via the SqlServerSink and the Hadoop Metrics2 interface.
+SqlServerSink | This is a Hadoop Metrics2 sink designed to consume metrics emitted from the Hadoop cluster. Ambari SCOM provides a SQL Server implementation.
+
+
+
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/installation.md b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/installation.md
new file mode 100644
index 0000000..6032c1d
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/scom/installation.md
@@ -0,0 +1,340 @@
+# Installation
+
+## Prerequisite Software
+
+Setting up Ambari SCOM assumes the following prerequisite software:
+
+* Ambari SCOM 1.0
+ - Apache Hadoop 1.x cluster (HDFS and MapReduce) [1]
+* Ambari SCOM 2.0
+ - Apache Hadoop 2.x cluster (HDFS and YARN/MapReduce) [2]
+* JDK 1.7
+* Microsoft SQL Server 2012
+* Microsoft JDBC Driver 4.0 for SQL Server [3]
+* Microsoft System Center Operations Manager (SCOM) 2012 SP1 or later
+* System Center Monitoring Agent installed on **Watcher Node** [4]
+
+[1] _Ambari SCOM_ 1.0 has been tested with a Hadoop cluster based on **Hortonworks Data Platform 1.3 for Windows** ("[HDP 1.3 for Windows](http://hortonworks.com/products/releases/hdp-1-3-for-windows/)")
+
+[2] _Ambari SCOM_ 2.0 has been tested with a Hadoop cluster based on **Hortonworks Data Platform 2.1 for Windows** ("[HDP 2.1 for Windows](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-Win-latest/bk_installing_hdp_for_windows/content/win-getting-ready.html)")
+
+[3] Obtain the _Microsoft JDBC Driver 4.0 for SQL Server_ JAR file (`sqljdbc4.jar`) at [http://technet.microsoft.com/en-us/library/ms378749.aspx](http://technet.microsoft.com/en-us/library/ms378749.aspx)
+
+[4] See the Microsoft TechNet topic on [Managing Discovery and Agents](http://technet.microsoft.com/en-us/library/hh212772.aspx). Minimum Agent requirements: _.NET 4_ and _PowerShell 2.0 + 3.0_
+
+## Package Contents
+
+```
+ambari-scom-version.zip
+├── README.md
+├── server.zip
+├── metrics-sink.zip
+├── mp.zip
+└── ambari-scom.msi
+```
+
+File | Name | Description
+-----|------|-------------
+server.zip | Server Package | Contains the required software for configuring the Ambari SCOM Server software.
+metrics-sink.zip | Metrics Sink Package | Contains the required software for manually configuring SQL Server and the Hadoop Metrics Sink.
+ambari-scom.msi | MSI Installer | The Ambari SCOM MSI Installer for configuring the Ambari SCOM Server and Hadoop Metrics Sink
+mp.zip | Management Pack Package | Contains the Ambari SCOM Management Pack software.
+
+## Ambari SCOM Server Installation
+
+:::caution
+The **Ambari SCOM Management Pack** must connect to an Ambari SCOM Server to retrieve cluster metrics. Therefore, you need to have an Ambari SCOM Server running in your cluster. If you have already installed your Hadoop cluster (including the Ganglia Service) with Ambari (minimum **Ambari 1.5.1 for SCOM 2.0.0**) and have an Ambari Server already running and managing your Hadoop 1.x cluster, you can use that Ambari Server and point the **Management Pack** at that host. In that case, proceed directly to [Installing Ambari SCOM Management Pack](#id-2installation-mgmtpack) and skip the Ambari SCOM Server installation steps. If you do not have an Ambari Server running and managing your cluster, you **must** install an Ambari SCOM Server using one of the methods described below.
+:::
+
+The following methods are available for installing Ambari SCOM Server:
+
+* **Manual Installation** - This installation method requires you to configure the SQL Server database, set up the Ambari SCOM Server, and configure the Hadoop Metrics Sink. This provides the most flexible installation option based on your environment.
+* **MSI Installation** - This installation method installs the Ambari SCOM Server and configures the Hadoop Metrics Sink on all hosts in the cluster automatically using an MSI Installer. After launching the MSI, you provide information about your SQL Server database and the cluster for the installer to handle configuration.
+
+## Manual Installation
+
+### Configuring SQL Server
+
+1. Configure an existing SQL Server instance for "mixed mode" authentication.
+
+2. Confirm SQL Server is installed with TCP/IP active and enabled. (default port: 1433)
+3. Create a user and password. Remember this user and password as this will be the account used by the Hadoop metrics interface for capturing metrics. (default user: sa)
+4. Extract the contents of the `metrics-sink.zip` package to obtain the DDL script.
+
+5. Create the Ambari SCOM database schema by running the `Hadoop-Metrics-SQLServer-CREATE.ddl` script.
+
+:::info
+The Hadoop Metrics DDL script will create a database called "HadoopMetrics".
+:::
+
+### Configuring Hadoop Metrics Sink
+
+#### Preparing the Metrics Sink
+
+1. Extract the contents of the `metrics-sink.zip` package to obtain the `metrics-sink-version.jar` file.
+
+2. Obtain the _Microsoft JDBC Driver 4.0 for SQL Server_ `sqljdbc4.jar` file.
+
+3. Copy `sqljdbc4.jar` and `metrics-sink-version.jar` to each host in the cluster. For example, copy to `C:\Ambari\metrics-sink-version.jar` and `C:\Ambari\sqljdbc4.jar`
+on each host.
+
+#### Setup Hadoop Metrics2 Interface
+
+1. On each host in the cluster, setup the Hadoop metrics2 interface to use the `SQLServerSink`.
+
+Edit the `hadoop-metrics2.properties` file (located in the `{C:\hadoop\install\dir}\bin` folder of each host in the cluster):
+
+```
+*.sink.sql.class=org.apache.hadoop.metrics2.sink.SqlServerSink
+
+namenode.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+datanode.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+jobtracker.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+tasktracker.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+maptask.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+reducetask.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+```
+
+:::info
+_Where:_
+
+* _server = the SQL Server hostname_
+* _port = the SQL Server port (for example, 1433)_
+* _user = the SQL Server user (for example, sa)_
+* _password = the SQL Server password (for example, BigData1)_
+:::
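+
+For example, using the sample values from the note above (with an illustrative SQL Server host named `sqlserver1`), the NameNode line would look like:
+
+```
+namenode.sink.sql.databaseUrl=jdbc:sqlserver://sqlserver1:1433;databaseName=HadoopMetrics;user=sa;password=BigData1
+```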
+
+2. Update the Java classpath for each Hadoop service to include the `metrics-sink-version.jar` and `sqljdbc4.jar` files.
+
+
+ - Example: Updating the Java classpath for _HDP for Windows_ clusters
+
+ The `service.xml` files will be located in the `C:\hadoop\install\dir\bin` folder of each host in the cluster. The Java classpath is specified for each service in the `<arguments>` element of the `service.xml` file. For example, to update the Java classpath for the `NameNode` component, edit the `C:\hadoop\bin\namenode.xml` file.
+
+ ```
+ ...
+
+ ... -classpath ...;C:\Ambari\metrics-sink-1.5.1.2.0.0.0-673.jar;C:\Ambari\sqljdbc4.jar ...
+
+ ...
+
+ ```
+
+3. Restart Hadoop for these changes to take effect.
+
+#### Verify Metrics Collection
+
+1. Confirm metrics are being captured in the SQL Server database by querying the `MetricRecord` table:
+
+```sql
+select * from HadoopMetrics.dbo.MetricRecord
+```
+:::info
+In the above SQL statement, `HadoopMetrics` is the database name.
+:::
+
+### Installing and Configuring Ambari SCOM Server
+
+#### Running the Server
+
+1. Designate a machine in the cluster to run the Ambari SCOM Server.
+
+2. Extract the contents of the `server.zip` package to obtain the Ambari SCOM Server packages.
+
+```
+├── ambari-scom-server-version-conf.zip
+├── ambari-scom-server-version-lib.zip
+└── ambari-scom-server-version.jar
+```
+
+3. Extract the contents of the `ambari-scom-server-version-lib.zip` package to obtain the Ambari SCOM dependencies.
+
+4. Extract the contents of the `ambari-scom-server-version-conf.zip` package to obtain the Ambari SCOM configuration files.
+
+5. From the configuration files, edit the `ambari.properties` file:
+
+```
+scom.sink.db.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
+scom.sink.db.url=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+```
+
+:::info
+_Where:_
+ - _server = the SQL Server hostname_
+ - _port = the SQL Server port (for example, 1433)_
+ - _user = the SQL Server user (for example, sa)_
+ - _password = the SQL Server password (for example, BigData1)_
+:::
+
+6. Run the `org.apache.ambari.scom.AmbariServer` class from the Java command line to start the Ambari SCOM Server.
+
+:::info
+Be sure to include the following in the classpath:
+ - `ambari-scom-server-version.jar` file
+ - configuration folder containing the Ambari SCOM configuration files
+ - lib folder containing the Ambari SCOM dependencies
+ - folder containing the `clusterproperties.txt` file from the Hadoop install. For example, `c:\hadoop\install\dir`
+ - `sqljdbc4.jar` SQLServer JDBC Driver file
+:::
+
+For example:
+
+```bash
+java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Xms512m -Xmx2048m -cp "c:\ambari-scom\server\conf;c:\ambari-scom\server\lib\*;c:\jdbc\sqljdbc4.jar;c:\hadoop\install\dir;c:\ambari-scom\server\ambari-scom-server-1.5.1.2.0.0.0-673.jar" org.apache.ambari.scom.AmbariServer
+```
+
+:::info
+In the above command, be sure to replace the Ambari SCOM version in the `ambari-scom-server-version.jar` and replace `c:\hadoop\install\dir` with the folder containing the `clusterproperties.txt` file.
+:::
+
+#### Verify the Server API
+
+1. From a browser, access the API:
+
+```
+http://[ambari-scom-server]:8080/api/v1/clusters
+```
+2. Verify that metrics are being reported.
+
+```
+http://[ambari-scom-server]:8080/api/v1/clusters/ambari/services/HDFS/components/NAMENODE
+```
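+
+You can also check the API from the command line with `curl`, assuming the default `admin`/`admin` credentials described later in this document:
+
+```bash
+curl -u admin:admin http://[ambari-scom-server]:8080/api/v1/clusters
+```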
+
+## MSI Installation
+
+### Configuring SQL Server
+
+1. Configure an existing SQL Server instance for "mixed mode" authentication.
+
+2. Confirm SQL Server is installed with TCP/IP active and enabled. (default port: 1433)
+3. Create a user and password. (default user: sa)
+
+### Running the MSI Installer
+
+1. Designate a machine in the cluster to run the Ambari SCOM Server.
+
+2. Extract the contents of the `server.zip` package to obtain the Ambari SCOM Server packages.
+
+3. Run the `ambari-scom.msi` installer. The "Ambari SCOM Setup" dialog appears:
+
+ 
+
+4. Provide the following information:
+
+Field | Description
+------|------------
+Ambari SCOM package directory | The directory where the installer will place the Ambari SCOM Server packages. For example: C:\Ambari
+SQL Server hostname | The hostname of the SQL Server instance for Ambari SCOM Server to use to store Hadoop metrics.
+SQL Server port | The port of the SQL Server instance.
+SQL Server login | The login username.
+SQL Server password | The login password
+Path to SQL Server JDBC Driver (sqljdbc4.jar) | The path to the JDBC Driver JAR file.
+Path to the cluster layout file (clusterproperties.txt) | The path to the cluster layout properties file.
+
+5. Optionally, select "Start Services".
+6. Click Install
+7. After completion, links are created on the desktop to "Start Ambari SCOM Server", "Browse Ambari API" and "Browse Ambari API Metrics". After starting the Ambari SCOM Server, browse the API and Metrics to confirm the server is working properly.
+
+:::info
+The MSI installer installation log can be found at `C:\AmbariInstallFiles\AmbariSetupTools\ambari.winpkg.install.log`
+:::
+
+### Installing Ambari SCOM Management Pack
+
+:::info
+Before installing the Management pack, be sure to install the Ambari SCOM Server using the Ambari SCOM Server Installation instructions.
+:::
+
+#### Import the Management Pack
+
+Perform the following to import the Ambari SCOM Management Pack into System Center Operations Manager.
+
+1. Extract the contents of the `mp.zip` package to obtain the Ambari SCOM management pack (`.mpb`) files.
+
+2. Ensure you have Windows Server 2012 running SCOM, with SQL Server (full-text search) installed.
+
+3. Open System Center Operations Manager.
+
+4. Go to Administration -> Management Packs.
+
+5. From the Tasks panel, select Import Management Packs...
+
+6. In the Import Management Packs dialog, select Add -> Add from disk...
+
+7. You are prompted to search the Online Catalog. Click "No".
+
+8. Browse for the Ambari SCOM management pack files.
+
+9. Select the following files:
+
+```
+Ambari.SCOM.Monitoring.mpb
+Ambari.SCOM.Management.mpb
+Ambari.SCOM.Presentation.mpb
+```
+10. Click "Open"
+11. Review the Import list and click "Install".
+
+12. The Ambari SCOM Management Pack installation will start.
+
+:::info
+The Ambari SCOM package also includes `AmbariSCOMManagementPack.msi`, which is an alternative packaging of `mp.zip`. This MSI is provided in **beta** form in this release.
+:::
+
+#### Create Run As Account
+
+Perform the following steps to configure an account for the Ambari SCOM Management Pack to use when it talks to the Ambari SCOM Server.
+
+1. After Importing the Management Pack is complete, go to Administration -> Run As Configuration -> Accounts.
+
+2. In the Tasks panel, select "Create Run as Account..."
+3. You are presented with the Create Run As Account Wizard.
+
+4. Go through the wizard; select the Run As account type "Basic Authentication".
+
+5. Give the account a Display name and click "Next".
+
+6. Enter the account name and password for the Ambari SCOM Server. This account will be used to connect to the Ambari SCOM Server to access the Ambari REST API. The default account name is "admin" and the default password is "admin".
+
+7. Click "Next"
+8. Select the "Less secure" distribution security option.
+
+9. Click "Next" and complete the wizard.
+
+#### Configure the Management Pack
+
+Perform the following to configure the Ambari SCOM Management Pack to talk to the Ambari SCOM Server.
+
+1. Go to Authoring -> Management Pack Templates -> Ambari SCOM
+2. In the Tasks panel, select "Add Monitoring Wizard".
+
+3. Select monitoring type "Ambari SCOM"
+4. Provide a name and select the destination management pack.
+
+5. Provide the Ambari URI, which is the address of the Ambari SCOM Server, in the format:
+
+```
+http://[ambari-scom-server]:8080/api/
+```
+
+:::info
+In the above Ambari URI, `ambari-scom-server` is the Ambari SCOM Server.
+:::
+
+6. Select the Run As Account that you created in Create Run As Account.
+
+7. Select "Watcher Node". If node is not listed, click "Add" and browse for the node. Click "Next".
+
+8. Complete the Add Monitoring Wizard and proceed to Monitoring Scenarios for information on using the management pack.
+
+#### Best Practice: Create Management Pack for Customizations
+
+By default, Operations Manager saves all customizations such as overrides to the **Default Management Pack**. As a best practice, you should instead create a separate management pack for each sealed management pack you want to customize.
+
+When you create a management pack for the purpose of storing customized settings for a sealed management pack, it is helpful to base the name of the new management pack on the name of the management pack that it is customizing, such as **Ambari SCOM Customizations**.
+
+Creating a new management pack for storing customizations of each sealed management pack makes it easier to export the customizations from a test environment to a production environment. It also makes it easier to delete a management pack, because you must delete any dependencies before you can delete a management pack. If customizations for all management packs are saved in the **Default Management Pack** and you need to delete a single management pack, you must first delete the **Default Management Pack**, which also deletes customizations to other management packs.
+
+## Monitoring Scenarios
+
+[Monitoring Scenarios](https://cwiki.apache.org/confluence/display/AMBARI/3.+Monitoring+Scenarios)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/ambari-plugin-contribution/step-by-step.md b/versioned_docs/version-2.7.5/ambari-plugin-contribution/step-by-step.md
new file mode 100644
index 0000000..41d96d2
--- /dev/null
+++ b/versioned_docs/version-2.7.5/ambari-plugin-contribution/step-by-step.md
@@ -0,0 +1,52 @@
+# Step-by-step guide on adding a dashboard widget for a host
+
+## Create your own dashboard widget for hosts
+
+Requirements:
+
+- Jmxtrans
+ - Jmxtrans is the application chosen to compile rrd files in order to produce graphing data on Ganglia.
+https://github.com/jmxtrans/jmxtrans
+- .rrd files
+ - All the Ganglia rrd files are stored in the /var/lib/rrds directory on the host machine where the Ganglia server is installed.
+ - In this example I’ll be using the “**Nimbus_JVM_Memory_Heap_used.rrd**” file for the data of my custom widget.
+
+**Step 1**:
+
+First, we need to add the rrd file in the "**ganglia_properties.json**" file, which is located in the `ambari\ambari-server\src\main\resources` directory of your Ambari source code. This is necessary so that the Ambari Server can call your rrd file from Ganglia via the Ambari API.
+
+
+
+Line 108: Create the path for the metrics to be included in the API.
+
+Line 109: Specify the rrd file.
+
+**Step 2**:
+
+Now we are going to add the API path created in step 1 at line 108, to the “**update_controller.js**” file located in the `ambari\ambari-web\app\controllers\global` directory, so that our graph data can be updated frequently.
+
+
+
+**Step 3**:
+
+Create a JavaScript file for the view of the template of our custom widget and save it in the `ambari\ambari-web\app\views\main\host\metrics` directory of your Ambari source code. In this case I saved my file as “**nimbus.js**”
+
+
+
+**Step 4**:
+
+Add the JavaScript file you created in the previous step into the “**views.js**” file located in the `ambari\ambari-web\app` directory.
+
+
+
+**Step 5**:
+
+Add the .js file view created in step 3 in the “**metrics.hbs**” template file located in the `ambari\ambari-web\app\templates\main\host` directory.
+
+
+
+**Step 6**:
+
+Add the API call to the “**ajax.js**” file located in the `ambari\ambari-web\app\utils` directory.
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.5/introduction.md b/versioned_docs/version-2.7.5/introduction.md
new file mode 100644
index 0000000..58a8611
--- /dev/null
+++ b/versioned_docs/version-2.7.5/introduction.md
@@ -0,0 +1,48 @@
+---
+sidebar_position: 1
+---
+
+# Introduction
+
+The Apache Ambari project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs.
+
+Ambari enables System Administrators to:
+
+* Provision a Hadoop Cluster
+ - Ambari provides a step-by-step wizard for installing Hadoop services across any number of hosts.
+
+ - Ambari handles configuration of Hadoop services for the cluster.
+
+* Manage a Hadoop Cluster
+ - Ambari provides central management for starting, stopping, and reconfiguring Hadoop services across the entire cluster.
+
+* Monitor a Hadoop Cluster
+ - Ambari provides a dashboard for monitoring health and status of the Hadoop cluster.
+
+ - Ambari leverages [Ambari Metrics System](https://issues.apache.org/jira/browse/AMBARI-5707) for metrics collection.
+
+ - Ambari leverages [Ambari Alert Framework](https://issues.apache.org/jira/browse/AMBARI-6354) for system alerting and will notify you when your attention is needed (e.g., a node goes down, remaining disk space is low, etc).
+
+Ambari enables Application Developers and System Integrators to:
+
+* Easily integrate Hadoop provisioning, management, and monitoring capabilities to their own applications with the [Ambari REST APIs](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md).
+
+## Getting Started with Ambari
+
+Follow the [installation guide for Ambari 2.7.6](https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+2.7.6).
+
+Note: Ambari currently supports the 64-bit version of the following Operating Systems:
+
+* RHEL (Redhat Enterprise Linux) 7.4, 7.3, 7.2
+* CentOS 7.4, 7.3, 7.2
+* OEL (Oracle Enterprise Linux) 7.4, 7.3, 7.2
+* Amazon Linux 2
+* SLES (SuSE Linux Enterprise Server) 12 SP3, 12 SP2
+* Ubuntu 14 and 16
+* Debian 9
+
+## Get Involved
+
+Visit the [Ambari Wiki](https://cwiki.apache.org/confluence/display/AMBARI/Ambari) for design documents, roadmap, development guidelines, etc.
+
+[Join the Ambari User Meetup Group](http://www.meetup.com/Apache-Ambari-User-Group). You can see the slides from [April 2, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/109316812/), [June 25, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/119184782/), and [September 25, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/134373312/) meetups.
diff --git a/versioned_docs/version-2.7.5/quick-start/_category_.json b/versioned_docs/version-2.7.5/quick-start/_category_.json
new file mode 100644
index 0000000..b1ce0cc
--- /dev/null
+++ b/versioned_docs/version-2.7.5/quick-start/_category_.json
@@ -0,0 +1,8 @@
+{
+ "label": "Quick Start",
+ "position": 2,
+ "link": {
+ "type": "generated-index",
+ "description": "Ambari Quick Start"
+ }
+}
diff --git a/versioned_docs/version-2.7.5/quick-start/quick-start-for-new-vm-users.md b/versioned_docs/version-2.7.5/quick-start/quick-start-for-new-vm-users.md
new file mode 100644
index 0000000..6b7469c
--- /dev/null
+++ b/versioned_docs/version-2.7.5/quick-start/quick-start-for-new-vm-users.md
@@ -0,0 +1,410 @@
+---
+sidebar_position: 2
+---
+
+# Quick Start for New VM Users
+
+This Quick Start guide is for readers who are new to the use of virtual machines, Apache Ambari, and/or the Apache Hadoop component stack, who would like to install and use a small local Hadoop cluster. The instructions are for a local host machine running OS X.
+
+The following instructions cover four main steps for installing Ambari and HDP using VirtualBox and Vagrant:
+
+1. Install VirtualBox and Vagrant.
+
+2. Install and start one or more Linux virtual machines. Each machine represents a node in a cluster.
+
+3. On one of the virtual machines, download, install, and deploy the version of Ambari you wish to use.
+
+4. Using Ambari, deploy the version of HDP you wish to use.
+
+When you complete the example in this Quick Start, you should have a three-node cluster of virtual machines running Ambari 2.4.1.0 and HDP 2.5.0 (unless you specify different repository versions).
+
+Once VirtualBox and Vagrant are installed, steps 2 through 4 can be done multiple times to change versions, create a larger cluster, and so on. There is no need to repeat step 1 unless you want to upgrade VirtualBox and/or Vagrant later.
+
+Note: these steps were most recently tested on OS X 10.11.6 (El Capitan).
+
+
+## Terminology
+
+A _virtual machine_, or _VM_, is a software program that exhibits the behavior of a separate computer and is capable of running applications and programs within its own environment.
+
+A virtual machine is often called a _guest_, because it runs within another computing environment--usually known as the _host_. For example, if you install three Linux VMs on a Mac, the Mac is the host machine; the three Linux VMs are guests.
+
+Multiple virtual machines can exist within a single host at one time. In the following examples, one or more virtual machines run on a _host_ machine running OS X. OS X is the primary operating system. The virtual machines (guests) are installed under OS X. The virtual machines run Linux in separate environments on OS X. Thus, your Mac is the "host" machine, and the virtual machines that run Ambari and Hadoop are called "guest" machines.
+
+## Prerequisites
+
+You will need the following resources for this Quick Start:
+
+* A solid internet connection, preferably with at least 5 MB available download bandwidth.
+
+* If you are installing the VMs on a Mac, at least 16 GB of memory (assuming 3 GB per VM)
+
+## Install VirtualBox and Vagrant
+
+VirtualBox is a software virtualization package that installs on an operating system as an application. It allows you to run multiple virtual machines at the same time. In this Quick Start you will use VirtualBox to run Linux nodes within VirtualBox on OS X:
+
+
+
+Vagrant is a tool that makes it easier to work with virtual machines. It helps automate the work of setting up, running, and removing virtual machine environments. Using Vagrant, you can install and run a preconfigured cluster environment with Ambari and the HDP stack.
+
+1. Download and install VirtualBox from [https://www.virtualbox.org/wiki/Downloads](https://www.virtualbox.org/wiki/Downloads). This Quick Start has been tested on version 5.1.6.
+
+2. Download and install Vagrant from [https://www.vagrantup.com/downloads.html](https://www.vagrantup.com/downloads.html).
+3. Clone the `ambari-vagrant` GitHub repository into a convenient folder on your Mac. Navigate to the folder, and enter the following command from the terminal:
+
+```bash
+git clone https://github.com/u39kun/ambari-vagrant.git
+```
+
+The repository contains scripts for setting up Ambari virtual machines on several Linux distributions.
+
+4. Add virtual machine hostnames and addresses to the `/etc/hosts` file on your computer. The following command appends a set of host names and addresses from `ambari-vagrant/append-to-etc-hosts.txt` to the end of the `/etc/hosts` file:
+
+```bash
+sudo -s 'cat ambari-vagrant/append-to-etc-hosts.txt >> /etc/hosts'
+```
+5. Use the `vagrant` command to create a private key to use with Ambari:
+
+```bash
+vagrant
+```
+
+The `vagrant` command displays Vagrant command information, and then it creates a private key in the file `~/.vagrant.d/insecure_private_key`.
+
+## Start Linux Virtual Machines
+
+The `ambari-vagrant` directory (cloned from GitHub) contains several subdirectories, each for a specific Linux distribution. Each subdirectory has scripts and configuration files for running Ambari and HDP on that version of Linux.
+
+To start one or more virtual machines:
+
+1. Change your current directory to `ambari-vagrant`:
+
+```bash
+cd ambari-vagrant
+```
+
+If you run an `ls` command on the `ambari-vagrant` directory, you will see subdirectories for several different operating systems and operating system versions.
+
+2. `cd` into the OS subdirectory for the OS you wish to use. CentOS is recommended, because it is quicker to launch than other operating systems.
+
+The remainder of this example uses CentOS 7.0. (To install and use a different version or distribution of Linux, specify the other directory name in place of `centos7.0`.)
+
+```bash
+cd centos7.0
+```
+
+**Important**: All VM `vagrant` commands operate within your current directory. Be sure to run them from the local (Mac) subdirectory associated with the VM operating system that you have chosen to use. If you attempt to run a `vagrant` command from another directory, it will not find the VM.
+
+Copy the private key into the directory associated with the chosen operating system.
+
+For this example, which uses `centos7.0`, issue the following command:
+
+```bash
+cp ~/.vagrant.d/insecure_private_key .
+```
+3. (Optional) If you have at least 16 GB of memory on your Mac, consider increasing the amount of memory allocated to the VMs.
+
+Edit the following line in `Vagrantfile`, increasing the allocated memory from 3072 to 4096 or more; for example:
+
+```bash
+vb.customize ["modifyvm", :id, "--memory", 4096] # RAM allocated to each VM
+```
+4. Every virtual machine will have a directory called `/vagrant` inside the VM. This corresponds to the `ambari-vagrant/<os>` directory on your local computer, making it easy to transfer files back and forth between your host Mac and the virtual machine. If you have any files to access from within the VM, you can place them in this shared directory.
+
+5. Start one or more VMs, using the `./up.sh` command. Each VM will run one HDP node. Recommendation: if you have at least 16 GB of RAM on your Mac and wish to run a small cluster, start with three nodes.
+
+```bash
+./up.sh
+```
+
+For example, the following command starts 3 VMs: `./up.sh 3`
+
+On an early 2013 MacBook Pro, 2.7 GHz core i7 and 16 GB RAM, this step takes five minutes. For CentOS 7.0, the hostnames are `c7001`, `c7002`, and `c7003`.
+
+Additional notes:
+- If you ran the VMs before and used `vagrant destroy` to remove them, this is the step at which you would recreate and start the VMs.
+
+- The default `Vagrantfile` (in each OS subdirectory) can create up to 10 virtual machines.
+
+- The `up.sh 3` command is equivalent to `vagrant up c700{1..3}`.
+
+- The fully-qualified domain name (FQDN) for each VM has the format `<os-code>[01-10].ambari.apache.org`, where `<os-code>` is `c59` (CentOS 5.9), `c64` (CentOS 6.4), etc. For example, `c5901.ambari.apache.org` will be the FQDN for node 01 running CentOS 5.9.
+
+- The IP address for each VM has the format `192.168.<os-subnet>.1[01-10]`, where `<os-subnet>` is `64` for CentOS 6.4, `70` for CentOS 7.0, and so on. For example, `192.168.70.101` will be the IP address for CentOS 7.0 node `c7001`.
+
+6. Check the status of your VM(s), and review any errors. The following example shows the result of `vagrant status` after running `./up.sh 3` for three VMs on CentOS 7.0:
+
+```
+LMBP:centos7.0 lkg$ vagrant status
+
+Current machine states:
+c7001 running (virtualbox)
+c7002 running (virtualbox)
+c7003 running (virtualbox)
+c7004 not created (virtualbox)
+c7005 not created (virtualbox)
+c7006 not created (virtualbox)
+c7007 not created (virtualbox)
+c7008 not created (virtualbox)
+c7009 not created (virtualbox)
+c7010 not created (virtualbox)
+```
+
+In the preceding list, three virtual machines are installed and running.
+
+7. At this point, you can snapshot the VMs to have a fresh set of running machines to reuse if desired. This is especially helpful when installing Apache Ambari and the HDP stack for the first time; it allows you to back out to fresh VMs and reinstall Ambari and HDP if you encounter errors. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Access Virtual Machines
+
+Use the following steps when you want to access a running virtual machine:
+
+1. To log on to a virtual machine, use the `vagrant ssh` command on your host machine, and specify the hostname; for example:
+
+```
+LMBP:centos7.0 lkg$ vagrant ssh c7001
+
+Last login: Tue Jan 12 11:20:28 2016
+[vagrant@c7001 ~]$
+```
+
+From this point onward, this terminal window is connected to the virtual machine until you exit the virtual machine. All commands go to the VM, not to your Mac.
+
+_**Recommendation**_: Open a second terminal window for your Mac. This is useful when accessing the Ambari Web UI. To distinguish between the two, terminal windows typically list the computer name or VM hostname on each command-line prompt and at the top of the terminal window.
+
+2. When you first access the VM you will be logged in as user `vagrant`. Switch to the `root` user; be sure to include the space between "su" and "-":
+
+```
+[vagrant@c7001 ~]$ sudo su -
+
+Last login: Sun Sep 25 01:34:28 AEST 2016 on pts/0
+root@c7001:~#
+```
+
+If at any time you wish to return the terminal window to your host machine:
+
+1. Use the `logout` command to log out of root.
+
+2. Use the `exit` command to return to your host machine (Mac).
+
+At this point, the VMs are still running in the background. You can re-issue the `vagrant ssh` command later, to reconnect, or you can stop the virtual machines. For more information, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Install Ambari on the Virtual Machines
+
+**Prerequisites**: Before installing Ambari, the following software packages must be installed on your VM:
+
+* rpm
+* curl
+* wget
+* pdsh
+
+On CentOS: to check if a package is installed, run `yum info <package-name>`. To install a package, run `yum install <package-name>`.
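+
+For example, a one-liner that checks each prerequisite and installs it if missing (a sketch for CentOS, run as root; `pdsh` may require an extra repository such as EPEL):
+
+```bash
+for pkg in rpm curl wget pdsh; do
+  rpm -q "$pkg" >/dev/null 2>&1 || yum install -y "$pkg"
+done
+```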
+
+To install Ambari, you can build it yourself from source (see [Ambari Development](../ambari-dev/index.md)), or you can use published binaries.
+
+As this is a Quick Start Guide to get you going quickly, ready-made, publicly available binaries are referenced here. These binaries were built and made publicly available by Hortonworks, a commercial Hadoop vendor, for your convenience; using them makes HDP, Hortonworks' distribution, available for installation via Apache Ambari. The instructions should still work with Ambari binaries from any other vendor, organization, or individual (only the repo URLs need to change), and contributions that expand this page to include other ready-made, publicly accessible binaries are welcome. The same steps also apply if you built the binaries yourself.
+
+From the terminal window on the VM where you want to run the main Ambari service, download the Ambari repository. The following commands download Ambari version 2.5.1.0 and install `ambari-server`. To install a different version of Ambari, specify the appropriate repo URL. Choose the commands appropriate for the operating system on your VMs:
+
+```bash
+# CentOS 6 (for CentOS 7, replace centos6 with centos7 in the repo URL)
+#
+# to test public release 2.5.1
+wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo
+yum install ambari-server -y
+
+# Ubuntu 14 (for Ubuntu 16, replace ubuntu14 with ubuntu16 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.5.1.0/ambari.list
+apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
+apt-get update
+apt-get install ambari-server -y
+
+# SUSE 11 (for SUSE 12, replace suse11 with suse12 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/zypp/repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.5.1.0/ambari.repo
+zypper install ambari-server -y
+```
+
+On an early 2013 MacBook Pro (2.7 GHz Core i7, 16 GB RAM), this step takes about seven minutes. Timing also depends on internet download speeds.
+
+To install Ambari with default settings, set up and start `ambari-server`:
+
+```bash
+ambari-server setup -s
+ambari-server start
+```
+
+To check Ambari Server status, issue the following command: `ambari-server status`
+
+
+After Ambari Server has started, launch a browser on your host machine (Mac). Access the Ambari Web UI at `http://<hostname>.ambari.apache.org:8080`. The `<hostname>` part of the URL specifies the VM where you installed Ambari; for example:
+
+```
+http://c7001.ambari.apache.org:8080
+```
+
+**Note**: The Ambari Server can take some time to launch and be ready to accept connections. Keep trying the URL until you see the login page.
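+
+For example, you can poll from the Mac until the login page responds (a sketch, assuming Ambari Server runs on `c7001`):
+
+```bash
+until curl -sf http://c7001.ambari.apache.org:8080 >/dev/null; do
+  sleep 5
+done
+echo "Ambari Web UI is up"
+```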
+
+
+At this point, you can snapshot the VMs to have a cluster with Ambari installed, to rerun later if desired. This is especially helpful when installing Apache Ambari and the HDP stack for the first time; it allows you to back out to fresh VMs running Ambari, and reinstall a fresh HDP stack if you encounter errors. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Install the HDP Stack
+
+The following instructions describe basic steps for using Ambari to install HDP components.
+
+1. On the Ambari screen, log in using the default username `admin` and password `admin`.
+
+2. On the welcome page, choose "Launch Install Wizard."
+3. Specify a name for your cluster, and then click Next.
+
+4. On the Select Version page, choose which version of HDP to install, and then click Next.
+
+5. On the Install Options page, complete the following steps:
+ 1. List the FQDNs of the virtual machines. For example:
+
+```
+c7001.ambari.apache.org
+c7002.ambari.apache.org
+c7003.ambari.apache.org
+```
+
+Alternatively, you can use a range expression:
+
+```
+c70[01-03].ambari.apache.org
+```
+ 2. Upload the `insecure_private_key` file that you created earlier: browse to the `ambari-vagrant` directory, navigate to the operating system folder for your VMs, and choose the key file.
+
+ 3. Change the SSH User Account to `vagrant`.
+
+ 4. Click "Register and Confirm."
+
+6. On the Confirm Hosts page, Ambari displays installation status.
+
+If you see a yellow banner with a message such as "Some warnings were encountered while performing checks against the 3 registered hosts above. Click here to see the warnings.", click the link to review the warnings.
+
+See the Troubleshooting section (later on this page) for more information.
+
+7. When all host checks pass, close the warning window.
+
+8. Click Next to continue.
+9. On the Choose Services page, uncheck any components that you do not expect to use. If any are required for selected components, Ambari will request to add them back in.
+
+10. On the Assign Masters screen, choose hosts or simply click Next to use default values.
+
+11. On the Assign Slaves and Clients screen, choose hosts or simply click Next to use default values.
+
+12. On the Customize Services screen:
+ 1. Review services with warning notes, such as Hive and Ambari Metrics.
+
+ 2. Specify missing property values (such as admin passwords) as directed by the installation wizard. When all configurations have been addressed, click Next.
+
+13. On the Review screen, review the service definitions, and then click Next.
+
+14. The Install, Start and Test page shows deployment status. This step takes a while; on an early 2013 MacBook Pro (2.7 GHz Core i7, 16 GB RAM), it takes about 45 minutes.
+
+15. When the cluster installs successfully, you can snapshot the VMs to have a fresh cluster with Ambari and HDP installed, to rerun later if desired. This allows you to experiment with the cluster and quickly restore back to a previous state if you wish. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Troubleshooting
+
+This subsection describes a few error conditions that might occur during Ambari installation and HDP cluster deployment:
+
+### Confirm Hosts
+
+If you see an error similar to the following on the Confirm Hosts page of the Ambari installation wizard, click the link to see the warnings:
+"Some warnings were encountered while performing checks against the 3 registered hosts above. Click here to see the warnings."
+
+**`ntpd` Error**
+
+On the Host Checks window, a warning that `ntpd` is not running indicates that you need to start `ntpd` on each host.
+
+To start the services, on each VM open a terminal window (from your Mac: `vagrant ssh <vm-name>`) and issue the following commands:
+
+`service ntpd start`, then `service ntpd status`.
+
+You should see messages confirming that `ntpd` is running. Navigate back to the Host Checks window of the Ambari installation wizard and click Rerun Checks. When all checks complete successfully, click Close to continue the installation process.
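+
+Since `pdsh` is one of the prerequisites installed earlier, a convenient shortcut is to start `ntpd` on all hosts from a single terminal (a sketch, assuming the three CentOS 7.0 VMs above):
+
+```bash
+pdsh -w c7001.ambari.apache.org,c7002.ambari.apache.org,c7003.ambari.apache.org "service ntpd start"
+```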
+
+### Install, Start and Test
+
+If the Install, Start and Test step fails with the following error during DataNode deployment:
+
+```
+Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
+Requires: snappy(x86-64) = 1.0.5-1.el6
+Installed: snappy-1.1.0-3.el7.x86_64 (@anaconda/7.2)
+```
+
+Run the following commands under the root account on each VM:
+
+```bash
+yum remove -y snappy-1.1.0-3.el7.x86_64
+yum install snappy-devel -y
+```
+
+### Stopping and Restarting Virtual Machines
+
+Hadoop is a complex ecosystem with a lot of status checks and cross-component messages. This can make it challenging to halt and restart several VMs and restore them later without warnings or errors.
+
+**Recommendations**
+
+If you would like to save cluster state for a period of time and you plan to stop using your Mac during that time, you can simply put the Mac to sleep; the cluster should continue from where it left off after you wake the Mac.
+
+When stopping a set of VMs, if you don't need to save cluster state, it can be helpful to stop all services first, stop Ambari Server (`ambari-server stop`), and then issue a Vagrant `halt` or `suspend` command.
+
+When restarting a cluster after halting or taking a snapshot, check Ambari server status and restart it if necessary:
+
+```bash
+ambari-server status
+ambari-server start
+```
+
+After logging into the Ambari Web UI, expect to see alert warnings or errors due to timeout conditions. Check the associated messages to determine whether they might affect your use of the virtual cluster. If so, it can be helpful to stop and restart one or more associated components.
+
+# Reference: Basic Vagrant Commands
+
+The following table lists several common Vagrant commands. For more information, see Vagrant [Command-Line Interface](https://www.vagrantup.com/docs/cli/) documentation.
+
+**vagrant up**
+
+Create and configure guest machines. Example: `vagrant up c6406`
+
+`up.sh` is a wrapper for this call. You can use this command to start more VMs after you have called `up.sh`.
+
+Note: if you do not specify the `<vm-name>` parameter, Vagrant will attempt to start ten VMs.
+
+**`vagrant suspend [<vm-name>]`**
+
+Save the current running state of a VM and stop the VM. A suspend effectively saves the _exact point-in-time state_ of a machine. When you issue a `resume` command, the VM begins running immediately from that point, rather than doing a full boot.
+
+When you are ready to begin working with it again, run `vagrant up`; the machine resumes where you left off. The main benefit of `suspend` is that it is very fast: it usually takes only 5 to 10 seconds to stop and resume your work. The downside is that the VM still consumes disk space, plus additional disk space to store the state of the VM's RAM.
+
+Optional: Specify a specific VM to suspend.
+
+**vagrant halt**
+
+Stop a running VM and power it down. When you are ready to begin working with it again, run `vagrant up`; the machine performs a full boot. Compared with `suspend`, `halt` frees the RAM used by the guest but takes longer to start again.
+
+Optional: Specify a specific VM to halt.
+
+**`vagrant destroy -f [<vm-name>]`**
+
+Remove all traces of the guest machine from your system. The `destroy` command stops the guest machine, powers it down, and removes all guest hard disks. When you are ready to begin working with it again, run `vagrant up`. The benefit of this all disk space and RAM consumed by the guest machine are reclaimed; your host machine is left clean. The downside is that the `vagrant up` operation will take extra time; rebuilding the environment takes the longest (compared with `suspend` and `halt`) because it re-imports and re-provisions the machine.
+
+Optional: Specify a specific VM to destroy.
+
+**vagrant ssh**
+
+Starts an SSH session to the specified VM.
+
+Example: `vagrant ssh c6401`
+
+**`vagrant status [<vm-name>]`**
+
+Shows which VMs are running, suspended, and so on. Optional: Specify a specific VM.
+
+**vagrant snapshot**
+
+A Vagrant snapshot saves the current state of a VM so that you can restart the VM from the same point at a future time. Commands include push, pop, save, restore, list, and delete. For more information, see [https://www.vagrantup.com/docs/cli/snapshot.html](https://www.vagrantup.com/docs/cli/snapshot.html).
+
+Note: Upon resuming a snapshot, you may find that time-sensitive services, such as the HBase RegionServer, are down. If this happens, you will need to restart those services.
+
+**vagrant --help**
+
+Displays the list of available Vagrant commands with usage information.
+
+**Recommendation**: After you start the VMs, but before you run anything on them, save a snapshot. This allows you to restore the initial state of your VMs. Restoring that snapshot is much faster than starting the VMs from scratch and then reinstalling Ambari and HDP. You can return to the initial state without destroying other named snapshots that you create later.
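+
+For example, a minimal snapshot workflow using the built-in subcommands listed above (run from the OS subdirectory):
+
+```bash
+vagrant snapshot save init      # record the initial state of the running VMs
+vagrant snapshot restore init   # return to that state later
+vagrant snapshot list           # show all named snapshots
+```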
+
+More information: [https://www.vagrantup.com/docs/getting-started/teardown.html](https://www.vagrantup.com/docs/getting-started/teardown.html)
+
+If you have favorite ways of starting and stopping VMs running a Hadoop cluster, please feel free to share them in the Comments section. Thanks!
diff --git a/versioned_docs/version-2.7.5/quick-start/quick-start-guide.md b/versioned_docs/version-2.7.5/quick-start/quick-start-guide.md
new file mode 100644
index 0000000..ddab91f
--- /dev/null
+++ b/versioned_docs/version-2.7.5/quick-start/quick-start-guide.md
@@ -0,0 +1,252 @@
+---
+sidebar_position: 1
+---
+
+# Quick Start Guide
+
+This document shows how to quickly set up a cluster using Ambari on your local machine using virtual machines.
+
+This utilizes [VirtualBox](https://www.virtualbox.org/) and [Vagrant](https://www.vagrantup.com/) so you will need to install both.
+
+Note that the steps were tested on MacOS 10.8.4 / 10.8.5.
+
+After you have installed VirtualBox and Vagrant on your computer, check out the "ambari-vagrant" repo on github:
+
+```bash
+git clone https://github.com/u39kun/ambari-vagrant.git
+```
+
+Edit your **/etc/hosts** on your computer so that you will be able to resolve hostnames for the VMs:
+
+```bash
+sudo -s 'cat ambari-vagrant/append-to-etc-hosts.txt >> /etc/hosts'
+```
+
+Copy the private key to your home directory (or some place convenient for you) so that it's easily accessible for uploading via Ambari Web:
+
+```bash
+vagrant
+```
+
+The above command shows the command usage and also creates a private key as ~/.vagrant.d/insecure_private_key. This key will be used in the following steps.
+
+First, change directory to ambari-vagrant:
+
+```bash
+cd ambari-vagrant
+```
+
+You will see subdirectories for different OSes. `cd` into the one for the OS that you want to test. **centos6.8** is recommended, as it is quicker to launch than the other OSes.
+
+Now you can start VMs with the following command:
+
+```bash
+cd centos6.8
+cp ~/.vagrant.d/insecure_private_key .
+./up.sh
+```
+
+For example, **up.sh 3** starts 3 VMs. Three seems to be a good number with 16 GB of RAM, without taxing the system too much.
+
+With the default **Vagrantfile**, you can specify up to 10 (if your computer can handle it; you can even add more).
+
+VMs will have FQDNs of the form `<os-code>[01-10].ambari.apache.org`, where `<os-code>` identifies the OS; for example, `c6801.ambari.apache.org` for the first CentOS 6.8 VM.
+
+If it is your first time running a vagrant command, run:
+
+```bash
+vagrant init
+```
+
+Log into the VM:
+
+```bash
+vagrant ssh c6801
+```
+
+Then run the following to make yourself root:
+
+```bash
+sudo su -
+```
+
+To install Ambari, you can build it yourself from source (see [Ambari Development](../ambari-dev/index.md)), or you can use published binaries.
+
+As this is a Quick Start Guide to get you going quickly, ready-made, publicly available binaries are referenced in the steps below.
+
+These binaries were built and made publicly available by Hortonworks, a commercial Hadoop vendor, for your convenience; using them makes HDP, Hortonworks' distribution, available for installation via Apache Ambari. The instructions should still work with Ambari binaries from any other vendor, organization, or individual (only the repo URLs need to change), and contributions that expand this page to include other ready-made, publicly accessible binaries are welcome. The same steps also apply if you built the binaries yourself.
+
+```bash
+# CentOS 6 (for CentOS 7, replace centos6 with centos7 in the repo URL)
+#
+# to test public release 2.5.1
+wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo
+yum install ambari-server -y
+
+# Ubuntu 14 (for Ubuntu 16, replace ubuntu14 with ubuntu16 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.5.1.0/ambari.list
+apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
+apt-get update
+apt-get install ambari-server -y
+
+# SUSE 11 (for SUSE 12, replace suse11 with suse12 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/zypp/repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.5.1.0/ambari.repo
+zypper install ambari-server -y
+```
+
+Ambari offers many installation options (see [Ambari User Guides](https://cwiki.apache.org/confluence/display/AMBARI/Ambari+User+Guides)), but to get up and running quickly with default settings, you can run the following to set up and start ambari-server.
+
+```bash
+ambari-server setup -s
+ambari-server start
+```
+
+_For frontend developers only: see Frontend Development section below for extra setup instructions._
+
+Once Ambari Server is started, hit [http://c6801.ambari.apache.org:8080](http://c6801.ambari.apache.org:8080) (URL depends on the OS being tested) from your browser on your local computer.
+
+Note that Ambari Server can take some time to fully come up and be ready to accept connections. Keep hitting the URL until you get the login page.
+
+Once you are at the login page, login with the default username **admin** and password **admin**.
+
+On the Install Options page, use the FQDNs of the VMs. For example:
+
+```
+c6801.ambari.apache.org
+c6802.ambari.apache.org
+c6803.ambari.apache.org
+```
+
+Alternatively, you can use a range expression:
+
+```
+c68[01-03].ambari.apache.org
+```
+
+Specify the non-root SSH user **vagrant**, and upload the **insecure_private_key** file that you copied earlier as the private key.
+
+Follow the onscreen instructions to install your cluster.
+
+When done testing, run **vagrant destroy -f** to purge the VMs.
+
+**vagrant up**
+Starts a specific VM. `up.sh` is a wrapper for this call.
+
+Note: if you don't specify the `<vm-name>` parameter, Vagrant will attempt to start 10 VMs.
+
+**vagrant destroy -f**
+Destroys all VMs launched from the current directory (deletes them from disk as well)
+You can optionally specify a specific VM to destroy
+
+**vagrant suspend**
+Suspends (snapshot) all VMs launched from the current directory so that you can resume them later
+You can optionally specify a specific VM to suspend
+
+**vagrant resume**
+Resumes all suspended VMs launched from the current directory
+You can optionally specify a specific VM to resume
+
+**vagrant status**
+Shows which VMs are running, suspended, etc.
+
+## Modifying RAM for the VMs
+
+Each VM is allocated 2 GB of RAM. This can be changed by editing **Vagrantfile**. To change the RAM allocation, modify the following line:
+
+```bash
+vb.customize ["modifyvm", :id, "--memory", 2048]
+```
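+
+For example, to allocate 4 GB per VM instead (the value is illustrative), change the line as shown below and then run `vagrant reload` so the new setting takes effect:
+
+```ruby
+vb.customize ["modifyvm", :id, "--memory", 4096]
+```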
+
+## Taking Snapshots
+
+Vagrant makes it easy to take snapshots of the entire cluster.
+
+First, install the snapshot plugin:
+
+```bash
+vagrant plugin install vagrant-vbox-snapshot --plugin-version=0.0.2
+```
+
+This enables the "vagrant snapshot" command. Note that the above installs version 0.0.2; the latest version, 0.0.3, does not allow taking snapshots of the whole cluster at the same time (you have to specify a VM name).
+
+Run **vagrant snapshot** to see the syntax.
+
+Note that the plugin tries to take a snapshot of all VMs configured in Vagrantfile. If you are always using 3 VMs, for example, you can comment out c68[04-10] in Vagrantfile so that the snapshot commands only operate on c68[01-03].
+
+Note: Upon resuming a snapshot, you may find that time-sensitive services are down (e.g., the HBase RegionServer).
+
+Tip: After starting the VMs, but before you do anything on them, run `vagrant snapshot take init`. This way, you can go back to the initial state of the VMs by running `vagrant snapshot go init`; this takes only seconds (much faster than starting the VMs from scratch using up.sh or `vagrant up`). Another advantage is that you can always go back to the initial state without destroying other named snapshots that you created.
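+
+In command form (using the plugin's subcommands from the tip above):
+
+```bash
+vagrant snapshot take init   # save the initial state of all VMs
+vagrant snapshot go init     # return to that state in seconds
+```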
+
+## Misc
+
+All VMs launched will have a directory called **/vagrant** inside the VM. This maps to the **ambari-vagrant/** directory on your local computer. You can use this shared directory mapping to push files, etc.
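+
+For example (the file name here is hypothetical):
+
+```bash
+# On the Mac: drop a file into the shared folder
+cp my-custom.repo ambari-vagrant/
+
+# Inside any VM: the same file is visible under /vagrant
+ls /vagrant/my-custom.repo
+```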
+
+If you want to test OSes other than what's currently in the **ambari-vagrant** repo, see [http://www.vagrantbox.es/](http://www.vagrantbox.es/) for the readily available OS images you can test. Note that Ambari currently works on RHEL 5/6, CentOS 5/6, Oracle Linux 5/6, SUSE 11, and SLES 11. Ubuntu support is a work in progress.
+
+## Kerberos Support
+
+Ambari supports adding Kerberos security to an existing Ambari-installed cluster. First, set up any one host as the KDC, as follows:
+
+Install the Kerberos server on the chosen host; e.g., for CentOS/RedHat:
+
+```bash
+yum install krb5-server krb5-libs krb5-auth-dialog rng-tools -y
+```
+
+Create the Kerberos database:
+
+```bash
+rngd -r /dev/urandom -o /dev/random
+/usr/sbin/kdb5_util create -s
+```
+
+Update `/etc/krb5.conf` on the KDC host; e.g., if your realm is `EXAMPLE.COM` and the KDC host is `c6801.ambari.apache.org`:
+
+```bash
+[realms]
+ EXAMPLE.COM = {
+ admin_server = c6801.ambari.apache.org
+ kdc = c6801.ambari.apache.org
+ }
+```
+
+Restart the Kerberos services; e.g., for CentOS/RedHat:
+
+```bash
+/etc/rc.d/init.d/krb5kdc restart
+/etc/rc.d/init.d/kadmin restart
+```
+
+Create the admin principal:
+
+```bash
+$ sudo kadmin.local
+kadmin.local: add_principal admin/admin@EXAMPLE.COM
+WARNING: no policy specified for admin/admin@EXAMPLE.COM; defaulting to no policy
+Enter password for principal "admin/admin@EXAMPLE.COM":
+Re-enter password for principal "admin/admin@EXAMPLE.COM":
+Principal "admin/admin@EXAMPLE.COM" created.
+
+```
+
+Remember the password for this principal; the Ambari Kerberos Wizard will request it later. Distribute the updated `/etc/krb5.conf` file to the remaining hosts in the cluster.
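+
+One way to distribute it, a sketch using the shared **/vagrant** directory described in the Misc section:
+
+```bash
+# On the KDC host (c6801): stage the file in the shared directory
+cp /etc/krb5.conf /vagrant/krb5.conf
+
+# On each remaining host (c6802, c6803, ...): copy it into place
+cp /vagrant/krb5.conf /etc/krb5.conf
+```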
+
+Navigate to _Ambari Dashboard —> Admin —> Kerberos_ to launch the Kerberos Wizard and follow the wizard steps. If you run into errors, the Ambari server logs can be found at `/var/log/ambari-server/ambari-server.log`.
+
+## Pre-Configured Development Environment
+
+Simply edit **Vagrantfile** to launch a VM with all the tools necessary to build Ambari from source.
+
+```bash
+cd ambari-vagrant/centos6.8
+vi Vagrantfile
+```
+
+## Frontend Development
+
+You can use this setup to develop and test out Ambari Web frontend code against a real Ambari Server on a multi-node environment. You need to first fork the apache/ambari repository if you haven't already. Read the How to Contribute guide for instructions on how to fork.
+
+On the host machine:
+
+```bash
+cd ambari-vagrant/centos6.8
+# Replace the [forked-repository-url] with your fork's clone url
+git clone [forked-repository-url] ambari
+cd ambari/ambari-web
+npm install
+brunch w
+```
+
+On c6801 (where Ambari Server is installed):
+
+```bash
+cd /usr/lib/ambari-server
+mv web web-orig
+ln -s /vagrant/ambari/ambari-web/public web
+ambari-server restart
+```
+
+With this setup, whenever you change the content of ambari-web files (under ambari-vagrant/ambari/) on the host machine, brunch will pick up the changes in the background and update ambari-vagrant/ambari/ambari-web/public. Because of the symbolic link, the changes are automatically picked up by Ambari Server. All you have to do is hit refresh on the browser to see the frontend code changes reflected.
+
+Not seeing code changes as expected? If you have run the maven command to build Ambari previously, you will see files called app.js.gz and vendor.js.gz under the public folder. You need to delete these files for the frontend code changes to be effective, as the app.js.gz and vendor.js.gz files take precedence over app.js and vendor.js, respectively.
diff --git a/versioned_docs/version-2.7.6/ambari-alerts.md b/versioned_docs/version-2.7.6/ambari-alerts.md
new file mode 100644
index 0000000..04288af
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-alerts.md
@@ -0,0 +1,13 @@
+# Ambari Alerts
+
+Help page for Alerts defined in Ambari.
+
+## Ambari Agent Heartbeat
+
+**Service**: Ambari
+**Component**: Ambari Server
+**Type**: SERVER
+**Groups**: AMBARI Default
+**Description**: This alert is triggered if the server has lost contact with an agent.
+
+If this alert is generated, the alert text will contain the host name (e.g., "c6401.ambari.apache.org is not sending heartbeats."). Check that the agent is running and, if it is running, tail its log to see whether it is receiving heartbeat responses from the server. Also check that the server host name in the /etc/ambari-agent/conf/ambari-agent.ini file is correct and reachable.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/alerts.md b/versioned_docs/version-2.7.6/ambari-design/alerts.md
new file mode 100644
index 0000000..95655a2
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/alerts.md
@@ -0,0 +1,208 @@
+---
+
+title: Alerts
+
+---
+WEB and METRIC alert types include a `connection_timeout` property on the alert definition (see below in `AlertDefinition : source : uri : connection_timeout`). This value is in seconds and defaults to 5.0. If you need to modify the connection timeout, update the `source` block via the Ambari REST API.
+
+```json
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/alert_definitions/42",
+ "AlertDefinition" : {
+ "cluster_name" : "MyCluster",
+ "component_name" : "APP_TIMELINE_SERVER",
+ "description" : "This host-level alert is triggered if the App Timeline Server Web UI is unreachable.",
+ "enabled" : true,
+ "id" : 42,
+ "ignore_host" : false,
+ "interval" : 1,
+ "label" : "App Timeline Web UI",
+ "name" : "yarn_app_timeline_server_webui",
+ "scope" : "ANY",
+ "service_name" : "YARN",
+ "source" : {
+ "reporting" : {
+ "ok" : {
+ "text" : "HTTP {0} response in {2:.3f}s"
+ },
+ "warning" : {
+ "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
+ },
+ "critical" : {
+ "text" : "Connection failed to {1} ({3})"
+ }
+ },
+ "type" : "WEB",
+ "uri" : {
+ "http" : "{{yarn-site/yarn.timeline-service.webapp.address}}",
+ "https" : "{{yarn-site/yarn.timeline-service.webapp.https.address}}",
+ "https_property" : "{{yarn-site/yarn.http.policy}}",
+ "https_property_value" : "HTTPS_ONLY",
+ "kerberos_keytab" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.keytab}}",
+ "kerberos_principal" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.principal}}",
+ "default_port" : 0.0,
+ "connection_timeout" : 5.0
+ }
+ }
+ }
+}
+```
+
+To update the `connection_timeout` with the API, you must PUT the entire contents of the `source` block in your API call. For example, PUT the following to update the `connection_timeout` to **6.5** seconds.
+
+```
+PUT /api/v1/clusters/MyCluster/alert_definitions/42
+
+{
+"AlertDefinition" : {
+ "source" : {
+ "reporting" : {
+ "ok" : {
+ "text" : "HTTP {0} response in {2:.3f}s"
+ },
+ "warning" : {
+ "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
+ },
+ "critical" : {
+ "text" : "Connection failed to {1} ({3})"
+ }
+ },
+ "type" : "WEB",
+ "uri" : {
+ "http" : "{{yarn-site/yarn.timeline-service.webapp.address}}",
+ "https" : "{{yarn-site/yarn.timeline-service.webapp.https.address}}",
+ "https_property" : "{{yarn-site/yarn.http.policy}}",
+ "https_property_value" : "HTTPS_ONLY",
+ "kerberos_keytab" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.keytab}}",
+ "kerberos_principal" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.principal}}",
+ "default_port" : 0.0,
+ "connection_timeout" : 6.5
+ }
+ }
+ }
+}
+```
+
+## Creating a Script-based Alert Dispatcher
+
+This section describes how to enable a custom script-based alert dispatcher that can respond to Ambari alerts. The dispatcher invokes a script with the parameters of the alert as command-line arguments.
+
+The dispatcher must know the location of the script that is being executed. This is configured through `ambari.properties` by setting either:
+
+* `notification.dispatch.alert.script`
+* a custom property key that points to the script, such as `foo.bar.alert.dispatch.script`
+
+Some examples of this are:
+
+```
+notification.dispatch.alert.script=/contrib/ambari-alerts/scripts/default_logger.py
+com.mycompany.dispatch.syslog.script=/contrib/ambari-alerts/scripts/legacy_sys_logger.py
+com.mycompany.dispatch.shell.script=/contrib/ambari-alerts/scripts/shell_logger.sh
+```
+
+When an alert instance changes state and Ambari needs to dispatch that alert state change, the custom script will be invoked:
+
+```python
+#!/usr/bin/env python
+import sys
+
+# main method which is called when invoked on the command line
+# :param definitionName: the alert definition unique ID
+# :param definitionLabel: the human readable alert definition label
+# :param serviceName: the service that the alert definition belongs to
+# :param alertState: the state of the alert (OK, WARNING, etc)
+# :param alertText: the text of the alert
+def main():
+    definitionName = sys.argv[1]
+    definitionLabel = sys.argv[2]
+    serviceName = sys.argv[3]
+    alertState = sys.argv[4]
+    alertText = sys.argv[5]
+
+if __name__ == "__main__":
+    main()
+```
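+
+For illustration, when the Ambari Agent Heartbeat alert transitions to CRITICAL, Ambari would invoke the configured script roughly as follows (the argument values here are hypothetical):
+
+```
+/contrib/ambari-alerts/scripts/default_logger.py "ambari_agent_heartbeat" "Ambari Agent Heartbeat" "AMBARI" "CRITICAL" "c6401.ambari.apache.org is not sending heartbeats"
+```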
+
+To register the dispatcher as a notification target, create an alert target with the `ALERT_SCRIPT` notification type:
+
+```
+POST api/v1/alert_targets
+
+ {
+ "AlertTarget": {
+ "name": "syslogger",
+ "description": "Syslog Target",
+ "notification_type": "ALERT_SCRIPT",
+ "global": true
+ }
+ }
+```
+
+The above call will create a global alert target that will dispatch all alerts across all alert groups. Without specifying `ambari.dispatch-property.script` as a property of the alert target, Ambari will look for the default configuration key of `notification.dispatch.alert.script` in `ambari.properties`.
+
+```
+POST api/v1/alert_targets
+
+ {
+ "AlertTarget": {
+ "name": "syslogger",
+ "description": "Syslog Target",
+ "notification_type": "ALERT_SCRIPT",
+ "global": true,
+ "properties": {
+ "ambari.dispatch-property.script": "com.mycompany.dispatch.syslog.script"
+ }
+ }
+ }
+```
+
+The above call also creates a global alert target. However, a specific script key is being specified. The result is that `ambari.properties` should contain something similar to:
+
+```
+com.mycompany.dispatch.syslog.script=/contrib/ambari-alerts/scripts/legacy_sys_logger.py
+```
+
+## Customizing the Alert Template
+Ambari 2.0 leverages its own alerting system to convey the state of various aspects of managed clusters. The notification template content produced by Ambari is tightly coupled to a notification type. Email and SNMP both have customizable templates that can be used to generate content. This document describes the steps necessary to change the template used by Ambari 2.0 when creating alert notifications.
+
+This procedure is targeted at Ambari Administrators that have access to the Ambari Server file system and the `ambari.properties` file. Those Administrators can create new templates or change the existing templates that are used when generating alert notification content. At this time, there is no mechanism to expose this flexibility via the APIs or the web client to end users.
+
+By default, an [alert-templates.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/alert-templates.xml) ships with Ambari 2.0, bundled inside the Ambari Server JAR. This file contains the templates for every known type of notification (for example, EMAIL and SNMP). Because the file is bundled in the JAR, the template is not exposed on disk, but you can use it as a reference example.
+
+When you customize the alert template, you are effectively overriding the template bundled by default. To override the alert templates XML, place a custom template file on the Ambari Server host, point the `alerts.template.file` property in `ambari.properties` at it, and restart Ambari Server.
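+
+For example (the custom file path here is illustrative):
+
+```
+alerts.template.file=/var/lib/ambari-server/resources/alert-templates-custom.xml
+```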
+
+Some alert notification types, such as EMAIL, automatically combine all pending alerts into a single outbound notification (a "**digest**"). Others, like SNMP, never combine pending alerts and always create a 1:1 notification for every alert in the system (an "**individual**" notification). All alert notification types are specified in the same alert templates file, but the specific alert template for each notification type will most likely vary greatly.
+
+The structure of the template file is defined as follows. Each `<alert-template>` element declares what type of alert notification it should be used for.
+
+The template uses Apache Velocity to render all tokenized content. The following variables are available for use in your template:
+
+Variable |Description
+---------------------------------------------|-------------------------------------------------
+$alert.getAlertDefinition() |The definition that the alert is an instance of.
+$alert.getAlertName() |The name of the alert.
+$alert.getAlertState() |The alert state (OK\|WARNING\|CRITICAL\|UNKNOWN)
+$alert.getAlertText() |The specific alert text.
+$alert.getComponentName() |The component, if any, that the alert is defined for.
+$alert.getHostName() |The hostname, if any, that the alert was triggered for.
+$alert.getServiceName() |The name of the service that the alert is defined for.
+$alert.hasComponentName() |True if the alert is for a specific service component.
+$alert.hasHostName() |True if the alert was triggered for a specific host.
+$ambari.getServerHostName() |The Ambari Server hostname.
+$ambari.getServerUrl() |The Ambari Server URL.
+$ambari.getServerVersion() |The Ambari Server version.
+$dispatch.getTargetDescription() |The notification target description.
+$dispatch.getTargetName() |The notification target name.
+$summary.getAlerts() |A list of all of the alerts in the notification.
+$summary.getAlerts(service,alertState) |A list of all alerts for a given service or alert state (OK\|WARNING\|CRITICAL\|UNKNOWN).
+$summary.getCriticalCount() |The CRITICAL alert count.
+$summary.getOkCount() |The OK alert count.
+$summary.getServices() |A list of all services that are reporting an alert in the notification.
+$summary.getServicesByAlertState(alertState) |A list of all services for a given alert state (OK\|WARNING\|CRITICAL\|UNKNOWN).
+$summary.getTotalCount() |The total alert count.
+$summary.getUnknownCount() |The UNKNOWN alert count.
+$summary.getWarningCount() |The WARNING alert count.
+
+For example, a digest template can use `$summary.getTotalCount()` to report the total number of alerts and `$summary.getAlerts()` to iterate over each alert in the notification.
+
+The following example illustrates how to change the subject line of all outbound email notifications to include a hard-coded identifier:
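+
+A minimal sketch of such a customization, assuming the custom file follows the same structure as the bundled alert-templates.xml (the subject text and the iteration in the body are illustrative):
+
+```xml
+<alert-templates>
+  <alert-template type="EMAIL">
+    <subject>
+      <![CDATA[[MyCompany Alerts] $summary.getTotalCount() alert(s) from $ambari.getServerHostName()]]>
+    </subject>
+    <body>
+      <![CDATA[
+        #foreach( $alert in $summary.getAlerts() )
+          $alert.getAlertState() - $alert.getAlertName(): $alert.getAlertText()
+        #end
+      ]]>
+    </body>
+  </alert-template>
+</alert-templates>
+```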
+
diff --git a/versioned_docs/version-2.7.6/ambari-design/ambari-architecture.md b/versioned_docs/version-2.7.6/ambari-design/ambari-architecture.md
new file mode 100644
index 0000000..042fb32
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/ambari-architecture.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 1
+---
+
+# Ambari Architecture
+
+ <embed src="/pdf/Ambari_Architecture.pdf" type="application/pdf" width="960px" height="700px"></embed>
+
+
+
diff --git a/versioned_docs/version-2.7.6/ambari-design/blueprints/blueprint-ha.md b/versioned_docs/version-2.7.6/ambari-design/blueprints/blueprint-ha.md
new file mode 100644
index 0000000..7f6d869
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/blueprints/blueprint-ha.md
@@ -0,0 +1,550 @@
+
+# Blueprint Support for HA Clusters
+
+## Summary
+
+As of Ambari 2.0, Blueprints are able to deploy the following components with HA:
+
++ HDFS NameNode HA
+
++ YARN ResourceManager HA
++ HBase RegionServers HA
+
+As of Ambari 2.1, Blueprints are able to deploy the following components with HA:
+
++ Hive Components ([AMBARI-10489](https://issues.apache.org/jira/browse/AMBARI-10489))
++ Storm Nimbus ([AMBARI-11087](https://issues.apache.org/jira/browse/AMBARI-11087))
++ Oozie Server ([AMBARI-6683](https://issues.apache.org/jira/browse/AMBARI-6683))
+
+
+
+This functionality currently requires providing fine-grained configurations. This document provides examples.
+
+### FAQ
+
+#### Compatibility with Ambari UI
+
+While this feature does not require the Ambari UI to function, the Blueprints HA feature is completely compatible with the Ambari UI. An HA cluster created via Blueprints can be monitored and configured via the Ambari UI, just as any other Blueprints-deployed cluster can.
+
+#### Supported Stack Versions
+
+This feature is supported as of HDP 2.1 and newer releases. Previous versions of HDP have not been tested with this feature.
+
+### Examples
+
+#### Blueprint Example: HDFS NameNode HA Cluster
+
+HDFS NameNode HA allows a cluster to be configured such that a NameNode is not a single point of failure.
+
+For more details on [HDFS NameNode HA see the Apache Hadoop documentation](http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).
+
+In an Ambari-deployed HDFS NameNode HA cluster:
+
++ 2 NameNodes are deployed: an “active” and a “passive” namenode.
++ If the active NameNode should stop functioning properly, the passive node’s Zookeeper client will detect this case, and the passive node will become the new active node.
++ HDFS relies on Zookeeper to manage the details of failover in these cases.
+
+#### How
+
+The Blueprints HA feature will automatically invoke all required commands and setup steps for an HDFS NameNode HA cluster, provided that the correct configuration is provided in the Blueprint. The shared edit logs of each NameNode are managed by the Quorum Journal Manager, rather than NFS shared storage. The use of NFS shared storage in an HDFS HA setup is not supported by Ambari Blueprints, and is generally not encouraged.
+
+The following HDFS stack components should be included in any host group in a Blueprint that supports an HA HDFS NameNode:
+
+1. NAMENODE
+
+2. ZKFC
+
+3. ZOOKEEPER_SERVER
+
+4. JOURNALNODE
+
+
+#### Configuring Active and Standby NameNodes
+
+The HDFS “NAMENODE” component must be assigned to two servers, either via two separate host groups, or to a host group that maps to two physical servers in the Cluster Creation Template for this cluster.
+
+By default, the Blueprint processor will assign the “active” NameNode to one host, and the “standby” NameNode to another. The user of an HA Blueprint does not need to configure the initial status of each NameNode, since this can be assigned automatically.
+
+If desired, the user can configure the initial state of each NameNode by adding the following configuration properties in the “hadoop-env” namespace:
+
+1. `dfs_ha_initial_namenode_active` - This property should contain the hostname for the “active” NameNode in this cluster.
+
+2. `dfs_ha_initial_namenode_standby` - This property should contain the hostname for the “standby” NameNode in this cluster.
+
+:::caution
+These properties should only be used when the initial state of the active or standby NameNodes needs to be configured to a specific node. This setting is only guaranteed to be accurate in the initial state of the cluster. Over time, the active/standby state of each NameNode may change as failover events occur in the cluster.
+
+The active or standby status of a NameNode is not recorded or expressed when an HDFS HA Cluster is being exported to a Blueprint, using the Blueprint REST API endpoint. Since clusters change over time, this state is only accurate in the initial startup of the cluster.
+
+Generally, it is assumed that most users will not need to choose the active or standby status of each NameNode, so the default behavior in Blueprints HA is to assign the status of each node automatically.
+:::
+
+#### Example Blueprint
+
+This is a minimal blueprint with HDFS HA: [hdfs_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/hdfs_ha_blueprint.json?version=4&modificationDate=1434548806000&api=v2)
+
+These are the base configurations required. See the blueprint above for more details:
+```json
+ "configurations":
+ { "core-site": {
+ "properties" : {
+ "fs.defaultFS" : "hdfs://mycluster",
+ "ha.zookeeper.quorum" : "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_3%:2181"
+ }}
+ },
+ { "hdfs-site": {
+ "properties" : {
+ "dfs.client.failover.proxy.provider.mycluster" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+ "dfs.ha.automatic-failover.enabled" : "true",
+ "dfs.ha.fencing.methods" : "shell(/bin/true)",
+ "dfs.ha.namenodes.mycluster" : "nn1,nn2",
+ "dfs.namenode.http-address" : "%HOSTGROUP::master_1%:50070",
+ "dfs.namenode.http-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50070",
+ "dfs.namenode.http-address.mycluster.nn2" : "%HOSTGROUP::master_3%:50070",
+ "dfs.namenode.https-address" : "%HOSTGROUP::master_1%:50470",
+ "dfs.namenode.https-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50470",
+ "dfs.namenode.https-address.mycluster.nn2" : "%HOSTGROUP::master_3%:50470",
+ "dfs.namenode.rpc-address.mycluster.nn1" : "%HOSTGROUP::master_1%:8020",
+ "dfs.namenode.rpc-address.mycluster.nn2" : "%HOSTGROUP::master_3%:8020",
+ "dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::master_2%:8485;%HOSTGROUP::master_3%:8485/mycluster",
+ "dfs.nameservices" : "mycluster"
+ }}
+ }
+ ]
+ ```
+
+#### HostName Topology Substitution in Configuration Property Values
+
+The host-related properties should be set using the “HOSTGROUP” syntax to refer to a given Blueprint’s host group, in order to map each NameNode’s actual host (defined in the Cluster Creation Template) to the properties in hdfs-site that require these host mappings.
+
+The syntax for these properties should be: `%HOSTGROUP::HOST_GROUP_NAME%:PORT`
+
+For example, the following property from the snippet above:
+
+`"dfs.namenode.http-address.mycluster.nn1" : "%HOSTGROUP::master_1%:50070"`
+
+This property value is interpreted by the Blueprint processor to refer to the host that maps to the “master_1” host group, which should include a “NAMENODE” among its list of components. The address property listed above should map to the host for “master_1”, at the port “50070”.
+
+Using this syntax is the most portable way to define host-specific properties within a Blueprint, since no direct host name references are used. This will allow a Blueprint to be applied in a wider variety of cluster deployments, and not be tied to a specific set of hostnames.
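+
+For instance, the example Cluster Creation Template below maps the “master_1” host group to `c6402.ambari.apache.org`, so the Blueprint processor would resolve the property above to `"dfs.namenode.http-address.mycluster.nn1" : "c6402.ambari.apache.org:50070"`.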
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "blueprint-hdfs-ha" resource.
+
+```
+POST /api/v1/blueprints/blueprint-hdfs-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+#### Example Cluster Creation Template
+
+```json
+{
+ "blueprint": "blueprint-hdfs-ha",
+ "default_password": "changethis",
+ "host_groups": [
+ { "hosts": [
+ { "fqdn": "c6401.ambari.apache.org" }
+ ], "name": "gateway"
+ },
+ { "hosts": [
+ { "fqdn": "c6402.ambari.apache.org" }
+ ], "name": "master_1"
+ },
+ { "hosts": [
+ { "fqdn": "c6403.ambari.apache.org" }
+ ], "name": "master_2"
+ },
+ { "hosts": [
+ { "fqdn": "c6404.ambari.apache.org" }
+ ], "name": "master_3"
+ },
+ { "hosts": [
+ { "fqdn": "c6405.ambari.apache.org" }
+ ],
+ "name": "slave_1"
+ }
+ ]
+}
+```
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+```
+POST /api/v1/clusters/my-hdfs-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-hdfs-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
+
+#### Blueprint Example: Yarn ResourceManager HA Cluster
+
+#### Summary
+
+Yarn ResourceManager High Availability (HA) adds support for deploying two Yarn ResourceManagers in a given Yarn cluster. This support removes the single point of failure that occurs when a single ResourceManager is used.
+
+The Yarn ResourceManager support for HA is somewhat similar to HDFS NameNode HA in that an “active/standby” architecture is adopted, with Zookeeper used to handle the failover scenarios between the two ResourceManager instances.
+
+The following Apache Hadoop documentation describes the steps required in order to setup Yarn ResourceManager HA manually:
+
+[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html)
+
+:::caution
+Ambari Blueprints will handle much of the details of server setup listed in this documentation, but the user of Ambari will need to define the various configuration properties listed in this article (yarn.resourcemanager.ha.enabled, yarn.resourcemanager.hostname.$NAME_OF_RESOURCE_MANAGER, etc). The example Blueprints listed below will demonstrate the configuration properties that must be included in the Blueprint for this feature to startup the HA cluster properly.
+:::
+
+The following stack components should be included in any host group in a Blueprint that supports an HA Yarn ResourceManager:
+
+1. RESOURCEMANAGER
+
+2. ZOOKEEPER_SERVER
+
+
+#### Initial setup of Active and Standby ResourceManagers
+
+The Yarn ResourceManager HA feature depends upon Zookeeper in order to manage the details of the “active” or “standby” status of a given ResourceManager. When the two ResourceManagers are starting up initially, the first ResourceManager instance to acquire a Zookeeper lock, called a “znode”, will become the “active” ResourceManager for the cluster, with the other instance assuming the role of the “standby” ResourceManager.
+
+:::caution
+The Blueprints HA feature does not support configuring the initial “active” or “standby” status of the ResourceManagers deployed in a Yarn HA cluster. The first instance to obtain the Zookeeper lock will become the “active” node. This allows the user to specify the host groups that contain the 2 ResourceManager instances, but also shields the user from the need to select the first “active” node.
+
+After the cluster has started up initially, the state of the “active” and “standby” ResourceManagers may change over time. The initial “active” server is not guaranteed to remain the “active” node over the lifetime of the cluster. During a failover event, the “standby” node may be required to fulfill the role of the “active” server.
+
+The active or standby status of a ResourceManager is not recorded or expressed when a Yarn cluster is being exported to a Blueprint, using the Blueprint REST API endpoint. Since clusters change over time, this state is only accurate in the initial startup of the cluster.
+:::
+
+#### Example Blueprint
+
+The following link includes an example Blueprint for a 3-node Yarn ResourceManager HA Cluster:
+
+[yarn_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/yarn_ha_blueprint.json?version=2&modificationDate=1432208770000&api=v2)
+
+```json
+{
+ "Blueprints": {
+ "stack_name": "HDP",
+ "stack_version": "2.2"
+ },
+ "host_groups": [
+ {
+ "name": "gateway",
+ "cardinality" : "1",
+ "components": [
+ { "name": "HDFS_CLIENT" },
+ { "name": "MAPREDUCE2_CLIENT" },
+ { "name": "METRICS_COLLECTOR" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "TEZ_CLIENT" },
+ { "name": "YARN_CLIENT" },
+ { "name": "ZOOKEEPER_CLIENT" }
+ ]
+ },
+ {
+ "name": "master_1",
+ "cardinality" : "1",
+ "components": [
+ { "name": "HISTORYSERVER" },
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "NAMENODE" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "master_2",
+ "cardinality" : "1",
+ "components": [
+ { "name": "APP_TIMELINE_SERVER" },
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "RESOURCEMANAGER" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "master_3",
+ "cardinality" : "1",
+ "components": [
+ { "name": "JOURNALNODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "RESOURCEMANAGER" },
+ { "name": "SECONDARY_NAMENODE" },
+ { "name": "ZOOKEEPER_SERVER" }
+ ]
+ },
+ {
+ "name": "slave_1",
+ "components": [
+ { "name": "DATANODE" },
+ { "name": "METRICS_MONITOR" },
+ { "name": "NODEMANAGER" }
+ ]
+ }
+ ],
+ "configurations": [
+ {
+ "core-site": {
+ "properties" : {
+ "fs.defaultFS" : "hdfs://%HOSTGROUP::master_1%:8020"
+ }}
+ },{
+ "yarn-site" : {
+ "properties" : {
+ "hadoop.registry.rm.enabled" : "false",
+ "hadoop.registry.zk.quorum" : "%HOSTGROUP::master_3%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_1%:2181",
+ "yarn.log.server.url" : "http://%HOSTGROUP::master_2%:19888/jobhistory/logs",
+ "yarn.resourcemanager.address" : "%HOSTGROUP::master_2%:8050",
+ "yarn.resourcemanager.admin.address" : "%HOSTGROUP::master_2%:8141",
+ "yarn.resourcemanager.cluster-id" : "yarn-cluster",
+ "yarn.resourcemanager.ha.automatic-failover.zk-base-path" : "/yarn-leader-election",
+ "yarn.resourcemanager.ha.enabled" : "true",
+ "yarn.resourcemanager.ha.rm-ids" : "rm1,rm2",
+ "yarn.resourcemanager.hostname" : "%HOSTGROUP::master_2%",
+ "yarn.resourcemanager.recovery.enabled" : "true",
+ "yarn.resourcemanager.resource-tracker.address" : "%HOSTGROUP::master_2%:8025",
+ "yarn.resourcemanager.scheduler.address" : "%HOSTGROUP::master_2%:8030",
+ "yarn.resourcemanager.store.class" : "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
+ "yarn.resourcemanager.webapp.address" : "%HOSTGROUP::master_2%:8088",
+ "yarn.resourcemanager.webapp.https.address" : "%HOSTGROUP::master_2%:8090",
+ "yarn.timeline-service.address" : "%HOSTGROUP::master_2%:10200",
+ "yarn.timeline-service.webapp.address" : "%HOSTGROUP::master_2%:8188",
+ "yarn.timeline-service.webapp.https.address" : "%HOSTGROUP::master_2%:8190"
+ }
+ }
+ }
+ ]
+}
+```
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "blueprint-yarn-ha" resource.
+
+```
+POST /api/v1/blueprints/blueprint-yarn-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+
+```
+
+#### Example Cluster Creation Template
+
+```json
+{
+ "blueprint": "blueprint-yarn-ha",
+ "default_password": "changethis",
+ "configurations": [
+ { "yarn-site" : {
+ "yarn.resourcemanager.zk-address" : "c6402.ambari.apache.org:2181,c6403.ambari.apache.org:2181,c6404.ambari.apache.org:2181”,
+ ”yarn.resourcemanager.hostname.rm1" : "c6403.ambari.apache.org",
+ "yarn.resourcemanager.hostname.rm2" : "c6404.ambari.apache.org"
+ }}
+ ],
+ "host_groups": [
+ { "hosts": [
+ { "fqdn": "c6401.ambari.apache.org" }
+ ], "name": "gateway"
+ },
+ { "hosts": [
+ { "fqdn": "c6402.ambari.apache.org" }
+ ], "name": "master_1"
+ },
+ { "hosts": [
+ { "fqdn": "c6403.ambari.apache.org" }
+ ], "name": "master_2"
+ },
+ { "hosts": [
+ { "fqdn": "c6404.ambari.apache.org" }
+ ], "name": "master_3"
+ },
+ { "hosts": [
+ { "fqdn": "c6405.ambari.apache.org" }
+ ],
+ "name": "slave_1"
+ }
+ ]
+}
+```
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/my-yarn-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-yarn-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
+
+
+#### Blueprint Example: HBase RegionServer HA Cluster
+
+#### Summary
+
+HBase provides a High Availability feature for reads across HBase Region Servers.
+
+The following link to the Apache HBase documentation provides more information on HA support in HBase:
+
+[http://hbase.apache.org/book.html#arch.timelineconsistent.reads](http://hbase.apache.org/book.html#arch.timelineconsistent.reads)
+
+:::caution
+The documentation listed here explains how to deploy an HBase RegionServer HA cluster via a Blueprint, but there are separate application-specific steps that must be taken in order to enable this feature for a specific table in HBase. A table must be created with replication enabled, so that multiple Region Servers can handle the keys for this table.
+:::
+
+For more information on how to define an HBase table with replication enabled (after the cluster has been created), please refer to the following HBase documentation:
+
+[http://hbase.apache.org/book.html#_creating_a_table_with_region_replication](http://hbase.apache.org/book.html#_creating_a_table_with_region_replication)
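+
+For illustration, creating such a table from the HBase shell might look like this (the table and column family names are hypothetical):
+
+```
+create 'test_table', 'cf1', {REGION_REPLICATION => 2}
+```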
+
+The following stack components should be included in any host group in a Blueprint that supports an HA HBase RegionServer:
+
+1. HBASE_REGIONSERVER
+
+
+At least two “HBASE_REGIONSERVER” components must be deployed in order to enable this feature, so that table information can be replicated across more than one Region Server.
+
+#### Example Blueprint
+
+The following link includes an example Blueprint for a 2-node HBase RegionServer HA Cluster:
+
+[hbase_rs_ha_blueprint.json](https://cwiki.apache.org/confluence/download/attachments/55151584/hbase_rs_ha_blueprint.json?version=1&modificationDate=1427136904000&api=v2)
+
+The following JSON snippet includes the “hbase-site” configuration typically required for a cluster that utilizes the HBase RegionServer HA feature:
+
+```json
+{
+ "configurations" : [
+ {
+ "hbase-site" : {
+ ...
+ "hbase.regionserver.global.memstore.lowerLimit" : "0.38",
+ "hbase.regionserver.global.memstore.upperLimit" : "0.4",
+ "hbase.regionserver.handler.count" : "60",
+ "hbase.regionserver.info.port" : "60030",
+ "hbase.regionserver.storefile.refresh.period" : "20",
+ "hbase.rootdir" : "hdfs://%HOSTGROUP::host_group_1%:8020/apps/hbase/data",
+ "hbase.security.authentication" : "simple",
+ "hbase.security.authorization" : "false",
+ "hbase.superuser" : "hbase",
+ "hbase.tmp.dir" : "/hadoop/hbase",
+ "hbase.zookeeper.property.clientPort" : "2181",
+ "hbase.zookeeper.quorum" : "%HOSTGROUP::host_group_1%,%HOSTGROUP::host_group_2%",
+ "hbase.zookeeper.useMulti" : "true",
+ "hfile.block.cache.size" : "0.40",
+ "zookeeper.session.timeout" : "30000",
+ "zookeeper.znode.parent" : "/hbase-unsecure"
+ }
+
+ }
+ ]
+}
+```
+:::caution
+The JSON example above is not a complete set of “hbase-site” configurations, but rather shows the configuration settings that are relevant to HBase RegionServer HA. In particular, the “hbase.regionserver.storefile.refresh.period” setting is the most relevant to HBase RegionServer HA, since this property must be set to a value greater than zero in order for the HA feature to be enabled.
+:::
+
+#### Register Blueprint with Ambari Server
+
+Post the blueprint to the Ambari Server as the "blueprint-hbase-rs-ha" resource.
+
+```
+POST /api/v1/blueprints/blueprint-hbase-rs-ha
+
+...
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+#### Example Cluster Creation Template
+```json
+{
+ "blueprint" : "blueprint-hbase-rs-ha",
+ "default_password" : "default",
+ "host_groups" :[
+ {
+ "name" : "host_group_1",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ },
+ {
+ "name" : "host_group_2",
+ "hosts" : [
+ {
+ "fqdn" : "c6402.ambari.apache.org"
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+#### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/my-hbase-rs-ha-cluster
+
+...
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/my-hbase-rs-ha-cluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+
+...
+[ Client can then monitor the URL in the 202 response to check the status of the cluster deployment. ]
+...
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/blueprints/blueprint-ranger.md b/versioned_docs/version-2.7.6/ambari-design/blueprints/blueprint-ranger.md
new file mode 100644
index 0000000..e14f468
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/blueprints/blueprint-ranger.md
@@ -0,0 +1,423 @@
+---
+title: Blueprint support for Ranger
+---
+Starting with HDP 2.3, Ranger can be deployed using Blueprints in two ways: either using the stack advisor, or by setting all the needed properties in the Blueprint.
+
+## Deploy Ranger with the use of stack advisor
+
+The stack advisor simplifies the deployment of Ranger by setting the needed properties automatically, so the user has to provide only a minimal set of configurations. The configuration properties that must be provided in either the Blueprint or the cluster creation template are:
+
+* admin-properties:
+ - DB_FLAVOR - the default is MYSQL. No need to provide this if MYSQL is to be used as the database server for the Ranger databases. Consult the Ranger documentation for supported database servers. Also ensure the ambari-server has the appropriate JDBC driver installed for the selected database server type (e.g., `ambari-server setup --jdbc-driver`)
+ - db_host - set the host:port of the database server that Ranger Admin will use
+ - db_root_user - the db user with root access that will be used during deployment to create the databases used by Ranger. By default root is used if this property is not specified.
+
+ - db_root_password - the password for root user
+ - db_password - the password that will be used access the Ranger database
+ - audit_db_password - the password that will be used to access the Ranger Audit db
+* ranger-env
+ - ranger_admin_password - this is the Ambari user password created for creating repositories and policies in Ranger Admin for each plugin
+ - ranger-yarn-plugin-enabled - Enable/Disable YARN Ranger plugin. The default is Disable.
+
+ - ranger-hdfs-plugin-enabled - Enable/Disable HDFS Ranger plugin. The default is Disable.
+
+ - ranger-hbase-plugin-enabled -Enable/Disable HBase Ranger plugin. The default is Disable.
+
+ - ... - check Ranger documentation for the list of supported ranger plugins
+* kms-properties
+ - DB_FLAVOR - the default is MYSQL. No need to provide this if MYSQL is to be used the database server for Ranger databases. Consult Ranger KMS documentation for supported database servers. Also ensure the ambari-server has the appropriate jdbc driver installed for the selected database server type (e.g.: ambari-server setup --jdbc-driver)
+ - SQL_CONNECTOR_JAR - the default is /usr/share/java/mysql-connector-java.jar
+ - KMS_MASTER_KEY_PASSWD
+ - db_host - the host:port of the database server that Ranger KMS will use
+ - db_root_user - the db user with root access that will be used during deployment to create the databases used by Ranger KMS. By default root is used if this property is not specified.
+
+ - db_root_password - database password for root user
+ - db_password - database password for the Ranger KMS schema
+
+* hadoop-env
+ - keyserver_port - Port number where Key Management Server is available
+ - keyserver_host - Hostnames where Key Management Server is installed
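+
+For illustration, a minimal sketch of the `configurations` block that supplies these properties is shown below. The host names, ports, and passwords are placeholder assumptions, and any property left out falls back to the value provided by the stack advisor:
+
+```json
+"configurations" : [
+  {
+    "admin-properties" : {
+      "properties" : {
+        "db_host" : "my-db-host.example.com:3306",
+        "db_root_user" : "root",
+        "db_root_password" : "my-root-password",
+        "db_password" : "my-ranger-db-password",
+        "audit_db_password" : "my-ranger-audit-db-password"
+      }
+    }
+  },
+  {
+    "ranger-env" : {
+      "properties" : {
+        "ranger_admin_password" : "my-ranger-admin-password",
+        "ranger-hdfs-plugin-enabled" : "Yes"
+      }
+    }
+  },
+  {
+    "kms-properties" : {
+      "properties" : {
+        "KMS_MASTER_KEY_PASSWD" : "my-master-key-password",
+        "db_host" : "my-db-host.example.com:3306",
+        "db_root_password" : "my-root-password",
+        "db_password" : "my-rangerkms-db-password"
+      }
+    }
+  },
+  {
+    "hadoop-env" : {
+      "properties" : {
+        "keyserver_host" : "my-kms-host.example.com",
+        "keyserver_port" : "9292"
+      }
+    }
+  }
+]
+```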
+
+## Deploy Ranger without the use of stack advisor
+
+Without stack advisor, all the configs related to Ranger, Ranger KMS and the Ranger plugins that don't have default values must be set in the Blueprint or cluster creation template. Consult the Ranger and Ranger plugin documentation for all properties.
+
+An example of such a Blueprint where everything is set manually (note that this covers only a subset of the currently supported configuration properties and Ranger plugins):
+
+```json
+{
+ "configurations" : [
+ {
+ "admin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "DB_FLAVOR" : "MYSQL",
+ "audit_db_name" : "ranger_audit",
+ "db_name" : "ranger",
+ "audit_db_user" : "rangerlogger",
+ "SQL_CONNECTOR_JAR" : "/usr/share/java/mysql-connector-java.jar",
+ "db_user" : "rangeradmin",
+ "policymgr_external_url" : "http://%HOSTGROUP::host_group_1%:6080",
+ "db_host" : "172.17.0.9:3306",
+ "db_root_user" : "root"
+ }
+ }
+ },
+ {
+ "ranger-kms-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.kms.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient",
+ "ranger.plugin.kms.service.name" : "{{repo_name}}",
+ "ranger.plugin.kms.policy.rest.url" : "{{policymgr_mgr_url}}"
+ }
+ }
+ },
+ {
+ "kms-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "hadoop.kms.security.authorization.manager" : "org.apache.ranger.authorization.kms.authorizer.RangerKmsAuthorizer",
+ "hadoop.kms.key.provider.uri" : "dbks://http@localhost:9292/kms"
+ }
+ }
+ },
+ {
+ "ranger-hdfs-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "hadoop",
+ "ranger-hdfs-plugin-enabled" : "Yes",
+ "common.name.for.certificate" : "",
+ "policy_user" : "ambari-qa",
+ "hadoop.rpc.protection" : ""
+ }
+ }
+ },
+ {
+ "ranger-admin-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.ldap.group.searchfilter" : "{{ranger_ug_ldap_group_searchfilter}}",
+ "ranger.ldap.group.searchbase" : "{{ranger_ug_ldap_group_searchbase}}",
+ "ranger.sso.enabled" : "false",
+ "ranger.externalurl" : "{{ranger_external_url}}",
+ "ranger.sso.browser.useragent" : "Mozilla,chrome",
+ "ranger.service.https.attrib.ssl.enabled" : "false",
+ "ranger.ldap.ad.referral" : "ignore",
+ "ranger.jpa.jdbc.url" : "jdbc:mysql://172.17.0.9:3306/ranger",
+ "ranger.https.attrib.keystore.file" : "/etc/ranger/admin/conf/ranger-admin-keystore.jks",
+ "ranger.ldap.user.searchfilter" : "{{ranger_ug_ldap_user_searchfilter}}",
+ "ranger.jpa.jdbc.driver" : "com.mysql.jdbc.Driver",
+ "ranger.authentication.method" : "UNIX",
+ "ranger.service.host" : "{{ranger_host}}",
+ "ranger.jpa.audit.jdbc.user" : "{{ranger_audit_db_user}}",
+ "ranger.ldap.referral" : "ignore",
+ "ranger.jpa.audit.jdbc.credential.alias" : "rangeraudit",
+ "ranger.service.https.attrib.keystore.pass" : "SECRET:ranger-admin-site:2:ranger.service.https.attrib.keystore.pass",
+ "ranger.audit.solr.username" : "ranger_solr",
+ "ranger.sso.query.param.originalurl" : "originalUrl",
+ "ranger.service.http.enabled" : "true",
+ "ranger.audit.source.type" : "solr",
+ "ranger.ldap.url" : "{{ranger_ug_ldap_url}}",
+ "ranger.service.https.attrib.clientAuth" : "want",
+ "ranger.ldap.ad.domain" : "",
+ "ranger.ldap.ad.bind.dn" : "{{ranger_ug_ldap_bind_dn}}",
+ "ranger.credential.provider.path" : "/etc/ranger/admin/rangeradmin.jceks",
+ "ranger.jpa.audit.jdbc.driver" : "{{ranger_jdbc_driver}}",
+ "ranger.audit.solr.urls" : "",
+ "ranger.sso.publicKey" : "",
+ "ranger.ldap.bind.dn" : "{{ranger_ug_ldap_bind_dn}}",
+ "ranger.unixauth.service.port" : "5151",
+ "ranger.ldap.group.roleattribute" : "cn",
+ "ranger.jpa.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.sso.cookiename" : "hadoop-jwt",
+ "ranger.service.https.attrib.keystore.keyalias" : "rangeradmin",
+ "ranger.audit.solr.zookeepers" : "NONE",
+ "ranger.jpa.jdbc.user" : "{{ranger_db_user}}",
+ "ranger.jpa.jdbc.credential.alias" : "rangeradmin",
+ "ranger.ldap.ad.user.searchfilter" : "{{ranger_ug_ldap_user_searchfilter}}",
+ "ranger.ldap.user.dnpattern" : "uid={0},ou=users,dc=xasecure,dc=net",
+ "ranger.ldap.base.dn" : "dc=example,dc=com",
+ "ranger.service.http.port" : "6080",
+ "ranger.jpa.audit.jdbc.url" : "{{audit_jdbc_url}}",
+ "ranger.service.https.port" : "6182",
+ "ranger.sso.providerurl" : "",
+ "ranger.ldap.ad.url" : "{{ranger_ug_ldap_url}}",
+ "ranger.jpa.audit.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.unixauth.remote.login.enabled" : "true",
+ "ranger.ldap.ad.base.dn" : "dc=example,dc=com",
+ "ranger.unixauth.service.hostname" : "{{ugsync_host}}"
+ }
+ }
+ },
+ {
+ "dbks-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.ks.jpa.jdbc.url" : "jdbc:mysql://172.17.0.9:3306/rangerkms",
+ "hadoop.kms.blacklist.DECRYPT_EEK" : "hdfs",
+ "ranger.ks.jpa.jdbc.dialect" : "{{jdbc_dialect}}",
+ "ranger.ks.jdbc.sqlconnectorjar" : "{{ews_lib_jar_path}}",
+ "ranger.ks.jpa.jdbc.user" : "{{db_user}}",
+ "ranger.ks.jpa.jdbc.credential.alias" : "ranger.ks.jdbc.password",
+ "ranger.ks.jpa.jdbc.credential.provider.path" : "/etc/ranger/kms/rangerkms.jceks",
+ "ranger.ks.masterkey.credential.alias" : "ranger.ks.masterkey.password",
+ "ranger.ks.jpa.jdbc.driver" : "com.mysql.jdbc.Driver"
+ }
+ }
+ },
+ {
+ "kms-env" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "kms_log_dir" : "/var/log/ranger/kms",
+ "create_db_user" : "true",
+ "kms_group" : "kms",
+ "kms_user" : "kms",
+ "kms_port" : "9292"
+ }
+ }
+ },
+ {
+ "ranger-hdfs-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.hdfs.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+
+ {
+ "ranger-env" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "xml_configurations_supported" : "true",
+ "ranger_user" : "ranger",
+ "xasecure.audit.destination.hdfs.dir" : "hdfs://ambari-agent-1.node.dc1.consul:8020/ranger/audit",
+ "create_db_dbuser" : "true",
+ "ranger-hdfs-plugin-enabled" : "Yes",
+ "ranger_privelege_user_jdbc_url" : "jdbc:mysql://172.17.0.9:3306",
+ "ranger-knox-plugin-enabled" : "No",
+ "is_solrCloud_enabled" : "false",
+ "bind_anonymous" : "false",
+ "ranger-yarn-plugin-enabled" : "Yes",
+ "ranger-kafka-plugin-enabled" : "No",
+ "xasecure.audit.destination.hdfs" : "true",
+ "ranger-hive-plugin-enabled" : "No",
+ "xasecure.audit.destination.solr" : "false",
+ "xasecure.audit.destination.db" : "true",
+ "ranger_group" : "ranger",
+ "ranger_admin_username" : "amb_ranger_admin",
+ "ranger-hbase-plugin-enabled" : "Yes",
+ "admin_username" : "admin"
+ }
+ }
+ },
+
+ {
+ "kms-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "keyadmin",
+ "KMS_MASTER_KEY_PASSWD" : "SECRET:kms-properties:1:KMS_MASTER_KEY_PASSWD",
+ "DB_FLAVOR" : "MYSQL",
+ "db_name" : "rangerkms",
+ "SQL_CONNECTOR_JAR" : "/usr/share/java/mysql-connector-java.jar",
+ "db_user" : "rangerkms",
+ "db_host" : "172.17.0.9:3306",
+ "db_root_user" : "root"
+ }
+ }
+ },
+
+ {
+ "ranger-yarn-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.yarn.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+
+ {
+ "usersync-properties" : {
+ "properties_attributes" : { },
+ "properties" : { }
+ }
+ },
+
+ {
+ "ranger-hbase-security" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "ranger.plugin.hbase.policy.source.impl" : "org.apache.ranger.admin.client.RangerAdminRESTClient"
+ }
+ }
+ },
+ {
+ "hdfs-site" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "dfs.encryption.key.provider.uri" : "kms://http@%HOSTGROUP::host_group_1%:9292/kms",
+ "dfs.namenode.inode.attributes.provider.class" : "org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer"
+ }
+ }
+ },
+ {
+ "ranger-yarn-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "yarn",
+ "common.name.for.certificate" : "",
+ "ranger-yarn-plugin-enabled" : "Yes",
+ "policy_user" : "ambari-qa",
+ "hadoop.rpc.protection" : ""
+ }
+ }
+ },
+ {
+ "ranger-hbase-plugin-properties" : {
+ "properties_attributes" : { },
+ "properties" : {
+ "REPOSITORY_CONFIG_USERNAME" : "hbase",
+ "common.name.for.certificate" : "",
+ "ranger-hbase-plugin-enabled" : "Yes",
+ "policy_user" : "ambari-qa"
+ }
+ }
+ }
+ ],
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "RANGER_ADMIN"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "HBASE_CLIENT"
+ },
+ {
+ "name" : "HBASE_MASTER"
+ },
+ {
+ "name" : "RANGER_USERSYNC"
+ },
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "RANGER_KMS_SERVER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_2",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "APP_TIMELINE_SERVER"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_3",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "HBASE_REGIONSERVER"
+ },
+ {
+ "name" : "HBASE_CLIENT"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.3"
+ }
+}
+```
+## Deploy Ranger in HA mode
+
+The differences from deploying Ranger in non-HA mode are:
+
+* Deploy the RANGER_ADMIN component to multiple hosts.
+* Set up a load balancer and configure it to front all RANGER_ADMIN instances. (The URL of a Ranger Admin instance is http://host:port; the default port is 6080.)
+* admin-properties
+ - policymgr_external_url - override the value of this configuration property with the URL of the load balancer, as sketched below. Each component interacting with Ranger uses the value of this property to connect to Ranger, so all of them will connect via the load balancer.
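+
+A minimal sketch of this override, assuming a placeholder load balancer host name:
+
+```json
+{
+  "admin-properties" : {
+    "properties" : {
+      "policymgr_external_url" : "http://my-ranger-loadbalancer.example.com:6080"
+    }
+  }
+}
+```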
diff --git a/versioned_docs/version-2.7.6/ambari-design/blueprints/index.md b/versioned_docs/version-2.7.6/ambari-design/blueprints/index.md
new file mode 100644
index 0000000..4acb444
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/blueprints/index.md
@@ -0,0 +1,1003 @@
+---
+slug: /blueprints
+---
+# Blueprints
+
+## Introduction
+
+Ambari Blueprints are a declarative definition of a cluster. With a Blueprint, you specify a [Stack](../stack-and-services/overview.mdx), the Component layout and the Configurations to materialize a Hadoop cluster instance (via a REST API) **without** having to use the Ambari Cluster Install Wizard.
+
+### **Notable JIRAs**
+JIRA | Description
+------------------------------------------------------------------|---------------------------------------------
+[AMBARI-4467](https://issues.apache.org/jira/browse/AMBARI-4467) |Blueprints REST resource.
+[AMBARI-5077](https://issues.apache.org/jira/browse/AMBARI-5077) |Provision cluster with blueprint.
+[AMBARI-4786](https://issues.apache.org/jira/browse/AMBARI-4786) |Export blueprints from running/existing cluster.
+[AMBARI-5114](https://issues.apache.org/jira/browse/AMBARI-5114) |Configurations with blueprints.
+[AMBARI-6275](https://issues.apache.org/jira/browse/AMBARI-6275) |Add hosts using blueprints.
+[AMBARI-10750](https://issues.apache.org/jira/browse/AMBARI-10750)|2.1 blueprint changes.
+
+## API Resources and Syntax
+
+The following table lists the basic Blueprints API resources, all of which appear in the examples on this page.
+
+HTTP Method and URI | Description
+----------------------------------------------------------|---------------------------------------------
+`GET /api/v1/blueprints` |List the blueprints registered with Ambari.
+`POST /api/v1/blueprints/:blueprintName` |Register a blueprint with Ambari.
+`GET /api/v1/clusters/:clusterName?format=blueprint` |Export a blueprint from an existing cluster.
+`POST /api/v1/clusters/:clusterName` |Create a cluster instance based on a registered blueprint.
+`POST /api/v1/clusters/:clusterName/hosts` |Add hosts to an existing blueprint-provisioned cluster.
+
+The API calls on this page include the HTTP method (for example: `GET, PUT, POST`) and a sample URI (for example: `/api/v1/blueprints`). When actually calling the Ambari REST API, be sure to set the `X-Requested-By` header and provide authentication information as appropriate. For example, calling the API using `curl`:
+
+```bash
+curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://c6401.ambari.apache.org:8080 /api/v1/blueprints
+```
+
+## Blueprint Usage Overview
+
+#### Step 0: Prepare Ambari Server and Agents
+
+Install the Ambari Server, run setup and start. Install the Ambari Agents on all hosts and perform manual registration.
+
+#### Step 1: Create Blueprint
+
+A blueprint can be created by hand, or can be exported from an existing cluster.
+
+To export a blueprint from an existing cluster: `GET /api/v1/clusters/:clusterName?format=blueprint`
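+
+For example, assuming an existing cluster named `myExistingCluster` (the name is a placeholder):
+
+```
+GET /api/v1/clusters/myExistingCluster?format=blueprint
+
+...
+[ Response body is the exported blueprint, which can be registered in Step 2 ]
+...
+
+200 - OK
+```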
+
+#### Step 2: Register Blueprint with Ambari
+
+`POST /api/v1/blueprints/:blueprintName`
+
+Request body is blueprint created in **Step 1**.
+
+To disable topology validation and register a blueprint:
+
+`POST /api/v1/blueprints/:blueprintName?validate_topology=false`
+
+Disabling topology validation allows a user to force registration of a blueprint that fails topology validation.
+
+#### Step 3: Create Cluster Creation Template
+
+Map Physical Hosts to Blueprint: Create the mapping between blueprint host groups and physical hosts.
+
+Provide Cluster Specific Configuration Overrides: Configuration can be applied at the cluster and host group scope, and overrides any configurations specified in the blueprint.
+
+#### Step 4: Setup Stack Repositories (Optional)
+
+There are scenarios where the public Stack repositories may not be accessible during cluster creation via blueprint, or where an alternate repository is required for the Stack.
+
+To use a local or alternate repository:
+
+```
+PUT /api/v1/stacks/:stack/versions/:stackVersion/operating_systems/:osType/repositories/:repoId
+
+{
+ "Repositories" : {
+ "base_url" : "",
+ "verify_base_url" : true
+ }
+}
+```
+
+This API may be invoked multiple times to set the Base URL for multiple OS types or Stack versions. If this step is not performed, blueprints will by default use the latest Base URL defined in the Stack.
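+
+For example, a filled-in version of the call above; the stack version, OS type, repository id, and base URL are illustrative placeholders:
+
+```
+PUT /api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4
+
+{
+  "Repositories" : {
+    "base_url" : "http://myserver/hdp",
+    "verify_base_url" : true
+  }
+}
+```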
+
+#### Step 5: Create Cluster
+
+`POST /api/v1/clusters/:clusterName`
+
+Request body includes blueprint name, host mappings and configurations from **Step 3**.
+
+Request is asynchronous and returns a `/requests` URL which can be used to monitor progress.
+
+#### Step 6: Monitor Cluster Creation Progress
+
+Using the `/requests` URL returned in **Step 5**, monitor the progress of the tasks associated with cluster creation.
+
+#### Limitations
+
+Prior to Ambari 2.0, Blueprints did not support creating a cluster reflecting a High Availability topology.
+
+Ambari 2.0 adds support for deploying High Availability clusters with Blueprints. Please see [Blueprint Support for HA Clusters](./blueprint-ha.md) for more information on this topic.
+
+## Blueprint Details
+
+### Prepare Ambari Server and Agents
+
+1. Perform your Ambari Server install and setup.
+
+```bash
+yum install ambari-server
+ambari-server setup
+```
+
+2. After setup completes, start your Ambari Server.
+
+```bash
+ambari-server start
+```
+
+3. Install Ambari Agents on all of the hosts you plan to include in your cluster.
+
+ ```bash
+ yum install ambari-agent
+ ```
+
+4. Set the Ambari Server on the Ambari Agents.
+
+```bash
+vi /etc/ambari-agent/conf/ambari-agent.ini
+```
+
+5. Set hostname= to the Fully Qualified Domain Name for the Ambari Server. Save and exit.
+
+```bash
+hostname=c6401.ambari.apache.org
+```
+
+6. Start the Agents to initiate registration to Server.
+
+```bash
+ambari-agent start
+```
+
+7. Confirm the Agent hosts are registered with the Server.
+[http://your.ambari.server:8080/api/v1/hosts](http://your.ambari.server:8080/api/v1/hosts)
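+
+For example, mirroring the `curl` call shown earlier (substitute your own Ambari Server host name):
+
+```bash
+curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://your.ambari.server:8080/api/v1/hosts
+```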
+
+
+### Blueprint Structure
+
+A blueprint document is in JSON format and has the following structure:
+
+```json
+{
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value",
+ "property-name2" : "property-value"
+ }
+ },
+ {
+ "configuration-type2" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "host_groups" : [
+ {
+ "name" : "host-group-name",
+ "components" : [
+ {
+ "name" : "component-name"
+ },
+ {
+ "name" : "component-name2",
+ "provision_action" : "(INSTALL_AND_START | INSTALL_ONLY)"
+ }
+ ...
+
+ ],
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "settings" : [
+ "deployment_settings": [
+ {"skip_failure":"true"}
+ ],
+ "repository_settings":[
+ {
+ "override_strategy":"ALWAYS_APPLY",
+ "operating_system":"redhat7",
+ "repo_id":"HDP",
+ "base_url":"http://myserver/hdp"
+ },
+ {
+ "override_strategy":"APPLY_WHEN_MISSING",
+ "operating_system":"redhat7",
+ "repo_id":"HDP-UTIL-1.1",
+ "base_url":"http://myserver/hdp-util"
+ }
+ ],
+ "recovery_settings":[
+ {"recovery_enabled":"true"}
+ ],
+ "service_settings":[
+ {
+ "name":"SERVICE_ONE",
+ "recovery_enabled":"true"
+ },
+ {
+ "name":"SERVICE_TWO",
+ "recovery_enabled":"true"
+ }
+ ],
+ "component_settings":[
+ {
+ "name":"COMPONENT_A_OF_SERVICE_ONE"
+ "recover_enabled":"true"
+ },
+ {
+ "name":"COMPONENT_B_OF_SERVICE_TWO",
+ "recover_enabled":"true"
+ }
+ ]
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.1",
+ "security" : {
+ "type" : "(NONE | KERBEROS)",
+ "kerberos_descriptor" : {
+ ...
+
+ }
+ }
+ }
+}
+```
+
+#### **Blueprint Field Descriptions**
+
+* **configurations** : A list of configuration maps keyed by configuration type. An example of a configuration type is "core-site". When specified at the top level, configurations are applied at cluster scope and override default values for the cluster. When specified within a "host_groups" element, configurations are applied at the host level for all hosts mapped to the host group. Host scoped configurations override cluster scoped configuration for all hosts mapped to the host group. The configurations element is optional at both levels.
+
+* **host_groups** : A list of host groups which define topology (components) and configuration for all hosts which are mapped to the host group. At least one host group must be specified.
+
+ - **name** : The name of the host group. Mandatory field which is referred to in the cluster creation body when mapping physical hosts to host groups.
+
+ - **components** : A list of components which will run on all hosts that are mapped to the host group. At least one component must be specified for each host group.
+
+ - **provision_action** : A cluster-wide provision action can be specified in the Cluster Creation Template (see below), but optionally this can be overridden at the component level by specifying a different provision_action here. The default provision_action is INSTALL_AND_START.
+
+ - **cardinality** : This field is optional and intended to provide a hint to the deployer as to how many instances of a particular host_group can be instantiated; it has no bearing on how the cluster is deployed. When a blueprint is exported for an existing cluster, this field will indicate the number of hosts that correspond to the host group in that cluster.
+
+* **Blueprints** : Blueprint and stack information
+ - **stack_name** : The name of the stack. All stacks currently shipped with Ambari have the name "HDP". This is a required field.
+
+ - **stack_version** : The version of the stack. For example: "1.3.2" or "2.1". This is a required field. When deploying a cluster using a blueprint, the stack definition identified in the blueprint must be available to the Ambari instance in the new cluster.
+
+ - **blueprint_name** : Optional field which specifies the name of the blueprint. Typically the name of the blueprint is specified in the URL of the REST invocation. The only reason to specify the name in the blueprint is when creating multiple blueprints via a single REST invocation. **Be aware that the name specified in this field will override the name specified in the URL.**
+ - **security** : Optional block to specify security settings for the blueprint. Supported security types are **NONE** and **KERBEROS**. In case of KERBEROS, users have the option to embed a valid Kerberos descriptor - to override default values defined for the HDP stack - in the **kerberos_descriptor** field, or as an alternative they may reference a previously saved Kerberos descriptor using the **kerberos_descriptor_reference** field.
+
+In case of selecting **KERBEROS** as the security_type, it is mandatory to add the **kerberos-env** and **krb5-conf** config types. (Check out the configurations section in **Blueprint example with KERBEROS** on this page.)
+Be aware that Kerberos client packages need to be installed on the host running the Ambari Server, and krb5.conf needs to be configured properly to contain your realm (admin_server and kdc).
+
+The [Automated Kerberization](../kerberos/index.md) page describes the structure of the kerberos_descriptor.
+
+* **settings**: An optional section to provide additional configuration for cluster behavior during and after the blueprint deployment. You can provide configurations for the following properties:
+ - recovery_settings: A section to specify whether all services (globally) should be set to auto-restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart). For example:
+ ```json
+ "settings": [
+   {
+     "recovery_settings": [
+       {
+         "recovery_enabled": "true"
+       }
+     ]
+   }
+ ],
+ ```
+ - service_settings: A section to specify whether individual services should be set to auto-restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart).
+ For example:
+ ```json
+ "settings": [
+   {
+     "service_settings": [
+       {
+         "name": "HDFS",
+         "recovery_enabled": "true"
+       },
+       {
+         "name": "ZOOKEEPER",
+         "recovery_enabled": "true"
+       }
+     ]
+   }
+ ],
+ ```
+ - component_settings: A section to specify whether individual components should be set to auto-restart once the cluster is deployed. To configure it, set the "recovery_enabled" property to either "true" (auto restart) or "false" (do not auto restart).
+ For example:
+ ```json
+ "settings": [
+   {
+     "component_settings": [
+       {
+         "name": "KAFKA_CLIENT",
+         "recovery_enabled": "true"
+       },
+       {
+         "name": "METRICS_MONITOR",
+         "recovery_enabled": "true"
+       }
+     ]
+   }
+ ],
+ ```
+ - deployment_settings: A section to specify whether the blueprint deployment should automatically skip install and start failures. To configure this behavior, set the "skip_failure" property to either "true" (auto skip failures) or "false" (do not auto skip failures). Blueprint deployment will fail on the very first deployment failure if the blueprint file does not contain the "deployment_settings" section.
+
+ For example:
+
+ ```json
+ "settings": [
+   {
+     "deployment_settings": [
+       {
+         "skip_failure": "true"
+       }
+     ]
+   }
+ ],
+ ```
+
+ - repository_settings: A section to specify custom repository URLs for the blueprint deployment. This section allows you to provide custom URLs to override the default ones. Without this section, you will need to update the repository URLs via the REST API before deploying the cluster with the blueprint. "override_strategy" can be "ALWAYS_APPLY" (always override the default one) or "APPLY_WHEN_MISSING" (only add it if no repository exists with the specific operating system and repository id information). Repository URLs stored in the Ambari server database will be used if the blueprint does not have the "repository_settings" section.
+ For example:
+ ```json
+ "settings": [
+   {
+     "repository_settings": [
+       {
+         "override_strategy": "ALWAYS_APPLY",
+         "operating_system": "redhat7",
+         "repo_id": "HDP",
+         "base_url": "http://myserver/hdp"
+       },
+       {
+         "override_strategy": "APPLY_WHEN_MISSING",
+         "operating_system": "redhat7",
+         "repo_id": "HDP-UTIL-1.1",
+         "base_url": "http://myserver/hdp-util"
+       }
+     ]
+   }
+ ]
+ ```
+
+### Cluster Creation Template Structure
+
+A Cluster Creation Template is in JSON format and has the following structure:
+
+```json
+{
+ "blueprint" : "blueprint-name",
+ "default_password" : "super-secret-password",
+ "provision_action" : "(INSTALL_AND_START | INSTALL_ONLY)"
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ...
+
+ ],
+ "host_groups" :[
+ {
+ "name" : "host-group-name",
+ "configurations" : [
+ {
+ "configuration-type" : {
+ "property-name" : "property-value"
+ }
+ }
+ ],
+ "hosts" : [
+ {
+ "fqdn" : "host.domain.com"
+ },
+ {
+ "fqdn" : "host2.domain.com"
+ }
+ ...
+
+ ]
+ }
+ ...
+
+ ],
+ "credentials" : [
+ {
+ "alias" : "kdc.admin.credential",
+ "principal" : "{PRINCIPAL}",
+ "key" : "{KEY}",
+ "type" : "(TEMPORARY | PERSISTED)"
+ }
+ ],
+ "security" : {
+ "type" : "(NONE | KERBEROS)",
+ "kerberos_descriptor" : {
+ ...
+
+ }
+ }
+}
+```
+
+Starting in Ambari version 2.1.0, it is possible to specify a host count and a host predicate in the cluster creation template host group section instead of a host name.
+
+```json
+{
+ "name" : "group-using-host-count",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+}
+```
+
+Starting in Ambari version 2.2.0, it is possible to specify configuration recommendation strategy in the cluster creation template.
+
+```json
+{
+ "blueprint" : "blueprint-name",
+ "config_recommendation_strategy" : "ONLY_STACK_DEFAULTS_APPLY",
+ ...
+
+}
+```
+
+Starting in Ambari version 2.2.1, it is possible to specify the host rack info in the cluster creation template ([AMBARI-14600](https://issues.apache.org/jira/browse/AMBARI-14600)).
+
+```json
+"hosts" : [
+ {
+ "fqdn" : "amb2.service.consul",
+ "rack_info": "/dc1/rack1"
+ }
+ ]
+```
+
+**Cluster Creation Template Structure: Host Mappings and Configuration Field Descriptions**
+
+* **blueprint** : Name of the blueprint that defines the cluster to be deployed. Blueprint must already exist. Required field.
+
+* **default_password** : Optional field which specifies a default password for all required passwords which are not specified in the blueprint or cluster creation template configurations.
+
+* **provision_action** : The default provision_action is INSTALL_AND_START; optionally this can be overridden at the component level by specifying a different provision_action for a given component.
+
+* **configurations** : A list of configuration maps keyed by configuration type. An example of a configuration type is "core-site". When specified at the top level, configurations are applied at cluster scope and override default values for the cluster. When specified within a "host_groups" element, configurations are applied at the host level for all hosts mapped to the host group. Host scoped configurations override cluster scoped configuration for all hosts mapped to the host group. All cluster scoped and host group scoped configurations specified here override configurations specified in the corresponding blueprint. The configurations element is optional at both levels.
+
+* **config_recommendation_strategy :** Optional field which specifies the strategy for applying configuration recommendations to a cluster. Recommended configurations are gathered from the response of the stack advisor, and they may partly or totally override user-defined custom configurations depending on the selected strategy. A property value is considered a custom configuration if it has a value other than the stack default. Available from Ambari 2.2.0.
+
+ - **NEVER_APPLY** : Configuration recommendations are ignored with this option. (This is the default.)
+ - **ONLY_STACK_DEFAULTS_APPLY** : Configuration recommendations are applied only for properties defined in the HDP stack by default.
+
+ - **ALWAYS_APPLY** : All configuration recommendations are applied; they may override custom configurations provided by the user in the Blueprint and/or Cluster Creation Template.
+
+ - **ALWAYS_APPLY_DONT_OVERRIDE_CUSTOM_VALUES** : All configuration recommendations are applied; however, custom configurations defined by the user in the Blueprint and/or Cluster Creation Template are not overridden by recommended configuration values. Available as of Ambari 2.4.0.
+
+* **host_groups** : A list of host groups being deployed to the cluster. At least one host group must be specified.
+
+ - **name** : Required field which must correspond to a name of a host group in the associated blueprint.
+
+ - **hosts** : List of host mapping information
+ + **fqdn** : Fully qualified domain name for each host that is being mapped to the host group. At least one host must be specified.
+ - **host_count** : The number of hosts that should be mapped to this host group. This can be specified instead of concrete host names. If no host_predicate is specified, any host that isn't explicitly mapped to another host group is available to be mapped to this host group. Available as of Ambari 2.1.0.
+
+ - **host_predicate** : Optional field which is used together with host_count to control which hosts are mapped to the host group. This is useful in supporting host 'flavors' where different host groups require different host types. The default predicate matches all hosts which aren't explicitly mapped to another host group. The syntax of the predicate is the standard Ambari API query syntax applied against the "api/v1/hosts" endpoint. Available as of Ambari 2.1.0.
+
+* **credentials** : Optional block to create credentials; kdc.admin.credential is required when setting up KERBEROS security. The store type can be **PERSISTED**
+or **TEMPORARY**. Temporary admin credentials are valid for 90 minutes or until server restart.
+
+* **security** : Optional block to override security settings defined in the Blueprint. Supported security types are **NONE** and **KERBEROS**. In case of KERBEROS, users have the option to embed a valid Kerberos descriptor - to override default values defined for the HDP stack - in the **kerberos_descriptor** field, or as an alternative they may reference a previously saved Kerberos descriptor using the **kerberos_descriptor_reference** field. Security settings defined here will override Blueprint settings; however, overriding the security type used in the Blueprint to a less secure mode is not possible (e.g. setting security.type=NONE in the cluster template when the Blueprint has security.type=KERBEROS). In case of selecting **KERBEROS** as the security_type, it is mandatory to add the **kerberos-env** and **krb5-conf** config types. (Check out the configurations section in **Blueprint example with KERBEROS** on this page.)
+The [Automated Kerberization](../kerberos/index.md) page describes the structure of the kerberos_descriptor.
+
+### Configurations
+
+#### Default Values and Overrides
+
+* **Stack Defaults**: Each Stack provides configurations for all included services which serve as defaults for all clusters deployed via Blueprints.
+
+* **Blueprint Cluster Scoped**: Configurations provided at the top level of a Blueprint override the corresponding default values for the entire cluster.
+
+* **Blueprint Host Group Scoped**: Configurations provided within a host_group element of a Blueprint override both the corresponding default values and blueprint cluster scoped values only for hosts mapped to the host group.
+
+* **Cluster Creation Template Cluster Scoped**: Configurations provided at the top level of the Cluster Creation Template override both the corresponding default and blueprint cluster scoped values for the entire cluster.
+
+* **Cluster Creation Template Host Group Scoped**: Configurations provided within a host_group element of a Cluster Creation Template override all other values for hosts mapped to the host group.
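+
+To illustrate these precedence rules, the sketch below uses a hypothetical `core-site` property: the Cluster Creation Template's cluster-scoped value overrides the blueprint and stack defaults for the whole cluster, while hosts mapped to host_group_1 get the host-group-scoped value instead (the property and values are illustrative only):
+
+```json
+{
+  "blueprint" : "my-blueprint",
+  "configurations" : [
+    {
+      "core-site" : {
+        "fs.trash.interval" : "4320"
+      }
+    }
+  ],
+  "host_groups" : [
+    {
+      "name" : "host_group_1",
+      "configurations" : [
+        {
+          "core-site" : {
+            "fs.trash.interval" : "10080"
+          }
+        }
+      ],
+      "hosts" : [
+        { "fqdn" : "c6401.ambari.apache.org" }
+      ]
+    }
+  ]
+}
+```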
+
+#### Required Configurations
+
+* Not all configuration properties have valid defaults.
+* Required properties must be specified by the Blueprint user.
+* Required properties come in two categories: passwords and non-passwords.
+* Non-password required properties are validated at Blueprint creation time.
+* Required password properties are validated at cluster creation time.
+* Required password properties may be explicitly set in either the Blueprint or Cluster Creation Template configurations, or a default password may be specified in the Cluster Creation Template which will be applied to all passwords that have not been explicitly set:
+ - "default_password" : "super-secret-password"
+* If required configuration validation fails, a 400 response is returned indicating which properties must be specified.
+
+## Blueprint Examples
+
+## Blueprint Example: Single-Node HDP 2.4 Cluster
+
+* Single-node cluster (c6401.ambari.apache.org)
+* HDP 2.4 Stack
+* Install Core Hadoop Services (HDFS, YARN, MapReduce2, ZooKeeper)
+
+### Example Blueprint
+
+```json
+{
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "components" : [
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "blueprint_name" : "single-node-hdfs-yarn",
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Register blueprint with Ambari Server
+
+Post the blueprint to the "single-node-hdfs-yarn" resource on the Ambari Server.
+
+```
+POST /api/v1/blueprints/single-node-hdfs-yarn
+
+...
+
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+### Example Cluster Creation Template
+
+We are performing a single-node install and the blueprint above has **one** host group. Therefore, for our cluster instance, we define **one** host in **host_group_1** and reference the **single-node-hdfs-yarn** blueprint.
+
+**Explicit Host Name Example**
+
+```json
+{
+ "blueprint" : "single-node-hdfs-yarn",
+ "host_groups" :[
+ {
+ "name" : "host_group_1",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/MySingleNodeCluster
+
+...
+
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+```
+
+## Blueprint Example: Multi-Node HDP 2.4 Cluster
+
+* Multi-node cluster (three hosts)
+* Host Groups: "master", "slaves" (one master host, two slave hosts)
+* Use HDP 2.4 Stack
+* Install Core Hadoop Services (HDFS, YARN, MapReduce2, ZooKeeper)
+
+### Example Blueprint
+
+The blueprint ("multi-node-hdfs-yarn") below defines with **two** host groups (a "master" and the "slaves") which hosts the various Service components (masters, slaves and clients).
+
+```json
+{
+ "host_groups" : [
+ {
+ "name" : "master",
+ "components" : [
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "RESOURCEMANAGER"
+ },
+ {
+ "name" : "HISTORYSERVER"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "slaves",
+ "components" : [
+ {
+ "name" : "DATANODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "NODEMANAGER"
+ },
+ {
+ "name" : "YARN_CLIENT"
+ },
+ {
+ "name" : "MAPREDUCE2_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ }
+ ],
+ "cardinality" : "1+"
+ }
+ ],
+ "Blueprints" : {
+ "blueprint_name" : "multi-node-hdfs-yarn",
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Register blueprint with Ambari Server
+
+Post the blueprint to the "multi-node-hdfs-yarn" resource on the Ambari Server.
+
+```
+POST /api/v1/blueprints/multi-node-hdfs-yarn
+...
+
+[ Request Body is the example blueprint defined above ]
+...
+
+201 - Created
+```
+
+### Example Cluster Creation Template
+
+We are performing a multi-node install and the blueprint above has **two** host groups. Therefore, for our cluster instance, we define **one** host in the **master** host group, **two** hosts in the **slaves** host group, and reference the **multi-node-hdfs-yarn** blueprint.
+
+The multi-node cluster creation template below uses the "host_count" and "host_predicate" syntax for the "slaves" host group, which is available as of Ambari 2.1.0. For older versions of Ambari, the "hosts/fqdn" syntax must be used.
+
+```json
+{
+ "blueprint" : "multi-node-hdfs-yarn",
+ "default_password" : "my-super-secret-password",
+ "host_groups" :[
+ {
+ "name" : "master",
+ "hosts" : [
+ {
+ "fqdn" : "c6401.ambari.apache.org"
+ }
+ ]
+ },
+ {
+ "name" : "slaves",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+ }
+ ]
+}
+```
+
+### Create Cluster Instance
+
+Post the cluster to the Ambari Server to provision the cluster.
+
+```
+POST /api/v1/clusters/MyThreeNodeCluster
+...
+
+[ Request Body is above Cluster Creation Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyThreeNodeCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "InProgress"
+ }
+}
+```
+
+## Adding Hosts to an Existing Cluster
+
+After creating a cluster using the Ambari Blueprint API, you may scale up the cluster using the API.
+
+There are two forms of the API, one for adding a single host and another for adding multiple hosts.
+
+The blueprint add hosts API is available as of Ambari 2.0.
+
+Currently, only clusters originally provisioned via the blueprint API may be scaled using this API.
+
+### Example Add Host Template
+
+#### Single Host Example
+
+The host is specified in the URL.
+
+```
+{
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves"
+}
+```
+
+#### Multiple Host Form
+
+Hosts are specified in the request body.
+
+```
+[
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_name" : "c6403.ambari.apache.org"
+ },
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_name" : "c6404.ambari.apache.org"
+ }
+]
+```
+
+#### Multiple Host Form using host_count
+
+Starting with Ambari 2.1, the fields "host_count" and "host_predicate" can also be used when adding a host.
+
+These fields behave exactly the same as they do when specified in the cluster creation template.
+
+```
+[
+ {
+ "blueprint" : "multi-node-hdfs-yarn",
+ "host_group" : "slaves",
+ "host_count" : 5,
+ "host_predicate" : "Hosts/os_type=centos6&Hosts/cpu_count=2"
+ }
+]
+```
+
+### Add Host Request
+
+#### Single Host
+
+```
+POST /api/v1/clusters/myExistingCluster/hosts/c6403.ambari.apache.org
+...
+
+[ Request Body is above Single Host Add Host Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/myExistingCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "Pending"
+ }
+}
+```
+
+#### Multiple Hosts
+
+```
+POST /api/v1/clusters/myExistingCluster/hosts
+...
+
+[ Request Body is above Multiple Host Add Host Template ]
+...
+
+202 - Accepted
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/myExistingCluster/requests/1",
+ "Requests" : {
+ "id" : 1,
+ "status" : "Pending"
+ }
+}
+```
+
+## Blueprint Example: Provisioning Multi-Node HDP 2.3 Cluster to use KERBEROS
+
+The blueprint below could be used to set up a cluster containing three host groups with KERBEROS security. Overriding the default Kerberos descriptor is not necessary; however, specifying a few Kerberos-specific properties in kerberos-env and krb5-conf is a must to set up services to use Kerberos. Note: prior to Ambari 2.4.0, use "kdc_host" instead of "kdc_hosts".
+
+```json
+{
+ "configurations" : [
+ {
+ "kerberos-env": {
+ "properties_attributes" : { },
+ "properties" : {
+ "realm" : "AMBARI.APACHE.ORG",
+ "kdc_type" : "mit-kdc",
+ "kdc_hosts" : "(kerberos_server_name)",
+ "admin_server_host" : "(kerberos_server_name)"
+ }
+ }
+ },
+ {
+ "krb5-conf": {
+ "properties_attributes" : { },
+ "properties" : {
+ "domains" : "AMBARI.APACHE.ORG",
+ "manage_krb5_conf" : "true"
+ }
+ }
+ }
+ ],
+ "host_groups" : [
+ {
+ "name" : "host_group_1",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "NAMENODE"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_2",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "KERBEROS_CLIENT"
+ },
+ {
+ "name" : "SECONDARY_NAMENODE"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ },
+ {
+ "name" : "host_group_3",
+ "configurations" : [ ],
+ "components" : [
+ {
+ "name" : "ZOOKEEPER_CLIENT"
+ },
+ {
+ "name" : "ZOOKEEPER_SERVER"
+ },
+ {
+ "name" : "KERBEROS_CLIENT"
+ },
+ {
+ "name" : "HDFS_CLIENT"
+ },
+ {
+ "name" : "DATANODE"
+ }
+ ],
+ "cardinality" : "1"
+ }
+ ],
+ "Blueprints" : {
+ "stack_name" : "HDP",
+ "stack_version" : "2.3",
+ "security" : {
+ "type" : "KERBEROS"
+ }
+ }
+}
+```
+
+The **Cluster Creation Template** below could be used to set up a cluster with KERBEROS security using the Blueprint from above. Overriding the default Kerberos descriptor is not necessary; however, specifying the kdc.admin credentials is a must.
+
+```json
+{
+ "blueprint": "kerberosBlueprint",
+ "default_password": "admin",
+ "host_groups": [
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-1" }
+ ],
+ "name": "host_group_1",
+ "configurations" : [ ]
+ },
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-2" }
+ ],
+ "name": "host_group_2",
+ "configurations" : [ ]
+ },
+ {
+ "hosts": [
+ { "fqdn": "ambari-agent-3" }
+ ],
+ "name": "host_group_3",
+ "configurations" : [ ]
+ }
+ ],
+ "credentials" : [
+ {
+ "alias" : "kdc.admin.credential",
+ "principal" : "admin/admin",
+ "key" : "admin",
+ "type" : "TEMPORARY"
+ }
+ ],
+ "security" : {
+ "type" : "KERBEROS"
+ },
+ "Clusters" : {"cluster_name":"kerberosCluster"}
+}
+```
+
+## Blueprint Support for High Availability Clusters
+
+Support for deploying HA clusters for HDFS, YARN, and HBase was added in Ambari 2.0. Please see the following link for more information:
+
+[Blueprint Support for HA Clusters](./blueprint-ha.md)
diff --git a/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/create_theme.png b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/create_theme.png
new file mode 100644
index 0000000..ab504a3
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/create_theme.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png
new file mode 100644
index 0000000..df0bed0
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/enhanced_configs_dependencies.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png
new file mode 100644
index 0000000..1b96243
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/imgs/enhanced_hbase_configs.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/index.md b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/index.md
new file mode 100644
index 0000000..52a2279
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/enhanced-configs/index.md
@@ -0,0 +1,342 @@
+---
+title: Enhanced Configs
+---
+
+Introduced in Ambari 2.1.0, the Enhanced Configs feature makes it possible for service providers to customize their service's configs to a great degree and determine which configs are prominently shown to the user, without making any UI code changes. Customization includes providing a service-friendly layout, better controls (sliders, combos, lists, toggles, spinners, etc.), better validation (minimum, maximum, enums), automatic unit conversion (MB, GB, seconds, milliseconds, etc.), configuration dependencies and improved dynamic recommendations of default values.
+
+A service provider can accomplish all of the above just by changing their service definition in the _stacks_ folder.
+
+Example: HBase Enhanced Configs
+
+
+
+## Features
+
+* Define theme with custom layout of configs
+ - Tabs
+ - Sections
+ - Sub-sections
+* Place selected configs in the layout defined above
+* Associate UI widget to use for a config
+ - Radio Buttons
+ - Slider
+ - Combo
+ - Time Interval Spinner
+ - Toggle
+ - Directory
+ - Directories
+ - List
+ - Password
+ - Text Field
+ - Checkbox
+ - Text Area
+* Automatic unit conversion for configs which have to be shown in units different from the units in which they are saved.
+
+ - Memory - B, KB, MB, GB, TB, PB
+ - Time - milliseconds, seconds, minutes, hours, days, months, years
+ - Percentage - float, percentage
+* Ability to define dependencies between configurations across services (depends-on, depended-by).
+
+* Ability to dynamically update values of other depended-by configs when a config is changed.
+
+## Enable Enhanced Configs - Steps
+
+### Step 1 - Create Theme (UI Metadata)
+
+The first step is to create a theme for your service in the stack definition folder. A theme provides the UI information necessary to construct the enhanced configs: the layout (tabs, sections, sub-sections), the placement of configs in the sub-sections, and which widgets and units to use for each config.
+
+
+
+1. Modify metainfo.xml to define a theme by including a themes block.
+
+```xml
+<themes-dir>themes</themes-dir>
+<themes>
+  <theme>
+    <fileName>theme.json</fileName>
+    <default>true</default>
+  </theme>
+</themes>
+```
+2. The optional _themes-dir_ element can be used if the default theme folder of '_themes_' is not desired, or is taken by another service in the same _metainfo.xml_.
+
+3. Multiple themes can be defined; however, only the first _default_ theme will be used for the service.
+
+4. Each theme points to a theme JSON file (via the _fileName_ element) in the _themes-dir_ folder.
+
+5. The _theme.json_ file contains one _configuration_ block containing three main keys:
+ 1. _layouts_ - specify tabs, sections and sub-section layout
+ 2. _placement_ - specify configurations to place in sub-sections
+ 3. _widgets_ - specify which UI widgets to use for each config
+
+```json
+{
+ "configuration": {
+ "layouts": [ ... ],
+ "placement": { ... },
+ "widgets": [ ... ]
+ }
+}
+```
+6. Layouts - Multiple layouts can be defined in a theme. Currently only the first layout will be used while rendering. A _layout_ has the following content:
+ 1. Tabs: Multiple tabs can be defined in a layout. Each tab can have its contents laid out using a simple grid layout via the _tab-columns_ and _tab-rows_ keys.
+
+In the example below, the _Settings_ tab has a grid of 3 rows and 2 columns in which sections can be placed.
+
+```json
+"layouts": [
+ {
+ "name": "default",
+ "tabs": [
+ {
+ "name": "settings",
+ "display-name": "Settings",
+ "layout": {
+ "tab-columns": "2",
+ "tab-rows": "3",
+ "sections": [ ... ]
+ }
+ }
+ ]
+ }
+]
+```
+ 2. Sections: Each section is defined inside a tab and specifies its location and size inside the tab's grid layout using the _row-index_, _column-index_, _row-span_ and _column-span_ keys. Being a container itself, it can further define a grid layout for the sub-sections it contains using the _section-rows_ and _section-columns_ keys.
+
+In the example below, the _MapReduce_ section occupies the first cell of the _Settings_ tab grid, and itself has a grid layout of 1 row and 3 columns.
+
+```json
+"sections": [
+  {
+    "name": "section-mr-scheduler",
+    "display-name": "MapReduce",
+    "row-index": "0",
+    "column-index": "0",
+    "row-span": "1",
+    "column-span": "1",
+    "section-columns": "3",
+    "section-rows": "1",
+    "subsections": [ ... ]
+  },
+  ...
+]
+```
+ 3. Sub-sections: Each sub-section is defined inside a section and specifies its location and size inside the section's grid layout using the _row-index_, _column-index_, _row-span_ and _column-span_ keys. Each sub-section also has an optional _border_ boolean key which indicates whether a border should encapsulate its content.
+
+```json
+"subsections": [
+  {
+    "name": "subsection-mr-scheduler-row1-col1",
+    "display-name": "MapReduce Framework",
+    "row-index": "0",
+    "column-index": "0",
+    "row-span": "1",
+    "column-span": "1"
+  },
+  ...
+]
+```
+7. Placement: Specifies the order of configurations that are to be placed into each sub-section. Each placement identifies a config and which sub-section it should appear in. The placement specifies which layout it applies to using the _configuration-layout_ key.
+
+```json
+"placement": {
+  "configuration-layout": "default",
+  "configs": [
+    {
+      "config": "mapred-site/mapreduce.map.memory.mb",
+      "subsection-name": "subsection-mr-scheduler-row1-col1"
+    },
+    {
+      "config": "mapred-site/mapreduce.reduce.memory.mb",
+      "subsection-name": "subsection-mr-scheduler-row1-col2"
+    },
+    ...
+  ]
+}
+```
+8. Widgets: The widgets array specifies which UI widget should be used to show a specific config. It also contains extra UI-specific metadata required to show the widget.
+
+In the example below, both configs use the slider widget. However, the unit varies, resulting in one config being shown in bytes and another as a percentage. This unit is purely for showing a config - it may differ from the unit in which the value is actually persisted in Ambari. For example, the percent unit below may be persisted as a _float_, while the MB config below may be persisted in B (bytes).
+
+```json
+"widgets": [
+  {
+    "config": "yarn-site/yarn.nodemanager.resource.memory-mb",
+    "widget": {
+      "type": "slider",
+      "units": [
+        { "unit-name": "MB" }
+      ]
+    }
+  },
+  {
+    "config": "yarn-site/yarn.nodemanager.resource.percentage-physical-cpu-limit",
+    "widget": {
+      "type": "slider",
+      "units": [
+        { "unit-name": "percent" }
+      ]
+    }
+  },
+  {
+    "config": "yarn-site/yarn.node-labels.enabled",
+    "widget": {
+      "type": "toggle"
+    }
+  },
+  ...
+]
+```
+
+For a complete reference to what UI widgets are available and what metadata can be specified per widget, please refer to _Appendix A_.
+
+### Step 2 - Annotate stack configs (Non-UI Metadata)
+
+Each configuration that is used by the service's theme has to provide extra metadata about the configuration. The list of available metadata is:
+
+* display-name
+* value-attributes
+ - type
+ + string
+ + value-list
+ + float
+ + int
+ + boolean
+ - minimum
+ - maximum
+ - unit
+ - increment-step
+ - entries
+ + entry
+ * value
+ * description
+* depends-on
+ - property
+ + type
+ + name
+
+The value-attributes provide meta information about the value that can be used as hints by the appropriate widget. For example, the slider widget can make use of the minimum and maximum values in its operation.
+
+Examples:
+
+```xml
+<property>
+ <name>namenode_heapsize</name>
+ <value>1024</value>
+ <description>NameNode Java heap size</description>
+ <display-name>NameNode Java heap size</display-name>
+ <value-attributes>
+ <type>int</type>
+ <minimum>0</minimum>
+ <maximum>268435456</maximum>
+ <unit>MB</unit>
+ <increment-step>256</increment-step>
+ </value-attributes>
+ <depends-on>
+ <property>
+ <type>hdfs-site</type>
+ <name>dfs.datanode.data.dir</name>
+ </property>
+ </depends-on>
+</property>
+
+```
+
+```xml
+<property>
+ <name>hive.default.fileformat</name>
+ <value>TextFile</value>
+ <description>Default file format for CREATE TABLE statement.</description>
+ <display-name>Default File Format</display-name>
+ <value-attributes>
+ <type>value-list</type>
+ <entries>
+ <entry>
+ <value>ORC</value>
+ <description>The Optimized Row Columnar (ORC) file format provides a highly efficient way to store Hive data. It was designed to overcome limitations of the other Hive file formats. Using ORC files improves performance when Hive is reading, writing, and processing data.</description>
+ </entry>
+ <entry>
+ <value>TextFile</value>
+ <description>Text file format saves Hive data as normal text.</description>
+ </entry>
+ </entries>
+ </value-attributes>
+</property>
+```
+
+The depends-on metadata is useful in building a dependency graph between different configs in Ambari. Ambari uses these bi-directional relationships (depends-on and depended-by) to automatically update dependent configs using the stack-advisor functionality in Ambari.
+
+Dependencies between configurations form a directed acyclic graph (DAG). When a configuration is updated, the UI has to determine its effect on other configs in the graph. To determine this, the _/recommendations_ endpoint should be provided an array of the configurations that have just been changed in the changed_configurations field. Based on the provided changed configs, only their dependencies are updated in the response.
+
+Example:
+
+The figure below shows some config dependencies - A affects B and C, each of which affects D and E, and F and G, respectively.
+
+
+
+Now assume the user changes B to B' - a call to _/recommendations_ will only change D and E to D' and E' respectively (AB'CD'E'FG). No other config will be changed. Now assume that C is changed to C' - _/recommendations_ will only change F and G to F' and G' while still keeping the values of B', D' and E' intact (AB'C'D'E'F'G'). Now if you change A to A', it will affect all its children (A'B''C''D''E''F''G''). The user will have the chance to pick and choose which changes to apply.
+
+The call to _/recommendations_ happens whenever a configuration with dependencies is changed. The POST call has the action configuration-dependencies, which will only change the configurations and their dependencies identified by the changed_configurations field.
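+
+A hedged sketch of such a request body is shown below; the changed property, its old value, and the hosts/services lists are illustrative assumptions (consult the stack advisor documentation for the exact payload expected by your Ambari version):
+
+```json
+{
+  "recommend" : "configuration-dependencies",
+  "hosts" : [ "c6401.ambari.apache.org" ],
+  "services" : [ "HDFS", "YARN" ],
+  "changed_configurations" : [
+    {
+      "type" : "hdfs-site",
+      "name" : "dfs.datanode.data.dir",
+      "old_value" : "/hadoop/hdfs/data"
+    }
+  ]
+}
+```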
+
+### Step 3 - Restart Ambari server
+
+Restarting the Ambari Server is required for any changes in the themes or the stack definition to be loaded.
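+
+For example:
+
+```bash
+ambari-server restart
+```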
+
+## Reference
+
+* HDFS HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/HDFS/themes/theme.json)
+* YARN HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/themes/theme.json)
+* HIVE HDP-2.2 [theme.json](https://github.com/apache/ambari/blob/branch-2.1.2/ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/themes/theme.json)
+* RANGER HDP-2.3 [theme_version_2.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/RANGER/themes/theme_version_2.json)
+
+## Appendix
+
+### Appendix A - Widget Non-UI Metadata
+
+<table>
+ <tr>
+ <th>Widget</th>
+ <th>Metadata Used</th>
+ </tr>
+ <tr>
+ <td>Slider</td>
+ <td>
+ <value-attributes><br></br>
+ <type>int</type><br></br>
+ <minimum>1073741824</minimum><br></br>
+ <maximum>17179869184</maximum><br></br>
+ <unit>B</unit><br></br>
+ <increment-step>1073741824</increment-step><br></br>
+ </value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Combo</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>4</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>8</value><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+ </value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Directory, Directories, Password, Text Field, Text Area</td>
+ <td>No value-attributes required</td>
+ </tr>
+ <tr>
+ <td>List</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>4</value><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>8</value><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>2+</selection-cardinality><br></br>
+</value-attributes><br></br>
+ </td>
+ </tr>
+ <tr>
+ <td>Radio-Buttons</td>
+ <td>
+ <value-attributes> <br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>1</value><br></br>
+ <label>Radio Option 1</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>2</value><br></br>
+ <label>Radio Option 2</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>3</value><br></br>
+ <label>Radio Option 3</label><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+</value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Time Interval Spinner</td>
+ <td>
+<value-attributes> <br></br>
+ <type>int</type><br></br>
+ <minimum>0</minimum><br></br>
+ <maximum>2592000000</maximum><br></br>
+ <unit>milliseconds</unit><br></br>
+</value-attributes>
+ </td>
+ </tr>
+ <tr>
+ <td>Toggle, Checkbox</td>
+ <td>
+ <value-attributes><br></br>
+ <type>value-list</type><br></br>
+ <entries><br></br>
+ <entry><br></br>
+ <value>true</value><br></br>
+ <label>Native</label><br></br>
+ </entry><br></br>
+ <entry><br></br>
+ <value>false</value><br></br>
+ <label>Off</label><br></br>
+ </entry><br></br>
+ </entries><br></br>
+ <selection-cardinality>1</selection-cardinality><br></br>
+</value-attributes>
+ </td>
+ </tr>
+</table>
diff --git a/versioned_docs/version-2.7.6/ambari-design/index.md b/versioned_docs/version-2.7.6/ambari-design/index.md
new file mode 100644
index 0000000..c0afb15
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/index.md
@@ -0,0 +1,19 @@
+# Ambari Design
+
+Ambari Architecture: https://issues.apache.org/jira/secure/attachment/12559939/Ambari_Architecture.pdf
+
+Ambari Server-Agent Registration Flow: http://www.slideshare.net/hortonworks/ambari-agentregistrationflow-17041261
+
+Ambari Local Repository Setup: http://www.slideshare.net/hortonworks/ambari-using-a-local-repository
+
+API Documentation: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
+
+Technology Stack: [Technology Stack](./technology-stack.md)
+
+Integration: http://developer.teradata.com/viewpoint/articles/viewpoint-integration-with-apache-ambari-for-hadoop-monitoring
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+
+<DocCardList />
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/kerberos/enabling_kerberos.md b/versioned_docs/version-2.7.6/ambari-design/kerberos/enabling_kerberos.md
new file mode 100644
index 0000000..2b11190
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/kerberos/enabling_kerberos.md
@@ -0,0 +1,373 @@
+---
+title: Enabling Kerberos
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](#enabling-kerberos)
+ - [The Enable Kerberos Wizard](#the-enable-kerberos-wizard)
+ - [The REST API](#the-rest-api)
+ - [Miscellaneous Technical Information](#miscellaneous-technical-information)
+ - [Password Generation](#password-generation)
+
+<a name="enabling-kerberos"></a>
+
+## Enabling Kerberos
+
+Enabling Kerberos on the cluster may be done using the _Enable Kerberos Wizard_ within the Ambari UI
+or using the REST API.
+
+<a name="the-enable-kerberos-wizard"></a>
+
+### The Enable Kerberos Wizard
+
+The _Enable Kerberos Wizard_, in the Ambari UI, provides an easy-to-use wizard interface that walks
+through the process of enabling Kerberos.
+
+<a name="the-rest-api"></a>
+
+### The REST API
+
+It is possible to enable Kerberos using Ambari's REST API with the following calls:
+
+**_Notes:_**
+
+- Change the authentication credentials as needed
+ - `curl ... -u username:password ...`
+ - The examples below use
+ - username: admin
+ - password: admin
+- Change the Ambari server host name and port as needed
+ - `curl ... http://HOST:PORT/api/v1/...`
+ - The examples below use
+ - HOST: AMBARI_SERVER
+ - PORT: 8080
+- Change the cluster name as needed
+ - `curl ... http://.../CLUSTER/...`
+ - The examples below use
+ - CLUSTER: CLUSTER_NAME
+- `@./payload` indicates that the payload data is stored in a file rather than declared inline
+ - `curl ... -d @./payload ...`
+ - The examples below use `./payload`, which should be replaced with the actual file path
+ - The contents of the payload file are indicated below the curl statement
+
+#### Add the KERBEROS service to the cluster
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS
+```
+
+#### Add the KERBEROS_CLIENT component to the KERBEROS service
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS/components/KERBEROS_CLIENT
+```
+
+#### Create and set KERBEROS service configurations
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME
+```
+
+Example payload when using an MIT KDC:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "mit-kdc",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.KDC.SERVER",
+ "master_kdc" : "FQDN.MASTER.KDC.SERVER",
+ "admin_server_host" : "FQDN.ADMIN.KDC.SERVER",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false"
+ }
+ }
+ }
+ }
+]
+```
+
+Example payload when using an Active Directory:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "active-directory",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.AD.SERVER",
+ "master_kdc" : "FQDN.MASTER.AD.SERVER",
+ "admin_server_host" : "FQDN.AD.SERVER",
+ "ldap_url" : "LDAPS://AD_HOST:PORT",
+ "container_dn" : "OU=....,....",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "password_length": "20",
+ "password_min_lowercase_letters": "1",
+ "password_min_uppercase_letters": "1",
+ "password_min_digits": "1",
+ "password_min_punctuation": "1",
+ "password_min_whitespace": "0",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false",
+ "create_attributes_template" : "{\n \"objectClass\": [\"top\", \"person\", \"organizationalPerson\", \"user\"],\n \"cn\": \"$principal_name\",\n #if( $is_service )\n \"servicePrincipalName\": \"$principal_name\",\n #end\n \"userPrincipalName\": \"$normalized_principal\",\n \"unicodePwd\": \"$password\",\n \"accountExpires\": \"0\",\n \"userAccountControl\": \"66048\"}"
+ }
+ }
+ }
+ }
+]
+```
+Example payload when using IPA:
+
+```
+[
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "krb5-conf",
+ "tag": "version1",
+ "properties": {
+ "domains":"",
+ "manage_krb5_conf": "true",
+ "conf_dir":"/etc",
+ "content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable = true\n default_realm = {{realm}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n default_ccache_name = /tmp/krb5cc_%{uid}\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes = {{encryption_types}}\n{% if domains %}\n[domain_realm]\n{%- for domain in domains.split(',') %}\n {{domain|trim()}} = {{realm}}\n{%- endfor %}\n{% endif %}\n[logging]\n default = FILE:/var/log/krb5kdc.log\n admin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n{%- if master_kdc %}\n master_kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{%- if kdc_hosts > 0 -%}\n{%- set kdc_host_list = kdc_hosts.split(',') -%}\n{%- if kdc_host_list and kdc_host_list|length > 0 %}\n admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}\n{%- if kdc_host_list -%}\n{%- if master_kdc and (master_kdc not in kdc_host_list) %}\n kdc = {{master_kdc|trim()}}\n{%- endif -%}\n{% for kdc_host in kdc_host_list %}\n kdc = {{kdc_host|trim()}}\n{%- endfor -%}\n{% endif %}\n{%- endif %}\n{%- endif %}\n }\n\n{# Append additional realm declarations below #}"
+ }
+ }
+ }
+ },
+ {
+ "Clusters": {
+ "desired_config": {
+ "type": "kerberos-env",
+ "tag": "version1",
+ "properties": {
+ "kdc_type": "ipa",
+ "manage_identities": "true",
+ "create_ambari_principal": "true",
+ "manage_auth_to_local": "true",
+ "install_packages": "true",
+ "encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
+ "realm" : "EXAMPLE.COM",
+ "kdc_hosts" : "FQDN.KDC.SERVER",
+ "master_kdc" : "FQDN.MASTER.KDC.SERVER",
+ "admin_server_host" : "FQDN.ADMIN.KDC.SERVER",
+ "executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
+ "service_check_principal_name" : "${cluster_name}-${short_date}",
+ "case_insensitive_username_rules" : "false"
+ }
+ }
+ }
+ }
+]
+```
+
+#### Create the KERBEROS_CLIENT host components
+_Run once for each host, replacing `HOST_NAME`_
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d '{"host_components" : [{"HostRoles" : {"component_name":"KERBEROS_CLIENT"}}]}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/hosts?Hosts/host_name=HOST_NAME
+```
+
+#### Install the KERBEROS service and components
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services/KERBEROS
+```
+
+#### Stop all services
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services
+```
+
+#### Get the default Kerberos Descriptor
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://AMBARI_SERVER:8080/api/v1/stacks/STACK_NAME/versions/STACK_VERSION/artifacts/kerberos_descriptor
+```
+
+#### Get the customized Kerberos Descriptor (if previously set)
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/artifacts/kerberos_descriptor
+```
+
+#### Set the Kerberos Descriptor
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/artifacts/kerberos_descriptor
+```
+
+Payload:
+
+```
+{
+ "artifact_data" : {
+ ...
+ }
+}
+```
+
+**_Note:_** The Kerberos Descriptor payload may be a complete Kerberos Descriptor or just the updates to overlay on top of the default Kerberos Descriptor.
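+
+For instance, an overlay that customizes only a single service might look like the following sketch (the service and configuration names here are placeholders; see [The Kerberos Descriptor](kerberos_descriptor.md) for the full structure):
+
+```
+{
+  "artifact_data" : {
+    "services" : [
+      {
+        "name" : "SERVICE_NAME",
+        "configurations" : [
+          {
+            "service-site" : {
+              "some.property" : "some.value"
+            }
+          }
+        ]
+      }
+    ]
+  }
+}
+```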
+
+#### Set the KDC administrator credentials
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/credentials/kdc.admin.credential
+```
+
+Payload:
+
+```
+{
+ "Credential" : {
+ "principal" : "admin/admin@EXAMPLE.COM",
+ "key" : "h4d00p&!",
+ "type" : "temporary"
+ }
+}
+```
+
+**_Note:_** the _principal_ and _key_ (password) values should be updated to match the correct credentials
+for the KDC administrator account.
+
+**_Note:_** the `type` value may be `temporary` or `persisted`; however, the value may only be `persisted`
+if Ambari's credential store has previously been set up.
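+
+If the credential store has not yet been set up, it can be initialized from the Ambari server host. The command below is a sketch; the interactive prompts and option numbering vary by Ambari version:
+
+```
+# Run on the Ambari server host and choose the option to set up
+# (encrypt) the credential store when prompted
+ambari-server setup-security
+```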
+
+#### Enable Kerberos
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d @./payload http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME
+```
+
+Payload
+
+```
+{
+ "Clusters": {
+ "security_type" : "KERBEROS"
+ }
+}
+```
+
+#### Start all services
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"ServiceInfo": {"state" : "STARTED"}}' http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/services
+```
+
+<a name="miscellaneous-technical-information"></a>
+
+### Miscellaneous Technical Information
+
+<a name="password-generation"></a>
+
+#### Password Generation
+
+When enabling Kerberos using an Active Directory, Ambari must use an internal mechanism to build
+the keytab files. This is because keytab files cannot be requested remotely from an Active Directory.
+In order to create keytab files, Ambari needs to know the password for each relevant Kerberos
+identity. Therefore, Ambari sets or updates the identity's password as needed.
+
+The password for each Ambari-managed account in an Active Directory is randomly generated and
+stored only long enough in memory to set the account's password and generate the keytab file.
+Passwords are generated using the following user-settable parameters:
+
+- Password length (`kerberos-env/password_length`)
+ - Default Value: 20
+- Minimum number of lower-cased letters (`kerberos-env/password_min_lowercase_letters`)
+ - Default Value: 1
+ - Character Set: `abcdefghijklmnopqrstuvwxyz`
+- Minimum number of upper-cased letters (`kerberos-env/password_min_uppercase_letters`)
+ - Default Value: 1
+ - Character Set: `ABCDEFGHIJKLMNOPQRSTUVWXYZ`
+- Minimum number of digits (`kerberos-env/password_min_digits`)
+ - Default Value: 1
+ - Character Set: `1234567890`
+- Minimum number of punctuation characters (`kerberos-env/password_min_punctuation`)
+ - Default Value: 1
+ - Character Set: `?.!$%^*()-_+=~`
+- Minimum number of whitespace characters (`kerberos-env/password_min_whitespace`)
+ - Default Value: 0
+ - Character Set: `(space character)`
+
+The following algorithm is executed:
+
+1. Create an array to store password characters
+2. For each character class (upper-case letter, lower-case letter, digit, ...), randomly select the
+minimum number of characters from the relevant character set and store them in the array
+3. For the number of characters calculated as the difference between the expected password length and
+the number of characters already collected, randomly select a character from a randomly-selected character
+class and store it into the array
+4. For the number of characters expected in the password, randomly pull one from the array and append
+to the password result
+5. Return the generated password
+
+To generate a random integer used to identify an index within a character set, a static instance of
+the `java.security.SecureRandom` class ([http://docs.oracle.com/javase/7/docs/api/java/security/SecureRandom.html](http://docs.oracle.com/javase/7/docs/api/java/security/SecureRandom.html))
+is used.
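+
+The following Java sketch illustrates the algorithm described above. It is illustrative only, not Ambari's actual implementation; the class name is hypothetical, and the character sets and minimums mirror the default parameters listed earlier.
+
+```
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.List;
+
+public class PasswordGeneratorSketch {
+  private static final SecureRandom RANDOM = new SecureRandom();
+
+  // Character classes and their default minimum counts (see above)
+  private static final String[] SETS = {
+      "abcdefghijklmnopqrstuvwxyz",  // lower-case letters, min 1
+      "ABCDEFGHIJKLMNOPQRSTUVWXYZ",  // upper-case letters, min 1
+      "1234567890",                  // digits, min 1
+      "?.!$%^*()-_+=~",              // punctuation, min 1
+      " "                            // whitespace, min 0
+  };
+  private static final int[] MINIMUMS = {1, 1, 1, 1, 0};
+
+  public static String generate(int length) {
+    // Step 1: create an array to store password characters
+    List<Character> chars = new ArrayList<>();
+
+    // Step 2: satisfy the minimum count for each character class
+    for (int i = 0; i < SETS.length; i++) {
+      for (int j = 0; j < MINIMUMS[i]; j++) {
+        chars.add(randomChar(SETS[i]));
+      }
+    }
+
+    // Step 3: fill the remaining length from randomly selected classes
+    while (chars.size() < length) {
+      chars.add(randomChar(SETS[RANDOM.nextInt(SETS.length)]));
+    }
+
+    // Step 4: randomly pull characters from the array to build the result
+    StringBuilder password = new StringBuilder(length);
+    while (!chars.isEmpty()) {
+      password.append(chars.remove(RANDOM.nextInt(chars.size())));
+    }
+
+    // Step 5: return the generated password
+    return password.toString();
+  }
+
+  private static char randomChar(String set) {
+    return set.charAt(RANDOM.nextInt(set.length()));
+  }
+
+  public static void main(String[] args) {
+    System.out.println(generate(20)); // default password length
+  }
+}
+```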
diff --git a/versioned_docs/version-2.7.6/ambari-design/kerberos/index.md b/versioned_docs/version-2.7.6/ambari-design/kerberos/index.md
new file mode 100644
index 0000000..51c56e1
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/kerberos/index.md
@@ -0,0 +1,125 @@
+---
+slug: /kerberos
+---
+
+# Ambari Kerberos Automation
+
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+
+- [Introduction](#introduction)
+ - [How it Works](#how-it-works)
+ - [Enabling Kerberos](#enabling-kerberos)
+ - [Adding Components](#adding-components)
+ - [Adding Hosts](#adding-hosts)
+ - [Regenerating Keytabs](#regenerating-keytabs)
+ - [Disabling Kerberos](#disabling-kerberos)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="introduction"></a>
+
+## Introduction
+
+Before Ambari 2.0.0, configuring an Ambari cluster to use Kerberos involved setting up the Kerberos
+client infrastructure on each host, creating the required identities, generating and distributing the
+needed keytabs files, and updating the necessary configuration properties. On a small cluster this may
+not seem to be too large of an effort; however as the size of the cluster increases, so does the amount
+of work that is involved.
+
+This is where Ambari’s Kerberos Automation facility can help. It performs all of these steps and
+also helps to maintain the cluster as new services and hosts are added.
+
+Kerberos automation can be invoked using Ambari’s REST API as well as via the _Enable Kerberos Wizard_
+in the Ambari UI.
+
+<a name="how-it-works"></a>
+
+### How it works
+
+Stacks and services that can utilize Kerberos credentials for authentication must have a Kerberos
+Descriptor declaring required Kerberos identities and how to update configurations. The Ambari
+infrastructure uses this data, and any updates applied by an administrator, to perform Kerberos-related
+operations such as initially enabling Kerberos, enabling Kerberos on added hosts and
+components, regenerating credentials, and disabling Kerberos.
+
+It should be noted that the Kerberos service must be installed on all hosts of the cluster
+before any automated tasks can be performed. If using the Ambari UI, this happens as part of the
+_Enable Kerberos Wizard_ workflow.
+
+<a name="enabling-kerberos"></a>
+
+### Enabling Kerberos
+
+When enabling Kerberos, all of the services in the cluster are expected to be stopped. The main
+reason for this is to avoid state issues as the services are stopped and then started when the cluster
+is transitioning to use Kerberos.
+
+The following steps are taken to enable Kerberos on the cluster en masse:
+
+1. Create or update accounts in the configured KDC (or Active Directory)
+2. Generate keytab files and distribute them to the appropriate hosts
+3. Update relevant configurations
+
+<a name="adding-components"></a>
+
+### Adding Components
+
+If Kerberos is enabled for the Ambari cluster, whenever new components are added, the new components
+will automatically be configured for Kerberos, and any necessary principals and keytab files will be
+created and distributed as needed.
+
+For each new component, the following steps will occur before the component is installed and started:
+
+1. Update relevant configurations
+2. Create or update accounts in the configured KDC (or Active Directory)
+3. Generate keytab files and distribute them to the appropriate hosts
+
+<a name="adding-hosts"></a>
+
+### Adding Hosts
+
+When adding a new host, the Kerberos client must be installed on it. This does not happen automatically;
+however the _Add Host Wizard_ in the Ambari UI will perform this step if Kerberos was enabled for
+the Ambari cluster. Once the host is added, generally one or more components are installed on
+it - see [Adding Components](#adding-components).
+
+<a name="regenerating-keytabs"></a>
+
+### Regenerating Keytabs
+
+Once a cluster has Kerberos enabled, it may be necessary to regenerate keytabs. There are two options
+for this:
+
+- `all` - create any missing principals and unconditionally update the passwords for existing principals, then create and distribute all relevant keytab files
+- `missing` - create any missing principals; then create and distribute keytab files for the newly-created principals
+
+In either case, the affected services should be restarted after the regeneration process is complete.
+
+If performed through the Ambari UI, the user will be asked which keytab regeneration mode to use and
+whether services are to be restarted or not.
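+
+Using the REST API, a regeneration may be requested by updating the cluster's security state with a
+`regenerate_keytabs` directive. The call below is a sketch; the directive value may be `all` or
+`missing`, and should be verified against the Ambari version in use:
+
+```
+curl -H "X-Requested-By:ambari" -u admin:admin -i -X PUT -d '{"Clusters": {"security_type": "KERBEROS"}}' "http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME?regenerate_keytabs=all"
+```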
+
+<a name="disabling-kerberos"></a>
+
+### Disabling Kerberos
+
+In the event Kerberos needs to be removed from the Ambari cluster, Ambari will remove the managed
+Kerberos identities, keytab files, and Kerberos-specific configurations. The Ambari UI will perform
+the steps of stopping and starting the services as well as removing the Kerberos service; however,
+these steps will need to be performed manually if using the Ambari REST API.
diff --git a/versioned_docs/version-2.7.6/ambari-design/kerberos/kerberos_descriptor.md b/versioned_docs/version-2.7.6/ambari-design/kerberos/kerberos_descriptor.md
new file mode 100644
index 0000000..2dd4798
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/kerberos/kerberos_descriptor.md
@@ -0,0 +1,855 @@
+---
+title: The Kerberos Descriptor
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](#the-kerberos-descriptor)
+ - [Components of a Kerberos Descriptor](#components-of-a-kerberos-descriptor)
+ - [Stack-level Properties](#stack-level-properties)
+ - [Stack-level Identities](#stack-level-identities)
+ - [Stack-level Auth-to-local-properties](#stack-level-auth-to-local-properties)
+ - [Stack-level Configurations](#stack-level-configurations)
+ - [Services](#services)
+ - [Service-level Identities](#service-level-identities)
+ - [Service-level Auth-to-local-properties](#service-level-auth-to-local-properties)
+ - [Service-level Configurations](#service-level-configurations)
+ - [Components](#service-components)
+ - [Component-level Identities](#component-level-identities)
+ - [Component-level Auth-to-local-properties](#component-level-auth-to-local-properties)
+ - [Component-level Configurations](#component-level-configurations)
+ - [Kerberos Descriptor specifications](#kerberos-descriptor-specifications)
+ - [properties](#properties)
+ - [auth-to-local-properties](#auth-to-local-properties)
+ - [configurations](#configurations)
+ - [identities](#identities)
+ - [principal](#principal)
+ - [keytab](#keytab)
+ - [services](#services)
+ - [components](#components)
+ - [Examples](#examples)
+- [The Kerberos Service](kerberos_service.md)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="the-kerberos-descriptor"></a>
+
+## The Kerberos Descriptor
+
+The Kerberos Descriptor is a JSON-formatted text file containing information needed by Ambari to enable
+or disable Kerberos for a stack and its services. This file must be named **_kerberos.json_** and should
+be in the root directory of the relevant stack or service definition. Kerberos Descriptors are meant to
+be hierarchical such that details in the stack-level descriptor can be overwritten (or updated) by details
+in the service-level descriptors.
+
+For the services in a stack to be Kerberized, there must be a stack-level Kerberos Descriptor. This
+ensures that even if a common service has a Kerberos Descriptor, it may not be Kerberized unless the
+relevant stack indicates that it supports Kerberos by having a stack-level Kerberos Descriptor.
+
+For a component of a service to be Kerberized, there must be an entry for it in its containing service's
+service-level descriptor. This allows for some of a service's components to be managed and other
+components of that service to be ignored by the automated Kerberos facility.
+
+Kerberos Descriptors are inherited from the base stack or service, but may be overridden as a full
+descriptor - partial descriptors are not allowed.
+
+A complete descriptor (which is built using the stack-level descriptor, the service-level descriptors,
+and any updates from user input) has the following structure:
+
+- Stack-level Properties
+- Stack-level Identities
+- Stack-level Configurations
+- Stack-level Auth-to-local-properties
+- Services
+ - Service-level Identities
+ - Service-level Auth-to-local-properties
+ - Service-level Configurations
+ - Components
+ - Component-level Identities
+ - Component-level Auth-to-local-properties
+ - Component-level Configurations
+
+Each level of the descriptor inherits the data from its parent. This data, however, may be overridden
+if necessary. For example, a component will inherit the configurations and identities of its container
+service; which in turn inherits the configurations and identities from the stack.
+
+<a name="components-of-a-kerberos-descriptor"></a>
+
+### Components of a Kerberos Descriptor
+
+<a name="stack-level-properties"></a>
+
+#### Stack-level Properties
+
+Stack-level properties is an optional set of name/value pairs that can be used in variable replacements.
+For example, if a property named "**_property1_**" exists with the value of "**_value1_**", then any instance of
+"**_${property1}_**" within a configuration property name or configuration property value will be replaced
+with "**_value1_**".
+
+This property is only relevant in the stack-level Kerberos Descriptor and may not be overridden by
+lower-level descriptors.
+
+See [properties](#properties).
+
+<a name="stack-level-identities"></a>
+
+#### Stack-level Identities
+
+Stack-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are common among all services in the stack. An example of such an identity is the
+Ambari smoke test user, which is used by all services to perform service check operations. Service-
+and component-level identities may reference (and specialize) stack-level identities using the
+identity’s name with a forward slash (/) preceding it. For example if there was a stack-level identity
+with the name "smokeuser", then a service or a component may create an identity block that references
+and specializes it by declaring a "**_reference_**" property and setting it to "/smokeuser". Within
+this identity block, details of the identity may be overridden as necessary. This does not alter
+the stack-level identity; it essentially creates a copy of it and updates the copy's properties.
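+
+For example, a service-level identity referencing and specializing the stack-level "smokeuser" identity
+might look like the following sketch (the configuration specification shown is a placeholder):
+
+```
+{
+  "name" : "service1_smokeuser",
+  "reference" : "/smokeuser",
+  "principal" : {
+    "configuration" : "service1-site/smokeuser.principal"
+  }
+}
+```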
+
+See [identities](#identities).
+
+<a name="stack-level-auth-to-local-properties"></a>
+
+#### Stack-level Auth-to-local-properties
+
+Stack-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="stack-level-configurations"></a>
+
+#### Stack-level Configurations
+
+Stack-level configurations is an optional configurations block containing a list of zero or more
+configuration descriptors that are common among all services in the stack. Configuration descriptors
+are overridable due to the structure of the data. However, overriding configuration properties may
+create undesired behavior since it is not known until after the Kerberization process is complete
+what value a property will have.
+
+See [configurations](#configurations).
+
+<a name="services"></a>
+
+#### Services
+
+Services is a list of zero or more service descriptors. A stack-level Kerberos Descriptor should not
+list any services; however a service-level Kerberos Descriptor should contain at least one.
+
+See [services](#services).
+
+<a name="service-level-identities"></a>
+
+#### Service-level Identities
+
+Service-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are common among all components of the service. Component-level identities may
+reference (and specialize) service-level identities by specifying a relative or an absolute path
+to it.
+
+For example if there was a service-level identity with the name "service_identity", then a child
+component may create an identity block that references and specializes it by setting its "reference"
+attribute to "../service_identity" or "/service_name/service_identity" and overriding any values as
+necessary. This does not override the service-level identity, it essentially creates a copy of it and
+updates the copy's properties.
+
+##### Examples
+
+```
+{
+ "name" : "relative_path_example",
+ "reference" : "../service_identity",
+ ...
+}
+```
+
+```
+{
+ "name" : "absolute_path_example",
+ "reference" : "/SERVICE/service_identity",
+ ...
+}
+```
+
+**Note**: By using the absolute path to an identity, any service-level identity may be referenced by
+any other service or component.
+
+See [identities](#identities).
+
+<a name="service-level-auth-to-local-properties"></a>
+
+#### Service-level Auth-to-local-properties
+
+Service-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="service-level-configurations"></a>
+
+#### Service-level Configurations
+
+Service-level configurations is an optional configurations block listing zero or more configuration
+descriptors that are common among all components within a service. Configuration descriptors may be
+overridden due to the structure of the data. However, overriding configuration properties may create
+undesired behavior since it is not known until after the Kerberization process is complete what value
+a property will have.
+
+See [configurations](#configurations).
+
+<a name="service-components"></a>
+
+#### Components
+
+Components is a list of zero or more component descriptor blocks.
+
+See [components](#components).
+
+<a name="component-level-identities"></a>
+
+#### Component-level Identities
+
+Component-level identities is an optional identities block containing a list of zero or more identity
+descriptors that are specific to the component. A Component-level identity may be referenced
+(and specialized) by using the absolute path to it (`/service_name/component_name/identity_name`).
+This does not override the component-level identity, it essentially creates a copy of it and updates
+the copy's properties.
+
+See [identities](#identities).
+
+<a name="component-level-auth-to-local-properties"></a>
+
+#### Component-level Auth-to-local-properties
+
+Component-level auth-to-local-properties is an optional list of zero or more configuration property
+specifications `(config-type/property_name[|concatenation_scheme])` indicating which properties should
+be updated with dynamically generated auth-to-local rule sets.
+
+See [auth-to-local-properties](#auth-to-local-properties).
+
+<a name="component-level-configurations"></a>
+
+#### Component-level Configurations
+
+Component-level configurations is an optional configurations block listing zero or more configuration
+descriptors that are specific to the component.
+
+See [configurations](#configurations).
+
+<a name="kerberos-descriptor-specifications"></a>
+
+### Kerberos Descriptor Specifications
+
+<a name="properties"></a>
+
+#### properties
+
+The `properties` block is only valid in the stack-level Kerberos Descriptor file. This block is
+a set of name/value pairs as follows:
+
+```
+"properties" : {
+ "property_1" : "value_1",
+ "property_2" : "value_2",
+ ...
+}
+```
+
+<a name="auth-to-local-properties"></a>
+
+#### auth-to-local-properties
+
+The `auth-to-local-properties` block is valid in the stack-, service-, and component-level
+descriptors. This block is a list of configuration specifications
+(`config-type/property_name[|concatenation_scheme]`) indicating which properties contain
+auth-to-local rules that should be dynamically updated based on the identities used within the
+Kerberized cluster.
+
+The specification optionally declares the concatenation scheme to use to append
+the rules into a rule set value. If specified, one of the following schemes may be set:
+
+- **`new_lines`** - rules in the rule set are separated by a new line (`\n`)
+- **`new_lines_escaped`** - rules in the rule set are separated by a `\` and a new line (`\n`)
+- **`spaces`** - rules in the rule set are separated by a whitespace character (effectively placing all rules in a single line)
+
+If not specified, the default concatenation scheme is `new_lines`.
+
+```
+"auth-to-local-properties" : [
+ "core-site/hadoop.security.auth_to_local",
+ "service.properties/http.authentication.kerberos.name.rules|new_lines_escaped",
+ ...
+]
+```
+
+<a name="configurations"></a>
+
+#### configurations
+
+A `configurations` block may exist in stack-, service-, and component-level descriptors.
+This block is a list of one or more configuration blocks containing a single structure named using
+the configuration type and containing values for each relevant property.
+
+Each property name and value may be a concrete value or contain variables to be replaced using values
+from the stack-level `properties` block or any available configuration. Properties from the `properties`
+block are referenced by name (`${property_name}`), configuration properties are reference by
+configuration specification (`${config-type/property_name}`) and kerberos principals are referenced by the principal path
+(`principals/SERVICE/COMPONENT/principal_name`).
+
+```
+"configurations" : [
+ {
+ "config-type-1" : {
+ "${cluster-env/smokeuser}_property" : "value1",
+ "some_realm_property" : "${realm}",
+ ...
+ }
+ },
+ {
+ "config-type-2" : {
+ "property-2" : "${cluster-env/smokeuser}",
+ ...
+ }
+ },
+ ...
+]
+```
+
+If `cluster-env/smokeuser` was `"ambari-qa"` and the realm was `"EXAMPLE.COM"`, the above block would
+effectively be translated to
+
+```
+"configurations" : [
+ {
+ "config-type-1" : {
+ "ambari-qa_property" : "value1",
+ "some_realm_property" : "EXAMPLE.COM",
+ ...
+ }
+ },
+ {
+ "config-type-2" : {
+ "property-2" : "ambari-qa",
+ ...
+ }
+ },
+ ...
+]
+```
+
+<a name="identities"></a>
+
+#### identities
+
+An `identities` descriptor may exist in stack-, service-, and component-level descriptors. This block
+is a list of zero or more identity descriptors. Each identity descriptor is a block containing a `name`,
+an optional `reference` identifier, an optional `principal` descriptor, and an optional `keytab`
+descriptor.
+
+The `name` property of an `identity` descriptor should be a concrete name that is unique within its
+`local` scope (stack, service, or component). However, to maintain backwards-compatibility with
+previous versions of Ambari, it may be a reference identifier to some other identity in the
+Kerberos Descriptor. This feature is deprecated and may not be available in future versions of Ambari.
+
+The `reference` property of an `identity` descriptor is optional. If it exists, it indicates that the
+properties from the referenced identity are to be used as the base for the current identity and any properties
+specified in the local identity block override the base data. In this scenario, the base data is copied
+to the local identities and therefore changes are realized locally, not globally. Referenced identities
+may be hierarchical, so a referenced identity may reference another identity, and so on. Because of
+this, care must be taken not to create cyclic references. Reference values must be in the form of a
+relative or absolute _path_ to the referenced identity descriptor. Relative _paths_ start with a `../`
+and may be specified in component-level identity descriptors to reference an identity descriptor
+in the parent service. Absolute _paths_ start with a `/` and may be specified at any level as follows:
+
+- **Stack-level** identity reference: `/identity_name`
+- **Service-level** identity reference: `/SERVICE_NAME/identity_name`
+- **Component-level** identity reference: `/SERVICE_NAME/COMPONENT_NAME/identity_name`
+
+```
+"identities" : [
+ {
+ "name" : "local_identity",
+ "principal" : {
+ ...
+ },
+ "keytab" : {
+ ...
+ }
+ },
+ {
+ "name" : "/smokeuser",
+ "principal" : {
+ "configuration" : "service-site/principal_property_name"
+ },
+ "keytab" : {
+ "configuration" : "service-site/keytab_property_name"
+ }
+ },
+ ...
+]
+```
+
+<a name="principal"></a>
+
+#### principal
+
+The `principal` block is an optional block inside an `identity` descriptor block. It declares the
+details about the identity’s principal, including the principal’s `value`, the `type` (user or service),
+the relevant `configuration` property, and a local username mapping. All properties are optional; however
+if no base or default value is available (via the parent identity's `reference` value) for all properties,
+the principal may be ignored.
+
+The `value` property of the principal is expected to be the normalized principal name, including the
+principal’s components and realm. In most cases, the realm should be specified using the realm variable
+(`${realm}` or `${kerberos-env/realm}`). Also, in the case of a service principal, "`_HOST`" should be
+used to represent the relevant hostname. This value is typically replaced on the agent side by either
+the agent-side scripts or the services themselves to be the hostname of the current host. However the
+built-in hostname variable (`${hostname}`) may be used if "`_HOST`" replacement on the agent-side is
+not available for the service. Examples: `smokeuser@${realm}`, `service/_HOST@${realm}`.
+
+The `type` property of the principal may be either `user` or `service`. If not specified, the type is
+assumed to be `user`. This value dictates how the identity is to be created in the KDC or Active Directory.
+It is especially important in the Active Directory case due to how accounts are created. It also,
+indicates to Ambari how to handle the principal and relevant keytab file reguarding the user interface
+behavior and data caching.
+
+The `configuration` property is an optional configuration specification (`config-type/property_name`)
+that is to be set to this principal's `value` (after its variables have been replaced).
+
+The `local_username` property, if supplied, indicates which local user account to use when generating
+auth-to-local rules for this identity. If not specified, no explicit auth-to-local rule will be generated.
+
+```
+"principal" : {
+ "value": "${cluster-env/smokeuser}@${realm}",
+ "type" : "user" ,
+ "configuration": "cluster-env/smokeuser_principal_name",
+ "local_username" : "${cluster-env/smokeuser}"
+}
+```
+
+```
+"principal" : {
+ "value": "component1/_HOST@${realm}",
+ "type" : "service" ,
+ "configuration": "service-site/component1.principal"
+}
+```
+
+<a name="keytab"></a>
+
+#### keytab
+
+The `keytab` block is an optional block inside an `identity` descriptor block. It describes how to
+create and store the relevant keytab file. This block declares the keytab file's path in the local
+filesystem of the destination host, the permissions to assign to that file, and the relevant
+configuration property.
+
+The `file` property declares an absolute path to use to store the keytab file when distributing to
+relevant hosts. If this is not supplied, the keytab file will not be created.
+
+The `owner` property is an optional block indicating the local user account to assign as the owner of
+the file and what access (`"rw"` - read/write; `"r"` - read-only) should
+be granted to that user. By default, the owner will be given read-only access.
+
+The `group` property is an optional block indicating which local group to assign as the group owner
+of the file and what access (`"rw"` - read/write; `"r"` - read-only; `""` - no access) should be granted
+to local user accounts in that group. By default, the group will be given no access.
+
+The `configuration` property is an optional configuration specification (`config-type/property_name`)
+that is to be set to the path of this keytab file (after any variables have been replaced).
+
+```
+"keytab" : {
+ "file": "${keytab_dir}/smokeuser.headless.keytab",
+ "owner": {
+ "name": "${cluster-env/smokeuser}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "${cluster-env/smokeuser_keytab}"
+}
+```
+
+<a name="services"></a>
+
+#### services
+
+A `services` block may exist in the stack-level and the service-level Kerberos Descriptor file.
+This block is a list of zero or more service descriptors to add to the Kerberos Descriptor.
+
+Each service block contains a service `name` and optional `identities`, `auth_to_local_properties`,
+`configurations`, and `components` blocks.
+
+```
+"services": [
+ {
+ "name": "SERVICE1_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ],
+ "components": [
+ ...
+ ]
+ },
+ {
+ "name": "SERVICE2_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ],
+ "components": [
+ ...
+ ]
+ },
+ ...
+]
+```
+
+<a name="components"></a>
+
+#### components
+
+A `components` block may exist within a `service` descriptor block. This block is a list of zero or
+more component descriptors belonging to the containing service descriptor. Each component descriptor
+is a block containing a component `name`, and optional `identities`, `auth_to_local_properties`,
+and `configurations` blocks.
+
+```
+"components": [
+ {
+ "name": "COMPONENT_NAME",
+ "identities": [
+ ...
+ ],
+ "auth_to_local_properties" : [
+ ...
+ ],
+ "configurations": [
+ ...
+ ]
+ },
+ ...
+]
+```
+
+<a name="examples"></a>
+
+### Examples
+
+#### Example Stack-level Kerberos Descriptor
+The following example is annotated for descriptive purposes. The annotations are not valid in a real
+JSON-formatted file.
+
+```
+{
+ // Properties that can be used in variable replacement operations.
+ // For example, ${keytab_dir} will resolve to "/etc/security/keytabs".
+ // Since variable replacement is recursive, ${realm} will resolve
+ // to ${kerberos-env/realm}, which in-turn will resolve to the
+ // declared default realm for the cluster
+ "properties": {
+ "realm": "${kerberos-env/realm}",
+ "keytab_dir": "/etc/security/keytabs"
+ },
+ // A list of global Kerberos identities. These may be referenced
+ // using /identity_name. For example the “spnego” identity may be
+ // referenced using “/spnego”
+ "identities": [
+ {
+ "name": "spnego",
+ // Details about this identity's principal. This instance does not
+ // declare any value for configuration or local username. That is
+ // left up to the services and components that wish to reference
+ // this principal and set overrides for those values.
+ "principal": {
+ "value": "HTTP/_HOST@${realm}",
+ "type" : "service"
+ },
+ // Details about this identity’s keytab file. This keytab file
+ // will be created in the configured keytab file directory with
+ // read-only access granted to root and users in the cluster’s
+ // default user group (typically, hadoop). To ensure that only
+ // a single copy exists on the file system, references to this
+ // identity should not override the keytab file details;
+ // however if it is desired that multiple keytab files are
+ // created, these values may be overridden in a reference
+ // within a service or component. Since no configuration
+ // specification is set, the keytab file location will not
+ // be set in any configuration file by default. Services and
+ // components need to reference this identity to update this
+ // value as needed.
+ "keytab": {
+ "file": "${keytab_dir}/spnego.service.keytab",
+ "owner": {
+ "name": "root",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ }
+ }
+ },
+ {
+ "name": "smokeuser",
+ // Details about this identity's principal. This instance declares
+ // a configuration and local username mapping. Services and
+ // components can override this to set additional configurations
+ // that should be set to this principal value. Overriding the
+ // local username may create undesired behavior since there may be
+ // conflicting entries in relevant auth-to-local rule sets.
+ "principal": {
+ "value": "${cluster-env/smokeuser}@${realm}",
+ "type" : "user",
+ "configuration": "cluster-env/smokeuser_principal_name",
+ "local_username" : "${cluster-env/smokeuser}"
+ },
+ // Details about this identity’s keytab file. This keytab file
+ // will be created in the configured keytab file directory with
+ // read-only access granted to the configured smoke user
+ // (typically ambari-qa) and users in the cluster’s default
+ // user group (typically hadoop). To ensure that only a single
+ // copy exists on the file system, references to this identity
+ // should not override the keytab file details; however if it
+ // is desired that multiple keytab files are created, these
+ // values may be overridden in a reference within a service or
+ // component.
+ "keytab": {
+ "file": "${keytab_dir}/smokeuser.headless.keytab",
+ "owner": {
+ "name": "${cluster-env/smokeuser}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "cluster-env/smokeuser_keytab"
+ }
+ }
+ ]
+}
+```
+
+#### Example Service-level Kerberos Descriptor
+The following example is annotated for descriptive purposes. The annotations are not valid in a real
+JSON-formatted file.
+
+```
+{
+ // One or more services may be listed in a service-level Kerberos
+ // Descriptor file
+ "services": [
+ {
+ "name": "SERVICE_1",
+ // Service-level identities to be created if this service is installed.
+ // Any relevant keytab files will be distributed to hosts with at least
+ // one of the components on it.
+ "identities": [
+ // Service-specific identity declaration, declaring all properties
+ // needed to initiate the creation of the principal and keytab files,
+ // as well as setting the service-specific configurations. This may
+ // be referenced by contained components using ../service1_identity.
+ {
+ "name": "service1_identity",
+ "principal": {
+ "value": "service1/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/service1.principal"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/service1.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": "r"
+ },
+ "configuration": "service1-site/service1.keytab.file"
+ }
+ },
+ // Service-level identity referencing the stack-level spnego
+ // identity and overriding the principal and keytab configuration
+ // specifications.
+ {
+ "name": "service1_spnego",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/service1.web.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/service1.web.keytab.file"
+ }
+ },
+ // Service-level identity referencing the stack-level smokeuser
+ // identity. No properties are being overridden.
+ // This ensures that the smokeuser principal is created and its
+ // keytab file is distributed to all hosts where components of
+ // this service are installed.
+ {
+ "name": "service1_smokeuser",
+ "reference": "/smokeuser"
+ }
+ ],
+ // Properties related to this service that require the auth-to-local
+ // rules to be dynamically generated based on identities created for
+ // the cluster.
+ "auth_to_local_properties" : [
+ "service1-site/security.auth_to_local"
+ ],
+ // Configuration properties to be set when this service is installed,
+ // no matter which components are installed
+ "configurations": [
+ {
+ "service-site": {
+ "service1.security.authentication": "kerberos",
+ "service1.security.auth_to_local": ""
+ }
+ }
+ ],
+ // A list of components related to this service
+ "components": [
+ {
+ "name": "COMPONENT_1",
+ // Component-specific identities to be created when this component
+ // is installed. Any keytab files specified will be distributed
+ // only to the hosts where this component is installed.
+ "identities": [
+ // An identity "local" to this component
+ {
+ "name": "component1_service_identity",
+ "principal": {
+ "value": "component1/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/comp1.principal",
+ "local_username" : "${service1-env/service_user}"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/s1c1.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": ""
+ },
+ "configuration": "service1-site/comp1.keytab.file"
+ }
+ },
+ // The stack-level spnego identity overridden to set component-specific
+ // configurations
+ {
+ "name": "component1_spnego_1",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/comp1.spnego.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp1.spnego.keytab.file"
+ }
+ },
+ // The stack-level spnego identity overridden to set a different set of component-specific
+ // configurations
+ {
+ "name": "component1_spnego_2",
+ "reference": "/spnego",
+ "principal": {
+ "configuration": "service1-site/comp1.someother.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp1.someother.keytab.file"
+ }
+ }
+ ],
+ // Component-specific configurations to set if this component is installed
+ "configurations": [
+ {
+ "service-site": {
+ "comp1.security.type": "kerberos"
+ }
+ }
+ ]
+ },
+ {
+ "name": "COMPONENT_2",
+ "identities": [
+ {
+ "name": "component2_service_identity",
+ "principal": {
+ "value": "component2/_HOST@${realm}",
+ "type" : "service",
+ "configuration": "service1-site/comp2.principal",
+ "local_username" : "${service1-env/service_user}"
+ },
+ "keytab": {
+ "file": "${keytab_dir}/s1c2.service.keytab",
+ "owner": {
+ "name": "${service1-env/service_user}",
+ "access": "r"
+ },
+ "group": {
+ "name": "${cluster-env/user_group}",
+ "access": ""
+ },
+ "configuration": "service1-site/comp2.keytab.file"
+ }
+ },
+ // The service-level service1_identity identity overridden to
+ // set component-specific configurations
+ {
+ "name": "component2_service1_identity",
+ "reference": "../service1_identity",
+ "principal": {
+ "configuration": "service1-site/comp2.service.principal"
+ },
+ "keytab": {
+ "configuration": "service1-site/comp2.service.keytab.file"
+ }
+ }
+ ],
+ "configurations" : [
+ {
+ "service-site" : {
+ "comp2.security.type": "kerberos"
+ }
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
diff --git a/versioned_docs/version-2.7.6/ambari-design/kerberos/kerberos_service.md b/versioned_docs/version-2.7.6/ambari-design/kerberos/kerberos_service.md
new file mode 100644
index 0000000..2045aa7
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/kerberos/kerberos_service.md
@@ -0,0 +1,341 @@
+---
+title: The Kerberos Service
+---
+<!---
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+- [Introduction](index.md)
+- [The Kerberos Descriptor](kerberos_descriptor.md)
+- [The Kerberos Service](#the-kerberos-service)
+ - [Configurations](#configurations)
+ - [kerberos-env](#kerberos-env)
+ - [krb5-conf](#krb5-conf)
+- [Enabling Kerberos](enabling_kerberos.md)
+
+<a name="the-kerberos-service"></a>
+
+## The Kerberos Service
+
+<a name="configurations"></a>
+
+### Configurations
+
+<a name="kerberos-env"></a>
+
+#### kerberos-env
+
+##### kdc_type
+
+The type of KDC being used.
+
+_Possible Values:_
+- `none`
+ - Ambari is not to integrate with a KDC. In this case, it is expected that the Kerberos identities
+will be created and the keytab files distributed manually
+- `mit-kdc`
+ - Ambari is to integrate with an MIT KDC
+- `active-directory`
+ - Ambari is to integrate with an Active Directory
+- `ipa`
+ - Ambari is to integrate with a FreeIPA server
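+
+These properties live in the `kerberos-env` configuration type and can be read or changed like any other Ambari configuration, for example with the bundled config helper script. A sketch only; the script location and flags may vary by Ambari version, and the host, cluster, and credentials below are placeholders:
+
+```bash
+# read the current kerberos-env configuration
+/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
+  get ambari.example.com MyCluster kerberos-env
+
+# point Ambari at an MIT KDC
+/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
+  set ambari.example.com MyCluster kerberos-env kdc_type mit-kdc
+```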
+
+##### manage_identities
+
+Indicates whether the Ambari-specified user and service Kerberos identities (principals and keytab files)
+should be managed (created, deleted, updated, etc...) by Ambari (`true`) or managed manually by the
+user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+##### create_ambari_principal
+
+Indicates whether the Ambari Kerberos identity (principal and keytab file used by Ambari, itself, and
+its views) should be managed (created, deleted, updated, etc...) by Ambari (`true`) or managed manually
+by the user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+This property depends on the value of `manage_identities`: if `manage_identities` is
+`false`, `create_ambari_principal` is assumed to be `false` as well.
+
+##### manage_auth_to_local
+
+Indicates whether the Hadoop auth-to-local rules should be managed by Ambari (`true`) or managed
+manually by the user (`false`).
+
+_Possible Values:_ `true`, `false`
+
+##### install_packages
+
+Indicates whether Ambari should install the Kerberos client packages (`true`) or not (`false`).
+If not, it is expected that Kerberos utility programs installed by the user (such as kadmin, kinit,
+klist, and kdestroy) are compatible with MIT Kerberos 5 version 1.10.3 in command line options and
+behaviors.
+
+_Possible Values:_ `true`, `false`
+
+##### ldap_url
+
+The URL to the Active Directory LDAP Interface. This value **must** indicate a secure channel using
+LDAPS since it is required for creating and updating passwords for Active Directory accounts.
+
+_Example:_ `ldaps://ad.example.com:636`
+
+If the `kdc_type` is `active-directory`, this property is mandatory.
+
+##### container_dn
+
+The distinguished name (DN) of the container used to store the Ambari-managed user and service principals
+within the configured Active Directory.
+
+_Example:_ `OU=hadoop,DC=example,DC=com`
+
+If the `kdc_type` is `active-directory`, this property is mandatory.
+
+##### encryption_types
+
+The supported (space-delimited) list of session key encryption types that should be returned by the KDC.
+
+_Default value:_ `aes des3-cbc-sha1 rc4 des-cbc-md5`
+
+##### realm
+
+The default realm to use when creating service principals
+
+_Example:_ `EXAMPLE.COM`
+
+This value is expected to be in all uppercase characters.
+
+##### kdc_hosts
+
+A comma-delimited list of IP addresses or FQDNs for the list of relevant KDC hosts. Optionally a
+port number may be included for each entry.
+
+_Example:_ `kdc.example.com, kdc1.example.com`
+
+_Example:_ `kdc.example.com:88, kdc1.example.com:88`
+
+##### admin_server_host
+
+The IP address or FQDN for the Kerberos administrative host. Optionally a port number may be included.
+
+_Example:_ `kadmin.example.com`
+
+_Example:_ `kadmin.example.com:88`
+
+If the `kdc_type` is `mit-kdc` or `ipa`, the value must be the FQDN of the Kerberos administrative host.
+
+##### master_kdc
+
+The IP address or FQDN of the master KDC host in a master-slave KDC deployment. Optionally a port
+number may be included.
+
+_Example:_ `kadmin.example.com`
+
+_Example:_ `kadmin.example.com:88`
+
+##### executable_search_paths
+
+A comma-delimited list of search paths to use to find Kerberos utilities like kadmin and kinit.
+
+_Default value:_ `/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin`
+
+##### password_length
+
+The required length for generated passwords.
+
+_Default value:_ `20`
+
+##### password_min_lowercase_letters
+
+The minimum number of lowercase letters (a-z) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_uppercase_letters
+
+The minimum number of uppercase letters (A-Z) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_digits
+
+The minimum number of digits (0-9) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_punctuation
+
+The minimum number of punctuation characters (?.!$%^*()-_+=~) required in generated passwords
+
+_Default value:_ `1`
+
+##### password_min_whitespace
+
+The minimum number of whitespace characters required in generated passwords
+
+_Default value:_ `0`
+
+##### service_check_principal_name
+
+The principal name to use when executing the Kerberos service check
+
+_Example:_ `${cluster_name}-${short_date}`
+
+##### case_insensitive_username_rules
+
+Force principal names to resolve to lowercase local usernames in auth-to-local rules
+
+_Possible values:_ `true`, `false`
+
+_Default value:_ `false`
+
+##### ad_create_attributes_template
+
+A Velocity template to use to generate a JSON-formatted document containing the set of attribute
+names and values needed to create a new Kerberos identity in the relevant Active Directory.
+
+Variables include:
+
+- `principal_name` - the components (primary and instance) portion of the principal
+- `principal_primary` - the _primary component_ of the principal name
+- `principal_instance` - the _instance component_ of the principal name
+- `realm` - the `realm` portion of the principal
+- `realm_lowercase` - the lowercase form of the `realm` of the principal
+- `normalized_principal` - the full principal value, including the component and realms parts
+- `principal_digest` - a binhexed-encoded SHA1 digest of the normalized principal
+- `principal_digest_256` - a binhexed-encoded SHA256 digest of the normalized principal
+- `principal_digest_512` - a binhexed-encoded SHA512 digest of the normalized principal
+- `password` - the generated password
+- `is_service` - `true` if the principal is a _service_ principal, `false` if the principal is a _user_ principal
+- `container_dn` - the `kerberos-env/container_dn` property value
+
+_Note_: A principal is made up of the following parts: primary component, instance component
+(optional), and realm:
+
+* User principal: **_`primary_component`_**@**_`realm`_**
+* Service principal: **_`primary_component`_**/**_`instance_component`_**@**_`realm`_**
+
+_Default value:_
+
+```
+{
+"objectClass": ["top", "person", "organizationalPerson", "user"],
+"cn": "$principal_name",
+#if( $is_service )
+"servicePrincipalName": "$principal_name",
+#end
+"userPrincipalName": "$normalized_principal",
+"unicodePwd": "$password",
+"accountExpires": "0",
+"userAccountControl": "66048"
+}
+```
+
+This property is mandatory and only used if the `kdc_type` is `active-directory`.
+
+##### kdc_create_attributes
+
+The set of attributes to use when creating a new Kerberos identity in the relevant (MIT) KDC.
+
+_Example:_ `-requires_preauth max_renew_life=7d`
+
+This property is optional and only used if the `kdc_type` is `mit-kdc`.
+
+##### ipa_user_group
+
+The group in IPA that user principals should be a member of.
+
+This property is optional and only used if the `kdc_type` is `ipa`.
+
+<a name="krb5-conf"></a>
+
+#### krb5-conf
+
+##### manage_krb5_conf
+
+Indicates whether the krb5.conf file should be managed (created, updated, etc...) by Ambari (`true`)
+or managed manually by the user (`false`).
+
+_Possible values:_ `true`, `false`
+
+_Default value:_ `false`
+
+##### domains
+
+A comma-separated list of domain names used to map server host names to the realm name.
+
+_Example:_ `host.example.com, example.com, .example.com`
+
+This property is optional.
+
+##### conf_dir
+
+The krb5.conf configuration directory.
+
+_Default value:_ `/etc`
+
+##### content
+
+Customizable krb5.conf template (Jinja template engine)
+
+_Default value:_
+
+```
+[libdefaults]
+ renew_lifetime = 7d
+ forwardable = true
+ default_realm = {{realm}}
+ ticket_lifetime = 24h
+ dns_lookup_realm = false
+ dns_lookup_kdc = false
+ default_ccache_name = /tmp/krb5cc_%{uid}
+ #default_tgs_enctypes = {{encryption_types}}
+ #default_tkt_enctypes = {{encryption_types}}
+{% if domains %}
+[domain_realm]
+{%- for domain in domains.split(',') %}
+ {{domain|trim()}} = {{realm}}
+{%- endfor %}
+{% endif %}
+[logging]
+ default = FILE:/var/log/krb5kdc.log
+ admin_server = FILE:/var/log/kadmind.log
+ kdc = FILE:/var/log/krb5kdc.log
+
+[realms]
+ {{realm}} = {
+{%- if master_kdc %}
+ master_kdc = {{master_kdc|trim()}}
+{%- endif -%}
+{%- if kdc_hosts > 0 -%}
+{%- set kdc_host_list = kdc_hosts.split(',') -%}
+{%- if kdc_host_list and kdc_host_list|length > 0 %}
+ admin_server = {{admin_server_host|default(kdc_host_list[0]|trim(), True)}}
+{%- if kdc_host_list -%}
+{%- if master_kdc and (master_kdc not in kdc_host_list) %}
+ kdc = {{master_kdc|trim()}}
+{%- endif -%}
+{% for kdc_host in kdc_host_list %}
+ kdc = {{kdc_host|trim()}}
+{%- endfor -%}
+{% endif %}
+{%- endif %}
+{%- endif %}
+ }
+
+{# Append additional realm declarations below #}
+```
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/ambari-metrics-whitelisting.md b/versioned_docs/version-2.7.6/ambari-design/metrics/ambari-metrics-whitelisting.md
new file mode 100644
index 0000000..6852652
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/ambari-metrics-whitelisting.md
@@ -0,0 +1,60 @@
+# Ambari Metrics - Whitelisting
+
+In large clusters (500+ nodes), performance issues are sometimes seen in AMS aggregations. In the ambari-metrics-collector log file, log lines like the following may appear:
+
+```
+20:51:30,952 INFO 2080712366@qtp-974606690-381 AsyncProcess:1597 - #1, waiting for 13948 actions to finish
+20:51:31,601 INFO 1279097595@qtp-974606690-359 AsyncProcess:1597 - #1, waiting for 19376 actions to finish
+```
+
+In Ambari 3.0.0, we are tackling these performance issues through a complete schema and aggregation logic revamp. Until then, AMS whitelisting can be used to reduce the number of metrics tracked by AMS, thereby addressing this scale problem.
+
+## How do we enable whitelisting in AMS?
+
+**Until Ambari 2.4.3**
+
+A metric whitelist file can be used to track the set of metrics in AMS. All other metrics will be discarded.
+
+**STEPS**
+
+* The metric whitelist file is present in /etc/ambari-metrics-collector/conf. If it is not present in older Ambari versions, it can be downloaded from https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-timelineservice/conf/unix/metrics_whitelist to the collector host.
+* Add the config ams-site : timeline.metrics.whitelist.file = <path_to_whitelist_file>
+* Restart the AMS collector.
+* Verify the whitelisting config was used: in the ambari-metrics-collector log file, look for the line 'Whitelisting # metrics' (see the check below).
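+
+The verification step can be done from a shell on the collector host; a minimal sketch, with the log path taken from the Operations page:
+
+```bash
+# confirm the whitelist was picked up after the collector restart
+grep -i "Whitelisting" /var/log/ambari-metrics-collector/ambari-metrics-collector.log
+```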
+
+**From Ambari 2.5.0 onwards**
+From Ambari 2.5.0, more refinements for whitelisting were included.
+
+* **App Blacklisting** - Blacklist metrics from one or more services. Other service metrics will be entirely allowed or controlled through a whitelist file.
+
+ ```
+ ams-site : timeline.metrics.apps.blacklist = hbase,namenode
+ ```
+
+* **App Whitelisting** - Whitelist metrics from one or more services.
+
+ ```
+ ams-site:timeline.metrics.apps.whitelist = nimbus,datanode
+ ```
+
+ NOTE : The App name can be found from the metadata URL :
+
+ ```
+ http:<metrics_collector_host>:6188/ws/v1/timeline/metrics/metadata
+ ```
+
+* **Metric Whitelisting** - Same as the whitelisting method in Ambari 2.4.3 (through a whitelist file).
+In addition to supplying metric names in the whitelist file, patterns can also be supplied using the ._p_ prefix. For example, patterns can be specified as follows:
+
+._p_dfs.FSNamesystem.*
+
+._p_jvm.JvmMetrics*
+
+An example of a metric whitelisting file that has both metrics and patterns - https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-timelineservice/src/test/resources/test_data/metric_whitelist.dat.
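+
+For a quick illustration, a minimal whitelist file mixing exact names and patterns could look like the following sketch (the metric names are illustrative):
+
+```
+regionserver.Server.totalRequestCount
+jvm.JvmMetrics.MemHeapUsedM
+._p_dfs.FSNamesystem.*
+._p_jvm.JvmMetrics*
+```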
+
+These whitelisting/blacklisting techniques can be used together.
+
+* If you just have timeline.metrics.whitelist.file = <some_file>, only metrics in that file will be allowed (irrespective of whatever apps might be sending them).
+* If you just have timeline.metrics.apps.blacklist = datanode, all datanode metrics will be disallowed. Metrics from all other services will be allowed.
+* If you just have timeline.metrics.apps.whitelist = namenode, it is not useful since there is no blacklisting at all.
+* If you have metric whitelisting enabled (through a file), and have timeline.metrics.apps.blacklist = datanode, all datanode metrics will be disallowed. The whitelisted metrics from other services will be allowed.
+* If you have timeline.metrics.apps.blacklist = datanode, timeline.metrics.apps.whitelist = namenode and metric whitelisting enabled (through a file), datanode metrics will be blacklisted, all namenode metrics will be allowed, and whitelisted metrics from other services will be allowed.
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/ambari-server-metrics.md b/versioned_docs/version-2.7.6/ambari-design/metrics/ambari-server-metrics.md
new file mode 100644
index 0000000..380fa38
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/ambari-server-metrics.md
@@ -0,0 +1,109 @@
+# Ambari Server Metrics
+
+## Outline
+Ambari Server can be used to manage clusters ranging from a few tens of nodes to 1000+ nodes. In large clusters, or clusters with sub-optimal infrastructure, capturing Ambari Server performance can be useful for tuning the server as well as guiding future performance optimization efforts. Through this feature, a Metrics Source-Sink framework has been implemented within the Ambari Server which facilitates fine-grained control of the various metric sources as well as eases the addition of future metric sources.
+
+Specifically, Ambari server JVM and database (EclipseLink) metric sources have been wired up to send metrics to AMS, and visualized through Grafana dashboards.
+
+## Configuration / Enabling
+* To enable Ambari Server metrics, make sure the following config file exists during Ambari Server start/restart - /etc/ambari-server/conf/metrics.properties.
+* Currently, only 2 metric sources have been implemented - JVM Metric Source and Database Metric Source.
+* To add or remove a metric source to be tracked, modify the following config in the metrics.properties file (a consolidated sketch follows this list).
+ ```
+ metric.sources=jvm,database
+ ```
+* Source-specific configs are discussed in the metric source sections below.
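+
+For reference, a minimal metrics.properties enabling both sources might look like the following sketch (the property names come from the config tables below; the values shown are the documented defaults):
+
+```
+metric.sources=jvm,database
+source.jvm.class=org.apache.ambari.server.metrics.system.impl.JvmMetricsSource
+source.jvm.interval=10
+source.database.class=org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource
+source.database.monitor.dumptime=60000
+```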
+
+## Metric Sources
+
+Name|Functionality|Interface|Implementation(s)
+----|-------------|---------|-----------------------
+Metrics Service |Serves as a starting point for the Metrics system.<br></br>Loads metrics configuration.<br></br>Initializes the sink. If the sink is not properly initialized (AMS is not yet deployed), it tries to re-initialize every 5 minutes asynchronously.<br></br>Initializes and starts configured sources. | org.apache.ambari.server.metrics.system.MetricsService | org.apache.ambari.server.metrics.system.impl.MetricsServiceImpl
+Metric Source | Any sub-component of Ambari Server that has metrics of interest.<br></br>Needs subset of metrics configuration corresponding to the source and the Sink to be initialized.<br></br>Periodically publishes metrics to the Sink.<br></br>Example - JVM, database etc. | org.apache.ambari.server.metrics.system.MetricsSource |org.apache.ambari.server.metrics.system.impl.JvmMetricsSource<br></br>org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource
+Metric Sink | Flushes the metrics to an external metrics collection system (Metrics Collector) | org.apache.ambari.server.metrics.system.MetricsSink | org.apache.ambari.server.metrics.system.impl.AmbariMetricSinkImp
+
+### JVM Metrics
+
+**Working**
+
+* Collects and publishes Ambari Server JVM related metrics using Codahale library.
+* Metrics collected for GC, Buffers, Threads, Memory and File descriptor.
+* To enable this source, add "jvm" to the metric.sources config in metrics.properties and restart Ambari Server.
+
+**Configs**
+
+Config Name|Default Value|Explanation
+-----------|-------------|---------------------
+source.jvm.class | org.apache.ambari.server.metrics.system.impl.JvmMetricsSource | Class used to collect JVM Metrics.
+source.jvm.interval | 10 | Interval, in seconds, used to denote how often metrics should be collected.
+
+**Grafana dashboard**
+
+* The 'Ambari Server - JVM' dashboard represents the metrics captured from the JvmMetricsSource.
+* Contains memory, GC and thread related graphs that might be of interest on a non-performing system.
+
+### Database Metrics
+
+**Working**
+
+The EclipseLink PerformanceMonitor has been extended to support a custom Ambari Database Metrics source. It provides monitoring data per entity and per operation on the entity.
+
+The Performance Monitor provides 2 kinds of metrics -
+
+* Counter - Number of occurrences of the operation / query. For these metrics, the metric name starts with Counter.
+* Timer - Total cumulative time spent on the operation / query. For these metrics, the metric name starts with Timer.
+
+For example, some of the metrics collected by the Database Metrics Source:
+
+* Counter.ReadObjectQuery.HostRoleCommandEntity.readHostRoleCommandEntity
+
+* Timer.ReadAllQuery.StackEntity.StackEntity.findByNameAndVersion.ObjectBuilding
+
+
+In addition to the Counter & Timer metrics collected from EclipseLink, a computed metric of Timer/Counter (divided by) is also sent. This metric provides the average time taken for an operation across time.
+
+For example, if
+
+```
+ Counter Metric : Counter.ReadAllQuery.HostRoleCommandEntity = 50
+ Timer Metric : Timer.ReadAllQuery.HostRoleCommandEntity = 10000
+ Computed Metric (Avg time for the operation) : ReadAllQuery.HostRoleCommandEntity = 200 (10000 divided by 50)
+```
+
+As seen above, the computed metric name will be the same as the Timer & Counter metric except without the 'Timer.' / 'Counter.' prefix.
+
+To enable this source, add "**database**" to the **metric.sources** config in metrics.properties and restart Ambari Server.
+
+**Configs**
+
+Config Name|Default Value|Explanation
+-----------|-------------|---------------------
+source.database.class | org.apache.ambari.server.metrics.system.impl.DatabaseMetricsSource | Class used to collect Database Metrics from extended Performance Monitor class - org.apache.ambari.server.metrics.system.impl.AmbariPerformanceMonitor.
+source.database.performance.monitor.query.weight | HEAVY | EclipseLink Performance monitor granularity : NONE / NORMAL / HEAVY / ALL
+source.database.monitor.dumptime | 60000 | Collection interval in milliseconds
+source.database.monitor.entities | Cluster(.*)Entity,Host(.*)Entity,ExecutionCommandEntity, ServiceComponentDesiredStateEntity,Alert(.*)Entity,StackEntity,StageEntity | Only these entities' metrics will be collected and tracked. (org.apache.ambari.server.orm.entities).
+source.database.monitor.query.keywords.include | CacheMisses | Include some metrics which have the keyword even if they are not part of requested Entities.
+
+**Grafana dashboards**
+
+Ambari database metrics have been represented in 2 Grafana dashboards.
+
+* 'Ambari Server - Database' dashboard
+ * An aggregate dashboard that displays Total ReadAllQuery, Cache Hits, Cache Misses, Query Stages, Query Types across all entities.
+ * It also contains an example of how to visualize Timer, Counter and Avg Timing data for a specific entity - HostRoleCommandEntity.
+* 'Ambari Server - Top N Entities' dashboard
+ * Shows Top N entities that have maximum number of ReadAllQuery operations done on them.
+ * Shows Top N entities that the database spent the most time in ReadAllQuery operations.
+ * Shows Top N entities that have maximum Cache Misses.
+
+These dashboard graphs are meant to provide an example of how to create graphs that query specific entities or operations in an ad hoc manner.
+
+## Disabling Ambari Server metrics globally
+
+* Add the following config to /etc/ambari-server/conf/ambari.properties
+ * ambariserver.metrics.disable=true
+* Restart Ambari Server
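+
+In shell form, a sketch of the above:
+
+```bash
+# append the kill switch and restart the server
+echo "ambariserver.metrics.disable=true" >> /etc/ambari-server/conf/ambari.properties
+ambari-server restart
+```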
+
+## Related JIRA
+
+[AMBARI-17589](https://issues.apache.org/jira/browse/AMBARI-17589)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/configuration.mdx b/versioned_docs/version-2.7.6/ambari-design/metrics/configuration.mdx
new file mode 100644
index 0000000..78d05c2
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/configuration.mdx
@@ -0,0 +1,190 @@
+# Configuration
+
+## Metrics Collector
+
+Configuration Type| File Path | Comment
+---------------|-------------------------------------------------|----------------------------------------
+ams-site | /etc/ambari-metrics-collector/conf/ams-site.xml |Settings that control the API daemon and the aggregator threads.
+ams-env | /etc/ambari-metrics-collector/conf/ams-env.sh |Memory / PATH settings for the API daemon
+ams-hbase-site | /etc/ams-hbase/conf/hbase-site.xml<br></br>/etc/ambari-metrics-collector/conf/hbase-site.xml |Settings for the HBase storage used for the metrics data.
+ams-hbase-env | /etc/ams-hbase/conf/hbase-env.sh |Memory / PATH settings for the HBase storage.<br></br>**Note**: In embedded mode, the heap memory setting for the master and regionserver is summed up as the total memory for the single HBase daemon.
+
+## Metrics Monitor
+
+Configuration Type| File Path | Comment
+---------------|-------------------------------------------------|----------------------------------------
+ams-env |/etc/ambari-metrics-monitor/conf/ams-env.sh |Used for log and pid dir modifications, this is the same configuration as above, common to both components.
+metric_groups |/etc/ambari-metrics-monitor/conf/metric_groups.conf |Not available in the UI. Used to control what **HOST/SYSTEM** metrics are reported.
+metric_monitor |/etc/ambari-metrics-monitor/conf/metric_monitor.ini |Not available in the UI. Settings for the monitor daemon.
+
+## Metric Collector - ams-site - Configuration details
+
+* Modifying the retention interval for time-aggregated data. Refer to the Aggregation section for more information on aggregation: API spec.
+(Note: In Ambari 2.0 and 2.1, the Phoenix version does not support ALTER TTL queries, so these can be modified from the UI only at install time. Please refer to the Known Issues section for a workaround.)
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.ttl |86400 |1 minute resolution data purge interval. Default is 1 day.
+timeline.metrics.host.aggregator.minute.ttl |604800 |Host based X minutes resolution data purge interval. Default is 7 days.<br></br>(X = configurable interval, default interval is 2 minutes)
+timeline.metrics.host.aggregator.hourly.ttl |2592000 |Host based hourly resolution data purge interval. Default is 30 days.
+timeline.metrics.host.aggregator.daily.ttl |31536000 |Host based daily resolution data purge interval. Default is 1 year.
+timeline.metrics.cluster.aggregator.minute.ttl |2592000 |Cluster wide minute resolution data purge interval. Default is 30 days.
+timeline.metrics.cluster.aggregator.hourly.ttl |31536000 |Cluster wide hourly resolution data purge interval. Default is 1 year.
+timeline.metrics.cluster.aggregator.daily.ttl |63072000 |Cluster wide daily resolution data purge interval. Default is 2 years.
+**Note**: The precision table at 1 minute resolution stores raw precision data for 1 day; when a user queries for the past 1 hour of data, the AMS API returns raw precision data.
+
+* Modifying the aggregation intervals for HOST and CLUSTER aggregators.
+On wake-up, the aggregator threads resume from (last run time + interval) as long as the last run time is not too old.
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.minute.interval |120 |Time in seconds to sleep for the minute resolution host based aggregator. Default resolution is 2 minutes.
+timeline.metrics.host.aggregator.hourly.interval |3600 |Time in seconds to sleep for the hourly resolution host based aggregator. Default resolution is 1 hour.
+timeline.metrics.host.aggregator.daily.interval |86400 |Time in seconds to sleep for the day resolution host based aggregator. Default resolution is 24 hours.
+timeline.metrics.cluster.aggregator.minute.interval |120 |Time in seconds to sleep for the minute resolution cluster wide aggregator. Default resolution is 2 minutes.
+timeline.metrics.cluster.aggregator.hourly.interval |3600 |Time in seconds to sleep for the hourly resolution cluster wide aggregator. Default is 1 hour.
+timeline.metrics.cluster.aggregator.daily.interval |86400 |Time in seconds to sleep for the day resolution cluster wide aggregator. Default is 24 hours.
+
+* Modifying checkpoint information. The aggregators store the timestamp or last run time on the local FS.
+After reading the last run time, the aggregator thread decides to aggregate as long as (currentTime - lastRunTime) < multiplier * aggregation_interval.
+The multiplier is configurable for each aggregator; for example, with a 2-minute interval and a multiplier of 2, a checkpoint older than 4 minutes is discarded.
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.minute.checkpointCutOffMultiplier |2 |Multiplier value * interval = Max allowed checkpoint lag. Effectively, if the aggregator checkpoint is older than the max allowed checkpoint delay, the checkpoint will be discarded by the aggregator.
+timeline.metrics.host.aggregator.hourly.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.host.aggregator.daily.checkpointCutOffMultiplier |1 |Same as above
+timeline.metrics.cluster.aggregator.minute.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.cluster.aggregator.hourly.checkpointCutOffMultiplier |2 |Same as above
+timeline.metrics.cluster.aggregator.daily.checkpointCutOffMultiplier |1 |Same as above
+timeline.metrics.aggregator.checkpoint.dir |/var/lib/ambari-metrics-collector/checkpoint |Directory to store aggregator checkpoints. Change to a permanent location so that checkpoints are not lost.
+
+* Other important configuration properties
+
+
+Property | Default Value | Description
+---------------|-------------------------------------------------|----------------------------------------
+timeline.metrics.host.aggregator.*.disabled |false |Disable host based * aggregations. ( * => minute/hourly/daily)
+timeline.metrics.cluster.aggregator.*.disabled |false |Disable cluster based * aggregations. ( * => minute/hourly/daily)
+timeline.metrics.cluster.aggregator.minute.timeslice.interval |30 |Lowest resolution of desired data for cluster level minute aggregates.
+timeline.metrics.hbase.data.block.encoding |FAST_DIFF| Codecs are enabled on a table by setting the DATA_BLOCK_ENCODING property. Default encoding is FAST_DIFF. This can be changed only before creating tables.
+timeline.metrics.hbase.compression.scheme |SNAPPY |Compression codes need to be installed and available before setting the scheme. Default compression is SNAPPY. Disable by setting to None. This can be changed only before creating tables.
+timeline.metrics.service.default.result.limit |5760 |Max result limit on number of rows returned. Calculated as follows: 4 aggregate metrics/min * 60 * 24: Retrieve aggregate data for 1 day.
+timeline.metrics.service.checkpointDelay |60 |Time in seconds to sleep on the first run or when the checkpoint is too old.
+timeline.metrics.service.resultset.fetchSize |2000 |JDBC resultset prefetch size for aggregator queries.
+timeline.metrics.service.cluster.aggregator.appIds |datanode,nodemanager,hbase |List of application ids to use for aggregating host level metrics for an application. Example: bytes_read across Yarn Nodemanagers.
+
+## Configuring Ambari Metrics service in distributed mode
+
+In distributed mode, Metric Collector writes go to the HDFS of the cluster. Currently distributed mode does not support a multi-node Metric Collector; however, the plan is to allow the Metric Collector to scale horizontally to a multi-node HBase storage layer.
+
+**Note**: Make sure there is a local Datanode hosted with the Collector; it gives AMS HBase the distinct advantage of writes and reads sharded across the data volumes available to the DN.
+
+The following steps need to be performed either at install time or after deployment to configure the Metric Collector in distributed mode (a scripted sketch follows the list). Note: If configuring after install, the data will not be automatically copied over to HDFS.
+
+1. Edit ams-site, Set timeline.metrics.service.operation.mode = distributed
+2. Edit ams-hbase-site,
+ - Set hbase.rootdir = hdfs://namenode-host:8020/user/ams/hbase [ If NN HA is enabled, hdfs://nameservice-id/user/ams/hbase ]
+ (Note: /user/ams/hbase here is the directory where metric data will be stored in HDFS)
+ - Set hbase.cluster.distributed = true
+ - Add dfs.client.read.shortcircuit = true (This is an optimization with a local DN present)
+3. Restart Metrics Collector
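+
+A sketch of the same changes using Ambari's bundled config helper script (the script location and flags may vary by Ambari version; host and cluster names are placeholders):
+
+```bash
+CFG=/var/lib/ambari-server/resources/scripts/configs.sh
+# switch AMS to distributed mode
+$CFG -u admin -p admin set ambari.example.com MyCluster ams-site timeline.metrics.service.operation.mode distributed
+# point AMS HBase at HDFS and enable distributed HBase
+$CFG -u admin -p admin set ambari.example.com MyCluster ams-hbase-site hbase.rootdir hdfs://namenode-host:8020/user/ams/hbase
+$CFG -u admin -p admin set ambari.example.com MyCluster ams-hbase-site hbase.cluster.distributed true
+$CFG -u admin -p admin set ambari.example.com MyCluster ams-hbase-site dfs.client.read.shortcircuit true
+```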
+
+**Note**: In Ambari 2.0.x, there is a bug in deploying AMS in distributed mode, if Namenode HA is enabled. Please follow the instruction listed in this JIRA as workaround steps: ([AMBARI-10707](https://issues.apache.org/jira/browse/AMBARI-10707))
+
+**Note**: In Ambari 2.2.1, stack advisor changes the dependent configs for distributed mode automatically through recommendations. Ideally, the only config that needs to be changed is timeline.metrics.service.operation.mode = distributed. The other configs - hbase.rootdir, hbase.cluster.distributed and dfs.client.read.shortcircuit will be changed automatically.
+
+## Migrating data from embedded to distributed mode
+
+Steps to migrate existing metric data to HDFS and starting AMS in distributed mode:
+
+* Stop AMS Metric Collector
+* Create an HDFS directory for the ams user. Example:
+
+ ```bash
+ su - hdfs -c 'hdfs dfs -mkdir /user/ams'
+ su - hdfs -c 'hdfs dfs -chown ams:hadoop /user/ams'
+ ```
+* Copy the metric data from the AMS local directory (existing value of hbase.rootdir in ams-hbase-site) to HDFS directory. Example:
+
+ ```bash
+ cd /var/lib/ambari-metrics-collector/
+ su - hdfs -c 'hdfs dfs -copyFromLocal hbase hdfs://<namenode-http-address>:8020/user/ams/'
+ su - hdfs -c 'hdfs dfs -chown -R ams:hadoop /user/ams/hbase'
+ ```
+
+* Start the Metric Collector after making the changes needed for distributed mode.
+
+## Enabling HBase Region, User and Table Metrics
+
+Ambari disables HBase metrics (per region, per user and per table) by default. HBase metrics can be numerous and can cause performance issues. HBase RegionServer metrics are available by default.
+
+If you want HBase (per region, per user and per table) metrics to be collected by Ambari, you can do the following. It is **highly recommended** that you test turning on this option and confirm that your AMS performance is acceptable.
+
+### Step-by-step guide
+1. On the Ambari Server, browse to:
+
+ ```bash
+ /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates
+ ```
+
+2. When Ambari is older than 2.7.0, on the Ambari Server, browse to:
+
+ ```bash
+ /var/lib/ambari-server/resources/common-services/HBASE/$VERSION/package/templates
+ ```
+
+3. Edit the following template files:
+
+ ```bash
+ hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
+ hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
+ ```
+
+4. Comment out (or remove) the following lines:
+
+ ```bash
+ *.source.filter.class=org.apache.hadoop.metrics2.filter.RegexFilter
+ hbase.*.source.filter.exclude=.*(Regions|Users|Tables).*
+ ```
+
+5. Save the template files and restart Ambari Server and then HBase for the changes to take effect.
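+
+Steps 3-4 can also be done non-interactively; a hedged sketch, assuming the Ambari 2.7+ template path from step 1:
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates
+# comment out the filter class and the region/user/table exclude filter in both templates
+sed -i -e 's/^\(\*\.source\.filter\.class.*\)/# \1/' \
+       -e 's/^\(hbase\..*source\.filter\.exclude.*\)/# \1/' \
+  hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2 \
+  hadoop-metrics2-hbase.properties-GANGLIA-RS.j2
+```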
+
+:::tip
+If you upgrade Ambari to a newer version, you will need to re-apply this change to the template file.
+:::
+
+## Enabling HDFS per-user Metrics
+
+HDFS per-user metrics aren't emitted by default. Exercise caution before enabling them, and make sure to refer to the details of the client and service port numbers below.
+
+To be able to use the HDFS - Users dashboard in your Grafana instance as well as to view metrics for HDFS per user, you will need to add these custom properties to your configuration.
+
+### Step-by-step guide
+In Ambari, under HDFS > Configs > Advanced > Custom hdfs-site, add the following properties:
+
+```
+dfs.namenode.servicerpc-address=<namenodehost>:8021
+
+ipc.8020.callqueue.impl=org.apache.hadoop.ipc.FairCallQueue
+
+ipc.8020.backoff.enable=true
+
+ipc.8020.scheduler.impl=org.apache.hadoop.ipc.DecayRpcScheduler
+
+ipc.8020.scheduler.priority.levels=3
+
+ipc.8020.decay-scheduler.backoff.responsetime.enable=true
+
+ipc.8020.decay-scheduler.backoff.responsetime.thresholds=10,20,30
+```
+
+**Things to Consider**
+
+* client port: 8020 (if different, replace it with the appropriate port in all keys)
+* service port: 8021 (if different, replace it with the appropriate port in the first value)
+* namenodehost: needs to be an FQDN.
+
+Once these properties are added, the configuration should look like this.
+
+
+**Restart HDFS & you should see the metrics being emitted. You should now also be able to use the HDFS - Users Dashboard in Grafana.**
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/ams-arch.jpg b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/ams-arch.jpg
new file mode 100644
index 0000000..df41aa6
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/ams-arch.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/connect-phoenix.png b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/connect-phoenix.png
new file mode 100644
index 0000000..dac73fd
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/connect-phoenix.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png
new file mode 100644
index 0000000..51eeed6
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/enabling-hdfs-user-metrics.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/hosts-metadata.png b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/hosts-metadata.png
new file mode 100644
index 0000000..7bd26bd
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/hosts-metadata.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/metrics-datastructure.png b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/metrics-datastructure.png
new file mode 100644
index 0000000..fe56c81
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/metrics-datastructure.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/metrics-metadata.png b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/metrics-metadata.png
new file mode 100644
index 0000000..4d1f8c2
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/metrics-metadata.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/restart-datanode.png b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/restart-datanode.png
new file mode 100644
index 0000000..31b0f0d
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/imgs/restart-datanode.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/index.md b/versioned_docs/version-2.7.6/ambari-design/metrics/index.md
new file mode 100644
index 0000000..63c92e3
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/index.md
@@ -0,0 +1,22 @@
+# Metrics
+
+**Ambari Metrics System** ("AMS") is a system for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+## Terminology
+
+Term | Definition
+--------------------------------|-------------------------------------------------------------
+Ambari Metrics System (“AMS”) | The built-in metrics collection system for Ambari.
+Metrics Collector | The standalone server that collects metrics, aggregates metrics, serves metrics from the Hadoop service sinks and the Metrics Monitor.
+Metrics Monitor | Installed on each host in the cluster to collect system-level metrics and forward to the Metrics Collector.
+Metrics Hadoop Sinks | Plugs into the various Hadoop components sinks to send Hadoop metrics to the Metrics Collector.
+
+## Architecture
+The following image depicts the high-level conceptual architecture of the new Ambari Metrics System:
+
+
+
+The **Metrics Collector** is a daemon that receives data from registered publishers (the Monitors and Sinks). The Collector itself is built using Hadoop technologies such as HBase, Phoenix and ATS. The Collector can store data on the local filesystem (referred to as "embedded mode") or use an external HDFS (referred to as "distributed mode").
+
+## Learn More
+Browse the following to learn more about the [Ambari Metrics REST API](./metrics-api-specification.md) specification and about advanced [Configuration](./configuration.mdx) of AMS.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/metrics-api-specification.md b/versioned_docs/version-2.7.6/ambari-design/metrics/metrics-api-specification.md
new file mode 100644
index 0000000..72ae688
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/metrics-api-specification.md
@@ -0,0 +1,63 @@
+---
+title: Ambari Metrics API specification
+---
+
+The Ambari REST API supports metric queries at CLUSTER, HOST, COMPONENT and HOST COMPONENT levels.
+
+Broadly, the types of metrics queries supported are: **time range** or **point in time**.
+
+Following is an illustration of an API call that fetches metrics from the Metrics backend service using the Ambari API.
+
+## CLUSTER
+
+E.g.: Dashboard metrics : Fetch load average across all nodes of a cluster
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?fields=metrics/load[1430844925,1430848525,15]&_=1430848532904
+```
+The above API call retrieves the load average, aggregated across all hosts in the cluster.
+
+The request part of the API call selects the cluster instance, while the predicate includes the metric with the time range query, followed by the current time in milliseconds.
+
+Time range query:
+
+Field | Value |Comment
+-----------|------------|----------------------------------------
+Start time | 1430844925 |Start time for the time range. (Epoch)
+End time | 1430848525 |End time of the time range. (Epoch)
+Step | 15 |Default step, this is used only for zero padding or null padding if the padding interval cannot be determined from the retrieved data.
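+
+As a concrete sketch, the same call can be issued with curl (assuming the default admin credentials; replace the host, cluster, and credentials for your environment):
+
+```bash
+# -g turns off curl globbing so the [start,end,step] brackets pass through
+curl -g -u admin:admin \
+  "http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?fields=metrics/load[1430844925,1430848525,15]"
+```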
+
+## HOST
+
+E.g.: Host metrics: Get the cpu utilization on a particular host in the cluster
+
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>?fields=metrics/cpu/cpu_user[1430844610,1430848210,15],metrics/cpu/cpu_wio[1430844610,1430848210,15],metrics/cpu/cpu_nice[1430844610,1430848210,15],metrics/cpu/cpu_aidle[1430844610,1430848210,15],metrics/cpu/cpu_system[1430844610,1430848210,15],metrics/cpu/cpu_idle[1430844610,1430848210,15]&_=1430848217591
+```
+
+The above API call retrieves all cpu related metrics required to chart out cpu utilization on a host page.
+
+The request part of the above API call selects the host which is queried while the predicate part includes the metric names with time range query.
+
+## COMPONENT
+
+E.g.: Service metrics: Get the capacity utilization metrics aggregated across all datanodes but only the latest value (point in time)
+
+```
+ http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/HDFS/components/DATANODE?fields=metrics/dfs/datanode/DfsUsed,metrics/dfs/datanode/Capacity&_=1430849798630
+```
+
+The above API call retrieves two metric values, which represent the point-in-time values for the requested metrics, obtained from the Metrics Service backend (non-JMX).
+
+For a call to get JMX metrics directly from a Hadoop daemon, use the metric name that corresponds to the JMX MBean metric, for example: metrics/dfs/FSNamesystem/CapacityUsedGB (refer to Stack Defined Metrics for more info).
+
+The request part of the above API call selects the service from the cluster while predicate part includes the metrics names.
+
+## HOST COMPONENT
+E.g.: Daemon metrics: Get the heap memory usage for active Namenode
+
+```
+http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/NAMENODE?fields=metrics/jvm/memHeapCommittedM[1430847303,1430850903,15],metrics/jvm/memHeapUsedM[1430847303,1430850903,15]&_=1430850903846
+```
+
+The above API call retrieves JVM heap metrics for the Active Namenode in the cluster.
+
+The request part of the API selects the Namenode host component, while the predicate part includes the metrics with the time range query.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/metrics-collector-api-specification.md b/versioned_docs/version-2.7.6/ambari-design/metrics/metrics-collector-api-specification.md
new file mode 100644
index 0000000..a9e90fa
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/metrics-collector-api-specification.md
@@ -0,0 +1,232 @@
+# Metrics Collector API Specification
+
+## Sending Metrics to AMS (POST)
+
+Sending metrics to Ambari Metrics Service can be achieved through the following API call.
+
+The Sink implementations responsible for sending metrics to AMS buffer data for 1 minute before sending. TimelineMetricCache provides a simple cache implementation to achieve this behavior.
+
+Sample sink implementation used by Hadoop daemons: https://github.com/apache/ambari/tree/trunk/ambari-metrics/ambari-metrics-hadoop-sink
+
+```uri
+POST http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics
+```
+
+```json
+{
+ "metrics": [
+ {
+ "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
+ "appid": "amssmoketestfake",
+ "hostname": "ambari20-5.c.pramod-thangali.internal",
+ "timestamp": 1432075898000,
+ "starttime": 1432075898000,
+ "metrics": {
+ "1432075898000": 0.963781711428,
+ "1432075899000": 1432075898000
+ }
+ }
+ ]
+}
+```
+
+```
+Connecting (POST) to <ambari-metrics-collector>:6188/ws/v1/timeline/metrics/
+Http response: 200 OK
+```
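+
+For manual testing, the same POST can be issued with curl; a sketch, assuming the JSON payload above has been saved as metrics.json:
+
+```bash
+# collector host placeholder as above; the payload file name is an assumption
+curl -i -X POST -H "Content-Type: application/json" \
+  -d @metrics.json "http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics"
+```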
+
+## Fetching Metrics from AMS (GET)
+
+**Sample call**
+```
+GET http://<ambari-metrics-collector>:6188/ws/v1/timeline/metrics?metricNames=AMBARI_METRICS.SmokeTest.FakeMetric&appId=amssmoketestfake&hostname=<hostname>&precision=seconds&startTime=1432075838000&endTime=1432075959000
+Http response: 200 OK
+Http data:
+{
+ "metrics": [
+ {
+ "timestamp": 1432075898089,
+ "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
+ "appid": "amssmoketestfake",
+ "hostname": "ambari20-5.c.pramod-thangali.internal",
+ "starttime": 1432075898000,
+ "metrics": {
+ "1432075898000": 0.963781711428,
+ "1432075899000": 1432075898000
+ }
+ }
+ ]
+}
+```
+
+**Generic GET call format**
+```uri
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=<>&hostname=<>&appId=<>&startTime=<>&endTime=<>&precision=<>
+```
+
+**Query Parameters Explanation**
+
+Parameter|Optional/Mandatory|Explanation|Values it can take
+---------|------------------|-----------|-------------------
+metricNames | Mandatory | Comma-separated list of metrics that are required. | disk_free,mem_free... etc
+appId | Mandatory |The AppId that corresponds to the metricNames that were requested. Currently, only 1 AppId is required and allowed. | HOST/namenode/datanode/nimbus/hbase/kafka_broker/FLUME_HANDLER etc
+hostname | Optional | Comma separated list of hostnames. When not specified, cluster aggregates are returned. | h1,h2..etc
+startTime, endTime | Optional | Start and End time values. If not specified, the last data point of the metric is returned. | epoch times in seconds or milliseconds
+precision | Optional | The precision at which data needs to be returned. If not specified, the precision is calculated based on the time range requested (table below). |SECONDS/MINUTES/DAYS/HOURS
+
+**Precision query parameter (Default resolution)**
+
+Query Time range | Resolution of returned metrics | Comments
+-----------------|--------------------------------|------------------------------------------
+Up to 2 hours | SECONDS | 10 second data for host metrics.<br></br>30 second data for an aggregated query (no host specified).
+2 hours - 1 day | MINUTES | 5 minute data
+1 day - 30 days | HOURS | 1 hour data
+More than 30 days | DAYS | 1 day data
+
+**Specifying Aggregate Functions**
+
+The metricName can have a specific aggregate function qualifier after the metricName (as shown below) to request specific aggregates. Valid values are ._avg, ._max, ._min, ._sum. When an aggregate query is requested without an aggregate function in the metricName, the default is AVG.
+Examples
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._avg,regionserver.Server.writeRequestCount._max&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.readRequestCount,regionserver.Server.writeRequestCount._max&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+
+**Specifying Post processing Functions**
+
+Similar to aggregate functions, post processing functions can also be specified. Currently, we have 2 post processing functions - rate (Rate per second) and diff (difference between consecutive values). Post processing functions can also be applied after aggregate functions.
+Examples
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._rate,regionserver.Server.writeRequestCount._diff&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.readRequestCount._max._diff&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+**Specifying Wild Cards**
+
+Both metricNames and hostname take wildcard (%) values to select a group of metrics (or hosts). A query can also combine full metric names and names with wildcards.
+
+Examples
+
+```
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=master.AssignmentManger.ritCount,regionserver.Server.%&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+
+http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.%&hostname=abc.testdomain12%.devlocal&appId=hbase&startTime=14000000&endTime=14200000
+```
+
+
+**Downsampling**
+
+As discussed before, AMS downsamples data when higher time ranges are requested. The default "downsampled across time" data returned is AVG. Specific downsamples can be requested by adding the aggregate function qualifiers (._avg, ._max, ._min, ._sum) to the metric names, in the same way as requesting aggregates across the cluster.
+Example
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics?metricNames=regionserver.Server.totalRequestCount._max&hostname=abc.testdomain124.devlocal&appId=hbase&startTime=14000000&endTime=14200000&precision=MINUTES
+```
+The above query returns 5 minute data for the metric, where the data point value is the MAX of the values found in every 5 minute range.
+
+## AMS Metadata API
+
+AMS has 2 metadata endpoints that are useful for finding out the set of metrics it received, as well as the topology of the cluster.
+
+**METRICS METADATA**
+
+Endpoint :
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics/metadata
+```
+
+Data returned : A mapping from the set of APP_IDs to the list of metrics received with that AppId.
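+
+The endpoint can be queried directly; a sketch with curl (host placeholder as above):
+
+```bash
+curl "http://<AMS_HOST>:6188/ws/v1/timeline/metrics/metadata"
+```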
+
+Sample data returned
+
+
+
+**HOSTS METADATA**
+
+Endpoint :
+```
+ http://<AMS_HOST>:6188/ws/v1/timeline/metrics/hosts
+```
+Data returned : A mapping between the hosts in the cluster and the set of APP_IDs on the host.
+
+Sample data returned
+
+
+
+## Guide to writing your own Sink
+* Include the ambari-metrics-common artifacts from source or maven-central (when available) into your project
+* Find below helpful info regarding common data-structures to use from the ambari-metrics-common module
+* Extend the org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink class and implement the required methods
+* Use the org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache to store intermediate data until it is time to send (example: collection interval = 10 seconds, send interval = 1 minute). The cache implementation provides the logic needed for buffering and local aggregation.
+* Use org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink#emitMetrics to send metrics to AMS backend.
+
+**METRIC DATA STRUCTURE**
+
+Source location for common data structures module: https://github.com/apache/ambari/tree/trunk/ambari-metrics/ambari-metrics-common/
+
+Example sink implementation: https://github.com/apache/ambari/blob/trunk/ambari-metrics/ambari-metrics-hadoop-sink/
+
+
+
+**INTERNAL PHOENIX KEY STRUCTURE**
+
+The Metric Record Key data structure is described below:
+
+Property|Type|Comment|Optional
+--------|----|--------|---------------
+Metric Name | String | First key part, important consideration while querying from HFile storage | N
+Hostname | String | Second key part | N
+Server time | Long | Timestamp on server when first metric write request was received | N
+Application Id | String | Uniquely identify service | N
+Instance Id | String | Second key part to identify instance/ component | Y
+Start time | Long | Start of the timeseries data |
+
+
+**HOW AGGREGATION WORKS**
+
+* The granularity of aggregate data can be controlled by setting wake up interval for each of the aggregator threads.
+* Presently we support 2 types of aggregators, HOST and APPLICATION with 3 time dimensions, per minute, per hour and per day.
+ * The HOST aggregates are just aggregates on precision data across the supported time dimensions.
+ * The APP aggregates are across appId. Note: We ignore instanceId for APP level aggregates. Same time dimensions apply for APP level aggregates.
+ * We also support HOST level metrics for APP, meaning you can expect a system metric example: "cpu_user" to be aggregated across datanodes, effectively calculating system metric for hosted apps.
+* Each aggregator performs checkpointing by storing the last successful time of completion in a file. If the checkpoint is too old, the aggregators will discard it and aggregate data only for the configured interval, that is, data between (now - interval) and now.
+* Refer to [Phoenix table schema](./operations.md) for details of tables and records.
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/operations.md b/versioned_docs/version-2.7.6/ambari-design/metrics/operations.md
new file mode 100644
index 0000000..5bfd1f6
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/operations.md
@@ -0,0 +1,151 @@
+# Operations
+
+## Metrics Collector
+
+**Pid file locations**
+
+Daemon | Default User | Pid File Path
+---------------|-------------------------------------------------|----------------------------------------
+Metrics Collector API |ams |/var/run/ambari-metrics-collector/ambari-metrics-collector.pid
+Metrics Collector Hbase |ams |/var/run/ambari-metrics-collector/hbase-ams-master.pid
+
+**Log file locations**
+
+Daemon | Log File Path
+---------------|------------------------------------------------
+Metrics Collector API |/var/log/ambari-metrics-collector/ambari-metrics-collector.log<br></br>/var/log/ambari-metrics-collector/ambari-metrics-collector.out
+Metrics Collector HBase |/var/log/ambari-metrics-collector/hbase-ams-master-<hostname>.log<br></br>/var/log/ambari-metrics-collector/hbase-ams-master-<hostname>.out
+
+**Manually restart Metrics Collector**
+
+Stop command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ stop'
+```
+
+Start command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ start'
+```
+
+## Metrics Monitor
+
+**Pid File location**
+
+```
+/var/run/ambari-metrics-monitor/ambari-metrics-monitor.pid
+```
+
+**Log File location**
+
+```
+/var/log/ambari-metrics-monitor/ambari-metrics-monitor.out
+```
+
+**Manually restart Metrics Monitor**
+Stop command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf stop'
+```
+
+Start command
+
+```bash
+su - ams -c '/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf start'
+```
+
+## Build Instructions
+
+The ambari-metrics-assembly package builds the assemblies (rpm/deb/msi) for various platforms.
+
+The following binaries can be found in the ambari-metrics-assembly/target folder after the build is successful.
+
+```
+ambari-metrics-collector-<ambari-version>.<arch>
+ambari-metrics-monitor-<ambari-version>.<arch>
+ambari-hadoop-sink-<ambari-version>.<arch>
+```
+
+**Note**: Ambari Metrics needs to be built before Ambari Server
+
+### RPM packages
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-rpm
+```
+
+### Debian packages
+Same instructions as above; change the maven target to build-deb.
+
+### Windows msi
+TBU
+
+### Command line parameters
+
+Parameter | Default Value | Comment
+---------------|------------------|------------------------------
+hbase.tar | http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.0.0/tars/hbase-1.1.1.2.3.0.0-2557.tar.gz | HBase tarball. This is default version for Ambari 2.1.2
+hbase.folder | hbase-1.1.1.2.3.0.0-2557 |-
+hadoop.tar | http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.0.0/tars/hadoop-2.7.1.2.3.0.0-2557.tar.gz | Hadoop tarball, used for native libs. This is default version for Ambari 2.1.2
+hadoop.folder | hadoop-2.7.1.2.3.0.0-2557 |-
+
+**Note**
+
+After the change introduced by [AMBARI-18915](https://issues.apache.org/jira/browse/AMBARI-18915) (Update AMS pom to use Apache hbase, hadoop, phoenix tarballs), AMS uses Hadoop tars downloaded from Apache by default. Since that version of libhadoop is not built with libsnappy, the following config change in ams-site is needed for AMS to start up correctly.
+
+**timeline.metrics.hbase.compression.scheme = None**
+
+## Disk space utilization guidance
+
+Num of Nodes | METRIC_RECORD (MB) | METRIC_RECORD_MINUTE (MB) | METRIC_RECORD_HOURLY (MB) | METRIC_RECORD_DAILY (MB) | METRIC_AGGREGATE (MB) | METRIC_AGGREGATE_MINUTE (MB) | METRIC_AGGREGATE_HOURLY (MB) | METRIC_AGGREGATE_DAILY (MB) | TOTAL (GB)
+-------|----------|-----------|----------|----------|---------|--------|----------|-----------------|-------------
+50 | 5120 | 2700 | 245 | 10 | 1500 |305 |28 |1 |10
+100 | 10240 | 5400 | 490 | 20 | 1500 |305 |28 |1 |18
+300 | 30720 | 16200 | 1470 | 60 | 1500 |305 |28 |1 |49
+500 | 51200 | 27000 | 2450 | 100 | 1500 |305 |28 |1 |81
+800 | 81920 | 43200 | 3920 | 160 | 1500 |305 |28 |1 |128
+
+**NOTE**:
+
+The above guidance has been derived from looking at AMS disk utilization in actual clusters.
+The ACTUAL numbers were obtained by observing a cluster with the basic services (HDFS, YARN, HBase) installed along with Storm, Kafka and Flume.
+Kafka and Flume generate metrics only while a job is running. If those services are used heavily, additional disk space is recommended. Sample jobs were run with Storm and Kafka while deriving these numbers to make sure they contribute to the totals.
+
+**Actual disk utilization data**
+
+Num of Nodes | METRIC_RECORD (MB) | METRIC_RECORD_MINUTE (MB) | METRIC_RECORD_HOURLY (MB) | METRIC_RECORD_DAILY (MB) | METRIC_AGGREGATE (MB) | METRIC_AGGREGATE_MINUTE (MB) | METRIC_AGGREGATE_HOURLY (MB) | METRIC_AGGREGATE_DAILY (MB) | TOTAL (GB)
+-------|----------|-----------|----------|----------|---------|--------|----------|-----------------|-------------
+2 | 120 | 175 | 17 | 1 | 545 | 136 | 16 | 1 | 1
+3 | 294 | 51 | 3.4 | 1 | 104 | 26 | 1.8 | 1 | 0.5
+10 | 1024 | 540 | 49 | 2 | 1433.6 | 305 | 28 | 1 | 3.3
+
+## Phoenix Schema
+
+### Phoenix Tables
+
+Table Name | Description | Purge Interval(default)
+------------------------|---------------------------------------------------------------------------|-------------------------
+METRIC_RECORD | Data per metric per host at 10 seconds precision with 1 minute aggregates.| 1 day
+METRIC_RECORD_MINUTE | Data per metric per host at 5 minute precision | 1 week
+METRIC_RECORD_HOURLY | Data per metric per host at 1 hour precision | 30 days
+METRIC_RECORD_DAILY | Data per metric per host at 1 day precision | 1 year
+METRIC_AGGREGATE | Cluster wide aggregates per metric at 30 seconds precision | 1 week
+METRIC_AGGREGATE_MINUTE | Cluster wide aggregates per metric at 5 minute precision | 30 days
+METRIC_AGGREGATE_HOURLY | Cluster wide aggregates per metric at 1 hour precision | 1 year
+METRIC_AGGREGATE_DAILY | Cluster wide aggregates per metric at 1 day precision | 2 years
+
+### Connecting to Phoenix
+
+* Unpack the Phoenix (4.2.0+) tarball onto the Metrics Collector host
+* Change directory to phoenix-4.*/bin
+* Edit sqlline.py, search for "java", and replace it with the full path to the java executable, for example: "/usr/jdk64/jdk1.8.0_40/bin/java"
+* Connect command for Ambari versions 2.2.0 and below: `./sqlline.py localhost:61181:/hbase`
+* Connect command for Ambari versions above 2.2.0:
+
+```bash
+# embedded mode
+./sqlline.py localhost:61181:/ams-hbase-unsecure
+# distributed mode
+./sqlline.py <cluster-zookeeper-quorum-host>:<cluster_zookeeper_port>:/ams-hbase-unsecure
+```
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/stack-defined-metrics.md b/versioned_docs/version-2.7.6/ambari-design/metrics/stack-defined-metrics.md
new file mode 100644
index 0000000..e535f4b
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/stack-defined-metrics.md
@@ -0,0 +1,79 @@
+# Stack Defined Metrics
+
+The Ambari Stack definition is the complete declarative description of the Services that comprise a cluster.
+
+The stack definition also contains a definition file for all metrics that are supported by the Service.
+
+Presently, metrics.json describes the mapping between the metric name requested in the REST API and the metric name to use when making a call to the Metrics Service.
+
+Location of the **metrics.json** in the stack:
+
+Level|Location|Comment
+-----|--------|-------
+Cluster & Host | ganglia_properties.json | Presently, this file defines metrics for Host Component and Service Components as well but these are only used for older versions of stack < 2.0 and unit tests.<br></br>The Cluster and Host sections of this json file drive the Dashboard graphs.
+Component & Host Component | common-services.<SERVICE_NAME> | This file contains definition of metrics mapping for Ambari Metrics (type = ganglia) and JMX.
+
+**Note**: Individual stacks that override behavior from common services can redefine the metrics.json file. The inheritance is all-or-nothing: if a metrics.json file is present in the child stack, it overrides the metrics.json from common-services.
+
+**Structure of metrics.json file**
+
+Key|Allowed Values|Comments
+-----|--------|-------------
+Type |"ganglia" / "jmx" |type = ganglia implies that the Metrics Service request is fulfilled by either a Ganglia (up to stack version 2.0) or an Ambari Metrics (2.0 and above) backend service; this decision is made by the Ambari server at runtime.
+Category | "default" / "performance" ... |This groups metrics into subsets for better navigability
+Metrics |metricKey : { <br></br>"metricName":<br></br>"pointInTime":<br></br>"temporal":<br></br>} | metricKey = Key to be used by the REST API. This is unique for a service and identifies the requested metric, as well as which endpoint to use for serving the data (AMS vs JMX)<br></br>metricName = Name to use for the Metrics Service backend<br></br>pointInTime = Get the latest value; no time range query allowed<br></br>temporal = Time range query supported
+
+Example:
+
+```json
+{
+  "NAMENODE": {
+    "Component": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/dfs/FSNamesystem/TotalLoad": {
+              "metric": "dfs.FSNamesystem.TotalLoad",
+              "pointInTime": false,
+              "temporal": true
+            }
+          }
+        }
+      }
+    ],
+    "HostComponent": [
+      { "type": "ganglia", ... },
+      { "type": "jmx", .... }
+    ]
+  }
+}
+```
+
+**Sample API calls to retrieve metric definitions**:
+
+Service metrics:
+```
+Template => http://<ambari-server>:<port>/api/v1/stacks/<stackName>/versions/<stackVersion>/services/<serviceName>/artifacts/metrics_descriptor
+Example => http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/HDFS/artifacts/metrics_descriptor
+```
+Cluster & Host metrics:
+```
+Template => http://<ambari-server>:<port>/api/v1/stacks/<stackName>/versions/<stackVersion>/artifacts/metrics_descriptor
+Example => http://localhost:8080/api/v1/stacks/HDP/versions/2.3/artifacts/metrics_descriptor
+```
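+
+For instance, the HDFS service metric definitions from the example above can be fetched with curl (admin credentials are placeholders):
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' \
+  'http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/HDFS/artifacts/metrics_descriptor'
+```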
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/troubleshooting.md b/versioned_docs/version-2.7.6/ambari-design/metrics/troubleshooting.md
new file mode 100644
index 0000000..cab1ea2
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/troubleshooting.md
@@ -0,0 +1,145 @@
+# Troubleshooting
+
+## Cleaning up Ambari Metrics System Data
+
+The following steps help clean up Ambari Metrics System data in a given cluster.
+
+Important Note:
+
+1. Cleaning up the AMS data removes all the historical AMS data available
+2. The hbase parameters mentioned below are specific to AMS; they are different from the cluster HBase parameters
+
+### Step-by-step guide
+
+1. Using Ambari
+ * Set AMS to maintenance
+ * Stop AMS from Ambari. Identify the following from the AMS Configs screen
+ * 'Metrics Service operation mode' (embedded or distributed)
+ * hbase.rootdir
+ * hbase.zookeeper.property.dataDir
+2. AMS data is stored in the 'hbase.rootdir' identified above. Back up and remove the AMS data.
+ * If the Metrics Service operation mode
+ * is 'embedded', then the data is stored in OS files. Use regular OS commands to backup and remove the files in hbase.rootdir
+ * is 'distributed', then the data is stored in HDFS. Use 'hdfs dfs' commands to backup and remove the files in hbase.rootdir
+3. Remove the AMS zookeeper data by backing up and removing the contents of 'hbase.tmp.dir'/zookeeper
+4. Remove any Phoenix spool files from 'hbase.tmp.dir'/phoenix-spool folder
+5. Restart AMS using Ambari
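+
+As an illustration, a minimal sketch of steps 2-4 for **embedded** mode, assuming hbase.rootdir is /var/lib/ambari-metrics-collector/hbase and hbase.tmp.dir is /var/lib/ambari-metrics-collector/hbase-tmp (both paths are assumptions; read the actual values from the AMS Configs screen first):
+
+```bash
+# Back up the AMS data before removing anything
+tar czf /tmp/ams-data-backup.tar.gz /var/lib/ambari-metrics-collector/hbase
+
+# Remove the HBase data, the AMS zookeeper data and any Phoenix spool files
+rm -rf /var/lib/ambari-metrics-collector/hbase/*
+rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*
+rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*
+```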
+
+## Moving Metrics Collector to a new host
+
+1. Stop AMS Service
+
+2. Execute the following API call to delete the Metrics Collector. (Replace server-host, cluster-name and host-name; host-name is the current Metrics Collector host.)
+
+```
+curl -u admin:admin -H "X-Requested-By:ambari" -i -X DELETE http://<server-host>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/METRICS_COLLECTOR
+```
+
+3. Execute the following API call to add the Metrics Collector to a new host. (Replace server-host, cluster-name and host-name.)
+
+```
+curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST http://<server-host>:8080/api/v1/clusters/<cluster-name>/hosts/<host-name>/host_components/METRICS_COLLECTOR
+```
+
+4. Install the Metrics Collector component from the Host page of the new host.
+
+5. If AMS is in embedded mode, copy the AMS data from the old node to the new node.
+
+ * For embedded mode (ams-site: timeline.metrics.service.operation.mode), copy the hbase.rootdir and hbase.tmp.dir contents over to the new host from the old collector host.
+ * For distributed mode, since AMS HBase is writing to HDFS, no change is necessary.
+ * Ensure that ams:hbase-site:hbase.rootdir and hbase.tmp.dir are pointing to the correct locations on the new AMS node
+6. Start the Metrics Service.
+
+7. The service daemons will be pointing to the old metrics collector host. Perform a rolling restart of slave components and a normal restart of Master components for them to pick up the new collector host.
+
+Note: Restarting services is not needed after Ambari 2.5.0, since live collector information is maintained in the cluster ZooKeeper.
+
+
+
+## Troubleshooting Guide
+
+The following page documents common problems discovered with the Ambari Metrics Service and provides a guide to things to look out for, as well as already solved problems.
+
+**Important facts to collect from the system**:
+
+**Problems with Metrics Collector host**
+* Output of "rpm -qa | grep ambari" on the collector host.
+* Total available system memory, output of: "free -g"
+* Total available disk space and available partitions, output of: "df -h"
+* Total number of hosts in the cluster
+* Configs: /etc/ams-hbase/conf/hbase-env.sh, /etc/ams-hbase/conf/hbase-site.xml, /etc/ambari-metrics-collector/conf/ams-env.sh, /etc/ambari-metrics-collector/conf/ams-site.xml
+* Collector logs:
+
+```
+/var/log/ambari-metrics-collector/ambari-metrics-collector.log, /var/log/ambari-metrics-collector/hbase-ams-master-<host>.log, /var/log/ambari-metrics-collector/hbase-ams-master-<host>.out
+Note: Additionally, If distributed mode is enabled, /var/log/ambari-metrics-collector/hbase-ams-zookeeper-<host>.log, /var/log/ambari-metrics-collector/hbase-ams-regionserver-<host>.log
+```
+
+* Response to the following URLs -
+
+```
+http://<ams-host>:6188/ws/v1/timeline/metrics/metadata
+http://<ams-host>:6188/ws/v1/timeline/metrics/hosts
+```
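+
+A sketch for capturing those two responses as files (the AMS host is a placeholder):
+
+```bash
+curl -o metadata.json 'http://<ams-host>:6188/ws/v1/timeline/metrics/metadata'
+curl -o hosts.json 'http://<ams-host>:6188/ws/v1/timeline/metrics/hosts'
+```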
+
+* The response will be JSON and can be attached as a file.
+* From AMS HBase Master UI - http://<METRICS_COLLECTOR_HOST>:61310
+
+
+ * Region Count
+ * StoreFile Count
+ * JMX Snapshot - http://<METRICS_COLLECTOR_HOST>:61310/jmx
+
+
+**Problems with Metric Monitor host**
+
+```
+Monitor log file: /var/log/ambari-metrics-monitor/ambari-metrics-monitor.out
+```
+
+**Check out [Configurations - Tuning](https://cwiki.apache.org/confluence/display/AMBARI/Configurations+-+Tuning) for scale issue troubleshooting.**
+
+**Issue 1: AMS HBase process slow disk writes**
+
+The symptoms and resolutions below address the **embedded** mode of AMS only.
+
+_Symptoms_:
+
+Behavior|How to detect
+--------|--------------
+High CPU usage | HBase process on Collector host taking up close to 100% of every core
+HBase Log: Compaction times | Run `grep "Finished memstore flush" hbase-ams-master-<host>.log`.<br></br>This yields MB written in X milliseconds; generally 128 MBps and above is average speed unless the disk is contended.<br></br>The same search also reveals how many times compaction ran per minute. A value greater than 6 or 8 is a warning that write volume is far greater than what HBase can hold in memory
+HBase Log: ZK timeout | HBase crashes saying the zookeeper session timed out. This happens because in embedded mode the zk session timeout is limited to a max of 30 seconds (HBase issue: fix planned for 2.1.3).<br></br>The cause is again slow disk reads.
+Collector Log: "waiting for some tasks to finish" | ambari-metrics-collector log shows messages where AsyncProcess writes are queued up
+
+_Resolutions_:
+
+Configuration Change|Description
+--------|-----------------------
+ams-hbase-site :: hbase.rootdir | Change this path to a disk mount that is not heavily contended.
+ams-hbase-site :: hbase.tmp.dir | Change this path to a location different from hbase.rootdir
+ams-hbase-env :: hbase_master_heapsize<br></br>ams-hbase-site :: hbase.hregion.memstore.flush.size | Bump this value up so more data is held in memory to address I/O speeds.<br></br>If heap size is increased and resident memory usage does not go up, this parameter can be changed to address how much data can be stored in a memstore per Region. Default is set to 128 MB. The size is in bytes.<br></br>Be careful with modifying this value, generally limit the setting between 64 MB (small heap with fast disk write), to 512 MB (large heap > 8 GB, and average write speed), since more data held in memory means longer time to write it to disk during a Flush operation.
+
+**Issue 2: Ambari Metrics take a long time to load**
+
+_Symptoms_:
+
+Behavior|How to detect
+--------|--------------
+Graphs: Loading time too long<br></br>Graphs: No data available | Check out service pages / host pages for metric graphs
+Socket read timeouts | ambari-server.log shows: Error message saying socket timeout for metrics
+Ambari UI slowing down | Host page loading time is high, heatmaps do not show data<br></br>Dashboard loading time is too high<br></br>Multiple sessions result in slowness
+
+_Resolutions_:
+
+Upgrade to 2.1.2+ is highly recommended.
+
+The following fixes in the 2.1.2 release should greatly help alleviate the slow loading and timeouts:
+
+https://issues.apache.org/jira/browse/AMBARI-12654
+
+https://issues.apache.org/jira/browse/AMBARI-12983
+
+https://issues.apache.org/jira/browse/AMBARI-13108
+
+## [Known Issues](https://cwiki.apache.org/confluence/display/AMBARI/Known+Issues)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/metrics/upgrading-ambari-metrics-system.md b/versioned_docs/version-2.7.6/ambari-design/metrics/upgrading-ambari-metrics-system.md
new file mode 100644
index 0000000..e61a8d7
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/metrics/upgrading-ambari-metrics-system.md
@@ -0,0 +1,21 @@
+# Upgrading Ambari Metrics System
+
+**Upgrading from Ambari 2.0 or 2.0.1 to 2.1**
+
+1. Upgrade the Ambari server and perform the needed post-upgrade checks (make sure all services are up and running).
+2. Stop Ambari Metrics service
+3. Execute the following command on all hosts.
+
+ ```bash
+ yum upgrade -y ambari-metrics-monitor ambari-metrics-hadoop-sink
+ ```
+ (Use the appropriate package manager on Ubuntu and Windows; see the sketch after this list.)
+
+4. Execute the following command on host running Metrics Collector
+
+ ```bash
+ yum upgrade -y ambari-metrics-collector
+ ```
+
+5. Start Ambari Metrics Service
+6. The Sink jars will be deployed on every host; the daemons will pick up the changes to the sink implementations when they are restarted (e.g., HDFS NameNode / DataNode).
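+
+A sketch of step 3 for Ubuntu hosts, assuming the Debian packages carry the same names as the yum packages above:
+
+```bash
+apt-get update
+apt-get install --only-upgrade -y ambari-metrics-monitor ambari-metrics-hadoop-sink
+```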
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/quick-links.md b/versioned_docs/version-2.7.6/ambari-design/quick-links.md
new file mode 100644
index 0000000..0fea1e4
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/quick-links.md
@@ -0,0 +1,225 @@
+# Quick Links
+
+## Introduction
+
+A service can add a list of quick links to the Ambari web UI by adding meta info to a file following a predefined JSON format. The Ambari server parses the quick link JSON file and provides its content to the UI, so that the Ambari web UI can calculate quick link URLs based on that information and populate the quick links drop-down list accordingly.
+
+## Design
+
+By default, the JSON file is called quicklinks.json and is located in the quicklinks directory under the service root directory. For example, for Oozie, the file is OOZIE/quicklinks/quicklinks.json. You can also name the file differently as well as put it in a custom directory under the service root directory.
+
+
+Using YARN as an example, the following is what the metainfo.xml looks like with the quick links configurations.
+
+```xml
+<services>
+ <service>
+ <name>YARN</name>
+ <version>2.7.1.2.3</version>
+ <quickLinksConfigurations>
+ <quickLinksConfiguration>
+ <fileName>quicklinks.json</fileName>
+ <default>true</default>
+ </quickLinksConfiguration>
+ </quickLinksConfigurations>
+ </service>
+</services>
+```
+
+The metainfo.xml can have different quick links configuration as shown here for MapReduce2.
+
+The _quickLinksConfigurations-dir_ is an optional field that tells the Ambari Server where to load the quicklinks.json file. We can skip it if we want the service to use the default _quicklinks_ directory.
+
+```xml
+<service>
+ <name>MAPREDUCE2</name>
+ <version>2.7.1.2.3</version>
+ <quickLinksConfigurations-dir>quicklinks-mapred</quickLinksConfigurations-dir>
+ <quickLinksConfigurations>
+ <quickLinksConfiguration>
+ <fileName>quicklinks.json</fileName>
+ <default>true</default>
+ </quickLinksConfiguration>
+ </quickLinksConfigurations>
+</service>
+```
+
+A quick link JSON file has two major sections: the "configuration" section for determining the protocol (HTTP vs HTTPS), and the "links" section for the meta information of each quick link to be displayed on the Ambari web UI. The JSON file also includes a "name" section at the top that defines the name of the quick links JSON file that the server uses for identification.
+
+Ambari web UI uses information provided in the "configuration" section to determine if the service is running against HTTP or HTTPS. The result is used to construct all quick link URLs defined in the "links" section.
+
+Using YARN as an example, the following is what the quicklinks.json looks like:
+
+```json
+{
+ "name": "default",
+ "description": "default quick links configuration",
+ "configuration": {
+ "protocol": {
+ # type tells the UI which protocol to use if all checks pass.
+
+ # Use https_only or http_only with empty checks section to explicitly specify the type
+ "type":"https",
+ "checks":[ # There can be more than one check needed.
+ {
+ "property":"yarn.http.policy",
+ # The desired value is either a specific value for the specified property,
+ # or whether the property value should exist or not_exist, be blank or not_blank
+ "desired":"HTTPS_ONLY",
+ "site":"yarn-site"
+ }
+ ]
+ },
+ #configuration for individual links
+ "links": [
+ {
+ "name": "resourcemanager_ui",
+ "label": "ResourceManager UI",
+ "requires_user_name": "false", #set this to true if UI should attach log in user name to the end of the quick link url
+ "url": "%@://%@:%@",
+
+ #section to calculate the port number.
+ "port":{
+ #use a property for the whole url if the service does not have a property for the port.
+ #Specify the regex so the url can be parsed for the port value.
+ "http_property": "yarn.timeline-service.webapp.address",
+ "http_default_port": "8080",
+ "https_property": "yarn.timeline-service.webapp.https.address",
+ "https_default_port": "8090",
+ "regex": "\\w*:(\\d+)",
+ "site": "yarn-site"
+ }
+ },
+ {
+ "name": "resourcemanager_logs",
+ "label": "ResourceManager logs",
+ "requires_user_name": "false",
+ "url": "%@://%@:%@/logs",
+ "port":{
+ "http_property": "yarn.timeline-service.webapp.address",
+ "http_default_port": "8088",
+ "https_property": "yarn.timeline-service.webapp.https.address",
+ "https_default_port": "8090",
+ "regex": "\\w*:(\\d+)",
+ "site": "yarn-site"
+ }
+ }
+ ]
+ }
+}
+```
+
+## REST API
+
+You can examine the quick link information made available to the Ambari web UI by running the following REST API as an HTTP GET request.
+
+REST API
+
+```
+/api/v1/stacks/[stack_name]versions/[stack_version]/services/[service_name]/quicklinks?QuickLinkInfo/default=true&fields=*
+```
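+
+For example, fetching the YARN quick links from a local Ambari server (admin credentials are placeholders):
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' \
+  'http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks?QuickLinkInfo/default=true&fields=*'
+```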
+
+Response sent to the Ambari web UI.
+
+```json
+{
+ "href" : "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks?QuickLinkInfo/default=true&fields=*",
+ "items" : [
+ {
+ "href" : "http://localhost:8080/api/v1/stacks/HDP/versions/2.3/services/YARN/quicklinks/quicklinks.json",
+ "QuickLinkInfo" : {
+ "default" : true,
+ "file_name" : "quicklinks.json",
+ "service_name" : "YARN",
+ "stack_name" : "HDP",
+ "stack_version" : "2.3",
+ "quicklink_data" : {
+ "QuickLinksConfiguration" : {
+ "description" : "default quick links configuration",
+ "name" : "default",
+ "configuration" : {
+ "protocol" : {
+ "type" : "https",
+ "checks" : [
+ {
+ "property" : "yarn.http.policy",
+ "desired" : "HTTPS_ONLY",
+ "site" : "yarn-site"
+ }
+ ]
+ },
+ "links" : [
+ {
+ "name" : "resourcemanager_jmx",
+ "label" : "ResourceManager JMX",
+ "url" : "%@://%@:%@/jmx",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "resourcemanager_logs",
+ "label" : "ResourceManager logs",
+ "url" : "%@://%@:%@/logs",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "resourcemanager_ui",
+ "label" : "ResourceManager UI",
+ "url" : "%@://%@:%@",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.resourcemanager.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.resourcemanager.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ },
+ {
+ "name" : "thread_stacks",
+ "label" : "Thread Stacks",
+ "url" : "%@://%@:%@/stacks",
+ "port" : {
+ "regex" : "\\w*:(\\d+)",
+ "site" : "yarn-site",
+ "http_property" : "yarn.timeline-service.webapp.address",
+ "http_default_port" : "8088",
+ "https_property" : "yarn.timeline-service.webapp.https.address",
+ "https_default_port" : "8090"
+ },
+ "removed" : false,
+ "component_name" : "RESOURCEMANAGER",
+ "requires_user_name" : "false"
+ }
+ ]
+ }
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+## Ambari Web UI
+
+The changes for the stack-driven quick links are hidden from the UI presentation. The quick links drop-down list behavior remains unchanged.
diff --git a/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/create-widget.png b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/create-widget.png
new file mode 100644
index 0000000..87f901e
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/create-widget.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/gauge.png b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/gauge.png
new file mode 100644
index 0000000..13ae128
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/gauge.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/graphs.png b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/graphs.png
new file mode 100644
index 0000000..a9bf723
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/graphs.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/number.png b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/number.png
new file mode 100644
index 0000000..adae257
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/number.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/widget-browser.png b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/widget-browser.png
new file mode 100644
index 0000000..c205590
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/imgs/widget-browser.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/service-dashboard/index.mdx b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/index.mdx
new file mode 100644
index 0000000..1c48434
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/service-dashboard/index.mdx
@@ -0,0 +1,188 @@
+# Enhanced Service Dashboard
+
+This feature was first introduced in the Ambari 2.1.0 release; earlier Ambari releases do not support it. The cluster must be upgraded to Ambari 2.1.0 or above to use this feature.
+
+:::caution
+This document assumes that the service metrics are being exposed via Ambari. If this is not the case, please refer to the [Metrics](https://cwiki.apache.org/confluence/display/AMBARI/Metrics) document for more related information.
+:::
+
+## Introduction
+
+The term Enhanced Service Dashboard refers to the ability to seamlessly add new widgets to the service summary page and heatmap page. This feature enables a stack service to be packaged with widget definitions in JSON format. These widget definitions appear as default widgets on the service summary page and heatmap page upon service installation. In addition, new widgets for the service can be created at any time on the deployed cluster.
+
+Displaying default service dashboard widgets on service installation is a 3 step process:
+
+1. Push service metrics to Ambari Metric Collector.
+
+2. Declare the service metrics in the service's metrics.json file of Ambari. This step is required to expose metrics via the Ambari RESTful API.
+
+3. Define the service widgets in the widgets.json file.
+
+:::tip
+Widget gets the data to be charted from the service metrics. It is important to validate that the required service metrics are being exposed from Ambari metrics endpoint before defining a widget.
+:::
+
+## Service Dashboard Widgets
+
+Ambari supports 4 widget types:
+
+1. Graph
+2. Gauge
+3. Number
+4. Template
+
+### Graph
+
+A widget to display line or area graphs derived from the values of one or more service metrics over a range of time.
+
+
+### Graph Widget Definition
+
+```json
+{
+ "widget_name": "Memory Utilization",
+ "description": "Percentage of total memory allocated to containers running in the cluster.",
+ "widget_type": "GRAPH",
+ "is_visible": true,
+ "metrics": [
+ {
+ "name": "yarn.QueueMetrics.Queue=root.AllocatedMB",
+ "metric_path": "metrics/yarn/Queue/root/AllocatedMB",
+ "service_name": "YARN",
+ "component_name": "RESOURCEMANAGER",
+ "host_component_criteria": "host_components/HostRoles/ha_state=ACTIVE"
+ },
+ {
+ "name": "yarn.QueueMetrics.Queue=root.AvailableMB",
+ "metric_path": "metrics/yarn/Queue/root/AvailableMB",
+ "service_name": "YARN",
+ "component_name": "RESOURCEMANAGER",
+ "host_component_criteria": "host_components/HostRoles/ha_state=ACTIVE"
+ }
+ ],
+ "values": [
+ {
+ "name": "Memory Utilization",
+ "value": "${(yarn.QueueMetrics.Queue=root.AllocatedMB / (yarn.QueueMetrics.Queue=root.AllocatedMB + yarn.QueueMetrics.Queue=root.AvailableMB)) * 100}"
+ }
+ ],
+ "properties": {
+ "display_unit": "%",
+ "graph_type": "LINE",
+ "time_range": "1"
+ }
+}
+```
+
+1. **widget_name:** This is the name that will be displayed in the UI for the widget.
+
+2. **description:** Description for the widget that will be displayed in the UI.
+
+3. **widget_type:** This information is used by the widget to create the widget from the metric data.
+
+4. **is_visible:** This boolean decides if the widget is shown on the service summary page by default or not.
+
+5. **metrics:** This is an array that includes all metric definitions comprising the widget.
+
+6. **metrics/name:** Actual name of the metric as pushed to the sink or emitted as a JMX property by the service.
+
+7. **metrics/metric_path:** This is the path to which the above-mentioned metrics/name is mapped in the metrics.json file for the service. The metric value will be exposed in the metrics attribute of the service component or host component endpoint of the Ambari API at the same path.
+
+8. **metrics/service_name:** Name of the service containing the component emitting the metric.
+
+9. **metrics/component_name:** Name of the component emitting the metric.
+
+10. **metrics/host_component_criteria:** This is an optional field. Its presence means that the metric is a host component metric and not a service component metric. If a metric is intended to be queried on the host component endpoint, then the criteria for choosing the host component need to be specified here. If this is left as a single space string, the first host component found will be queried for the metric.
+
+11. **values:** This is an array of datasets for the Graph widget. For other widget types this array always has one element.
+
+12. **values/name:** This field is used only for the Graph widget type. It shows up as a label name in the legend for the dataset shown in a Graph widget.
+
+13. **values/value:** This is the expression from which the value for the dataset is calculated. The expression contains references to the declared metric names and constant numbers, which act as valid operands, along with the valid set of operators {+,-,*,/}. Parentheses are also permitted in the expression.
+
+14. **properties:** This contains a set of properties specific to the widget type. For the Graph widget type it contains display_unit, graph_type and time_range. The time_range field is currently not honored in the UI.
+
+### Gauge
+
+A widget to display a percentage calculated from the current value of one or more metrics.
+
+
+### Number
+
+A widget to display a number, optionally with a unit, that can be calculated from the current value of one or more metrics.
+
+
+
+### Template
+
+A widget to display one or more numbers calculated from the current values of one or more metrics, along with an embedded string.
+
+**Aggregator function for the Metric**
+
+Aggregator functions apply only to service component level metrics and are not supported for host component level metrics.
+
+Ambari Metrics System supports 4 types of aggregation:
+
+1. **max**: Maximum value of the metric across all host components
+2. **min**: Minimum value of the metric across all host components
+3. **avg**: Average value of the metric across all host components
+4. **sum**: Sum of the metric values recorded for each host component
+
+By default, Ambari Metrics System uses the average aggregator function while computing the value for a service component metric, but this behavior can be overridden by suffixing the metric name with an aggregator function name (`._max`, `._min`, `._avg`, `._sum`). For example, the following widget definition uses the `._sum` suffix to chart the total across all RegionServers:
+
+```json
+{
+ "widget_name": "Blocked Updates",
+ "description": "Number of milliseconds updates have been blocked so the memstore can be flushed.",
+ "default_section_name": "HBASE_SUMMARY",
+ "widget_type": "GRAPH",
+ "is_visible": true,
+ "metrics": [
+ {
+ "name": "regionserver.Server.updatesBlockedTime._sum",
+ "metric_path": "metrics/hbase/regionserver/Server/updatesBlockedTime._sum",
+ "service_name": "HBASE",
+ "component_name": "HBASE_REGIONSERVER"
+ }
+ ],
+ "values": [
+ {
+ "name": "Updates Blocked Time",
+ "value": "${regionserver.Server.updatesBlockedTime._sum}"
+ }
+ ],
+ "properties": {
+ "display_unit": "ms",
+ "graph_type": "LINE",
+ "time_range": "1"
+ }
+}
+```
+
+## Widget Operations
+
+A service that has widgets.json and metrics.json files is also provided with the following abilities:
+
+1. **Widget Browser** for performing widget related operations.
+
+2. **Create widget wizard** for creating new desired service widgets in a cluster.
+
+3. **Edit widget wizard** for editing service widgets post creation.
+
+## Widget Browser
+
+The Widget Browser is the place from which actions can be performed on widgets, such as adding/removing a widget from the dashboard, sharing a widget with other users, and deleting a widget. While creating a new widget, the user can choose whether to share the widget with other users. All widgets created by the user, shared with the user, or defined in the stack as default service widgets will be available in the user's widget browser.
+
+
+### Create Widget Wizard
+
+A custom widget can be created from the exposed service metrics using the 3-step Create Widget wizard.
+
+
+## Using Enhanced Service Dashboard feature
+
+If an existing service in Ambari is already pushing its metrics to the Ambari Metrics Collector, then minimal work needs to be done. This includes adding the metrics.json and widgets.json files for the service, and might include making changes to the metainfo.xml file.
+
+:::tip
+[AMBARI-9910](https://issues.apache.org/jira/browse/AMBARI-9910) added widgets to the Accumulo service page. This work can be referred to as an example of using the Enhanced Service Dashboard feature.
+:::
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/custom-services.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/custom-services.md
new file mode 100644
index 0000000..561da65
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/custom-services.md
@@ -0,0 +1,1059 @@
+# Custom Services
+
+There are many aspects to creating custom services. At its most basic a service must include its metainfo.xml and command script. It also must be packaged to allow adding it to a cluster. Some of the further sub-sections define optional elements of the service definition which can be included.
+
+## Defining a Custom Service
+
+
+
+### Service Metainfo and Component Category
+
+#### metainfo.xml
+
+The `metainfo.xml` file in a Service describes the service, the components of the service and the management scripts to use for executing commands. A component of a service must be in either the **MASTER**, **SLAVE** or **CLIENT** category.
+
+For each Component you must specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support, depending on the component's category.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The script type is used to know how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands that your component needs to support.
+
+For example, the YARN service describes the ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and the command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. That command script is **PYTHON**, and it implements the default lifecycle commands as python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command, **DECOMMISSION**, is defined, which means there is also a **decommission** method in that python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+### Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV". This service includes MASTER, SLAVE and CLIENT components.
+
+#### Create a Custom Service
+
+1. Create a directory named **SAMPLESRV** that will contain the service definition for **SAMPLESRV**.
+
+```bash
+mkdir SAMPLESRV
+cd SAMPLESRV
+```
+2. Within the `SAMPLESRV` directory, create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily>
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+3. In the above, the service name is " **SAMPLESRV**", and it contains:
+
+ - one **MASTER** component " **SAMPLESRV_MASTER**"
+ - one **SLAVE** component " **SAMPLESRV_SLAVE**"
+ - one **CLIENT** component " **SAMPLESRV_CLIENT**"
+4. Next, let's create that command script. Create the directory **`SAMPLESRV/package/scripts`** that we designated in the service metainfo.
+
+```bash
+mkdir -p package/scripts
+cd package/scripts
+```
+5. Within the scripts directory, create the `.py` command script files mentioned in the metainfo. For example `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+
+#### Implementing a Custom Command
+
+1. Browse to the `SAMPLESRV` directory, and edit the `metainfo.xml` file that describes the service. For example, adding a custom command to the SAMPLESRV_CLIENT:
+
+```xml
+
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+```
+2. Next, let's create that command script by editing the package/scripts/sample_client.py file that we designated in the service metainfo.
+
+
+```python
+import sys
+from resource_management import *
+
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+ def somethingcustom(self, env):
+ print 'Something custom';
+
+if __name__ == "__main__":
+ SampleClient().execute()
+```
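+
+Once the component is installed, the custom command can be triggered through the Ambari REST API. A sketch (server host, cluster name and host name are placeholders; the request body follows the same pattern Ambari uses for built-in commands such as DECOMMISSION):
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
+  -d '{
+        "RequestInfo": {"context": "Execute SOMETHINGCUSTOM", "command": "SOMETHINGCUSTOM"},
+        "Requests/resource_filters": [{"service_name": "SAMPLESRV", "component_name": "SAMPLESRV_CLIENT", "hosts": "<host-name>"}]
+      }' \
+  http://<server-host>:8080/api/v1/clusters/<cluster-name>/requests
+```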
+
+#### Adding Configs to the Custom Service
+
+In this example, we will add a configuration type "test-config" to our SAMPLESRV.
+
+1. Modify the metainfo.xml. Adding the configuration files to the CLIENT component will make them available in the client tarball downloaded from Ambari.
+
+```xml
+<component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>test-config.xml</fileName>
+ <dictionaryName>test-config</dictionaryName>
+ </configFile>
+ </configFiles>
+</component>
+```
+2. Create a directory **`SAMPLESRV/configuration`** for the configuration dictionary file.
+
+```bash
+mkdir -p configuration
+cd configuration
+```
+3. Create the `test-config.xml` file. For example:
+
+```xml
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<configuration>
+ <property>
+ <name>some.test.property</name>
+ <value>this.is.the.default.value</value>
+ <description>This is a test description.</description>
+ </property>
+ <property>
+ <name>another.test.property</name>
+ <value>5</value>
+ <description>This is a second test description.</description>
+ </property>
+</configuration>
+
+```
+4. There is an optional setting "configuration-dir". Custom services should either not include the setting or should leave it as the default value "configuration".
+
+```xml
+<configuration-dir>configuration</configuration-dir>
+```
+5. Configuration dependencies can be included in the metainfo.xml in a "`configuration-dependencies`" section. This section can be added to the service as a whole or to a particular component. One of the implications of this dependency is that whenever the config-type is updated, Ambari automatically marks the component or service as requiring restart.
+
+For example, HIVE defines component-level configuration dependencies for the HIVE_METASTORE component:
+
+```xml
+ <component>
+ <name>HIVE_METASTORE</name>
+ <displayName>Hive Metastore</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <reassignAllowed>true</reassignAllowed>
+ <clientsToUpdateConfigs></clientsToUpdateConfigs>
+... ...
+ <configuration-dependencies>
+ <config-type>hive-site</config-type>
+ </configuration-dependencies>
+ </component>
+```
+
+HIVE also defines service-level configuration dependencies:
+
+```xml
+<configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hive-log4j</config-type>
+ <config-type>hive-exec-log4j</config-type>
+ <config-type>hive-env</config-type>
+ <config-type>hivemetastore-site.xml</config-type>
+ <config-type>webhcat-site</config-type>
+ <config-type>webhcat-env</config-type>
+ <config-type>parquet-logging</config-type>
+ <config-type>ranger-hive-plugin-properties</config-type>
+ <config-type>ranger-hive-audit</config-type>
+ <config-type>ranger-hive-policymgr-ssl</config-type>
+ <config-type>ranger-hive-security</config-type>
+ <config-type>mapred-site</config-type>
+ <config-type>application.properties</config-type>
+ <config-type>druid-common</config-type>
+ </configuration-dependencies>
+```
+
+## Packaging and Installing Custom Services
+
+### Introduction
+
+Custom services in Apache Ambari can be packaged and installed in many ways. Ideally, they should all be packaged and installed in the same manner. This document describes how to package and install custom services using Extensions and Management Packs. Using this approach, the custom service definitions do not get inserted under the stack versions services directory. This keeps the stack clean and allows users to easily see which services were installed by which package (stack or extension).
+
+### Management Packs
+
+A [management pack](./management-packs.md) is a mechanism for installing stacks, extensions and custom services. A management pack is packaged as a tar.gz file which expands as a directory that includes an mpack.json file and the stack, extension and custom service definitions that it defines.
+
+#### Example Structure
+
+myext-mpack1.0.0.0
+├── mpack.json
+└── extensions
+
+#### mpack.json Format
+
+The mpack.json file allows you to specify the name, version and description of the management pack, along with the prerequisites for installing it. For extension management packs, the only important prerequisite is min_ambari_version. The most important part is the artifacts section. For the purpose here, the artifact type will always be "extension-definitions". You can provide any name for the artifact, and you can potentially change the source_dir if you wish to package your extensions under a different directory than "extensions". For consistency, it is recommended that you use the default source_dir "extensions".
+
+```json
+{
+  "type": "full-release",
+  "name": "myextension-mpack",
+  "version": "1.0.0.0",
+  "description": "MyExtension Management Pack",
+  "prerequisites": {
+    "min_ambari_version": "2.4.0.0"
+  },
+  "artifacts": [
+    {
+      "name": "myextension-extension-definitions",
+      "type": "extension-definitions",
+      "source_dir": "extensions"
+    }
+  ]
+}
+```
+
+### Extensions
+
+An [extension](./extensions.md) is a collection of one or more custom services which are packaged together. Much like stacks, each extension has a name, which needs to be unique in the cluster. It also has a version folder to distinguish different releases of the extension; these go in the resources/extensions folder.
+
+An extension version is similar to a stack version, but it only includes the metainfo.xml and the services directory. This means that the alerts, kerberos, metrics, role command order and widgets files are not supported and should be included at the service level. In addition, the repositories, hooks, configurations, and upgrades directories are not supported, although upgrade support can be added at the service level.
+
+#### Extension Structure
+
+```
+MY_EXT
+└── 1.0
+    ├── metainfo.xml
+    └── services
+        ├── SERVICEA
+        ├── ...
+```
+
+#### Extension metainfo.xml Format
+
+The extension metainfo.xml is very simple; it just specifies the minimum stack versions that are supported.
+
+```xml
+<metainfo>
+  <prerequisites>
+    <min-stack-versions>
+      <stack>
+        <name>BIGTOP</name>
+        <version>1.0.*</version>
+      </stack>
+    </min-stack-versions>
+  </prerequisites>
+</metainfo>
+```
+
+#### Extension Inheritance
+
+Extension versions can _extend_ other Extension versions in order to share command scripts and configurations. This reduces duplication of code across Extensions with the following:
+
+* add new Services in the child Extension version (not in the parent Extension version)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, **MyExtension 2.0** could extend **MyExtension 1.0** so that only the changes applicable to the **MyExtension 2.0** extension are present in that extension definition. This extension is declared in the metainfo.xml for **MyExtension 2.0**:
+
+```xml
+<metainfo>
+  <extends>1.0</extends>
+  ...
+</metainfo>
+```
+
+### Extension Management Packs Structure
+
+```
+myext-mpack1.0.0.0
+├── mpack.json
+└── extensions
+    └── MY_EXT
+        ├── 1.0
+        │   ├── metainfo.xml
+        │   └── services
+        │       └── SERVICEA
+        └── 2.0
+            ├── metainfo.xml
+            └── services
+                ├── SERVICEA
+                └── …
+```
+
+### Installing Management Packs
+
+In order to install an extension management pack, run the following command with or without the "-v" (verbose) option:
+
+`ambari-server install-mpack --mpack=/dir/to/myext-mpack-1.0.0.0.tar.gz -v`
+
+This will check whether the management pack's prerequisites are met (min_ambari_version). In addition, it will check for any errors in the management pack format. Assuming everything is correct, the management pack will be extracted in:
+
+/var/lib/ambari-server/resources/mpacks
+
+It will then create symlinks from /var/lib/ambari-server/resources/extensions to each extension version under /var/lib/ambari-server/resources/mpacks/, as shown below:
+
+Extension Directory | Target Management Pack Symlink
+--------------------|------------------------------------------------------------------
+resources/extensions/MY_EXT/1.0 | resources/mpacks/myext-mpack1.0.0.0/extensions/MY_EXT/1.0
+resources/extensions/MY_EXT/2.0 | resources/mpacks/myext-mpack1.0.0.0/extensions/MY_EXT/2.0
+
+### Verifying the Extension Installation
+
+Once you have installed the extension management pack, you can restart ambari-server.
+
+```bash
+ambari-server restart
+```
+
+After ambari-server has been restarted, you will see your extension listed in the extension table in the Ambari DB:
+
+```
+ambari=> select * from extension;
+ extension_id | extension_name | extension_version
+--------------+----------------+-------------------
+            1 | EXT            | 1.0
+(1 row)
+```
+
+You can also query for extensions by calling the REST APIs (the server host below is a placeholder):
+
+```
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server-host>:8080/api/v1/extensions'
+{
+  "href" : "http://<server-host>:8080/api/v1/extensions",
+  "items" : [
+    {
+      "href" : "http://<server-host>:8080/api/v1/extensions/EXT",
+      "Extensions" : {
+        "extension_name" : "EXT"
+      }
+    }
+  ]
+}
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server-host>:8080/api/v1/extensions/EXT'
+{
+  "href" : "http://<server-host>:8080/api/v1/extensions/EXT",
+  "Extensions" : {
+    "extension_name" : "EXT"
+  },
+  "versions" : [
+    {
+      "href" : "http://<server-host>:8080/api/v1/extensions/EXT/versions/1.0",
+      "Versions" : {
+        "extension_name" : "EXT",
+        "extension_version" : "1.0"
+      }
+    }
+  ]
+}
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server-host>:8080/api/v1/extensions/EXT/versions/1.0'
+{
+  "href" : "http://<server-host>:8080/api/v1/extensions/EXT/versions/1.0",
+  "Versions" : {
+    "extension-errors" : [ ],
+    "extension_name" : "EXT",
+    "extension_version" : "1.0",
+    "parent_extension_version" : null,
+    "valid" : true
+  }
+}
+```
+
+### Linking Extensions to the Stack
+
+Once you have verified that Ambari knows about your extension, the next step is linking the extension version to the current stack version. Linking adds the extension version's services to the list of stack version services, which allows you to install the extension services on the cluster. Linking an extension version to a stack version will first verify whether the extension supports the given stack version, as determined by the stack versions listed in the extension version's metainfo.xml.
+
+The following REST API call will link an extension version to a stack version. In this example it links EXT/1.0 with the BIGTOP/1.0 stack version (the server host is a placeholder).
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"ExtensionLink": {"stack_name": "BIGTOP", "stack_version": "1.0", "extension_name": "EXT", "extension_version": "1.0"}}' http://<server-host>:8080/api/v1/links
+```
+
+You can examine links (or extension links) either in the Ambari DB or with REST API calls.
+
+```
+ambari=> select * from extensionlink;
+ link_id | stack_id | extension_id
+---------+----------+--------------
+       1 |        2 |            1
+(1 row)
+
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server-host>:8080/api/v1/links'
+{
+  "href" : "http://<server-host>:8080/api/v1/links",
+  "items" : [
+    {
+      "href" : "http://<server-host>:8080/api/v1/links/1",
+      "ExtensionLink" : {
+        "extension_name" : "EXT",
+        "extension_version" : "1.0",
+        "link_id" : 1,
+        "stack_name" : "BIGTOP",
+        "stack_version" : "1.0"
+      }
+    }
+  ]
+}
+```
+
+## Role Command Order
+
+Each service can define its own role command order by including a role_command_order.json file in its service folder. The service should only specify the relationships of its components to other components. In other words, if a service only includes COMP_X, it should only list dependencies related to COMP_X. For example, if COMP_X should start after the NameNode starts, and the NameNode should wait for COMP_X to stop before stopping, the following would be included in the role command order:
+
+```json
+{
+ "_comment" : "Record format:",
+ "_comment" : "blockedRole-blockedCommand: [blockerRole1-blockerCommand1, blockerRole2-blockerCommand2, ...]",
+ "general_deps" : {
+ "_comment" : "dependencies for all cases"
+ },
+ "_comment" : "Dependencies that are used when GLUSTERFS is not present in cluster",
+ "optional_no_glusterfs": {
+ "COMP_X-START": ["NAMENODE-START"],
+ "NAMENODE-STOP": ["COMP_X-STOP"]
+ }
+}
+```
+
+The entries in the service's role command order will be merged with the role command order defined in the stack. For example, since the stack already has a dependency for NAMENODE-STOP, in the example above COMP_X-STOP would be added to the rest of the NAMENODE-STOP dependencies and the COMP_X-START dependency on NAMENODE-START would be added as a new dependency.
+
+**Sections**
+
+Ambari uses only the following sections:
+
+Section Name | When Used
+-------------|------------
+general_deps | Command orders are applied in all situations
+optional_glusterfs | Command orders are applied when cluster has instance of GLUSTERFS service
+optional_no_glusterfs | Command orders are applied when cluster does not have instance of GLUSTERFS service
+namenode_optional_ha | Command orders are applied when HDFS service is installed and JOURNALNODE component exists (HDFS HA is enabled)
+resourcemanager_optional_ha | Command orders are applied when YARN service is installed and multiple RESOURCEMANAGER host-components exist (YARN HA is enabled)
+
+**Commands**
+
+The commands currently supported by Ambari are:
+
+* INSTALL
+* UNINSTALL
+* START
+* RESTART
+* STOP
+* EXECUTE
+* ABORT
+* UPGRADE
+* SERVICE_CHECK
+* CUSTOM_COMMAND
+* ACTIONEXECUTE
+
+## Service Advisor
+
+Each custom service can provide a service advisor as a Python script named service-advisor.py in its service folder. A Service Advisor allows custom services to integrate into the stack advisor behavior, which otherwise only applies to the services defined in the stack.
+
+### Service Advisor Inheritance
+
+Unlike the stack-advisor scripts, the service-advisor scripts do not automatically extend the parent service's service-advisor scripts. The service-advisor script needs to explicitly extend its parent service's service-advisor script. The following code sample shows how you would refer to a parent's service_advisor.py. In this case it is extending the root service_advisor.py file in the resources/stacks directory.
+
+**Sample service-advisor.py file inheritance**
+
+```python
+import imp
+import os
+import traceback
+
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+STACKS_DIR = os.path.join(SCRIPT_DIR, '../../../stacks/')
+PARENT_FILE = os.path.join(STACKS_DIR, 'service_advisor.py')
+
+try:
+  # Load the parent service_advisor.py module so this advisor can extend it
+  with open(PARENT_FILE, 'rb') as fp:
+    service_advisor = imp.load_module('service_advisor', fp, PARENT_FILE, ('.py', 'rb', imp.PY_SOURCE))
+except Exception as e:
+  traceback.print_exc()
+  print "Failed to load parent"
+
+class HAWQ200ServiceAdvisor(service_advisor.ServiceAdvisor):
+```
+
+### Service Advisor Behavior
+
+Like the stack advisors, service advisors provide information on four important aspects of the service:
+
+1. Recommend the layout of the service on the cluster
+2. Recommend the service configurations
+3. Validate the layout of the service on the cluster
+4. Validate the service configurations
+
+By providing a service_advisor.py file, one can dynamically control each of the above for the service.
+
+The [main interface for the service advisor scripts](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51) contains documentation on how each of the above is called, and what data is provided.
+
+**Base service_advisor.py from resources/stacks**
+
+```python
+
+class ServiceAdvisor(DefaultStackAdvisor):
+
+ def colocateService(self, hostsComponentsMap, serviceComponents):
+ pass
+
+ def getServiceConfigurationRecommendations(self, configurations, clusterSummary, services, hosts):
+ pass
+
+ def getServiceComponentLayoutValidations(self, services, hosts):
+ return []
+
+ def getServiceConfigurationsValidationItems(self, configurations, recommendedDefaults, services, hosts):
+ return []
+```
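+
+As a concrete illustration, a subclass might implement one of these methods to flag a layout problem. The sketch below is illustrative only: the SAMPLESRV names are hypothetical, and the shape of the `services` dictionary follows the conventions used by existing stack advisors.
+
+```python
+class SAMPLESRVServiceAdvisor(service_advisor.ServiceAdvisor):
+
+  def getServiceComponentLayoutValidations(self, services, hosts):
+    items = []
+    # Collect the components declared across all services in the request
+    componentsList = [component["StackServiceComponents"]
+                      for service in services["services"]
+                      for component in service["components"]]
+    masters = [c for c in componentsList
+               if c["component_name"] == "SAMPLESRV_MASTER"]
+    # Emit a validation item (a plain dict) if the master is unassigned
+    if masters and not masters[0]["hostnames"]:
+      items.append({
+        "type": "host-component",
+        "level": "ERROR",
+        "message": "SAMPLESRV_MASTER must be assigned to a host",
+        "component-name": "SAMPLESRV_MASTER"
+      })
+    return items
+```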
+
+**Examples**
+
+- [Service Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51)
+- [HAWQ 2.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0/service_advisor.py)
+- [PXF 3.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/PXF/3.0.0/service_advisor.py)
+
+## Service Inheritance
+
+A service can inherit through the stack but may also inherit directly from common-services. This is declared in the metainfo.xml:
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <extends>common-services/HDFS/2.1.0.2.0</extends>
+```
+
+When a service inherits from another service version, how its defining files and directories are inherited follows a number of different patterns.
+
+The following files, if defined in the current service version, replace the definitions from the parent service version:
+
+* alerts.json
+* kerberos.json
+* metrics.json
+* role_command_order.json
+* service_advisor.py
+* widgets.json
+
+Note: All of the services' role command orders will be merged with the stack's role command order to provide a master list.
+
+The following files, if defined in the current service version, are merged with the parent service version (supports removing/deleting parent entries):
+
+* quicklinks/quicklinks.json
+* themes/theme.json
+
+The following directories, if defined in the current service version, replace those from the parent service version:
+
+* packages
+* upgrades
+
+This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The configurations directory in the current service version merges its configuration files with those from the parent service version. Configuration files defined at any level can be omitted from the service's configurations by specifying the config-type in the excluded-config-types list:
+
+```xml
+ <excluded-config-types>
+ <config-type>storm-site</config-type>
+ </excluded-config-types>
+```
+
+For an individual configuration file (or configuration type) like core-site.xml, it will by default merge with the configuration type from the parent. If the `supports_do_not_extend` attribute is specified as `true`, the configuration type will **not** be merged.
+
+```xml
+<configuration supports_do_not_extend="true">
+```
+
+### Inheritance and the Service MetaInfo
+
+By default, all attributes of the service and components defined in the metainfo.xml of the current service version replace those of the parent service version, unless otherwise specified in the sections that follow.
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <displayName>HDFS</displayName>
+ <comment>Apache Hadoop Distributed File System</comment>
+ <version>2.1.0.2.0</version>
+ <components>
+ <component>
+ <name>NAMENODE</name>
+ <displayName>NameNode</displayName>
+ <category>MASTER</category>
+ <cardinality>1-2</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <reassignAllowed>true</reassignAllowed>
+ <commandScript>
+ <script>scripts/namenode.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>1800</timeout>
+ </commandScript>
+ ...
+```
+
+The custom commands defined in the metainfo.xml of the current service version are merged with those of the parent service version.
+
+```xml
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/namenode.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+```
+
+The configuration dependencies defined in the metainfo.xml of the current service version are merged with those of the parent service version.
+
+```xml
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ ...
+ </configuration-dependencies>
+
+```
+
+The components defined in the metainfo.xml of the current service are merged with those of the parent (supports delete).
+
+```xml
+ <component>
+ <name>ZKFC</name>
+ <displayName>ZKFailoverController</displayName>
+ <category>SLAVE</category>
+```
+
+## Service Upgrade
+
+Each custom service can define its upgrade within its service definition. This allows the custom service to be integrated within the [stack's upgrade](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-StackUpgrades).
+
+### Service Upgrade Packs
+
+Each service can define _upgrade-packs_, which are XML files describing the upgrade process of that particular service and how the upgrade pack relates to the overall stack upgrade-packs. These _upgrade-pack_ XML files are placed in the service's _upgrades/_ folder in separate sub-folders specific to the stack-version they are meant to extend. Some examples of this can be seen in the testing code.
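+
+Concretely, following the HDFS testing example referenced below, the layout looks like this:
+
+```
+|_ services
+   |_ HDFS
+      |_ upgrades
+         |_ HDP
+            |_ 2.2.0
+               upgrade_test_15388.xml
+```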
+
+**Examples**
+
+- [Upgrades folder](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/)
+- [Upgrade-pack XML](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml)
+
+### Matching Upgrade Packs
+
+Each upgrade-pack that the service defines should match the file name of an upgrade-pack defined by a particular stack version. For example, in the testing code, HDP 2.2.0 has an [upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.2.0/upgrades/upgrade_test_15388.xml) upgrade-pack. The HDFS service defines an extension to that upgrade-pack, [HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml). In this case, the service's extension is defined in the HDP/2.0.5 stack. It extends HDP/2.2.0 because it lives in the upgrades/HDP/2.2.0 directory, and its file name, upgrade_test_15388.xml, matches the name of the upgrade-pack in HDP/2.2.0/upgrades.
+
+**Upgrade XML Format**
+
+The file format for the service is much the same as that of the stack. The target, target-stack and type attributes should all be the same as the stack's upgrade-pack.
+
+**Prerequisite Checks**
+
+The service is able to add its own prerequisite checks.
+
+**General Attributes and Prerequisite Checks**
+```xml
+<upgrade xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <target>2.4.*</target>
+ <target-stack>HDP-2.4.0</target-stack>
+ <type>ROLLING</type>
+ <prerequisite-checks>
+ <check>org.apache.ambari.server.checks.FooCheck</check>
+ </prerequisite-checks>
+```
+
+**Order Section**
+
+The order section of the upgrade-pack consists of group elements, just like the stack's upgrade-pack. The key difference is defining how these groups relate to groups in the stack's upgrade-pack or other service upgrade-packs. In the first example, we are referencing the PRE_CLUSTER group and adding a new execute-stage for the service FOO. The entry is added after the execute-stage for HDFS, based on the add-after-group-entry tag.
+
+**Order Section - Add After Group Entry**
+```xml
+
+<order>
+ <group xsi:type="cluster" name="PRE_CLUSTER" title="Pre {{direction.text.proper}}">
+ <add-after-group-entry>HDFS</add-after-group-entry>
+ <execute-stage service="FOO" component="BAR" title="Backup FOO">
+ <task xsi:type="manual">
+ <message>Back FOO up.</message>
+ </task>
+ </execute-stage>
+ </group>
+
+```
+
+The same syntax can be used to order other sections like service check priorities and group services.
+
+**Order Section - Further Add After Group Entry Examples**
+```xml
+<group name="SERVICE_CHECK1" title="All Service Checks" xsi:type="service-check">
+ <add-after-group-entry>ZOOKEEPER</add-after-group-entry>
+ <priority>
+ <service>HBASE</service>
+ </priority>
+</group>
+
+<group name="CORE_MASTER" title="Core Masters">
+ <add-after-group-entry>YARN</add-after-group-entry>
+ <service name="HBASE">
+ <component>HBASE_MASTER</component>
+ </service>
+</group>
+```
+
+It is also possible to add new groups and order them after other groups in the stack's upgrade-packs. In the following example, we are adding the FOO group after the HIVE group using the add-after-group tag.
+
+**Order Section - Add After Group**
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO">
+ <component>BAR</component>
+ </service>
+</group>
+```
+
+You could also include both the add-after-group and the add-after-group-entry tags in the same group. This will create a new group if it doesn't already exist and will order it after the add-after-group's group name. The add-after-group-entry will determine the internal ordering of that group's services, priorities or execute stages.
+
+**Order Section - Add After Group and Add After Group Entry**
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <add-after-group-entry>FOO</add-after-group-entry>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO2">
+ <component>BAR2</component>
+ </service>
+</group>
+```
+
+**Processing Section**
+
+The processing section of the upgrade-pack remains the same as what it would be in the stack's upgrade-pack.
+
+**Processing Section**
+```xml
+ <processing>
+ <service name="FOO">
+ <component name="BAR">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ <component name="BAR2">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ </service>
+ </processing>
+```
+
+## Custom Service Repo
+
+Each service can define its own repo info by adding a repos/repoinfo.xml file in its service folder. The service-specific repo will be included in the list of repos defined for the stack.
+
+**Example**: https://github.com/apache/ambari/blob/trunk/contrib/management-packs/microsoft-r_mpack/src/main/resources/custom-services/MICROSOFT_R_SERVER/8.0.5/repos/repoinfo.xml
+
+```xml
+
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://cust.service.lab.com/Services/centos6/1.1/myservices</baseurl>
+ <repoid>CUSTOM-1.1</repoid>
+ <reponame>CUSTOM</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+## Custom Services - Additional Configuration
+
+### Alerts
+
+Each service is capable of defining which alerts Ambari should track by providing an [alerts.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/alerts.json) file.
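+
+As a minimal, illustrative sketch (the SAMPLESRV service, alert name, and config key are hypothetical; see the HDFS file above for real definitions), a single PORT alert could be declared like this:
+
+```json
+{
+  "SAMPLESRV": {
+    "service": [],
+    "SAMPLESRV_MASTER": [
+      {
+        "name": "samplesrv_master_process",
+        "label": "Sample Srv Master Process",
+        "description": "Triggers if the master process cannot be reached.",
+        "interval": 1,
+        "scope": "ANY",
+        "source": {
+          "type": "PORT",
+          "uri": "{{test-config/samplesrv.master.port}}",
+          "default_port": 8999,
+          "reporting": {
+            "ok": { "text": "TCP OK - {0:.3f}s response on port {1}" },
+            "warning": { "text": "TCP OK - {0:.3f}s response on port {1}", "value": 1.5 },
+            "critical": { "text": "Connection failed: {0} to {1}:{2}", "value": 5.0 }
+          }
+        }
+      }
+    ]
+  }
+}
+```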
+
+### Kerberos
+
+Ambari is capable of enabling and disabling Kerberos for a cluster. To inform Ambari of the identities and configurations to be used for the service and its components, each service can provide a _kerberos.json_ file.
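+
+An abbreviated sketch of what such a file can declare (the SAMPLESRV identity name, principal, and keytab path are assumptions for illustration; consult a real service's kerberos.json for the complete format):
+
+```json
+{
+  "services": [
+    {
+      "name": "SAMPLESRV",
+      "components": [
+        {
+          "name": "SAMPLESRV_MASTER",
+          "identities": [
+            {
+              "name": "samplesrv_master",
+              "principal": { "value": "samplesrv/_HOST@${realm}", "type": "service" },
+              "keytab": { "file": "${keytab_dir}/samplesrv.service.keytab" }
+            }
+          ]
+        }
+      ]
+    }
+  ]
+}
+```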
+
+### Metrics
+
+Ambari provides the [Ambari Metrics System ("AMS")](../metrics/index.md) service for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+Each service can define which metrics AMS should collect and provide by defining a [metrics.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metrics.json) file.
+
+Read more about the metrics.json file format in the [Stack Defined Metrics](../metrics/stack-defined-metrics.md) page.
+
+### Quick Links
+
+A service can add a list of quick links to the Ambari web UI by adding a quick links JSON file. Ambari server parses the quick links JSON file and provides its content to the Ambari web UI. The UI can calculate quick link URLs based on that information and populate the quick links drop-down list accordingly.
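+
+A trimmed, illustrative quicklinks.json sketch for a hypothetical master UI link (the property names under `port` and the `test-config` site are assumptions):
+
+```json
+{
+  "name": "default",
+  "description": "default quick links configuration",
+  "configuration": {
+    "protocol": {
+      "type": "HTTP_ONLY"
+    },
+    "links": [
+      {
+        "name": "samplesrv_ui",
+        "label": "Sample Srv UI",
+        "url": "%@://%@:%@",
+        "port": {
+          "http_property": "samplesrv.master.http.port",
+          "http_default_port": "8999",
+          "regex": "^(\\d+)$",
+          "site": "test-config"
+        }
+      }
+    ]
+  }
+}
+```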
+
+### Widgets
+
+Each service can define which widgets and heat maps show up by default on the service summary page by defining a [widgets.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/widgets.json) file.
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md
new file mode 100644
index 0000000..3b76f47
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/defining-a-custom-stack-and-services.md
@@ -0,0 +1,644 @@
+# Defining a Custom Stack and Services
+
+## Background
+
+The Stack definitions can be found in the source tree at [/ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks). After you install the Ambari Server, the Stack definitions can be found at `/var/lib/ambari-server/resources/stacks`.
+
+## Stack Properties
+
+The stack must contain or inherit a properties directory which contains two files: [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) and [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json). The properties directory must be at the root stack version level and must not be included in the other stack versions. This [directory](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) is new in Ambari 2.4.
+
+The stack_features.json contains a list of features that are included in Ambari and allows the stack to specify which versions of the stack include those features. The list of features is determined by the particular Ambari release. The reference list for a particular Ambari version should be found in the [HDP/2.0.6/properties/stack_features.json](https://github.com/apache/ambari/blob/branch-2.4/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) in the branch for that Ambari release. Each feature has a name and description, and the stack can provide the minimum and maximum version where that feature is supported.
+
+```json
+{
+  "stack_features": [
+    {
+      "name": "snappy",
+      "description": "Snappy compressor/decompressor support",
+      "min_version": "2.0.0.0",
+      "max_version": "2.2.0.0"
+    },
+    ...
+  ]
+}
+```
+
+The stack_tools.json includes the name and location where the stack_selector and conf_selector tools are installed.
+
+```json
+{
+  "stack_selector": ["hdp-select", "/usr/bin/hdp-select", "hdp-select"],
+  "conf_selector": ["conf-select", "/usr/bin/conf-select", "conf-select"]
+}
+```
+
+For more information see the [Stack Properties](./stack-properties.md) wiki page.
+
+## Structure
+
+The structure of a Stack definition is as follows:
+
+```
+|_ stacks
+   |_ <stack_name>
+      |_ <stack_version>
+         metainfo.xml
+         |_ hooks
+         |_ repos
+            repoinfo.xml
+         |_ services
+            |_ <service_name>
+               metainfo.xml
+               metrics.json
+               |_ configuration
+                  {configuration files}
+               |_ package
+                  {files, scripts, templates}
+```
+
+## Defining a Service and Components
+
+**metainfo.xml**
+
+The `metainfo.xml` file in a Service describes the service, the components of the service, and the management scripts to use for executing commands. A component of a service can be of the **MASTER**, **SLAVE** or **CLIENT** category.
+
+For each Component you specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The script type tells Ambari how to execute the command scripts. You can also create **custom commands** if there are other commands beyond the default lifecycle commands that your component needs to support.
+
+For example, the YARN service describes its ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and its command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. The command script is **PYTHON**, and it implements the default lifecycle commands as Python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command, **DECOMMISSION**, is defined, which means there is also a **decommission** method in that Python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+## Using Stack Inheritance
+
+Stacks can _extend_ other Stacks in order to share command scripts and configurations. This reduces duplication of code across Stacks with the following:
+
+* define repositories for the child Stack
+* add new Services in the child Stack (not in the parent Stack)
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, the **HDP 2.1 Stack _extends_ HDP 2.0.6 Stack** so only the changes applicable to **HDP 2.1 Stack** are present in that Stack definition. This extension is defined in the [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.1/metainfo.xml) for HDP 2.1 Stack:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.0.6</extends>
+</metainfo>
+```
+
+## Example: Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV", add it to an existing Stack definition. This service includes MASTER, SLAVE and CLIENT components.
+
+### Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+```
+3. Browse to the newly created `SAMPLESRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**SAMPLESRV**", and it contains:
+
+ - one **MASTER** component " **SAMPLESRV_MASTER**"
+ - one **SLAVE** component " **SAMPLESRV_SLAVE**"
+ - one **CLIENT** component " **SAMPLESRV_CLIENT**"
+5. Next, let's create that command script. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts` that we designated in the service metainfo.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+```
+6. Browse to the scripts directory and create the `.py` command script files. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+### Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Sample Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Sample Service" and click Next.
+
+4. Assign the "Sample Srv Master" and click Next.
+
+5. Select the hosts to install the "Sample Srv Client" and click Next.
+
+6. Once complete, the "My Sample Service" will be available Service navigation area.
+
+7. If you want to add the "Sample Srv Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+## Example: Implementing a Custom Client-only Service
+
+In this example, we will create a custom service called "TESTSRV", add it to an existing Stack definition and use the Ambari APIs to install/configure the service. This service is a CLIENT so it has two commands: install and configure.
+
+### Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV` that will contain the service definition for **TESTSRV**.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+```
+3. Browse to the newly created `TESTSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTSRV</name>
+ <displayName>New Test Service</displayName>
+ <comment>A New Test Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TEST_CLIENT</name>
+ <displayName>New Test Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTSRV**", and it contains one component "**TEST_CLIENT**" of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts` that we designated in the service metainfo.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+ def install(self, env):
+ print 'Install the client';
+ def configure(self, env):
+ print 'Configure the client';
+ def somethingcustom(self, env):
+ print 'Something custom';
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+### Adding Repository details in repoinfo.xml
+
+When adding a custom service, you may need to add additional repository details for the stack, especially if the service binaries are available through a separate repository. Additional repositories can be declared in the stack's `repoinfo.xml` alongside the existing entries, as in the following example:
+
+```xml
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://cust.service.lab.com/Services/centos6/1.1/myservices</baseurl>
+ <repoid>CUSTOM-1.1</repoid>
+ <reponame>CUSTOM</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.1</baseurl>
+ <repoid>HDP-2.0.6</repoid>
+ <reponame>HDP</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.17/repos/centos6</baseurl>
+ <repoid>HDP-UTILS-1.1.0.17</repoid>
+ <reponame>HDP-UTILS</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+### Install the Service (via the Ambari REST API)
+
+1. Add the Service to the Cluster.
+
+
+```
+POST
+/api/v1/clusters/MyCluster/services
+
+{
+"ServiceInfo": {
+ "service_name":"TESTSRV"
+ }
+}
+```
+2. Add the Components to the Service. In this case, add TEST_CLIENT to TESTSRV.
+
+```
+POST
+/api/v1/clusters/MyCluster/services/TESTSRV/components/TEST_CLIENT
+```
+3. Install the component on all target hosts. For example, to install on `c6402.ambari.apache.org` and `c6403.ambari.apache.org`, first create the host_component resource on the hosts using POST.
+
+```
+POST
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+POST
+/api/v1/clusters/MyCluster/hosts/c6403.ambari.apache.org/host_components/TEST_CLIENT
+```
+4. Now have Ambari install the components on all hosts. In this single command, you are instructing Ambari to install all components related to the service. This calls the `install()` method in the command script on each host.
+
+
+```
+PUT
+/api/v1/clusters/MyCluster/services/TESTSRV
+
+{
+ "RequestInfo": {
+ "context": "Install Test Srv Client"
+ },
+ "Body": {
+ "ServiceInfo": {
+ "state": "INSTALLED"
+ }
+ }
+}
+```
+5. Alternatively, instead of installing all components at the same time, you can explicitly install each host component. In this example, we will explicitly install the TEST_CLIENT on `c6402.ambari.apache.org`:
+
+```
+PUT
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+{
+ "RequestInfo": {
+ "context":"Install Test Srv Client"
+ },
+ "Body": {
+ "HostRoles": {
+ "state":"INSTALLED"
+ }
+ }
+}
+```
+6. Use the following to configure the client on the host. This will end up calling the `configure()` method in the command script.
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+ "RequestInfo" : {
+ "command" : "CONFIGURE",
+ "context" : "Config Test Srv Client"
+ },
+ "Requests/resource_filters": [{
+ "service_name" : "TESTSRV",
+ "component_name" : "TEST_CLIENT",
+ "hosts" : "c6403.ambari.apache.org"
+ }]
+}
+```
+7. If you want to see on which hosts the component is installed, use the following:
+
+```
+GET
+/api/v1/clusters/MyCluster/components/TEST_CLIENT
+```
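+
+An abbreviated sketch of the kind of response to expect (the host name and field subset shown are illustrative):
+
+```json
+{
+  "href" : "http://your.ambari.server:8080/api/v1/clusters/MyCluster/components/TEST_CLIENT",
+  "ServiceComponentInfo" : {
+    "cluster_name" : "MyCluster",
+    "component_name" : "TEST_CLIENT",
+    "service_name" : "TESTSRV"
+  },
+  "host_components" : [
+    {
+      "HostRoles" : {
+        "cluster_name" : "MyCluster",
+        "component_name" : "TEST_CLIENT",
+        "host_name" : "c6402.ambari.apache.org"
+      }
+    }
+  ]
+}
+```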
+
+### Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Test Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Test Service" and click Next.
+
+4. Select the hosts to install the "New Test Client" and click Next.
+
+5. Once complete, the "New Test Service" will be available in the Service navigation area.
+
+6. If you want to add the "New Test Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+
+## Example: Implementing a Custom Client-only Service (with Configs)
+
+In this example, we will create a custom service called "TESTCONFIGSRV" and add it to an existing Stack definition. This service is a CLIENT so it has two commands: install and configure. And the service also includes a configuration type "test-config".
+
+### Create and Add the Service to the Stack
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```bash
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV` that will contain the service definition for TESTCONFIGSRV.
+
+```bash
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+```
+3. Browse to the newly created `TESTCONFIGSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTCONFIGSRV</name>
+ <displayName>New Test Config Service</displayName>
+ <comment>A New Test Config Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TESTCONFIG_CLIENT</name>
+ <displayName>New Test Config Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTCONFIGSRV**", and it contains one component "**TESTCONFIG_CLIENT**" of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts` that we designated in the service metainfo's `<commandScript>`.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+ def install(self, env):
+ print 'Install the config client';
+ def configure(self, env):
+ print 'Configure the config client';
+
+if __name__ == "__main__":
+ TestClient().execute()
+```
+7. Now let's define a config type for this service. Create a directory for the configuration dictionary file `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration`.
+
+```bash
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+```
+8. Browse to the configuration directory and create the `test-config.xml` file. For example:
+
+```xml
+
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<configuration>
+ <property>
+ <name>some.test.property</name>
+ <value>this.is.the.default.value</value>
+ <description>This is a kool description.</description>
+ </property>
+</configuration>
+
+```
+9. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/extensions.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/extensions.md
new file mode 100644
index 0000000..b89bc96
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/extensions.md
@@ -0,0 +1,208 @@
+# Extensions
+
+## Background
+Added in Ambari 2.4.
+
+An Extension is a collection of one or more custom services which are packaged together. Much like stacks, each extension has a name which needs to be unique in the cluster. It also has a version directory to distinguish different releases of the extension. Much like stack versions, which go in `/var/lib/ambari-server/resources/stacks` with `<stack_name>/<stack_version>` sub-directories, extension versions go in `/var/lib/ambari-server/resources/extensions` with `<extension_name>/<extension_version>` sub-directories.
+
+An extension can be linked to supported stack versions. Once an extension version has been linked to the currently installed stack version, the custom services contained in the extension version may be added to the cluster in the same manner as if they were actually contained in the stack version.
+
+Third party developers can release Extensions which can be added to a cluster.
+
+## Structure
+
+The structure of an Extension definition is as follows:
+
+```
+|_ extensions
+ |_ <extension_name>
+ |_ <extension_version>
+ |_ metainfo.xml
+ |_ services
+ |_ <service_name>
+ |_ metainfo.xml
+ |_ metrics.json
+ |_ configuration
+ |_ {configuration files}
+ |_ package
+ |_ {files, scripts, templates}
+```
+
+An extension version is similar to a stack version, but it only includes the metainfo.xml and the services directory. This means that the alerts, kerberos, metrics, role command order, and widgets files are not supported and should be included at the service level. In addition, the repositories, hooks, configurations, and upgrades directories are not supported, although upgrade support can be added at the service level.
+
+## Extension Inheritance
+
+Extension versions can extend other Extension versions in order to share command scripts and configurations. This reduces duplication of code across Extensions with the following:
+
+- add new Services in the child Extension version (not in the parent Extension version)
+- override command scripts of the parent Services
+- override configurations of the parent Services
+
+For example, **MyExtension 2.0** could extend **MyExtension 1.0** so only the changes applicable to the **MyExtension 2.0** extension are present in that Extension definition. This extension is defined in the metainfo.xml for **MyExtension 2.0**:
+
+```xml
+<metainfo>
+ <extends>1.0</extends>
+```
+
+## Supported Stack Versions
+
+**Each Extension Version must support one or more Stack Versions.** The Extension Version specifies the minimum Stack Version which it supports. This is included in the extension's metainfo.xml in the prerequisites section like so:
+
+```xml
+<metainfo>
+ <prerequisites>
+ <min-stack-versions>
+ <stack>
+ <name>HDP</name>
+ <version>2.4</version>
+ </stack>
+ <stack>
+ <name>OTHER</name>
+ <version>1.0</version>
+ </stack>
+ </min-stack-versions>
+ </prerequisites>
+</metainfo>
+
+```
+
+## Installing Extensions
+
+It is recommended to install extensions using [management packs](./management-packs.md). For more details see the [instructions on packaging custom services using extensions and management packs](custom-services.md).
+
+Once the extension version directory has been created under the resource/extensions directory with the required metainfo.xml file, you can restart ambari-server.
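+
+For example, if the extension is packaged as a management pack, it can be installed with the `ambari-server install-mpack` command (the archive path below is illustrative):
+
+```bash
+ambari-server install-mpack --mpack=/tmp/myextension-mpack-1.0.0.0.tar.gz --verbose
+ambari-server restart
+```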
+
+## Extension REST APIs
+
+You can query for extensions by calling REST APIs.
+
+### Get all extensions
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions'
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/",
+ "items" : [
+ {
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+ "Extensions" : {
+ "extension_name" : "EXT"
+ }
+ }
+ ]
+}
+```
+
+### Get extension
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT'
+
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT",
+ "Extensions" : {
+ "extension_name" : "EXT"
+ },
+ "versions" : [
+ {
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0",
+ "Versions" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0"
+ }
+ }
+ ]
+}
+```
+
+### Get extension version
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/extensions/EXT/versions/1.0'
+
+{
+ "href" : "http://<server>:<port>/api/v1/extensions/EXT/versions/1.0/",
+ "Versions" : {
+ "extension-errors" : [],
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "parent_extension_version" : null,
+ "valid" : true
+ }
+}
+```
+
+## Extension Links
+
+An Extension Link is a link between a stack version and an extension version. Once an extension version has been linked to the currently installed stack version, the custom services contained in the extension version may be added to the cluster in the same manner as if they were actually contained in the stack version.
+
+It is only possible to link an extension version to a stack version if the stack version is supported by the extension version. The stack name must be specified in the prerequisites section of the extension's metainfo.xml and the stack version must be greater than or equal to the minimum version number specified.
+
+## Extension Link REST APIs
+
+You can retrieve, create, update and delete extension links by calling REST APIs.
+
+### Create Extension Link
+
+The following curl command will link an extension EXT/1.0 to the stack HDP/2.4:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '{"ExtensionLink": {"stack_name": "HDP", "stack_version": "2.4", "extension_name": "EXT", "extension_version": "1.0"}}' http://<server>:<port>/api/v1/links/
+```
+
+### Get All Extension Links
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/links'
+
+{
+ "href" : "http://<server>:<port>/api/v1/links/",
+ "items" : [
+ {
+ "href" : "http://<server>:<port>:8080/api/v1/links/1",
+ "ExtensionLink" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "link_id" : 1,
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+ }
+ ]
+}
+```
+
+### Get Extension Link
+
+```bash
+curl -u admin:admin -H 'X-Requested-By:ambari' -X GET 'http://<server>:<port>/api/v1/links/1'
+{
+ "href" : "http://<server>:<port>/api/v1/links/1",
+ "ExtensionLink" : {
+ "extension_name" : "EXT",
+ "extension_version" : "1.0",
+ "link_id" : 1,
+ "stack_name" : "HDP",
+ "stack_version" : "2.4"
+ }
+}
+```
+
+### Delete Extension Link
+
+You must specify the ID of the Extension Link to be deleted:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://<server>:<port>/api/v1/links/<link_id>
+```
+
+### Update All Extension Links
+
+This will reread the stacks, extensions and services in order to make sure the state of the stack is up to date in memory:
+
+```bash
+curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT http://<server>:<port>/api/v1/links/
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/faq.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/faq.md
new file mode 100644
index 0000000..d19f838
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/faq.md
@@ -0,0 +1,29 @@
+# FAQ
+
+## **[STACK]/[SERVICE]/metainfo.xml**
+
+**If a component exists in the parent stack and is defined again in the child stack with just a few attributes, do these values just override the parent's values, or is the whole component definition replaced? What about other elements in metainfo.xml -- which rules apply?**
+
+Ambari goes property by property and merges them from parent to child. So if, for example, you remove a category from the child, it will be inherited from the parent; that goes for pretty much all properties.
+
+So the question is how to handle a property that exists in both the parent and the child. Here, most decisions follow the same paradigm: take the child's value instead of the parent's, and inherit every property of the parent that has not been explicitly deleted from the child (via a deletion marker). Some notable cases:
+
+
+* For config-dependencies, we take an all-or-nothing approach: if this property exists in the child, use it and all of its children; else take it from the parent.
+
+* The custom commands are merged based on names, such that the merged definition is a union of the commands, with child commands of the same name overriding those from the parent.
+
+* Cardinality is overwritten by the child, or taken from the parent if the child has not provided one (see the sketch after this answer).
+
+You could look at this method for more details: `org.apache.ambari.server.api.util.StackExtensionHelper#mergeServices`
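+
+As a sketch of the cardinality case (a hypothetical child stack metainfo.xml), only the cardinality is overridden, and all other NAMENODE attributes are inherited from the parent:
+
+```xml
+<component>
+  <name>NAMENODE</name>
+  <cardinality>1-2</cardinality>
+</component>
+```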
+
+For more information see the [Service Inheritance](./custom-services.md#service-inheritance) wiki page.
+
+**If a component is missing in the new definition but is present in the parent, does it get inherited?**
+
+Generally yes.
+
+**Configuration dependencies for the service -- are they overwritten or merged?**
+
+Overwritten.
+
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/hooks.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/hooks.md
new file mode 100644
index 0000000..aa83f2a
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/hooks.md
@@ -0,0 +1,48 @@
+# Hooks
+
+A stack can add hooks that run at the following points during Ambari actions:
+
+* before ANY
+* before and after INSTALL
+* before RESTART
+* before START
+
+As mentioned in [Stack Inheritance](./stack-inheritance.md), the hooks directory if defined in the current stack version replace those from the parent stack version. This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The hooks directory should have 5 sub-directories:
+
+* after-INSTALL
+* before-ANY
+* before-INSTALL
+* before-RESTART
+* before-START
+
+Each hook directory can have 3 sub-directories:
+
+* files
+* scripts
+* templates
+
+The scripts directory is the main directory and must be supplied. It must contain a hook.py file. It may also contain other Python scripts which initialize variables (like a params.py file) or other utility files containing functions used in the hook.py file.
+
+The files directory can contain any non-Python files such as scripts, jar or properties files.
+
+The templates folder can contain any required j2 files that are used to set up properties files.
+
+The hook.py file should extend the Hook class defined in resource_management/libraries/script/hook.py. The naming convention is to name the hook class after the hook folder, such as AfterInstallHook if the class is in the after-INSTALL folder. The hook.py file must define the hook(self, env) function. Here is an example hook:
+
+```python
+from resource_management.libraries.script.hook import Hook
+
+class AfterInstallHook(Hook):
+
+ def hook(self, env):
+ import params
+ env.set_params(params)
+ # Call any functions to set up the stack after install
+
+if __name__ == "__main__":
+ AfterInstallHook().execute()
+```
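+
+A hook commonly pairs with a params.py in the same scripts directory. A minimal sketch (the config key shown is illustrative of the Ambari 2.x command JSON; adjust to what your hook actually needs):
+
+```python
+from resource_management.libraries.script.script import Script
+
+# The command JSON sent by the Ambari server, exposed as a dict
+config = Script.get_config()
+
+# Pull out the values the hook needs; this key is illustrative
+java_home = config['hostLevelParams']['java_home']
+```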
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/how-to-define-stacks-and-services.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/how-to-define-stacks-and-services.md
new file mode 100644
index 0000000..cf07b79
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/how-to-define-stacks-and-services.md
@@ -0,0 +1,1216 @@
+# How-To Define Stacks and Services
+
+Services managed by Ambari are defined in its _stacks_ folder.
+
+To define your own services and stacks to be managed by Ambari, follow the steps below.
+
+There is also an example you can follow on how to [create your custom stack and service](./defining-a-custom-stack-and-services.md).
+
+A stack is a collection of services. Multiple versions of a stack can be defined, each with its own set of services. Stacks in Ambari are defined in the [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folder, which can be found at **/var/lib/ambari-server/resources/stacks** after install.
+
+Services managed by a stack can be defined either in the [ambari-server/src/main/resources/common-services](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services) or [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folders. After install, these folders can be found at _/var/lib/ambari-server/resources/common-services_ or _/var/lib/ambari-server/resources/stacks_ respectively.
+
+> **Question: When do I define a service in the _common-services_ vs. _stacks_ folders?**
+One would define services in the [common-services](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services) folder if there is a possibility of the service being used in multiple stacks. For example, almost all stacks need the HDFS service, so instead of redefining HDFS in each stack, the one defined in common-services is referenced. Likewise, if a service is never going to be shared, it can be defined in the [stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) folder. Basically, services defined in the stacks folder are used by containment, whereas the ones defined in common-services are used by reference.
+
+## Define Service
+
+Shown below is how to define a service in the _common-services_ folder. The same approach can be taken when defining services in the _stacks_ folder, which will be discussed in the _Define Stack_ section.
+
+
+
+Services **MUST** provide the main _metainfo.xml_ file which provides important metadata about the service.
+
+Apart from that, other files can be provided to give more information about the service. More details about these files are provided below.
+
+A service may also inherit from either a previous stack version or common services. For more information see the [Service Inheritance](./stack-inheritance.md) page.
+
+### _metainfo.xml_
+
+In the _metainfo.xml_ service descriptor, one can first define the service and its components.
+
+Complete reference can be found in the [Writing metainfo.xml](./writing-metainfo.md) page.
+
+A good reference implementation is the [HDFS metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L27).
+
+> **Question: Is it possible to define multiple services in the same metainfo.xml?**
+Yes. Though it is possible, defining multiple services in the same service folder is discouraged.
+
+YARN and MapReduce2 are services that are defined together in the [YARN folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0). Its [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml) defines both services.
+
+#### Scripts
+
+With the components defined, we need to provide scripts which can handle the various stages of the service and component's lifecycle.
+
+The scripts necessary to manage the service and its components are specified in the _metainfo.xml_ ([HDFS](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L35)).
+Each of these scripts should extend the [Script](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-common/src/main/python/resource_management/libraries/script/script.py) class, which provides useful methods. Example: [NameNode script](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L68)
+
+
+
+These scripts should be provided in the _package/scripts_ folder, described below.
+
+
+
+Folder | Purpose
+-------|--------
+**package/scripts** | Contains scripts invoked by Ambari. These scripts are loaded into the execution path with the correct environment.<br></br>Example: [HDFS](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts)
+**package/files** | Contains files used by above scripts. Generally these are other scripts (bash, python, etc.) invoked as a separate process.<br></br>Example: [checkWebUI.py](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/files/checkWebUI.py) is run in HDFS service-check to determine if Journal Nodes are available
+**package/templates** | Template files used by above scripts to generate files on managed hosts. These are generally configuration files required by the service to operate.<br></br>Example: [exclude_hosts_list.j2](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/templates/exclude_hosts_list.j2) which is used by scripts to generate _/etc/hadoop/conf/dfs.exclude_
+
+#### Python
+
+Ambari by default supports Python scripts for management of service and components.
+
+Component scripts should extend `resource_management.Script` class and provide methods required for that component's lifecycle.
+
+Taken from the page on [how to create custom stack](./defining-a-custom-stack-and-services.md), the following methods are needed for MASTER, SLAVE and CLIENT components to go through their lifecycle.
+
+```python
+import sys
+from resource_management import Script
+class Master(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Master';
+ def stop(self, env):
+ print 'Stop the Sample Srv Master';
+ def start(self, env):
+ print 'Start the Sample Srv Master';
+ def status(self, env):
+ print 'Status of the Sample Srv Master';
+ def configure(self, env):
+ print 'Configure the Sample Srv Master';
+if __name__ == "__main__":
+ Master().execute()
+```
+
+```python
+import sys
+from resource_management import Script
+class Slave(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Slave';
+ def stop(self, env):
+ print 'Stop the Sample Srv Slave';
+ def start(self, env):
+ print 'Start the Sample Srv Slave';
+ def status(self, env):
+ print 'Status of the Sample Srv Slave';
+ def configure(self, env):
+ print 'Configure the Sample Srv Slave';
+if __name__ == "__main__":
+ Slave().execute()
+```
+
+```python
+import sys
+from resource_management import Script
+class SampleClient(Script):
+ def install(self, env):
+ print 'Install the Sample Srv Client';
+ def configure(self, env):
+ print 'Configure the Sample Srv Client';
+if __name__ == "__main__":
+ SampleClient().execute()
+```
+
+Ambari provides helpful Python libraries below which are useful in writing service scripts. For complete reference on these libraries visit the [Ambari Python Libraries](https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Python+Libraries) page.
+
+* resource_management
+* ambari_commons
+* ambari_simplejson
+
+##### OS Variant Scripts
+
+If the service is supported on multiple OSes that require separate scripts, the base _resource_management.Script_ class can be extended with different _@OsFamilyImpl()_ annotations.
+
+This allows for the separation of only OS specific methods of the component.
+
+Example: [NameNode default script](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L126), [NameNode Windows script](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L346).
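+
+A condensed sketch of the pattern (the MyComponent names are illustrative; see the NameNode links above and below for the real code):
+
+```python
+from ambari_commons import OSConst
+from ambari_commons.os_family_impl import OsFamilyImpl
+from resource_management.libraries.script.script import Script
+
+class MyComponent(Script):
+  def install(self, env):
+    self.install_packages(env)
+
+@OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT)
+class MyComponentDefault(MyComponent):
+  def start(self, env):
+    pass  # Linux-specific start logic goes here
+
+@OsFamilyImpl(os_family=OSConst.WINSRV_FAMILY)
+class MyComponentWindows(MyComponent):
+  def start(self, env):
+    pass  # Windows-specific start logic goes here
+
+if __name__ == "__main__":
+  MyComponent().execute()
+```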
+
+> **Examples**
+NameNode [Start](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py#L93), [Stop](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py#L208).
+
+DataNode [Start and Stop](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py#L68).
+
+HDFS [configurations persistence](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py#L31)
+
+#### Custom Actions
+
+Sometimes services need to perform actions unique to that service which go beyond the default actions provided by Ambari (like _install_, _start_, _stop_, _configure_, etc.).
+
+Services can define such actions and expose them to the user in the UI so that they can be easily invoked.
+
+As an example, we show the _Rebalance HDFS_ custom action implemented by HDFS.
+
+##### Stack Changes
+
+1. [Define the custom command inside the _customCommands_ section](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L49) of the component in _metainfo.xml_.
+
+2. [Implement a method with the same name as the custom command](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L273) in the script referenced from _metainfo.xml_.
+
+   1. If the custom command does not have OS variants, it can be implemented in the same class that extends _resource_management.Script_.
+   2. If there are OS variants, different methods can be implemented in each class annotated by _@OsFamilyImpl(os_family=...)_. [Default rebalancehdfs](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L273), [Windows rebalancehdfs](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py#L354).
+
+This enables the backend to run the script on all managed hosts where the service is installed.
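+
+For illustration, the _metainfo.xml_ entry for such a command looks roughly like the following sketch, modeled on the HDFS example linked above (treat the exact element values as illustrative):
+
+```xml
+<component>
+  <name>NAMENODE</name>
+  ...
+  <customCommands>
+    <customCommand>
+      <name>REBALANCEHDFS</name>
+      <commandScript>
+        <script>scripts/namenode.py</script>
+        <scriptType>PYTHON</scriptType>
+      </commandScript>
+    </customCommand>
+  </customCommands>
+</component>
+```
+
+The referenced script then defines a method with the matching lower-case name (here, `def rebalancehdfs(self, env):`) that performs the actual work.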
+
+##### UI Changes
+
+No UI changes are necessary to see the custom action on the host page.
+
+The action should show up in the host-component's list of actions. Any master-component actions will automatically show up on the service's action menu.
+
+When the action is clicked in the UI, a POST call is made automatically to trigger the script defined above.
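+
+Behind the scenes this is a regular Ambari request. A hand-built call would look roughly like the following sketch (the cluster name, host, and context shown are illustrative):
+
+```
+POST /api/v1/clusters/MyCluster/requests
+{
+  "RequestInfo": {
+    "command": "REBALANCEHDFS",
+    "context": "Rebalance HDFS"
+  },
+  "Requests/resource_filters": [
+    {
+      "service_name": "HDFS",
+      "component_name": "NAMENODE",
+      "hosts": "c6401.ambari.apache.org"
+    }
+  ]
+}
+```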
+
+> **Question: How do I provide my own label and icon for the custom action in UI?**
+In Ambari UI, add your component action to the _App.HostComponentActionMap_ object with custom icon and name. Ex: [REBALANCEHDFS](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-web/app/models/host_component.js#L351).
+
+### Configuration
+
+Configuration files for a service should be placed by default in the [configuration](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration) folder.
+
+If a differently named folder has to be used, the [`<configuration-dir>`](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml#L249) element can be used in _metainfo.xml_ to point to that folder.
+
+The important sections of _metainfo.xml_ with regard to configurations are:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <displayName>HDFS</displayName>
+ <comment>Apache Hadoop Distributed File System</comment>
+ <version>2.1.0.2.0</version>
+ <components>
+ ...
+ <component>
+ <name>HDFS_CLIENT</name>
+ ...
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>hdfs-site.xml</fileName>
+ <dictionaryName>hdfs-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>xml</type>
+ <fileName>core-site.xml</fileName>
+ <dictionaryName>core-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>log4j.properties</fileName>
+ <dictionaryName>hdfs-log4j,yarn-log4j</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>hadoop-env.sh</fileName>
+ <dictionaryName>hadoop-env</dictionaryName>
+ </configFile>
+ </configFiles>
+ ...
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ </configuration-dependencies>
+ </component>
+ ...
+ </components>
+
+ <configuration-dir>configuration</configuration-dir>
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hdfs-site</config-type>
+ <config-type>hadoop-env</config-type>
+ <config-type>hadoop-policy</config-type>
+ <config-type>hdfs-log4j</config-type>
+ <config-type>ranger-hdfs-plugin-properties</config-type>
+ <config-type>ssl-client</config-type>
+ <config-type>ssl-server</config-type>
+ <config-type>ranger-hdfs-audit</config-type>
+ <config-type>ranger-hdfs-policymgr-ssl</config-type>
+ <config-type>ranger-hdfs-security</config-type>
+ <config-type>ams-ssl-client</config-type>
+ </configuration-dependencies>
+ </service>
+ </services>
+</metainfo>
+```
+
+* **config-type** - A string representing a group of configurations. Example: _core-site, hdfs-site, yarn-site_, etc. When configurations are saved in Ambari, they are persisted within a version of the config-type, which is immutable. If you change and save HDFS core-site configs 4 times, you will have 4 versions of the config-type core-site. Also, when a service's configs are saved, only the changed config-types are updated.
+
+* **configFiles** - Lists the config files handled by the enclosing component.
+* **configFile** - Represents one config file of a certain type.
+
+  - **type** - Type of the file, which determines how its contents are generated:
+
+    + **xml** - XML file generated in a Hadoop-friendly format. Ex: [hdfs-site.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml)
+    + **env** - Generally used for scripts, where the content value is used as a template. The template has config-tags whose values are populated at runtime during file generation. Ex: [hadoop-env.sh](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml)
+    + **properties** - Generates property files where entries are in key=value format. Ex: [falcon-runtime.properties](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-runtime.properties.xml)
+  - **dictionaryName** - Name of the config-type under which the key/values of this config file are stored.
+* **configuration-dependencies** - Lists the config-types on which this component or service depends. One implication of this dependency is that whenever the config-type is updated, Ambari automatically marks the component or service as requiring restart. From the code section above, whenever _core-site_ is updated, both the HDFS service and the HDFS_CLIENT component will be marked as requiring restart.
+
+* **configuration-dir** - Directory containing the files listed in _configFiles_. Optional. The default value is _configuration_.
+
+#### Adding new configs in a config-type
+
+There are a number of different parameters that can be specified for a config item when it is added to a config-type. These are covered [here](https://cwiki.apache.org/confluence/display/AMBARI/Configuration+support+in+Ambari).
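+
+For reference, an individual property is declared in the config-type's XML file roughly as follows (the property shown is just an example):
+
+```xml
+<property>
+  <name>dfs.namenode.handler.count</name>
+  <value>100</value>
+  <description>Number of RPC server threads the NameNode uses.</description>
+</property>
+```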
+
+#### UI - Categories
+
+Configurations defined above show up in the service's _Configs_ page.
+
+To customize categories and ordering of configurations in UI, the following files have to be updated.
+
+**Create Category** - Update the [ambari-web/app/models/stack_service.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/models/stack_service.js#L226) file to add your own service, along with your new categories.
+
+**Use Category** - To place configs inside a defined category, and to specify the order in which configs are placed, add configs to the [ambari-web/app/data/HDP2/site_properties.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2/site_properties.js) file. In this file, one can specify the category to use and the index where a config should be placed. The stack folders in [ambari-web/app/data](https://github.com/apache/ambari/tree/trunk/ambari-web/app/data) are hierarchical and inherit from previous versions. The mapping of configurations into sections is defined here. Example: [Hive Categories](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2.2/hive_properties.js), [Tez Categories](https://github.com/apache/ambari/blob/trunk/ambari-web/app/data/HDP2.2/tez_properties.js).
+
+#### UI - Enhanced Configs
+
+The _Enhanced Configs_ feature makes it possible for service providers to customize their service's configs to a great degree and determine which configs are prominently shown to the user, without making any UI code changes. Customization includes providing a service-friendly layout, better controls (sliders, combos, lists, toggles, spinners, etc.), better validation (minimum, maximum, enums), automatic unit conversion (MB, GB, seconds, milliseconds, etc.), configuration dependencies and improved dynamic recommendations of default values.
+
+A service provider can accomplish all of the above just by changing their service definition in the _stacks/_ folder.
+
+Read more on the [Enhanced Configs](https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Configs) page.
+
+### Alerts
+
+Each service is capable of defining which alerts Ambari should track by providing an [alerts.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/alerts.json) file.
+
+Read more about the Ambari Alerts framework on the [Alerts wiki page](https://cwiki.apache.org/confluence/display/AMBARI/Alerts) and about the alerts.json format in the [Alerts definition documentation](https://github.com/apache/ambari/blob/branch-2.1/ambari-server/docs/api/v1/alert-definitions.md).
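+
+For orientation, an _alerts.json_ file maps service and component names to alert definitions. The PORT-type entry below is a hedged sketch (the service, component, and property names are made up, not copied from the HDFS file):
+
+```json
+{
+  "MYSERVICE": {
+    "MYSERVICE_MASTER": [
+      {
+        "name": "myservice_master_process",
+        "label": "MyService Master Process",
+        "interval": 1,
+        "scope": "ANY",
+        "enabled": true,
+        "source": {
+          "type": "PORT",
+          "uri": "{{myservice-site/master.port}}",
+          "default_port": 8090
+        }
+      }
+    ]
+  }
+}
+```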
+
+### Kerberos
+
+Ambari is capable of enabling and disabling Kerberos for a cluster. To inform Ambari of the identities and configurations to be used for the service and its components, each service can provide a _kerberos.json_ file.
+
+Read more about Kerberos support in the [Automated Kerberization](https://cwiki.apache.org/confluence/display/AMBARI/Automated+Kerberizaton) wiki page and about the Kerberos descriptor in the [Kerberos Descriptor documentation](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/security/kerberos/kerberos_descriptor.md).
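+
+In outline, a service's _kerberos.json_ declares the identities (principal plus keytab) needed by the service and its components. The sketch below is illustrative only (the service, component, and configuration names are made up; consult the descriptor documentation for the authoritative schema):
+
+```json
+{
+  "services": [
+    {
+      "name": "MYSERVICE",
+      "identities": [
+        {
+          "name": "myservice_master",
+          "principal": {
+            "value": "myservice/_HOST@${realm}",
+            "type": "service",
+            "configuration": "myservice-site/myservice.master.kerberos.principal"
+          },
+          "keytab": {
+            "file": "${keytab_dir}/myservice.service.keytab",
+            "configuration": "myservice-site/myservice.master.keytab.file"
+          }
+        }
+      ],
+      "components": [
+        { "name": "MYSERVICE_MASTER" }
+      ]
+    }
+  ]
+}
+```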
+
+### Metrics
+
+Ambari provides the [Ambari Metrics System ("AMS")](https://cwiki.apache.org/confluence/display/AMBARI/Metrics) service for collecting, aggregating and serving Hadoop and system metrics in Ambari-managed clusters.
+
+Each service can define which metrics AMS should collect and provide by defining a [metrics.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metrics.json) file.
+
+You can read about the metrics.json file format in the [Stack Defined Metrics](https://cwiki.apache.org/confluence/display/AMBARI/Stack+Defined+Metrics) page.
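+
+At a high level, _metrics.json_ maps each component to the metrics it exposes. The fragment below is a hedged sketch (the component and metric names are made up; see the Stack Defined Metrics page for the authoritative format):
+
+```json
+{
+  "MYSERVICE_MASTER": {
+    "Component": [
+      {
+        "type": "ganglia",
+        "metrics": {
+          "default": {
+            "metrics/myservice/requests_total": {
+              "metric": "myservice.requests_total",
+              "pointInTime": true,
+              "temporal": true
+            }
+          }
+        }
+      }
+    ]
+  }
+}
+```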
+
+### Quick Links
+
+A service can add a list of quick links to the Ambari web UI by adding a text file that follows a predefined JSON format. The Ambari server parses the quick links JSON file and provides its content to the UI, so that the Ambari web UI can calculate quick link URLs based on that information and populate the quick links drop-down list accordingly.
+
+Read more about quick links JSON file design in the [Quick Links](../quick-links.md) page.
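+
+In outline, a quick links file declares a protocol and a list of link templates whose host and port are resolved from configuration. The following is a rough, illustrative sketch (the property names and port are examples, not a canonical file):
+
+```json
+{
+  "name": "default",
+  "description": "default quick links configuration",
+  "configuration": {
+    "protocol": {
+      "type": "HTTP_ONLY"
+    },
+    "links": [
+      {
+        "name": "myservice_ui",
+        "label": "MyService UI",
+        "url": "%@://%@:%@",
+        "port": {
+          "http_property": "myservice.ui.port",
+          "http_default_port": "8090",
+          "site": "myservice-site"
+        }
+      }
+    ]
+  }
+}
+```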
+
+### Widgets
+
+Each service can define which widgets and heatmaps show up by default on the service summary page by defining a [widgets.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/widgets.json) file.
+
+You can read more about the widgets descriptor in the [Enhanced Service Dashboard](https://cwiki.apache.org/confluence/display/AMBARI/Enhanced+Service+Dashboard) page.
+
+### Role Command Order
+
+From Ambari 2.2, each service can define its own role command order by including a [role_command_order.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json) file in its service folder. The service should only specify the relationships of its own components to other components. In other words, if a service only includes COMP_X, it should only list dependencies related to COMP_X. For example, if COMP_X should start only after the NameNode starts, and the NameNode should wait for COMP_X to stop before stopping itself, the service's role command order would include the following:
+
+**Example service role_command_order.json**
+```json
+{
+  "COMP_X-START": ["NAMENODE-START"],
+  "NAMENODE-STOP": ["COMP_X-STOP"]
+}
+```
+
+The entries in the service's role command order will be merged with the role command order defined in the stack. For example, since the stack already has a dependency for NAMENODE-STOP, COMP_X-STOP from the example above would be merged into the existing NAMENODE-STOP dependencies, while the COMP_X-START dependency on NAMENODE-START would be added as a new entry.
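+
+To make the merge concrete: if the stack's role command order already contained, say, `"NAMENODE-STOP": ["HBASE_MASTER-STOP"]` (a hypothetical stack entry), the effective merged order would be:
+
+```json
+{
+  "COMP_X-START": ["NAMENODE-START"],
+  "NAMENODE-STOP": ["HBASE_MASTER-STOP", "COMP_X-STOP"]
+}
+```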
+
+For more details on role command order, see the [Role Command Order](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-RoleCommandOrder) section below.
+
+### Service Advisor
+
+From Ambari 2.4, each service can choose to define its own service advisor rather than define the details of its configuration and layout in the stack advisor. This is particularly useful for custom services which are not defined in the stack. Ambari provides the _Service Advisor_ capability where a service can write a Python script named _service-advisor.py_ in their service folder. This folder can be in the stack's services directory where the service is defined or can be inherited from the service definition in common-services or elsewhere. Example: [common-services/HAWQ/2.0.0](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0).
+
+Unlike the stack-advisor scripts, the service-advisor scripts do not automatically extend the parent service's service-advisor scripts. The service-advisor script needs to explicitly extend its parent service's service-advisor script. The following code sample shows how you would refer to a parent's service_advisor.py. In this case it is extending the root service_advisor.py file in the resources/stacks directory.
+
+**Sample service-advisor.py file inheritance**
+```python
+import imp
+import os
+import traceback
+
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+STACKS_DIR = os.path.join(SCRIPT_DIR, '../../../stacks/')
+PARENT_FILE = os.path.join(STACKS_DIR, 'service_advisor.py')
+
+try:
+  with open(PARENT_FILE, 'rb') as fp:
+    service_advisor = imp.load_module('service_advisor', fp, PARENT_FILE, ('.py', 'rb', imp.PY_SOURCE))
+except Exception:
+  traceback.print_exc()
+  print("Failed to load parent")
+
+class HAWQ200ServiceAdvisor(service_advisor.ServiceAdvisor):
+  pass  # service-specific recommendations and validations go here
+```
+
+Like the stack advisors, service advisors provide information on four important aspects:
+
+1. Recommend layout of the service on cluster
+2. Recommend service configurations
+3. Validate layout of the service on cluster
+4. Validate service configurations
+
+By providing the service-advisor.py file, one can dynamically control each of the above for the service.
+
+The main interface for the service-advisor scripts contains documentation on how each of the above is called and what data is provided:
+
+```python
+class ServiceAdvisor(DefaultStackAdvisor):
+  """
+  Abstract class implemented by all service advisors.
+  """
+
+  def colocateService(self, hostsComponentsMap, serviceComponents):
+    """
+    If any components of the service should be colocated with other services,
+    this is where you should set up that layout. Example:
+
+    # colocate HAWQSEGMENT with DATANODE, if no hosts have been allocated for HAWQSEGMENT
+    hawqSegment = [component for component in serviceComponents if component["StackServiceComponents"]["component_name"] == "HAWQSEGMENT"][0]
+    if not self.isComponentHostsPopulated(hawqSegment):
+      for hostName in hostsComponentsMap.keys():
+        hostComponents = hostsComponentsMap[hostName]
+        if {"name": "DATANODE"} in hostComponents and {"name": "HAWQSEGMENT"} not in hostComponents:
+          hostsComponentsMap[hostName].append( { "name": "HAWQSEGMENT" } )
+        if {"name": "DATANODE"} not in hostComponents and {"name": "HAWQSEGMENT"} in hostComponents:
+          hostComponents.remove({"name": "HAWQSEGMENT"})
+    """
+    pass
+
+  def getServiceConfigurationRecommendations(self, configurations, clusterSummary, services, hosts):
+    """
+    Any configuration recommendations for the service should be defined in this function.
+    This should be similar to any of the recommendXXXXConfigurations functions in the stack_advisor.py
+    such as recommendYARNConfigurations().
+    """
+    pass
+
+  def getServiceComponentLayoutValidations(self, services, hosts):
+    """
+    Returns an array of Validation objects about issues with the hostnames to which components are assigned.
+    This should detect validation issues which are different than those the stack_advisor.py detects.
+    The default validations are in stack_advisor.py getComponentLayoutValidations function.
+    """
+    return []
+
+  def getServiceConfigurationsValidationItems(self, configurations, recommendedDefaults, services, hosts):
+    """
+    Any configuration validations for the service should be defined in this function.
+    This should be similar to any of the validateXXXXConfigurations functions in the stack_advisor.py
+    such as validateHDFSConfigurations.
+    """
+    return []
+```
+
+#### Examples
+
+* [Service Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/service_advisor.py#L51)
+* [HAWQ 2.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0/service_advisor.py)
+* [PXF 3.0.0 Service Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/PXF/3.0.0/service_advisor.py)
+
+### Service Upgrade
+
+From Ambari 2.4, each service can now define its upgrade within its service definition. This is particularly useful for custom services which no longer need to modify the stack's upgrade-packs in order to integrate themselves into the [cluster upgrade](https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-StackUpgrades).
+
+
+Each service can define _upgrade-packs_, which are XML files describing the upgrade process of that particular service and how the upgrade pack relates to the overall stack upgrade-packs. These _upgrade-pack_ XML files are placed in the service's _upgrades/_ folder in separate sub-folders specific to the stack-version they are meant to extend. Some examples of this can be seen in the testing code.
+
+#### Examples
+
+* [Upgrades folder](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/)
+* [Upgrade-pack XML](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml)
+
+Each upgrade-pack that the service defines should match the file name of an upgrade-pack defined by a particular stack version. For example, in the testing code, HDP 2.2.0 has an [upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.2.0/upgrades/upgrade_test_15388.xml) upgrade-pack. The HDFS service defines an extension to that upgrade-pack, [HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/stacks/HDP/2.0.5/services/HDFS/upgrades/HDP/2.2.0/upgrade_test_15388.xml). In this case the service's upgrade-pack is defined in the HDP/2.0.5 stack. It is an extension to HDP/2.2.0 because it is placed in the upgrades/HDP/2.2.0 directory, and its file name, upgrade_test_15388.xml, matches the name of the upgrade-pack in HDP/2.2.0/upgrades.
+
+The file format for the service is much the same as that of the stack. The target, target-stack and type attributes should all be the same as the stack's upgrade-pack. The service is able to add its own prerequisite checks.
+
+**General Attributes and Prerequisite Checks**
+```xml
+<upgrade xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
+ <target>2.4.*</target>
+ <target-stack>HDP-2.4.0</target-stack>
+ <type>ROLLING</type>
+ <prerequisite-checks>
+ <check>org.apache.ambari.server.checks.FooCheck</check>
+ </prerequisite-checks>
+```
+
+The order section of the upgrade-pack consists of group elements, just like the stack's upgrade-pack. The key difference is defining how these groups relate to groups in the stack's upgrade-pack or in other service upgrade-packs. In the first example we reference the PRE_CLUSTER group and add a new execute-stage for the service FOO. The `<add-after-group-entry>` tag specifies that the entry is added after the execute-stage for HDFS.
+
+```xml
+<order>
+ <group xsi:type="cluster" name="PRE_CLUSTER" title="Pre {{direction.text.proper}}">
+ <add-after-group-entry>HDFS</add-after-group-entry>
+ <execute-stage service="FOO" component="BAR" title="Backup FOO">
+ <task xsi:type="manual">
+ <message>Back FOO up.</message>
+ </task>
+ </execute-stage>
+ </group>
+</order>
+```
+
+The same syntax can be used to order other sections, such as service-check priorities and the services within a group.
+
+```xml
+<group name="SERVICE_CHECK1" title="All Service Checks" xsi:type="service-check">
+ <add-after-group-entry>ZOOKEEPER</add-after-group-entry>
+ <priority>
+ <service>HBASE</service>
+ </priority>
+</group>
+
+<group name="CORE_MASTER" title="Core Masters">
+ <add-after-group-entry>YARN</add-after-group-entry>
+ <service name="HBASE">
+ <component>HBASE_MASTER</component>
+ </service>
+</group>
+```
+
+It is also possible to add new groups and order them after other groups in the stack's upgrade-packs. In the following example, we are adding the FOO group after the HIVE group using the add-after-group tag.
+
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO">
+ <component>BAR</component>
+ </service>
+</group>
+```
+
+You could also include both the add-after-group and the add-after-group-entry tags in the same group. This will create a new group if it doesn't already exist and will order it after the add-after-group's group name. The add-after-group-entry will determine the internal ordering of that group's services, priorities or execute stages.
+
+```xml
+<group name="FOO" title="Foo">
+ <add-after-group>HIVE</add-after-group>
+ <add-after-group-entry>FOO</add-after-group-entry>
+ <skippable>true</skippable>
+ <allow-retry>false</allow-retry>
+ <service name="FOO2">
+ <component>BAR2</component>
+ </service>
+</group>
+```
+
+The processing section of the upgrade-pack remains the same as what it would be in the stack's upgrade-pack.
+
+```xml
+ <processing>
+ <service name="FOO">
+ <component name="BAR">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ <component name="BAR2">
+ <upgrade>
+ <task xsi:type="restart-task" />
+ </upgrade>
+ </component>
+ </service>
+ </processing>
+```
+
+## Define Stack
+
+A stack is a versioned collection of services. Each stack is defined as a folder in the [ambari-server/src/main/resources/stacks](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks) source tree. Once installed, these stack definitions are available on the ambari-server machine at _/var/lib/ambari-server/resources/stacks_.
+
+Each stack folder contains one sub-folder per version of the stack. Some of these stack-versions are active while some are not. Each stack-version includes services which are either referenced from _common-services_, or defined inside the stack-version's _services_ folder.
+
+
+
+Example: [HDP stack](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP). [HDP-2.4 stack version](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.4).
+
+### Stack-Version Descriptor
+
+Each stack-version should provide a _metainfo.xml_ (Example: [HDP-2.3](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/metainfo.xml), [HDP-2.4](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.4/metainfo.xml)) descriptor file which describes the following about this stack-version:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.3</extends>
+ <minJdk>1.7</minJdk>
+ <maxJdk>1.8</maxJdk>
+</metainfo>
+```
+
+* **versions/active** - Whether this stack-version is still available for install. If not available, this version will not show up in the UI during install.
+
+* **extends** - The stack-version in this stack that is being extended. Extended stack-versions inherit services along with almost all aspects of the parent stack-version.
+
+* **minJdk** - Minimum JDK version with which this stack-version is supported. Users are warned during the installer wizard if the JDK used by Ambari is lower than this version.
+
+* **maxJdk** - Maximum JDK version with which this stack-version is supported. Users are warned during the installer wizard if the JDK used by Ambari is greater than this version.
+
+### Stack Properties
+
+The stack must contain or inherit a properties directory which contains two files: [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) and [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json). This [directory](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) is new in Ambari 2.4.
+
+The stack_features.json contains a list of features that are included in Ambari and allows the stack to specify which versions of the stack include those features. The list of features is determined by the particular Ambari release. The reference list for a particular Ambari version can be found in [HDP/2.0.6/properties/stack_features.json](https://github.com/apache/ambari/blob/branch-2.4/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) in the branch for that Ambari release. Each feature has a name and description, and the stack can provide the minimum and maximum versions where that feature is supported.
+
+```json
+{
+  "stack_features": [
+    {
+      "name": "snappy",
+      "description": "Snappy compressor/decompressor support",
+      "min_version": "2.0.0.0",
+      "max_version": "2.2.0.0"
+    },
+    ...
+  ]
+}
+```
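+
+Service scripts can then gate behavior on these features. A minimal sketch, assuming the standard `resource_management` helpers (the version string here is illustrative and would normally be derived from the cluster's stack version):
+
+```python
+from resource_management.libraries.functions import StackFeature
+from resource_management.libraries.functions.stack_features import check_stack_feature
+
+stack_version = "2.1.0.0"  # hypothetical; normally read from the command's configurations
+
+if check_stack_feature(StackFeature.SNAPPY, stack_version):
+  # Snappy is supported between min_version and max_version, per stack_features.json.
+  pass
+```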
+
+The stack_tools.json includes the names and locations of the stack_selector and conf_selector tools.
+
+```json
+{
+  "stack_selector": ["hdp-select", "/usr/bin/hdp-select", "hdp-select"],
+  "conf_selector": ["conf-select", "/usr/bin/conf-select", "conf-select"]
+}
+```
+
+
+Any custom stack must include these two JSON files. For further information see the [Stack Properties](./stack-properties.md) wiki page.
+
+### Services
+
+Each stack-version includes services which are either referenced from _common-services_, or defined inside the stack-version's _services_ folder.
+
+Services are defined in _common-services_ if they will be shared across multiple stacks. If they will never be shared, then they can be defined inside the stack-version.
+
+#### Reference _common-services_
+
+To reference a service from common-services, the service descriptor file should use the `<extends>` element. (Example: [HDFS in HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/metainfo.xml))
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HDFS</name>
+ <extends>common-services/HDFS/2.1.0.2.0</extends>
+ </service>
+ </services>
+</metainfo>
+```
+
+#### Define Service
+
+In exactly the same format as services defined in _common-services_, a new service can be defined inside the _services_ folder.
+
+Examples:
+
+* [HDFS in BIGTOP-0.8](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/BIGTOP/0.8/services/HDFS)
+* [GlusterFS in HDP-2.3.GlusterFS](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS)
+
+#### Extend Service
+
+When a stack-version extends another stack-version, it inherits all details of the parent's services. It is also free to override and remove any portion of the inherited service definition.
+
+Examples:
+
+* HDP-2.3 / HDFS - [Adding NFS_GATEWAY component, updating service version and OS specific packages](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/HDFS/metainfo.xml)
+* HDP-2.2 / Storm - [Deleting STORM_REST_API component, updating service version and OS specific packages](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.2/services/STORM/metainfo.xml)
+* HDP-2.3 / YARN - [Deleting YARN node-label configuration from capacity-scheduler.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/YARN/configuration/capacity-scheduler.xml)
+* HDP-2.3 / Kafka - [Add Kafka Broker Process alert](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/KAFKA/alerts.json)
+
+### Role Command Order
+
+_**Role**_ is another name for **Component** (Ex: NAMENODE, DATANODE, RESOURCEMANAGER, HBASE_MASTER, etc.).
+
+As the name implies, it is possible to tell Ambari about the order in which commands should be run for the components defined in your stack.
+
+For example: "_ZooKeeper Server_ should be started before starting _NameNode_". Or "_HBase Master_ should be started only after _NameNode_ and _DataNodes_ are started".
+
+This can be specified by including the [role_command_order.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json) file in the stack-version folder.
+
+#### Format
+
+The file is specified in JSON format and contains a JSON object whose top-level keys are either section names or comments. Ex: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json).
+
+Inside each section object, the key describes the dependent component-action, and the value lists the component-actions which should be done before it.
+
+```json
+{
+  "_comment": "Section 1 comment",
+  "section_name_1": {
+    "_comment": "Section containing role command orders",
+    "-": ["-", "-"],
+    "-": ["-"],
+    ...
+  },
+  "_comment": "Next section comment",
+  ...
+}
+```
+
+#### Sections
+
+Ambari uses only the following sections:
+
+Section Name | When Used
+-------------|---------------
+general_deps | Command orders are applied in all situations
+optional_glusterfs | Command orders are applied when the cluster has an instance of the GLUSTERFS service
+optional_no_glusterfs | Command orders are applied when the cluster does not have an instance of the GLUSTERFS service
+namenode_optional_ha | Command orders are applied when the HDFS service is installed and a JOURNALNODE component exists (HDFS HA is enabled)
+resourcemanager_optional_ha | Command orders are applied when the YARN service is installed and multiple RESOURCEMANAGER host-components exist (YARN HA is enabled)
+
+#### Commands
+
+The commands currently supported by Ambari are:
+
+* INSTALL
+* UNINSTALL
+* START
+* RESTART
+* STOP
+* EXECUTE
+* ABORT
+* UPGRADE
+* SERVICE_CHECK
+* CUSTOM_COMMAND
+* ACTIONEXECUTE
+
+#### Examples
+
+Role Command Order | Explanation
+-------------------|---------------------
+"HIVE_METASTORE-START": ["MYSQL_SERVER-START", "NAMENODE-START"] | Start MySQL and NameNode components before starting Hive Metastore
+"MAPREDUCE_SERVICE_CHECK-SERVICE_CHECK": ["NODEMANAGER-START", "RESOURCEMANAGER-START"], | MapReduce service check needs ResourceManager and NodeManagers started
+"ZOOKEEPER_SERVER-STOP" : ["HBASE_MASTER-STOP", "HBASE_REGIONSERVER-STOP", "METRICS_COLLECTOR-STOP"], | Before stopping ZooKeeper servers, make sure HBase Masters, HBase RegionServers and AMS Metrics Collector are stopped.
+
+### Repositories
+
+Each stack-version can provide the location of package repositories to use, by providing a _repos/repoinfo.xml_ (Ex: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/repos/repoinfo.xml))
+The _repoinfo.xml_ file contains repositories grouped by operating systems. Each OS specifies a list of repositories that are shown to the user when the stack-version is selected for install.
+
+These repositories are used in conjunction with the [_packages_ defined in a service's metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L161) to install appropriate bits on the system.
+
+```xml
+<reposinfo>
+ <os family="redhat6">
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.0.6.1</baseurl>
+ <repoid>HDP-2.0.6</repoid>
+ <reponame>HDP</reponame>
+ </repo>
+ <repo>
+ <baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.17/repos/centos6</baseurl>
+ <repoid>HDP-UTILS-1.1.0.17</repoid>
+ <reponame>HDP-UTILS</reponame>
+ </repo>
+ </os>
+</reposinfo>
+```
+
+**baseurl** - URL of the RPM repository where the provided _repoid_ can be found
+**repoid** - ID of the repo hosted at _baseurl_
+**reponame** - Display name for the repo being used
+
+#### Latest Builds
+
+Though the repository base URL is capable of providing updates to a particular repo, it has to be defined at build time. This could become an issue later if the repository changes location or update builds are hosted at a different site.
+
+For such scenarios, a stack-version can provide the location of a JSON file which can provide details of other repo URLs to use.
+
+Example: [HDP-2.3's repoinfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/repos/repoinfo.xml) references such a JSON file, which then points to alternate repository URLs where the latest builds can be found:
+
+```json
+{
+ ...
+
+ "HDP-2.3":{
+ "latest":{
+ "centos6":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos6/2.x/BUILDS/2.3.6.0-3586/",
+ "centos7":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/2.x/BUILDS/2.3.6.0-3586/",
+ "debian6":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/debian6/2.x/BUILDS/2.3.6.0-3586/",
+ "debian7":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/debian7/2.x/BUILDS/2.3.6.0-3586/",
+ "suse11":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/suse11sp3/2.x/BUILDS/2.3.6.0-3586/",
+ "ubuntu12":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu12/2.x/BUILDS/2.3.6.0-3586/",
+ "ubuntu14":"http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu14/2.x/BUILDS/2.3.6.0-3586/"
+ }
+ },
+ ...
+
+}
+```
+
+### Hooks
+
+A stack-version could have very basic and common instructions that need to be run before or after certain Ambari commands, across all services.
+
+Instead of duplicating this code across all service scripts and asking users to worry about them, Ambari provides the _Hooks_ ability where common before and after code can be pulled away into the _hooks_ folder. (Ex: [HDP-2.0.6](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks))
+
+
+
+#### Command Sub-Folders
+
+The general naming pattern for hooks sub-folders is `"<before|after>-<ANY|<CommandName>>"`.
+What this means is that the scripts/hook.py file under the sub-folder is run either before or after the command.
+
+**Examples:**
+
+Sub-Folder | Purpose | Example
+-----------|---------|------------
+before-START | Hook script called before START command is run on any component of the stack-version. | [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py#L30)<br></br>sets up Hadoop log and pid directories<br></br>creates Java Home symlink<br></br>Creates /etc/hadoop/conf/topology_script.py<br></br>etc.
+before-INSTALL | Hook script called before the INSTALL command is run on any component of the stack-version | [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py#L33)<br></br>Creates repo files in /etc/yum.repos.d<br></br>Installs basic packages like curl, unzip, etc.
+
+Based on the commands currently supported by Ambari, the following sub-folders can be created as necessary:
+
+Prefix | Command
+-------|---------------------
+before | INSTALL, UNINSTALL, START, RESTART, STOP
+after | EXECUTE, ABORT, UPGRADE, SERVICE_CHECK, `<custom_command>` - custom commands specified by the user, like the DECOMMISSION or REBALANCEHDFS commands specified by HDFS
+
+The _scripts/hook.py_ script should import the [resource_management.libraries.script.hook](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/script/hook.py) module and extend the Hook class:
+
+```python
+from resource_management.libraries.script.hook import Hook
+
+class CustomHook(Hook):
+  def hook(self, env):
+    # Do custom work
+    pass
+
+if __name__ == "__main__":
+  CustomHook().execute()
+```
+
+### Configurations
+
+Though most configurations are set at the service level, there can be configurations which apply across all services to indicate the state of the cluster installed with this stack.
+
+For example, things like ["is security enabled?"](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml#L25), ["what user runs smoke tests?"](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml#L46) etc.
+
+Such configurations can be defined in the [configuration folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration) of the stack. They are available for access just like the service-level configs.
+
+#### Stack Advisor
+
+With each stack containing multiple complex services, it becomes necessary to dynamically determine how the services are laid out on the cluster, and for determining values of certain configurations.
+
+Ambari provides the _Stack Advisor_ capability where stacks can write a Python script named _stack-advisor.py_ in the _services/_ folder. Example: [HDP-2.0.6](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py).
+
+Stack-advisor scripts automatically extend the parent stack-version's stack-advisor scripts. This allows newer stack-versions to change behavior without affecting earlier behavior.
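+
+A hedged sketch of that extension pattern, following the class-naming convention used by the HDP scripts (the TEZ override shown is illustrative):
+
+```python
+class HDP23StackAdvisor(HDP22StackAdvisor):
+  def getServiceConfigurationRecommenderDict(self):
+    # Start from the parent's recommenders and override only what changed in this stack-version.
+    parentRecommendConfDict = super(HDP23StackAdvisor, self).getServiceConfigurationRecommenderDict()
+    childRecommendConfDict = {
+      "TEZ": self.recommendTezConfigurations  # hypothetical service-specific override
+    }
+    parentRecommendConfDict.update(childRecommendConfDict)
+    return parentRecommendConfDict
+```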
+
+Stack advisors provide information on four important aspects:
+
+1. Recommend layout of services on cluster
+2. Recommend service configurations
+3. Validate layout of services on cluster
+4. Validate service configurations
+
+By providing the stack-advisor.py file, one can control dynamically each of the above.
+
+The main interface for the stack-advisor scripts contains documentation on how each of the above is called and what data is provided:
+
+```python
+class StackAdvisor(object):
+  """
+ Abstract class implemented by all stack advisors. Stack advisors advise on stack specific questions.
+
+ Currently stack advisors provide following abilities:
+ - Recommend where services should be installed in cluster
+ - Recommend configurations based on host hardware
+ - Validate user selection of where services are installed on cluster
+ - Validate user configuration values
+
+ Each of the above methods is passed in parameters about services and hosts involved as described below.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services selected by the user.
+
+ Example: {
+ "services": [
+ {
+ "StackServices": {
+ "service_name" : "HDFS",
+ "service_version" : "2.6.0.2.2",
+ },
+ "components" : [
+ {
+ "StackServiceComponents" : {
+ "cardinality" : "1+",
+ "component_category" : "SLAVE",
+ "component_name" : "DATANODE",
+ "display_name" : "DataNode",
+ "service_name" : "HDFS",
+ "hostnames" : []
+ },
+ "dependencies" : []
+ }, {
+ "StackServiceComponents" : {
+ "cardinality" : "1-2",
+ "component_category" : "MASTER",
+ "component_name" : "NAMENODE",
+ "display_name" : "NameNode",
+ "service_name" : "HDFS",
+ "hostnames" : []
+ },
+ "dependencies" : []
+ },
+ ...
+
+ ]
+ },
+ ...
+
+ ]
+ }
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ Example: {
+ "items": [
+ {
+ Hosts: {
+ "host_name": "c6401.ambari.apache.org",
+ "public_host_name" : "c6401.ambari.apache.org",
+ "ip": "192.168.1.101",
+ "cpu_count" : 1,
+ "disk_info" : [
+ {
+ "available" : "4564632",
+ "used" : "5230344",
+ "percent" : "54%",
+ "size" : "10319160",
+ "type" : "ext4",
+ "mountpoint" : "/"
+ },
+ {
+ "available" : "1832436",
+ "used" : "0",
+ "percent" : "0%",
+ "size" : "1832436",
+ "type" : "tmpfs",
+ "mountpoint" : "/dev/shm"
+ }
+ ],
+ "host_state" : "HEALTHY",
+ "os_arch" : "x86_64",
+ "os_type" : "centos6",
+ "total_mem" : 3664872
+ }
+ },
+ ...
+
+ ]
+ }
+
+ Each of the methods can either return recommendations or validations.
+
+ Recommendations are made in a Ambari Blueprints friendly format.
+
+ Validations are an array of validation objects.
+
+"""
+
+  def recommendComponentLayout(self, services, hosts):
+    """
+ Returns recommendation of which hosts various service components should be installed on.
+
+ This function takes as input all details about services being installed, and hosts
+ they are being installed into, to generate hostname assignments to various components
+ of each service.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Layout recommendation of service components on cluster hosts in Ambari Blueprints friendly format.
+
+ Example: {
+ "resources" : [
+ {
+ "hosts" : [
+ "c6402.ambari.apache.org",
+ "c6401.ambari.apache.org"
+ ],
+ "services" : [
+ "HDFS"
+ ],
+ "recommendations" : {
+ "blueprint" : {
+ "host_groups" : [
+ {
+ "name" : "host-group-2",
+ "components" : [
+ { "name" : "JOURNALNODE" },
+ { "name" : "ZKFC" },
+ { "name" : "DATANODE" },
+ { "name" : "SECONDARY_NAMENODE" }
+ ]
+ },
+ {
+ "name" : "host-group-1",
+                "components" : [
+ { "name" : "HDFS_CLIENT" },
+ { "name" : "NAMENODE" },
+ { "name" : "JOURNALNODE" },
+ { "name" : "ZKFC" },
+ { "name" : "DATANODE" }
+ ]
+ }
+ ]
+ },
+ "blueprint_cluster_binding" : {
+ "host_groups" : [
+ {
+ "name" : "host-group-1",
+ "hosts" : [ { "fqdn" : "c6401.ambari.apache.org" } ]
+ },
+ {
+ "name" : "host-group-2",
+ "hosts" : [ { "fqdn" : "c6402.ambari.apache.org" } ]
+ }
+ ]
+ }
+ }
+ }
+ ]
+ }
+"""
+ pass
+
+  def validateComponentLayout(self, services, hosts):
+    """
+ Returns array of Validation issues with service component layout on hosts
+
+ This function takes as input all details about services being installed along with
+ hosts the components are being installed on (hostnames property is populated for
+ each component).
+
+ @type services: dictionary
+ @param services: Dictionary containing information about services and host layout selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Dictionary containing array of validation items
+ Example: {
+ "items": [
+ {
+ "type" : "host-group",
+ "level" : "ERROR",
+ "message" : "NameNode and Secondary NameNode should not be hosted on the same machine",
+ "component-name" : "NAMENODE",
+ "host" : "c6401.ambari.apache.org"
+ },
+ ...
+
+ ]
+ }
+"""
+ pass
+
+  def recommendConfigurations(self, services, hosts):
+    """
+ Returns recommendation of service configurations based on host-specific layout of components.
+
+ This function takes as input all details about services being installed, and hosts
+ they are being installed into, to recommend host-specific configurations.
+
+ @type services: dictionary
+ @param services: Dictionary containing all information about services and component layout selected by the user.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Layout recommendation of service components on cluster hosts in Ambari Blueprints friendly format.
+
+ Example: {
+ "services": [
+ "HIVE",
+ "TEZ",
+ "YARN"
+ ],
+ "recommendations": {
+ "blueprint": {
+ "host_groups": [],
+ "configurations": {
+ "yarn-site": {
+ "properties": {
+ "yarn.scheduler.minimum-allocation-mb": "682",
+ "yarn.scheduler.maximum-allocation-mb": "2048",
+ "yarn.nodemanager.resource.memory-mb": "2048"
+ }
+ },
+ "tez-site": {
+ "properties": {
+ "tez.am.java.opts": "-server -Xmx546m -Djava.net.preferIPv4Stack=true -XX:+UseNUMA -XX:+UseParallelGC",
+ "tez.am.resource.memory.mb": "682"
+ }
+ },
+ "hive-site": {
+ "properties": {
+ "hive.tez.container.size": "682",
+ "hive.tez.java.opts": "-server -Xmx546m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC",
+ "hive.auto.convert.join.noconditionaltask.size": "238026752"
+ }
+ }
+ }
+ },
+ "blueprint_cluster_binding": {
+ "host_groups": []
+ }
+ },
+ "hosts": [
+ "c6401.ambari.apache.org",
+ "c6402.ambari.apache.org",
+ "c6403.ambari.apache.org"
+ ]
+ }
+"""
+ pass
+
+  def validateConfigurations(self, services, hosts):
+    """
+ Returns array of Validation issues with configurations provided by user
+ This function takes as input all details about services being installed along with
+ configuration values entered by the user. These configurations can be validated against
+ service requirements, or host hardware to generate validation issues.
+
+ @type services: dictionary
+ @param services: Dictionary containing information about services and user configurations.
+
+ @type hosts: dictionary
+ @param hosts: Dictionary containing all information about hosts in this cluster
+ @rtype: dictionary
+ @return: Dictionary containing array of validation items
+ Example: {
+ "items": [
+ {
+ "config-type": "yarn-site",
+ "message": "Value is less than the recommended default of 682",
+ "type": "configuration",
+ "config-name": "yarn.scheduler.minimum-allocation-mb",
+ "level": "WARN"
+ }
+ ]
+ }
+"""
+ pass
+```
+
+#### Examples
+
+* [Stack Advisor interface](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py#L23)
+* [Default Stack Advisor implementation - for all stacks](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/stack_advisor.py#L303)
+* [HDP (2.0.6) Default Stack Advisor implementation](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L28)
+* [YARN container size calculated](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L807)
+* Recommended configurations - [HDFS](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L222), [YARN](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L133), [MapReduce2](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L148), [HBase](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L245) (HDP-2.0.6), [HBase](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L148) (HDP-2.3)
+* [Delete HBase Bucket Cache configs on smaller machines](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py#L272)
+* [Specify maximum value for Tez config](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py#L184)
+
+### Properties
+
+Similar to stack configurations, most properties are defined at the service level; however, there are global properties that can be defined at the stack-version level and affect all services.
+
+Some examples are: [stack-selector and conf-selector](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json#L2) specific names, or which [stack versions support certain stack features](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json#L5). Most of these properties were introduced in Ambari 2.4 in an effort to parameterize stack information and facilitate the reuse of common-services code by other distributions.
+
+Such properties can be defined in .json format in the [properties folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) of the stack.
+
+More details about stack properties can be found in the [Stack Properties](https://cwiki.apache.org/confluence/x/pgPiAw) section.
+
+### Widgets
+
+At the stack-version level one can contribute heatmap entries to the main dashboard of the cluster.
+
+Generally these heatmaps would be ones which apply to all services - like host level heatmaps.
+
+Example: [HDP-2.0.6 contributes host level heatmaps](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/widgets.json)
+
+### Kerberos
+
+We saw the Kerberos descriptor earlier at the service level.
+
+One can also be defined at the stack-version level to describe identities across all services.
+
+Read more about Kerberos support and the Kerberos Descriptor on the [Automated Kerberization](https://cwiki.apache.org/confluence/display/AMBARI/Automated+Kerberizaton) page.
+
+Example: [Smoke tests user and SPNEGO user defined in HDP-2.0.6](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.0.6/kerberos.json)
+
+### Stack Upgrades
+
+Ambari provides the ability to upgrade your cluster from a lower stack-version to a higher stack-version.
+
+Each stack-version can define _upgrade-packs_, which are XML files describing the upgrade process. These _upgrade-pack_ XML files are placed in the stack-version's _upgrades/_ folder.
+
+Example: [HDP-2.3](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades), [HDP-2.4](https://github.com/apache/ambari/tree/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.4/upgrades)
+
+Each stack-version should have an upgrade-pack for the next stack-version a cluster can **upgrade to**.
+
+Ex: [Upgrade-pack from HDP-2.3 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.4.xml)
+
+There are two types of upgrades:
+
+Upgrade Type | Pros | Cons
+-------------|------|----------
+Express Upgrade (EU) | Much faster - clusters can be upgraded in a couple of hours | Cluster unavailable - services are stopped during the upgrade process
+Rolling Upgrade (RU) | Minimal cluster downtime - services are available throughout the upgrade process | Takes time (sometimes days, depending on cluster size) due to the incremental upgrade approach
+
+Each component which has to be upgraded by Ambari should specify the **versionAdvertised** flag in its metainfo.xml.
+
+This tells Ambari to use the component's version and perform the upgrade. Not specifying this flag will result in Ambari not upgrading the component.
+
+Example: [HDFS NameNode](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/metainfo.xml#L33) (versionAdvertised=true), [AMS Metrics Collector](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/metainfo.xml#L33) (versionAdvertised=false).
+
+#### Rolling Upgrades
+
+In Rolling Upgrade each service is upgraded with minimal downtime in mind. The general approach is to quickly upgrade the master components, followed by upgrading of workers in batches.
+
+The service will not be available while its masters are restarting. However, when master components are configured for High Availability (HA), the service remains available as each master is restarted.
+
+You can read more about the Rolling Upgrade process via this [blog post](http://hortonworks.com/blog/introducing-automated-rolling-upgrades-with-apache-ambari-2-0/) and [documentation](https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_upgrading_Ambari/content/_upgrading_HDP_perform_rolling_upgrade.html).
+
+Examples
+
+* [HDP-2.2.x to HDP-2.2.y](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.2.xml)
+* [HDP-2.2 to HDP-2.3](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.3.xml)
+* [HDP-2.2 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/upgrade-2.4.xml)
+* [HDP-2.3 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.4.xml)
+
+#### Express Upgrades
+
+In Express Upgrade the goal is to upgrade the entire cluster as fast as possible - even if it means cluster downtime. It is generally much faster than Rolling Upgrade.
+
+For each service, the components are first stopped, then upgraded, and then started.
+
+You can read about Express Upgrade steps in this [documentation](https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_upgrading_Ambari/content/_upgrading_HDP_perform_express_upgrade.html).
+
+Examples
+
+* [HDP-2.1 to HDP-2.3](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.1/upgrades/nonrolling-upgrade-2.3.xml)
+* [HDP-2.2 to HDP-2.4](https://github.com/apache/ambari/blob/branch-2.2.1/ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/nonrolling-upgrade-2.4.xml)
+
+## Configuration support in Ambari
+[Configuration support in Ambari](https://cwiki.apache.org/confluence/display/AMBARI/Configuration+support+in+Ambari)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/define-service.png b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/define-service.png
new file mode 100644
index 0000000..40868dd
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/define-service.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/define-stack.png b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/define-stack.png
new file mode 100644
index 0000000..7ebfdb9
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/define-stack.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/hooks.png b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/hooks.png
new file mode 100644
index 0000000..7ebfdb9
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/hooks.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/scripts-folder.png b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/scripts-folder.png
new file mode 100644
index 0000000..f072a4c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/scripts-folder.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/scripts.png b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/scripts.png
new file mode 100644
index 0000000..30d1e3a
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/scripts.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/stacks-properties.png b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/stacks-properties.png
new file mode 100644
index 0000000..d526a8c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/imgs/stacks-properties.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/index.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/index.md
new file mode 100644
index 0000000..6ed5dfd
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/index.md
@@ -0,0 +1,16 @@
+# Stacks and Services
+
+**Introduction**
+
+Ambari supports the concept of Stacks and associated Services in a Stack Definition. By leveraging the Stack Definition, Ambari has a consistent and defined interface to install, manage and monitor a set of Services, and provides an extensibility model for new Stacks and Services to be introduced.
+
+From Ambari 2.4, there is also support for the concept of Extensions and their associated custom Services in an Extension Definition.
+
+**Terminology**
+
+Term | Description
+-----|------------
+Stack | Defines a set of Services and where to obtain the software packages for those Services. A Stack can have one or more versions, and each version can be active/inactive. For example, Stack = "HDP-1.3.3".
+Extension | Defines a set of custom Services which can be added to a stack version. An Extension can have one or more versions.
+Service | Defines the Components (MASTER, SLAVE, CLIENT) that make up the Service. For example, Service = "HDFS"
+Component | The individual Components that adhere to a certain defined lifecycle (start, stop, install, etc.). For example, Service = "HDFS" has Components = "NameNode (MASTER)", "Secondary NameNode (MASTER)", "DataNode (SLAVE)" and "HDFS Client (CLIENT)"
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/management-packs.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/management-packs.md
new file mode 100644
index 0000000..aaf9118
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/management-packs.md
@@ -0,0 +1,327 @@
+# Management Packs
+
+## **Background**
+
+At present, stack definitions are bundled with the Ambari core and are part of Apache Ambari releases. This forces an Ambari release whenever a stack is released with updated stack definitions. Also, to add an "add-on" service (custom service) to a stack definition, one has to manually add the add-on service to the stack definition on the Ambari Server; there is no release vehicle that can be used to ship add-on services.
+
+Apache Ambari Management Packs address this issue by decoupling Ambari's core functionality (cluster management and monitoring) from stack management and definition. An Apache Ambari Management Pack (Mpack) can bundle multiple service definitions, stack definitions, stack add-on service definitions, and view definitions, so that releasing these artifacts does not require an Apache Ambari release. Apache Ambari Management Packs are released as separate artifacts and follow their own release cadence instead of being tightly coupled to Apache Ambari releases.
+
+Management packs are released as tarballs; however, they contain a metadata file (mpack.json) that specifies the contents of the management pack and the actions to perform when installing it.
+
+## **Apache JIRA**
+
+[AMBARI-14854](https://issues.apache.org/jira/browse/AMBARI-14854)
+
+## **Release Timelines**
+
+* Short Term Goals (Apache Ambari 2.4.0.0 release)
+  1. Provide a release vehicle for stack definitions (example: HDP management pack, IOP management pack).
+  2. Provide a release vehicle for add-on/custom services (example: Microsoft-R management pack).
+  3. Retrofit into the existing stack processing infrastructure.
+  4. Provide a command line to update stack definitions and service definitions.
+
+* Long Term Goals (Ambari 2.4+)
+  1. Release HDP stacks as mpacks.
+  2. Build management pack processing infrastructure that will replace the stack processing infrastructure.
+  3. Dynamically create stack definitions by processing management packs.
+  4. Provide a REST API for adding/removing/upgrading management packs.
+
+## **Management Pack Metadata (Mpack.json)**
+
+A management pack should contain the following metadata in mpack.json.
+
+* **Name**: Unique management pack name
+* **Version**: Management pack version
+* **Description**: Friendly description of the management pack
+* **Prerequisites**:
+ - Minimum Ambari Version on which the management pack is installable.
+
+  + **Example**: To install the stackXYZ-ambari-mpack-1.0.0.0 management pack, Ambari must be at least at version 2.4.0.0.
+
+ - Minimum management pack version that should be installed before upgrading to this management pack.
+
+  + **Example**: To upgrade to the stackXYZ-ambari-mpack-2.0.0.0 management pack, the stackXYZ-ambari-mpack-1.8.0.0 management pack or higher must already be installed.
+
+ - Minimum stack version that should already be present in the stack definitions for this management pack to be applicable.
+
+  + **Example**: To add the add-on service management pack myservice-ambari-mpack-1.0.0.0, the stackXYZ-2.1 stack definition must be present.
+
+* **Artifacts**:
+ - List of release artifacts (service definitions, stack definitions, stack-addon-service-definitions, view-definitions) bundled in the management pack.
+
+  - Metadata for each artifact, such as its source directory and any additional applicability for that artifact.
+
+ - Supported Artifact Types
+ + **service-definitions**: Contains service definitions similar to common-services/serviceA/1.0.0
+ + **stack-definitions**: Contains stack definitions similar to stacks/stackXYZ/1.0
+  + **extension-definitions**: Contains dynamic stack extensions (refer to [Extensions](./extensions.md))
+ + **stack-addon-service-definitions**: Defines add-on service applicability for stacks and how to merge the add-on service into the stack definition.
+
+ + **view-definitions** (Not supported in Apache Ambari 2.4.0.0)
+  - A management pack can have more than one release artifact.
+
+ + **Example**: It should be possible to create a management pack that bundles together
+ * **stack-definitions**: stackXYZ-1.0, stackXYZ-1.1, stackXYZ-2.0
+ * **service-definitions**: HAWQ, HDFS, ZOOKEEPER
+ * **stack-addon-service-definitions**: HAWQ/2.0.0 is applicable to stackXYZ-2.0, stackABC-1.0
+ * **view-definitions**: Hive, Jobs, Slider (Apache Ambari 2.4.0.0)
+
+## **Management Pack Structure**
+
+### StackXYZ Management Pack Structure
+
+_stackXYZ-ambari-mpack-1.0.0.0_
+
+```
+├── mpack.json
+├── common-services
+│   └── HDFS
+│       └── 2.1.0.2.0
+│           └── configuration
+└── stacks
+    └── stackXYZ
+        └── 1.0
+            ├── metainfo.xml
+            ├── repos
+            │   └── repoinfo.xml
+            ├── role_command_order.json
+            └── services
+                ├── HDFS
+                │   ├── configuration
+                │   │   └── hdfs-site.xml
+                │   └── metainfo.xml
+                ├── stack_advisor.py
+                └── ZOOKEEPER
+                    └── metainfo.xml
+```
+
+### StackXYZ Management Pack Mpack.json
+
+_stackXYZ-ambari-mpack-1.0.0.0/mpack.json_
+
+```json
+{
+  "type": "full-release",
+  "name": "stackXYZ-ambari-mpack",
+  "version": "1.0.0.0",
+  "description": "StackXYZ Management Pack",
+  "prerequisites": {
+    "min_ambari_version": "2.4.0.0"
+  },
+  "artifacts": [
+    {
+      "name": "stackXYZ-service-definitions",
+      "type": "service-definitions",
+      "source_dir": "common-services"
+    },
+    {
+      "name": "stackXYZ-stack-definitions",
+      "type": "stack-definitions",
+      "source_dir": "stacks"
+    }
+  ]
+}
+```
+
+### Add-On Service Management Pack Structure
+
+_myservice-ambari-mpack-1.0.0.0_
+
+```
+├── common-services
+│   └── MYSERVICE
+│       └── 1.0.0
+│           ├── configuration
+│           │   └── myserviceconfig.xml
+│           ├── metainfo.xml
+│           ├── package
+│           │   └── scripts
+│           │       ├── client.py
+│           │       ├── master.py
+│           │       └── slave.py
+│           └── role_command_order.json
+├── custom-services
+│   └── MYSERVICE
+│       ├── 1.0.0
+│       │   └── metainfo.xml
+│       └── 2.0.0
+│           └── metainfo.xml
+└── mpack.json
+```
+
+### Add-On Service Management Pack Mpack.json
+
+_myservice-ambari-mpack-1.0.0.0/mpack.json_
+
+```json
+{
+  "type": "full-release",
+  "name": "myservice-ambari-mpack",
+  "version": "1.0.0.0",
+  "description": "MyService Management Pack",
+  "prerequisites": {
+    "min-ambari-version": "2.4.0.0",
+    "min-stack-versions": [
+      {
+        "stack_name": "stackXYZ",
+        "stack_version": "2.2"
+      }
+    ]
+  },
+  "artifacts": [
+    {
+      "name": "MYSERVICE-service-definition",
+      "type": "service-definition",
+      "source_dir": "common-services/MYSERVICE/1.0.0",
+      "service_name": "MYSERVICE",
+      "service_version": "1.0.0"
+    },
+    {
+      "name": "MYSERVICE-1.0.0",
+      "type": "stack-addon-service-definition",
+      "source_dir": "addon-services/MYSERVICE/1.0.0",
+      "service_name": "MYSERVICE",
+      "service_version": "1.0.0",
+      "applicable_stacks": [
+        {
+          "stack_name": "stackXYZ",
+          "stack_version": "2.2"
+        }
+      ]
+    },
+    {
+      "name": "MYSERVICE-2.0.0",
+      "type": "stack-addon-service-definition",
+      "source_dir": "custom-services/MYSERVICE/2.0.0",
+      "service_name": "MYSERVICE",
+      "service_version": "2.0.0",
+      "applicable_stacks": [
+        {
+          "stack_name": "stackXYZ",
+          "stack_version": "2.4"
+        }
+      ]
+    }
+  ]
+}
+```
+
+## **Installing Management Pack**
+
+```bash
+ambari-server install-mpack --mpack=/path/to/mpack.tar.gz --purge --verbose
+```
+
+**Note**: Do not pass the `--purge` command line parameter when installing an add-on service management pack. The `--purge` flag purges any existing stack definitions (such as the HDP stack definition included in the Ambari release) and should be used only when installing a Stack management pack.
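+
+For example, an add-on service management pack would be installed without the flag (the tarball path below is illustrative):
+
+```bash
+ambari-server install-mpack --mpack=/tmp/myservice-ambari-mpack-1.0.0.0.tar.gz --verbose
+```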
+
+## **Upgrading Management Pack**
+
+```bash
+ambari-server upgrade-mpack --mpack=/path/to/mpack.tar.gz --verbose
+```
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/overview.mdx b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/overview.mdx
new file mode 100644
index 0000000..b47e75e
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/overview.mdx
@@ -0,0 +1,554 @@
+# Overview
+
+## Background
+
+The Stack definitions can be found in the source tree at [`/ambari-server/src/main/resources/stacks`](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks). After you install the Ambari Server, the Stack definitions can be found at `/var/lib/ambari-server/resources/stacks`.
+
+## Structure
+
+The structure of a Stack definition is as follows:
+
+```
+|_ stacks
+   |_ <stack_name>
+      |_ <stack_version>
+         metainfo.xml
+         |_ hooks
+         |_ repos
+            repoinfo.xml
+         |_ services
+            |_ <service_name>
+               metainfo.xml
+               metrics.json
+               |_ configuration
+                  {configuration files}
+               |_ package
+                  {files, scripts, templates}
+```
+
+## Defining a Service and Components
+
+The `metainfo.xml` file in a Service describes the service, the components of the service, and the management scripts to use for executing commands. A component of a service can be in the **MASTER**, **SLAVE** or **CLIENT** category.
+
+For each Component you specify the `<commandScript>` to use when executing commands. There is a defined set of default commands the component must support.
+
+Component Category | Default Lifecycle Commands
+-------------------|--------------------------
+MASTER | install, start, stop, configure, status
+SLAVE | install, start, stop, configure, status
+CLIENT | install, configure, status
+
+Ambari supports command scripts written in **PYTHON**. The script type tells Ambari how to execute the command scripts. You can also create **custom commands** if your component needs to support commands beyond the default lifecycle commands.
+
+For example, the YARN Service describes the ResourceManager component as follows in [`metainfo.xml`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml):
+
+```xml
+<component>
+ <name>RESOURCEMANAGER</name>
+ <category>MASTER</category>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/resourcemanager.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+</component>
+```
+
+The ResourceManager is a MASTER component, and its command script is [`scripts/resourcemanager.py`](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/resourcemanager.py), which can be found in the `services/YARN/package` directory. The command script type is **PYTHON**, and the script implements the default lifecycle commands as Python methods. This is the **install** method for the default **INSTALL** command:
+
+```python
+class Resourcemanager(Script):
+ def install(self, env):
+ self.install_packages(env)
+ self.configure(env)
+```
+
+You can also see that a custom command, **DECOMMISSION**, is defined, which means there is also a **decommission** method in that Python command script:
+
+```python
+def decommission(self, env):
+ import params
+
+ ...
+
+ Execute(yarn_refresh_cmd,
+ user=yarn_user
+ )
+ pass
+```
+
+#### Using Stack Inheritance
+
+Stacks can _extend_ other Stacks in order to share command scripts and configurations. This reduces duplication of code across Stacks by allowing a child Stack to:
+
+* define its own repositories
+* add new Services that are not in the parent Stack
+* override command scripts of the parent Services
+* override configurations of the parent Services
+
+For example, the **HDP 2.1 Stack _extends_ HDP 2.0.6 Stack** so only the changes applicable to **HDP 2.1 Stack** are present in that Stack definition. This extension is defined in the [metainfo.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.1/metainfo.xml) for HDP 2.1 Stack:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.0.6</extends>
+</metainfo>
+```
+
+## Example: Implementing a Custom Service
+
+In this example, we will create a custom service called "SAMPLESRV" and add it to an existing Stack definition. This service includes MASTER, SLAVE and CLIENT components.
+
+## Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV` that will contain the service definition for **SAMPLESRV**.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV
+```
+3. Browse to the newly created `SAMPLESRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>SAMPLESRV</name>
+ <displayName>New Sample Service</displayName>
+ <comment>A New Sample Service</comment>
+ <version>1.0.0</version>
+ <components>
+ <component>
+ <name>SAMPLESRV_MASTER</name>
+ <displayName>Sample Srv Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1</cardinality>
+ <commandScript>
+ <script>scripts/master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_SLAVE</name>
+ <displayName>Sample Srv Slave</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/slave.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ <component>
+ <name>SAMPLESRV_CLIENT</name>
+ <displayName>Sample Srv Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/sample_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**SAMPLESRV**", and it contains:
+
+  - one **MASTER** component "**SAMPLESRV_MASTER**"
+  - one **SLAVE** component "**SAMPLESRV_SLAVE**"
+  - one **CLIENT** component "**SAMPLESRV_CLIENT**"
+5. Next, let's create that command script. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts` that we designated in the service metainfo.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/SAMPLESRV/package/scripts
+```
+6. Browse to the scripts directory and create the `.py` command script files. For example, the `master.py` file:
+
+```python
+import sys
+from resource_management import *
+
+class Master(Script):
+  def install(self, env):
+    print 'Install the Sample Srv Master'
+  def stop(self, env):
+    print 'Stop the Sample Srv Master'
+  def start(self, env):
+    print 'Start the Sample Srv Master'
+  def status(self, env):
+    print 'Status of the Sample Srv Master'
+  def configure(self, env):
+    print 'Configure the Sample Srv Master'
+
+if __name__ == "__main__":
+  Master().execute()
+```
+
+For example, the `slave.py` file:
+
+```python
+import sys
+from resource_management import *
+
+class Slave(Script):
+  def install(self, env):
+    print 'Install the Sample Srv Slave'
+  def stop(self, env):
+    print 'Stop the Sample Srv Slave'
+  def start(self, env):
+    print 'Start the Sample Srv Slave'
+  def status(self, env):
+    print 'Status of the Sample Srv Slave'
+  def configure(self, env):
+    print 'Configure the Sample Srv Slave'
+
+if __name__ == "__main__":
+  Slave().execute()
+```
+
+For example, the `sample_client.py` file:
+
+```python
+import sys
+from resource_management import *
+
+class SampleClient(Script):
+  def install(self, env):
+    print 'Install the Sample Srv Client'
+  def configure(self, env):
+    print 'Configure the Sample Srv Client'
+
+if __name__ == "__main__":
+  SampleClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+## Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Sample Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Sample Service" and click Next.
+
+4. Assign the "Sample Srv Master" and click Next.
+
+5. Select the hosts on which to install the "Sample Srv Client" and click Next.
+
+6. Once complete, the "My Sample Service" will be available Service navigation area.
+
+7. If you want to add the "Sample Srv Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+#### Example: Implementing a Custom Client-only Service
+
+In this example, we will create a custom service called "TESTSRV", add it to an existing Stack definition, and use the Ambari APIs to install and configure the service. This service is a CLIENT, so it has two commands: install and configure.
+
+## Create and Add the Service
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV` that will contain the service definition for **TESTSRV**.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV
+```
+3. Browse to the newly created `TESTSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>TESTSRV</name>
+ <displayName>New Test Service</displayName>
+ <comment>A New Test Service</comment>
+ <version>0.1.0</version>
+ <components>
+ <component>
+ <name>TEST_CLIENT</name>
+ <displayName>New Test Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>SOMETHINGCUSTOM</name>
+ <commandScript>
+ <script>scripts/test_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+ </components>
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily> <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
+ </osSpecific>
+ </osSpecifics>
+ </service>
+ </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTSRV**", and it contains one component, "**TEST_CLIENT**", of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts` that we designated in the service metainfo.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+  def install(self, env):
+    print 'Install the client'
+  def configure(self, env):
+    print 'Configure the client'
+  def somethingcustom(self, env):
+    print 'Something custom'
+
+if __name__ == "__main__":
+  TestClient().execute()
+```
+7. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
+
+## Install the Service (via the Ambari REST API)
+
+1. Add the Service to the Cluster (a curl sketch for invoking these REST calls follows this list).
+
+
+```
+POST
+/api/v1/clusters/MyCluster/services
+
+{
+"ServiceInfo": {
+ "service_name":"TESTSRV"
+ }
+}
+```
+2. Add the Components to the Service. In this case, add TEST_CLIENT to TESTSRV.
+
+```
+POST
+/api/v1/clusters/MyCluster/services/TESTSRV/components/TEST_CLIENT
+```
+3. Install the component on all target hosts. For example, to install on `c6402.ambari.apache.org` and `c6403.ambari.apache.org`, first create the host_component resource on the hosts using POST.
+
+```
+POST
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+POST
+/api/v1/clusters/MyCluster/hosts/c6403.ambari.apache.org/host_components/TEST_CLIENT
+```
+4. Now have Ambari install the components on all hosts. In this single command, you are instructing Ambari to install all components related to the service. This calls the `install()` method in the command script on each host.
+
+
+```
+PUT
+/api/v1/clusters/MyCluster/services/TESTSRV
+
+{
+ "RequestInfo": {
+ "context": "Install Test Srv Client"
+ },
+ "Body": {
+ "ServiceInfo": {
+ "state": "INSTALLED"
+ }
+ }
+}
+```
+5. Alternatively, instead of installing all components at the same time, you can explicitly install each host component. In this example, we will explicitly install the TEST_CLIENT on `c6402.ambari.apache.org`:
+
+```
+PUT
+/api/v1/clusters/MyCluster/hosts/c6402.ambari.apache.org/host_components/TEST_CLIENT
+
+{
+ "RequestInfo": {
+ "context":"Install Test Srv Client"
+ },
+ "Body": {
+ "HostRoles": {
+ "state":"INSTALLED"
+ }
+ }
+}
+```
+6. Use the following to configure the client on the host. This will end up calling the `configure()` method in the command script.
+
+```
+POST
+/api/v1/clusters/MyCluster/requests
+
+{
+ "RequestInfo" : {
+ "command" : "CONFIGURE",
+ "context" : "Config Test Srv Client"
+ },
+ "Requests/resource_filters": [{
+ "service_name" : "TESTSRV",
+ "component_name" : "TEST_CLIENT",
+ "hosts" : "c6403.ambari.apache.org"
+ }]
+}
+```
+7. To see on which hosts the component is installed, use the following (a sample response follows this list):
+
+```
+GET
+/api/v1/clusters/MyCluster/components/TEST_CLIENT
+```
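+
+All of the calls above can be made with any HTTP client. As a sketch for step 1 (the credentials and server address are illustrative; the `X-Requested-By` header is required by Ambari for modifying requests):
+
+```bash
+curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
+  -d '{"ServiceInfo": {"service_name": "TESTSRV"}}' \
+  http://ambari.server:8080/api/v1/clusters/MyCluster/services
+```
+
+The GET call in the last step returns the component and its host components; a trimmed, illustrative response might look like:
+
+```json
+{
+  "ServiceComponentInfo" : {
+    "cluster_name" : "MyCluster",
+    "component_name" : "TEST_CLIENT",
+    "service_name" : "TESTSRV"
+  },
+  "host_components" : [
+    {
+      "HostRoles" : {
+        "component_name" : "TEST_CLIENT",
+        "host_name" : "c6402.ambari.apache.org"
+      }
+    }
+  ]
+}
+```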
+
+## Install the Service (via Ambari Web "Add Services")
+
+:::caution
+The ability to add custom services via Ambari Web is new as of Ambari 1.7.0.
+:::
+
+1. In Ambari Web, browse to Services and click the **Actions** button in the Service navigation area on the left.
+
+2. The "Add Services" wizard launches. You will see an option to include "My Test Service" (which is the `<displayname></displayname>` of the service as defined in the service `metainfo.xml` file).
+
+3. Select "My Test Service" and click Next.
+
+4. Select the hosts on which to install the "New Test Client" and click Next.
+
+5. Once complete, the "My Test Service" will be available Service navigation area.
+
+6. If you want to add the "New Test Client" to any hosts, you can browse to Hosts and navigate to a specific host and click "+ Add".
+
+
+#### Example: Implementing a Custom Client-only Service (with Configs)
+
+In this example, we will create a custom service called "TESTCONFIGSRV" and add it to an existing Stack definition. This service is a CLIENT, so it has two commands: install and configure. The service also includes a configuration type, "test-config".
+
+## Create and Add the Service to the Stack
+
+1. On the Ambari Server, browse to the `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services` directory. In this case, we will browse to the HDP 2.0 Stack definition.
+
+```
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services
+```
+2. Create a directory named `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV` that will contain the service definition for **TESTCONFIGSRV**.
+
+```
+mkdir /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV
+```
+3. Browse to the newly created `TESTCONFIGSRV` directory and create a `metainfo.xml` file that describes the new service. For example:
+
+```xml
+<?xml version="1.0"?>
+<metainfo>
+    <schemaVersion>2.0</schemaVersion>
+    <services>
+        <service>
+            <name>TESTCONFIGSRV</name>
+            <displayName>New Test Config Service</displayName>
+            <comment>A New Test Config Service</comment>
+            <version>0.1.0</version>
+            <components>
+                <component>
+                    <name>TESTCONFIG_CLIENT</name>
+                    <displayName>New Test Config Client</displayName>
+                    <category>CLIENT</category>
+                    <cardinality>1+</cardinality>
+                    <commandScript>
+                        <script>scripts/test_client.py</script>
+                        <scriptType>PYTHON</scriptType>
+                        <timeout>600</timeout>
+                    </commandScript>
+                </component>
+            </components>
+            <osSpecifics>
+                <osSpecific>
+                    <osFamily>any</osFamily>
+                </osSpecific>
+            </osSpecifics>
+        </service>
+    </services>
+</metainfo>
+```
+4. In the above, the service name is "**TESTCONFIGSRV**", and it contains one component, "**TESTCONFIG_CLIENT**", of component category "**CLIENT**". That client is managed via the command script `scripts/test_client.py`. Next, let's create that command script.
+
+5. Create a directory for the command script `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts` that we designated in the service metainfo `<commandScript>`.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/package/scripts
+```
+6. Browse to the scripts directory and create the `test_client.py` file. For example:
+
+```python
+import sys
+from resource_management import *
+
+class TestClient(Script):
+  def install(self, env):
+    print 'Install the config client'
+  def configure(self, env):
+    print 'Configure the config client'
+
+if __name__ == "__main__":
+  TestClient().execute()
+```
+7. Now let's define a config type for this service. Create a directory for the configuration dictionary file `/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration`.
+
+```
+mkdir -p /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/TESTCONFIGSRV/configuration
+```
+8. Browse to the configuration directory and create the `test-config.xml` file (a sketch of reading this property from the command script follows this procedure). For example:
+
+```xml
+<?xml version="1.0"?>
+<configuration>
+    <property>
+        <name>some.test.property</name>
+        <value>this.is.the.default.value</value>
+        <description>This is a kool description.</description>
+    </property>
+</configuration>
+```
+9. Now, restart Ambari Server for this new service definition to be distributed to all the Agents in the cluster.
+
+```bash
+ambari-server restart
+```
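+
+Once the new config type is in place, its properties are exposed to the command script through the standard configuration dictionary. A minimal sketch of reading the property defined above (the lookup follows the `config['configurations'][<config-type>][<property-name>]` convention used by resource_management):
+
+```python
+from resource_management import *
+
+config = Script.get_config()
+
+# Read the property defined in test-config.xml
+my_prop = config['configurations']['test-config']['some.test.property']
+```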
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/stack-inheritance.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/stack-inheritance.md
new file mode 100644
index 0000000..8d5184d
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/stack-inheritance.md
@@ -0,0 +1,68 @@
+
+# Stack Inheritance
+
+Each stack version must provide a metainfo.xml descriptor file which can declare whether the stack inherits from another stack version:
+
+```xml
+<metainfo>
+ <versions>
+ <active>true</active>
+ </versions>
+ <extends>2.3</extends>
+ ...
+</metainfo>
+```
+
+When a stack inherits from another stack version, how its defining files and directories are inherited follows a number of different patterns.
+
+The following files should not be redefined at the child stack version level:
+
+* properties/stack_features.json
+* properties/stack_tools.json
+
+Note: These files should only exist at the base stack level.
+
+The following files, if defined in the current stack version, replace the definitions from the parent stack version:
+
+* kerberos.json
+* widgets.json
+
+The following files, if defined in the current stack version, are merged with the parent stack version:
+
+* configuration/cluster-env.xml
+
+* role_command_order.json
+
+Note: All the services' role command orders will be merged with the stack's role command order to provide a master list.
+
+All attributes of the current stack version's metainfo.xml will replace those defined in the parent stack version.
+
+The following directories, if defined in the current stack version, replace those from the parent stack version:
+
+* hooks
+
+This means the files included in those directories at the parent level will not be inherited. You will need to copy all the files you wish to keep from that directory structure.
+
+The following directories are not inherited:
+
+* repos
+* upgrades
+
+The repos/repoinfo.xml file should be defined in every stack version. The upgrades directory and its corresponding XML files should be defined in all stack versions that support upgrade.
+
+## Services Folder
+
+The services folder is a special case. There are two inheritance mechanisms at work here. First, the stack_advisor.py script will automatically import the parent stack version's stack_advisor.py script, but the rest of the inheritance behavior is up to the script's author. There are several examples of [stack_advisor.py](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.3/services/stack_advisor.py) files in the Ambari server source.
+
+```python
+class HDP23StackAdvisor(HDP22StackAdvisor):
+  def __init__(self):
+    super(HDP23StackAdvisor, self).__init__()
+    Logger.initialize_logger()
+
+  def getComponentLayoutValidations(self, services, hosts):
+    parentItems = super(HDP23StackAdvisor, self).getComponentLayoutValidations(services, hosts)
+    ...
+```
+
+Services defined within the services folder follow the rules for [service inheritance](./custom-services.md#service-inheritance). By default, if a service does not declare an explicit inheritance (via the **extends** tag), the service will inherit from the service defined at the parent stack version.
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/stack-properties.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/stack-properties.md
new file mode 100644
index 0000000..af33f0c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/stack-properties.md
@@ -0,0 +1,119 @@
+# Stack Properties
+
+Similar to stack configurations, most properties are defined at the service level; however, there are global properties that can be defined at the stack-version level and affect all services.
+
+Some examples are the [stack-selector and conf-selector](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json#L2) tool names, or the [stack versions in which certain stack features](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json#L5) are supported. Most of these properties were introduced in Ambari 2.4 as part of an effort to parameterize stack information and facilitate the reuse of common-services code by other distributions.
+
+
+Such properties can be defined in .json format in the [properties folder](https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties) of the stack.
+
+
+
+# Stack Features
+
+Stacks can support different features depending on their version, for example: upgrade support, NFS support, or support for specific new components (such as Ranger or Phoenix).
+
+
+Stack featurization was added as part of the HDP stack configurations in [HDP/2.0.6/configuration/cluster-env.xml](http://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml), introducing a new stack_features property whose value is processed in the stack engine from an external property file.
+
+```xml
+<!-- Define stack_features property in the base stack. DO NOT override this property for each stack version -->
+<property>
+ <name>stack_features</name>
+ <value/>
+ <description>List of features supported by the stack</description>
+ <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+ <value-attributes>
+ <property-file-name>stack_features.json</property-file-name>
+ <property-file-type>json</property-file-type>
+ <read-only>true</read-only>
+ <overridable>false</overridable>
+ <visible>false</visible>
+ </value-attributes>
+ <on-ambari-upgrade add="true"/>
+</property>
+```
+
+Stack Features properties are defined in the [stack_features.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json) file under /HDP/2.0.6/properties. Support for these features is now accessible from service-level code to change certain service behaviors or configurations. This is an example of the features described in the stack_features.json file:
+
+
+```json
+"stack_features": [
+ {
+ "name": "snappy",
+ "description": "Snappy compressor/decompressor support",
+ "min_version": "2.0.0.0",
+ "max_version": "2.2.0.0"
+ },
+ {
+ "name": "lzo",
+ "description": "LZO libraries support",
+ "min_version": "2.2.1.0"
+ },
+ {
+ "name": "express_upgrade",
+ "description": "Express upgrade support",
+ "min_version": "2.1.0.0"
+ },
+ {
+ "name": "rolling_upgrade",
+ "description": "Rolling upgrade support",
+ "min_version": "2.2.0.0"
+ }
+ ]
+}
+```
+
+where min_version/max_version are optional constraints.
+
+Feature constants matching the feature names, such as ROLLING_UPGRADE = "rolling_upgrade", have been added to a new StackFeature class in [resource_management/libraries/functions/constants.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/constants.py#L38):
+
+
+```python
+class StackFeature:
+  """
+  Stack Feature supported
+  """
+  SNAPPY = "snappy"
+  LZO = "lzo"
+  EXPRESS_UPGRADE = "express_upgrade"
+  ROLLING_UPGRADE = "rolling_upgrade"
+```
+
+Additionally, corresponding helper functions have been introduced in [resource_management/libraries/functions/stack_features.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/stack_features.py) to parse the .json file content; they are called from service code to check whether the stack supports a specific feature.
+
+This is an example where the new stack featurization design is used in service code:
+
+```python
+if params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version):
+ conf_select.select(params.stack_name, "hive", params.version)
+ stack_select.select("hive-server2", params.version)
+```
+
+# Stack Tools
+
+
+Similar to stack features, the stack-selector and conf-selector tools are now stack-driven instead of hardcoded as hdp-select and conf-select. They are defined in the [stack_tools.json](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_tools.json) file under /HDP/2.0.6/properties.
+
+They are declared as part of the HDP stack configurations as a new property in [/HDP/2.0.6/configuration/cluster-env.xml](https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml):
+
+
+```xml
+<!-- Define stack_tools property in the base stack. DO NOT override this property for each stack version -->
+<property>
+ <name>stack_tools</name>
+ <value/>
+ <description>Stack specific tools</description>
+ <property-type>VALUE_FROM_PROPERTY_FILE</property-type>
+ <value-attributes>
+ <property-file-name>stack_tools.json</property-file-name>
+ <property-file-type>json</property-file-type>
+ <read-only>true</read-only>
+ <overridable>false</overridable>
+ <visible>false</visible>
+ </value-attributes>
+ <on-ambari-upgrade add="true"/>
+</property>
+```
+
+Corresponding helper functions have been added in [resource_management/libraries/functions/stack_tools.py](https://github.com/apache/ambari/blob/trunk/ambari-common/src/main/python/resource_management/libraries/functions/stack_tools.py). These helper functions are used to remove hardcoded tool names in the resource_management library.
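+
+For illustration, a sketch of resolving the stack selector tool from service code via these helpers (the helper and constant names below are assumptions based on stack_tools.py; verify against the linked source):
+
+```python
+from resource_management.libraries.functions import stack_tools
+
+# Resolve the stack selector (e.g. "hdp-select") from stack_tools.json
+# instead of hardcoding it.
+(stack_selector_name, stack_selector_path, stack_selector_package) = \
+  stack_tools.get_stack_tool(stack_tools.STACK_SELECTOR_NAME)
+```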
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md
new file mode 100644
index 0000000..95023bf
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/version-functions-conf-select-and-stack-select.md
@@ -0,0 +1,67 @@
+# Version functions: conf-select and stack-select
+
+Especially during upgrade, it is important to be able to set the current stack and configuration versions. For non-custom services, this is handled by the conf_select and stack_select modules, which can be imported in a service's scripts with the following imports:
+
+```py
+from resource_management.libraries.functions import conf_select
+from resource_management.libraries.functions import stack_select
+```
+
+Typically the select functions, which are used to set the stack and configuration versions, are called in the pre_upgrade_restart function during a rolling upgrade:
+
+```py
+ def pre_upgrade_restart(self, env, upgrade_type=None):
+ import params
+ env.set_params(params)
+
+ # this function should not execute if the version can't be determined or
+ # the stack does not support rolling upgrade
+ if not (params.version and check_stack_feature(StackFeature.ROLLING_UPGRADE, params.version)):
+ return
+
+ Logger.info("Executing <My Service> Stack Upgrade pre-restart")
+ conf_select.select(params.stack_name, "<my_service>", params.version)
+ stack_select.select("<my_service>", params.version)
+```
+
+The select functions will set up symlinks for the current stack or configuration version. For the stack, this will set up the links from the stack root current directory to the particular stack version. For example:
+
+```
+/usr/hdp/current/hadoop-client -> /usr/hdp/2.5.0.0/hadoop
+```
+
+For the configuration version, this will set up the links for all the configuration directories, as follows:
+
+```
+/etc/hadoop/conf -> /usr/hdp/current/hadoop-client/conf
+/usr/hdp/current/hadoop-client/conf -> /etc/hadoop/2.5.0.0/0
+```
+
+The stack_select and conf_select functions can also be used to return the hadoop directories:
+
+```
+hadoop_prefix = stack_select.get_hadoop_dir("home")
+hadoop_bin_dir = stack_select.get_hadoop_dir("bin")
+hadoop_conf_dir = conf_select.get_hadoop_conf_dir()
+```
+
+The conf_select API is as follows:
+
+```py
+def select(stack_name, package, version, try_create=True, ignore_errors=False)
+
+def get_hadoop_conf_dir(force_latest_on_upgrade=False)
+```
+
+The stack_select API is as follows:
+```py
+def select(component, version)
+
+def get_hadoop_dir(target, force_latest_on_upgrade=False)
+```
+
+Unfortunately, these functions are not available for custom services to set up their configuration or stack versions. A custom service could implement its own functions to set up the proper links.
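+
+As a rough sketch (the paths and names are hypothetical), a custom service could use the `Link` resource from resource_management to maintain its own version symlinks:
+
+```python
+from resource_management.core.resources.system import Link
+
+def link_current_version(version):
+  # Hypothetical layout: point the service's conf symlink at the
+  # directory of the version being upgraded to.
+  Link("/etc/myservice/conf",
+       to="/usr/myorg/{0}/myservice/conf".format(version))
+```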
diff --git a/versioned_docs/version-2.7.6/ambari-design/stack-and-services/writing-metainfo.md b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/writing-metainfo.md
new file mode 100644
index 0000000..2645fe9
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/stack-and-services/writing-metainfo.md
@@ -0,0 +1,247 @@
+# Writing metainfo.xml
+
+metainfo.xml is a declarative definition of an Ambari-managed service describing its content. It is the most critical file for any service definition. This section describes various key sub-sections within a metainfo.xml file.
+
+_Non-mandatory fields are described in italics._
+
+The top level fields to describe a service are as follows:
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+name | the name of the service. A name has to be unique among all the services that are included in the stack definition containing the service. | HDFS
+displayName | the display name of the service | HDFS
+version | the version of the service. name and version together uniquely identify a service. Usually, the version is the version of the service binary itself. | 2.1.0.2.0
+components | the list of components that the service is comprised of | `<check out HDFS metainfo>`
+osSpecifics | OS specific package information for the service | `<check out HDFS metainfo>`
+commandScript | service level commands may also be defined. The command is executed on a component instance that is a client | `<check out HDFS metainfo>`
+comment | a short description describing the service | Apache Hadoop Distributed File System
+requiredServices | other services that should be present on the cluster | `<check out HDFS metainfo>`
+configuration-dependencies | configuration files that are expected by the service (config files owned by other services are specified in this list) | `<check out HDFS metainfo>`
+restartRequiredAfterRackChange | whether the service requires a restart after a rack change | true / false
+configuration-dir | Use this to specify a different config directory if not 'configuration' | -
+
+**service/components - A service contains several components. The fields associated with a component are**:
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+name | name of the component | NAMENODE
+displayName | display name of the component | NameNode
+category | type of the component - MASTER, SLAVE, or CLIENT | MASTER
+commandScript | application wide commands may also be defined. The command is executed on a component instance that is a client | `<check out HDFS metainfo>`
+cardinality | allowed/expected number of instances | For example, 1-2 for MASTER, 1+ for Slave
+reassignAllowed | whether the component can be reassigned / moved to a different host. | true / false
+versionAdvertised | whether the component advertises its version - used during rolling/express upgrade | true / false
+timelineAppid | This will be the component name under which the metrics from this component will be collected. | `<check out HDFS metainfo>`
+dependencies | the list of components that this component depends on | `<check out HDFS metainfo>`
+customCommands | a set of custom commands associated with the component in addition to standard commands. | RESTART_LLAP (Check out HIVE metainfo)
+
+**service/osSpecifics - OS specific package names (rpm or deb packages)**
+
+Field | What is it used for | Sample Values
+------|---------------------|---------------
+osFamily | the os family for which the package is applicable | any => all<br></br>amazon2015,redhat6,debian7,ubuntu12,ubuntu14,ubuntu16
+packages | list of packages that are needed to deploy the service | `<check out HDFS metainfo>`
+package/name | name of the package (will be used by the yum/zypper/apt commands) | e.g., hadoop-lzo
+
+**service/commandScript - the script that implements service check**
+
+Field | What is it used for
+------|---------------------
+script | the relative path to the script
+scriptType | the type of the script; currently the only supported type is PYTHON
+timeout | custom timeout for the command - this supersedes the Ambari default
+
+sample values:
+
+```xml
+<commandScript>
+ <script>scripts/service_check.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>300</timeout>
+</commandScript>
+```
+**service/component/dependencies/dependency**
+
+Field | What is it used for
+------|---------------------
+name | name of the component it depends on
+scope | cluster / host. specifies whether the dependent component<br></br>should be present in the same cluster or the same host.
+auto-deploy | whether the dependent component should be automatically deployed, and optionally which component to co-locate it with
+conditions | Conditions in which this dependency exists. For example, the presence of a property in a config.
+
+sample values:
+
+```xml
+<dependency>
+ <name>HDFS/ZKFC</name>
+ <scope>cluster</scope>
+ <auto-deploy>
+ <enabled>false</enabled>
+ </auto-deploy>
+ <conditions>
+ <condition xsi:type="propertyExists">
+ <configType>hdfs-site</configType>
+ <property>dfs.nameservices</property>
+ </condition>
+ </conditions>
+</dependency>
+```
+
+**service/component/commandScript - the script that implements components specific default commands (Similar to service/commandScript )**
+
+**service/component/logs - provides log search integration.**
+
+Field | What is it used for
+------|---------------------
+logId | the log id of the component
+primary | whether this is the primary log id
+
+sample values:
+
+```xml
+<log>
+ <logId>hdfs_namenode</logId>
+ <primary>true</primary>
+</log>
+```
+
+**service/component/customCommand - custom commands can be added to components.**
+
+- **name**: name of the custom command
+- **commandScript**: the details of the script that implements the custom command
+- commandScript/script: the relative path to the script
+- commandScript/scriptType: the type of the script; currently the only supported type is PYTHON
+- commandScript/timeout: custom timeout for the command - this supersedes the Ambari default
+
+**service/component/configFiles - list of config files to be available when client config is to be downloaded (used to configure service clients that are not managed by Ambari)**
+
+- **type**: the type of file to be generated: xml, env (sh), yaml, etc.
+- **fileName**: name of the generated file
+- **dictionary**: data dictionary that contains the config properties (relevant to how ambari manages config bags internally)
+
+## Sample metainfo.xml
+
+```xml
+<metainfo>
+ <schemaVersion>2.0</schemaVersion>
+ <services>
+ <service>
+ <name>HBASE</name>
+ <displayName>HBase</displayName>
+      <comment>Non-relational distributed database and centralized service for configuration management &amp;
+        synchronization
+      </comment>
+ <version>0.96.0.2.0</version>
+ <components>
+ <component>
+ <name>HBASE_MASTER</name>
+ <displayName>HBase Master</displayName>
+ <category>MASTER</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <timelineAppid>HBASE</timelineAppid>
+ <dependencies>
+ <dependency>
+ <name>HDFS/HDFS_CLIENT</name>
+ <scope>host</scope>
+ <auto-deploy>
+ <enabled>true</enabled>
+ </auto-deploy>
+ </dependency>
+ <dependency>
+ <name>ZOOKEEPER/ZOOKEEPER_SERVER</name>
+ <scope>cluster</scope>
+ <auto-deploy>
+ <enabled>true</enabled>
+ <co-locate>HBASE/HBASE_MASTER</co-locate>
+ </auto-deploy>
+ </dependency>
+ </dependencies>
+ <commandScript>
+ <script>scripts/hbase_master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>1200</timeout>
+ </commandScript>
+ <customCommands>
+ <customCommand>
+ <name>DECOMMISSION</name>
+ <commandScript>
+ <script>scripts/hbase_master.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>600</timeout>
+ </commandScript>
+ </customCommand>
+ </customCommands>
+ </component>
+
+ <component>
+ <name>HBASE_REGIONSERVER</name>
+ <displayName>RegionServer</displayName>
+ <category>SLAVE</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <timelineAppid>HBASE</timelineAppid>
+ <commandScript>
+ <script>scripts/hbase_regionserver.py</script>
+ <scriptType>PYTHON</scriptType>
+ </commandScript>
+ </component>
+
+ <component>
+ <name>HBASE_CLIENT</name>
+ <displayName>HBase Client</displayName>
+ <category>CLIENT</category>
+ <cardinality>1+</cardinality>
+ <versionAdvertised>true</versionAdvertised>
+ <commandScript>
+ <script>scripts/hbase_client.py</script>
+ <scriptType>PYTHON</scriptType>
+ </commandScript>
+ <configFiles>
+ <configFile>
+ <type>xml</type>
+ <fileName>hbase-site.xml</fileName>
+ <dictionaryName>hbase-site</dictionaryName>
+ </configFile>
+ <configFile>
+ <type>env</type>
+ <fileName>hbase-env.sh</fileName>
+ <dictionaryName>hbase-env</dictionaryName>
+ </configFile>
+ </configFiles>
+ </component>
+ </components>
+
+ <osSpecifics>
+ <osSpecific>
+ <osFamily>any</osFamily>
+ <packages>
+ <package>
+ <name>hbase</name>
+ </package>
+ </packages>
+ </osSpecific>
+ </osSpecifics>
+
+ <commandScript>
+ <script>scripts/service_check.py</script>
+ <scriptType>PYTHON</scriptType>
+ <timeout>300</timeout>
+ </commandScript>
+
+ <requiredServices>
+ <service>ZOOKEEPER</service>
+ <service>HDFS</service>
+ </requiredServices>
+
+ <configuration-dependencies>
+ <config-type>core-site</config-type>
+ <config-type>hbase-site</config-type>
+ <config-type>ranger-hbase-policymgr-ssl</config-type>
+ <config-type>ranger-hbase-security</config-type>
+ </configuration-dependencies>
+
+ </service>
+ </services>
+</metainfo>
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-design/technology-stack.md b/versioned_docs/version-2.7.6/ambari-design/technology-stack.md
new file mode 100644
index 0000000..92de946
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/technology-stack.md
@@ -0,0 +1,30 @@
+---
+sidebar_position: 1
+---
+
+# Technology Stack
+
+## Ambari Server
+
+- Server code: Java 1.7 / 1.8
+- Agent scripts: Python
+- Database: Postgres, Oracle, MySQL
+- ORM: EclipseLink
+- Security: Spring Security with remote LDAP integration and local database
+- REST server: Jersey (JAX-RS)
+- Dependency Injection: Guice
+- Unit Testing: JUnit
+- Mocks: EasyMock
+- Configuration management: Python
+
+## Ambari Web
+
+- Frontend code: JavaScript
+- Client-side MVC framework: Ember.js / AngularJS
+- Templating: Handlebars.js (integrated with Ember.js)
+- DOM manipulation: jQuery
+- Look and feel: Bootstrap 2
+- CSS preprocessor: LESS
+- Unit Testing: Mocha
+- Mocks: Sinon.js
+- Application assembler/tester: Brunch / Grunt / Gulp
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/framework-services.md b/versioned_docs/version-2.7.6/ambari-design/views/framework-services.md
new file mode 100644
index 0000000..47a639e
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/framework-services.md
@@ -0,0 +1,110 @@
+# Framework Services
+
+This section describes the framework services that are available for views.
+
+
+## ViewContext
+
+The view server-side resources have access to a [ViewContext](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewContext.java) object. The view context provides information about the current authenticated user, the view definition, the instance configuration properties, instance data and the view controller.
+
+```java
+/**
+ * The view context.
+ */
+@Inject
+ViewContext context;
+```
+
+## Instance Data
+
+The view framework exposes a way to store key/value pair "instance data". This data is scoped to a given view instance and user. Instance data is meant to be used for information such as "user prefs" or other lightweight information that supports the experience of your view application. You can access the instance data get and put methods from the [ViewContext](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewContext.java) object.
+
+Checkout the **Favorite View** for example usage of the instance data API.
+
+[https://github.com/apache/ambari/tree/trunk/ambari-views/examples/favorite-view](https://github.com/apache/ambari/tree/trunk/ambari-views/examples/favorite-view)
+
+```java
+/**
+ * Context object available to the view components to provide access to
+ * the view and instance attributes as well as run time information about
+ * the current execution context.
+ */
+public interface ViewContext {
+
+  /**
+   * Save an instance data value for the given key.
+   *
+   * @param key   the key
+   * @param value the value
+   *
+   * @throws IllegalStateException if no instance is associated
+   */
+  public void putInstanceData(String key, String value);
+
+  /**
+   * Get the instance data value for the given key.
+   *
+   * @param key the key
+   *
+   * @return the instance data value; null if no instance is associated
+   */
+  public String getInstanceData(String key);
+}
+```
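+
+A minimal usage sketch, persisting and reading back a per-user preference via the methods above:
+
+```java
+context.putInstanceData("favorite.service", "HDFS");
+String favorite = context.getInstanceData("favorite.service"); // returns "HDFS"
+```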
+
+## Instance Configuration Properties
+
+The instance configuration properties (set when you created your view instance) are accessible from the view context:
+
+```java
+viewContext.getProperties();
+```
+
+Configuration properties also support a set of pre-defined **variables** that will be replaced when you read the property from the view context. For example, if your view requires a configuration parameter "hdfs.file.path" and that path is going to be set based on the username, when you configure the view instance, set the configuration property like so:
+
+```
+"hdfs.file.path" : "/this/is/some/path/${username}"
+```
+
+When you get this property from the view context, the `${username}` variable will be replaced automatically.
+
+```java
+viewContext.getProperties().get("hdfs.file.path"); // returns "/this/is/some/path/pramod"
+```
+
+Instance parameters support the following pre-defined variables: `${username}`, `${viewName}` and `${instanceName}`.
+
+## Events
+
+Events are an important component of the views framework. Events allow the view to interact with the framework on lifecycle changes (i.e. "Framework Events") such as deploy, create and destroy. Additionally, once a user has a collection of views available, eventing allows views to communicate with other views (i.e. "View Events").
+
+### Framework Events
+
+To register to receive framework events, specify a `<view-class>this.is.my.view-clazz</view-class>` in the `view.xml`, where the class implements the [View](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/View.java) interface (a sketch follows the table below).
+
+
+
+Event | Description
+---------|-------
+onDeploy() | Called when a view is deployed.
+onCreate() | Called when a view instance is created.
+onDestroy() | Called when a view instance is destroyed.
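+
+A minimal sketch of such a class (the class name is illustrative; the method signatures follow the View interface linked above):
+
+```java
+import org.apache.ambari.view.View;
+import org.apache.ambari.view.ViewDefinition;
+import org.apache.ambari.view.ViewInstanceDefinition;
+
+public class MyView implements View {
+  @Override
+  public void onDeploy(ViewDefinition definition) {
+    // called once when the view is deployed into Ambari
+  }
+
+  @Override
+  public void onCreate(ViewInstanceDefinition definition) {
+    // called when an instance of this view is created
+  }
+
+  @Override
+  public void onDestroy(ViewInstanceDefinition definition) {
+    // called when an instance of this view is destroyed
+  }
+}
+```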
+
+### View Events
+
+Views can pass events between views. Obtain the [ViewController](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewController.java) object that allows you to **register listeners** for view events and to **fire events** for other listeners. A view can register an event [Listener](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/events/Listener.java) (via the [ViewController](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/ViewController.java)) for other views by **view name**, or by **view name + version**. When an [Event](http://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/events/Event.java) is fired from the source view, all registered listeners will receive the event.
+
+
+
+1. Obtain the view controller and register a listener.
+
+```java
+viewContext.getViewController().registerListener(...);
+```
+2. Fire the event. `viewContext.getViewController().fireEvent(...);`
+
+3. The framework will notify all registered listeners. The listener implementation can process the event as appropriate. `listener.notify(...)`
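+
+Putting the steps together, a rough sketch (the event id, view name, and the exact `ViewController` method signatures are assumptions based on the interfaces linked above):
+
+```java
+import java.util.Collections;
+
+import org.apache.ambari.view.ViewContext;
+import org.apache.ambari.view.events.Event;
+import org.apache.ambari.view.events.Listener;
+
+public class EventWiring {
+
+  /** Step 1: register a listener for events fired by the view named "FILES". */
+  public void listenToFilesView(ViewContext viewContext) {
+    viewContext.getViewController().registerListener(new Listener() {
+      @Override
+      public void notify(Event event) {
+        // Step 3: the framework invokes this for each fired event.
+      }
+    }, "FILES");
+  }
+
+  /** Step 2: fire an event to all listeners registered for this view. */
+  public void fireMyEvent(ViewContext viewContext) {
+    viewContext.getViewController().fireEvent("my.event.id",
+        Collections.singletonMap("status", "done"));
+  }
+}
+```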
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/imgs/fmwk-events.jpg b/versioned_docs/version-2.7.6/ambari-design/views/imgs/fmwk-events.jpg
new file mode 100644
index 0000000..a1b4881
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/imgs/fmwk-events.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-components.jpg b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-components.jpg
new file mode 100644
index 0000000..1f95d12
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-components.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-events.jpg b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-events.jpg
new file mode 100644
index 0000000..62f0532
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-events.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-lifecycle.png b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-lifecycle.png
new file mode 100644
index 0000000..1cd7cc1
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-lifecycle.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-versions.jpg b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-versions.jpg
new file mode 100644
index 0000000..4dcd45d
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/imgs/view-versions.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/index.md b/versioned_docs/version-2.7.6/ambari-design/views/index.md
new file mode 100644
index 0000000..6e8a74a
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/index.md
@@ -0,0 +1,90 @@
+# Views
+
+:::info
+This capability is currently under development.
+:::
+
+**Ambari Views** offer a systematic way to plug in UI capabilities to surface custom visualization, management and monitoring features in Ambari Web. A "**view**" is a way of extending Ambari that allows 3rd parties to plug in new resource types along with the APIs, providers and UI to support them. In other words, a view is an application that is deployed into the Ambari container.
+
+
+## Useful Resources
+
+Resource | Link
+---------|-------
+Views Overview | http://www.slideshare.net/hortonworks/ambari-views-overview
+Views Framework API Docs | https://github.com/apache/ambari/blob/trunk/ambari-views/docs/index.md
+Views Framework Examples | https://github.com/apache/ambari/tree/trunk/ambari-views/examples
+
+## Terminology
+
+The following section describes the basic terminology associated with views.
+
+Term | Description
+---------|-------
+View Name | The name of the view. The view name identifies the view to Ambari.
+View Version | The version of the view. A unique view name can have multiple versions deployed in Ambari.
+View Package | This is the JAR package that contains the **view definition** and all view resources (server-side resources and client-side assets) for a given view version. See [View Package](#view-package) for more information on the contents and structure of the package.
+View Definition | This defines the view name, version, resources and required/optional configuration parameters for a view. The view definition file is included in the view package. See View Definition for more information on the view definition file syntax and features.
+View Instance | A unique instance of a view, based on a view definition and a specific version, that is configured. See Versions and Instances for more information.
+View API | The REST API for viewing the list of deployed views and creating view instances. See View API for more information.
+Framework Services | The server-side of the view framework exposes certain services for use with your views. This includes persistence of view instance data and view eventing. See Framework Services for more information.
+
+## Components of a View
+
+A view can consist of **client-side assets** (i.e. the UI that is exposed in Ambari Web) and **server-side resources** (i.e. the classes that expose REST end points). When the view loads into Ambari Web, the view UI can use the view server-side resources as necessary to deliver the view functionality.
+
+
+
+### Client-side Assets
+
+The framework does not limit or restrict which client-side technologies a view uses. You can package client-side dependencies (such as JavaScript and CSS frameworks) with your view.
+
+### Server-side Resources
+
+A view can expose resources as REST end points to be used in conjunction with the client-side to deliver the functionality of your view application. These resources are written in Java and can be anything from a servlet to a regular REST service to an Ambari ResourceProvider (i.e. a special type of REST service that handles some REST capabilities such as partial response and pagination, provided you adhere to the Ambari ResourceProvider interface). See [Framework Services](./framework-services.md) for more information on capabilities that the framework exposes on the server-side for views.
+
+:::info
+Checkout the **Weather View** as an example of a view that exposes servlet and REST endpoints.
+
+[https://github.com/apache/ambari/tree/trunk/ambari-views/examples/weather-view](https://github.com/apache/ambari/tree/trunk/ambari-views/examples/weather-view)
+:::
+
+## View Package
+
+The assets associated with a view are delivered as a JAR package. The **view definition file** must be at the root of the package. UI assets and server-side classes are served from the root. Dependent Java libraries are placed in the `WEB-INF/lib` directory.
+
+```
+view.jar
+|
+|- view.xml
+|
+|- <server-side classes>
+|
+|- index.html
+| |
+| |_ <other client-side assets>
+|
+|_ WEB-INF
+ |
+ |_ lib/*.jar
+```
+
+## Versions and Instances
+
+Multiple versions of a given view can be deployed into Ambari and multiple instances of each view can be created for each version. For example, I can have a view named FILES and deploy versions 0.1.0 and 0.2.0. I can then create instances of each version FILES{0.1.0} and FILES{0.2.0} allowing some Ambari users to have an older version of FILES (0.1.0), and other users to have the newer FILES version (0.2.0). I can also create multiple instances for each version, configuring each differently.
+
+
+
+### Instance Configuration Parameters
+
+As part of a view definition, the instance configuration parameters are specified (i.e. "these parameters are needed to configure an instance of this view"). When you create a view instance, you specify the configuration parameters specific to that instance. Since parameters are scoped to a particular view instance, you can have multiple instances of a view, each instance configured differently.
+
+Using the example above, I can create two instances of the FILES{0.2.0} version, one instance that is configured a certain way and the second that is configured differently. This allows some Ambari users to use FILES one way, and other users a different way.
+
+See [Framework Services](./framework-services.md) for more information on instance configuration properties.
+
+## View Lifecycle
+
+The lifecycle of a view is shown below. As you deploy a view and create instances of a view, server-side framework events are invoked. See [Framework Services](./framework-services.md) for more information on capabilities that the framework exposes on the server-side for views.
+
+
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/view-api.md b/versioned_docs/version-2.7.6/ambari-design/views/view-api.md
new file mode 100644
index 0000000..023db84
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/view-api.md
@@ -0,0 +1,195 @@
+# View API
+
+This section describes basic usage of the View REST API. Browse https://github.com/apache/ambari/blob/trunk/ambari-views/docs/index.md for detailed usage information and examples.
+
+## Get List of Deployed Views
+
+1. Get the list of all deployed views.
+
+```
+GET /api/v1/views
+
+200 - OK
+```
+
+2. Once you have the list of views, you can drill into a view and see the available versions.
+
+```
+GET /api/v1/views/FILES
+
+200 - OK
+```
+
+3. You can go a level deeper and see more information about that specific version of the view, such as the parameters and the archive name, and a list of all instances of the view for that specific view version.
+
+```
+GET /api/v1/views/FILES/versions/0.1.0
+
+200 - OK
+```
+
+## Creating a View Instance: Files View
+
+The following example shows creating an instance of the [Files View](https://github.com/apache/ambari/tree/trunk/contrib/views/files) (view name FILES, version 0.1.0) called "MyFiles".
+
+1. Create the view instance.
+
+```
+POST /api/v1/views/FILES/versions/0.1.0/instances/MyFiles
+
+[ {
+"ViewInstanceInfo" : {
+ "properties" : {
+ "dataworker.defaultFs" : "webhdfs://your.namenode.host:50070"
+ }
+ }
+} ]
+
+201 - CREATED
+```
+
+:::info
+When creating your view instance, be sure to provide all required view instance properties, otherwise you will receive a 500 with a message explaining the properties that are required.
+:::
+
+2. Restart Ambari Server to pick up the view instance and UI resources.
+
+```bash
+ambari-server restart
+```
+
+3. Confirm the newly created view instance is available.
+
+```
+GET /api/v1/views/FILES/versions/0.1.0
+
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/FILES/versions/0.1.0/",
+ "ViewVersionInfo" : {
+ "archive" : "/var/lib/ambari-server/resources/views/work/FILES{0.1.0}",
+ "label" : "Files",
+ "masker_class" : null,
+ "parameters" : [
+ {
+ "name" : "dataworker.defaultFs",
+ "description" : "FileSystem URI",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "dataworker.username",
+ "description" : "The username (defaults to ViewContext username)",
+ "required" : false,
+ "masked" : false
+ }
+ ],
+ "version" : "0.1.0",
+ "view_name" : "FILES"
+ },
+ "permissions" : [ ],
+ "instances" : [
+ {
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/FILES/versions/0.1.0/instances/MyFiles",
+ "ViewInstanceInfo" : {
+ "instance_name" : "MyFiles",
+ "version" : "0.1.0",
+ "view_name" : "FILES"
+ }
+ }
+ ]
+}
+```
+
+Browse to the view instance directly.
+
+```
+http://c6401.ambari.apache.org:8080/views/FILES/0.1.0/MyFiles/
+
+or
+
+http://c6401.ambari.apache.org:8080/#/main/views/FILES/0.1.0/MyFiles
+```
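+
+The REST calls above can also be scripted. A minimal sketch in Java using only JDK classes (the host, credentials, and payload mirror the example above; adjust them for your environment):
+
+```java
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
+import java.util.Base64;
+
+public class CreateViewInstance {
+  public static void main(String[] args) throws Exception {
+    URL url = new URL(
+        "http://c6401.ambari.apache.org:8080/api/v1/views/FILES/versions/0.1.0/instances/MyFiles");
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    conn.setRequestMethod("POST");
+
+    // The Ambari REST API uses basic authentication.
+    String auth = Base64.getEncoder()
+        .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
+    conn.setRequestProperty("Authorization", "Basic " + auth);
+    // Ambari expects this header on modifying requests (CSRF protection).
+    conn.setRequestProperty("X-Requested-By", "ambari");
+
+    conn.setDoOutput(true);
+    String body = "[ { \"ViewInstanceInfo\" : { \"properties\" : {"
+        + " \"dataworker.defaultFs\" : \"webhdfs://your.namenode.host:50070\" } } } ]";
+    try (OutputStream os = conn.getOutputStream()) {
+      os.write(body.getBytes(StandardCharsets.UTF_8));
+    }
+
+    System.out.println(conn.getResponseCode()); // expect 201 - CREATED
+  }
+}
+```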
+
+## Creating a View Instance: Capacity Scheduler View
+
+The following example shows creating an instance of the [Capacity Scheduler View](https://github.com/apache/ambari/tree/trunk/contrib/views/capacity-scheduler) (view name CAPACITY-SCHEDULER, version 0.1.0) called "CS_1", using the label "Capacity Scheduler".
+
+* Create the view instance.
+
+```
+POST /api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/instances/CS_1
+
+[ {
+"ViewInstanceInfo" : {
+ "label" : "Capacity Scheduler",
+ "properties" : {
+ "ambari.server.url" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/MyCluster",
+ "ambari.server.username" : "admin",
+ "ambari.server.password" : "admin"
+ }
+ }
+} ]
+
+201 - CREATED
+```
+
+:::info
+When creating your view instance, be sure to provide all **required** view instance properties, otherwise you will receive a 500 with a message explaining the properties that are required.
+:::
+
+* Confirm the newly created view instance is available.
+
+```
+GET /api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0
+
+{
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/",
+ "ViewVersionInfo" : {
+ "archive" : "/var/lib/ambari-server/resources/views/work/CAPACITY-SCHEDULER{0.1.0}",
+ "label" : "Capacity Scheduler",
+ "masker_class" : null,
+ "parameters" : [
+ {
+ "name" : "ambari.server.url",
+ "description" : "Target Ambari Server REST API cluster URL (for example: http://ambari.server:8080/api/v1/clusters/c1)",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "ambari.server.username",
+ "description" : "Target Ambari administrator username (for example: admin)",
+ "required" : true,
+ "masked" : false
+ },
+ {
+ "name" : "ambari.server.password",
+ "description" : "Target Ambari administrator password (for example: admin)",
+ "required" : true,
+ "masked" : false
+ }
+ ],
+ "version" : "0.1.0",
+ "view_name" : "CAPACITY-SCHEDULER"
+ },
+ "permissions" : [ ],
+ "instances" : [
+ {
+ "href" : "http://c6401.ambari.apache.org:8080/api/v1/views/CAPACITY-SCHEDULER/versions/0.1.0/instances/CS_1",
+ "ViewInstanceInfo" : {
+ "instance_name" : "CS_1",
+ "version" : "0.1.0",
+ "view_name" : "CAPACITY-SCHEDULER"
+ }
+ }
+ ]
+}
+```
+* Browse to the view instance directly.
+
+```
+http://c6401.ambari.apache.org:8080/views/CAPACITY-SCHEDULER/0.1.0/CS_1/
+
+or
+
+http://c6401.ambari.apache.org:8080/#/main/views/CAPACITY-SCHEDULER/0.1.0/CS_1/
+```
diff --git a/versioned_docs/version-2.7.6/ambari-design/views/view-definition.md b/versioned_docs/version-2.7.6/ambari-design/views/view-definition.md
new file mode 100644
index 0000000..b32cc8f
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-design/views/view-definition.md
@@ -0,0 +1,210 @@
+# View Definition
+
+The following describes the syntax of the View Definition File (`view.xml`) as part of the View Package.
+
+An XML Schema Definition is available [here](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/resources/view.xsd).
+
+## `<view>`
+
+The `<view>` element is the enclosing element in the Definition File. The following table describes the elements you can include in the `<view>` element:
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes | The unique name of the view. See `<name>` for more information.
+label | Yes | The display label of the view. See `<label>` for more information.
+version | Yes | The version of the view. See `<version>` for more information.
+min-ambari-version<br></br>max-ambari-version | No | The minimum and maximum Ambari version this view can be deployed with. See `<min-ambari-version>` for more information.
+description | No | The description of the view. See `<description>` for more information.
+icon | No | The 32x32 icon to display for this view. Suggested size is 32x32 and will be displayed as 8x8 and 16x16 as necessary. If this property is not set, a default view framework icon is used.
+icon64 | No | The 64x64 icon to display for this view. If this property is not set, the 32x32 sized icon will be used.
+permission | No | Defines a custom permission for this view. See `<permission>` for more information.
+parameter | No | Defines a configuration parameter that is used when creating a view instance. See `<parameter>` for more information.
+resource | No | Defines a resource that is exposed by the view. See `<resource>` for more information.
+instance | No | Defines a static instance of the view. See `<instance>` for more information.
+view-class | No| Registers a view class to receive framework events. See `<view-class>` for more information.
+validator-class | No | Registers a validator class to receive framework events. See `<validator-class>` for more information.
+
+## `<name>`
+
+The unique name of the view. Example:
+
+```xml
+<name>MY_COOL_VIEW</name>
+```
+
+## `<label>`
+
+The label of the view. Example:
+
+```xml
+<label>My Cool View</label>
+```
+
+## `<version>`
+
+The version of the view. Example:
+
+```xml
+<version>0.1.0</version>
+```
+
+## `<min-ambari-version> <max-ambari-version>`
+
+The minimum and maximum version of Ambari server that can run this view. Example:
+
+```xml
+<min-ambari-version>1.7.0</min-ambari-version>
+<min-ambari-version>1.7.*</min-ambari-version>
+<max-ambari-version>2.0</max-ambari-version>
+```
+
+## `<description>`
+
+The description of the view. Example:
+
+```xml
+<description>This view is used to display information.</description>
+```
+
+## `<parameter>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes | The name of the configuration parameter.
+description | Yes | The description of the configuration parameter.
+label | No | The user friendly name of the configuration parameter (used in the Ambari Administration Interface UI).
+placeholder| No | The placeholder value for the configuration parameter (used in the Ambari Administration Interface UI).
+default-value | No| The default value for the configuration parameter (used in the Ambari Administration Interface UI).
+required | Yes |If true, the configuration parameter is required in order to create a view instance.
+masked | No | Indicates whether this parameter value is to be "masked" in the Ambari Web UI (i.e. not shown in the clear). Omitting this element defaults to not masked. Otherwise, if true, the parameter value will be "masked" in the Web UI.
+
+```xml
+<parameter>
+ <name>someParameter</name>
+ <description>Some parameter this is used to configure an instance of this view</description>
+ <required>false</required>
+</parameter>
+```
+
+```xml
+<parameter>
+ <name>name.label.descr.default.place</name>
+ <description>Name, label, description, default and placeholder</description>
+ <label>NameLabelDescDefaultPlace</label>
+ <placeholder>this is placeholder text but you should see default</placeholder>
+ <default-value>youshouldseethisdefault</default-value>
+ <required>true</required>
+</parameter>
+```
+
+See the [Property View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/property-view/docs/index.md) to see the different parameter options in use.
+
+## `<permission>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes| The unique name of the permission.
+description| Yes| The description of the permission.
+
+```xml
+<permission>
+ <name>SOME_CUSTOM_PERM</name>
+ <description>A custom permission for this view</description>
+</permission>
+<permission>
+ <name>SOME_OTHER_PERM</name>
+ <description>Another custom permission for this view</description>
+</permission>
+```
+
+## `<resource>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes| The name of the resource. This will be the resource endpoint name of the view instance.
+plural-name | No | The plural name of the resource.
+service-class | No | The JAX-RS annotated resource service class.
+id-property | No | The resource identifier.
+provider-class | No | The Ambari ResourceProvider resource class.
+resource-class | No | The JavaBean resource class.
+
+```xml
+<resource>
+ <name>calculator</name>
+ <service-class>org.apache.ambari.view.proxy.CalculatorResource</service-class>
+</resource>
+```
+
+See the [Calculator View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/calculator-view/docs/index.md) to see a REST service endpoint view implementation.
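+
+As a sketch, a minimal JAX-RS annotated service-class could look like the following (the class name and path are illustrative, not the actual Calculator example):
+
+```java
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.MediaType;
+
+public class ExampleCalculatorResource {
+
+  /** Served relative to the view instance's resource endpoint. */
+  @GET
+  @Path("/sum")
+  @Produces(MediaType.TEXT_PLAIN)
+  public String sum() {
+    return String.valueOf(1 + 2);
+  }
+}
+```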
+
+```xml
+<resource>
+ <name>city</name>
+ <plural-name>cities</plural-name>
+ <id-property>id</id-property>
+ <resource-class>org.apache.ambari.view.weather.CityResource</resource-class>
+ <provider-class>org.apache.ambari.view.weather.CityResourceProvider</provider-class>
+ <service-class>org.apache.ambari.view.weather.CityService</service-class>
+</resource>
+```
+
+See the [Weather View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/weather-view/docs/index.md) to see an Ambari ResourceProvider view implementation.
+
+## `<instance>`
+
+Element | Required | Description
+--------|---------|---------------
+name | Yes |The unique name of the view instance.
+label | No |The display label of the view instance. If not set, the view definition `<label>` is used.
+description| No |The description of the view instance. If not set, the view definition `<description>` is used.
+visible | No |If true, the view instance will show up in the user's list of view instances.
+icon | No |Overrides the view icon for this specific view instance.
+icon64 | No |Overrides the view icon64 for this specific view instance.
+property | No |Specifies any necessary configuration parameters for the view instance. See `<property>` for more information.
+
+```xml
+<instance>
+ <name>US_WEST</name>
+ <property>
+ <key>cities</key>
+ <value>Palo Alto, US;Los Angeles, US;Portland, US;Seattle, US</value>
+ </property>
+ <property>
+ <key>units</key>
+ <value>imperial</value>
+ </property>
+</instance>
+```
+
+## `<property>`
+
+Element | Required | Description
+--------|---------|---------------
+key |Yes |The property key (for the configuration parameter to set).
+value |Yes |The property value (for the configuration parameter to set).
+
+```xml
+<property>
+ <key>units</key>
+ <value>imperial</value>
+</property>
+```
+
+## `<view-class>`
+
+Registers a view class to receive framework events. The view class must implement the [View](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/View.java) interface.
+
+```xml
+<view-class>this.is.my.viewclazz</view-class>
+```
+
+## `<validator-class>`
+
+Registers a validator class to receive property and instance validation requests. The validator class must implement the [Validator](https://github.com/apache/ambari/blob/trunk/ambari-views/src/main/java/org/apache/ambari/view/validation/Validator.java) interface.
+
+```xml
+<validator-class>org.apache.ambari.view.property.MyValidator</validator-class>
+```
+
+See [Property Validator View Example](https://github.com/apache/ambari/blob/trunk/ambari-views/examples/property-validator-view/docs/index.md) to see view property and instance validation in use.
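+
+As a rough sketch only (the method shapes and result type below are assumptions based on the linked `Validator` interface; consult the Property Validator View Example for the authoritative signatures):
+
+```java
+import org.apache.ambari.view.ViewInstanceDefinition;
+import org.apache.ambari.view.validation.ValidationResult;
+import org.apache.ambari.view.validation.Validator;
+
+public class MyValidator implements Validator {
+
+  @Override
+  public ValidationResult validateInstance(ViewInstanceDefinition definition,
+                                           ValidationContext mode) {
+    // a real validator would inspect the instance configuration here
+    return acceptedResult();
+  }
+
+  @Override
+  public ValidationResult validateProperty(String property,
+                                           ViewInstanceDefinition definition,
+                                           ValidationContext mode) {
+    // a real validator would inspect the individual property value here
+    return acceptedResult();
+  }
+
+  // Assumes ValidationResult exposes isValid()/getDetail(); see the linked example.
+  private ValidationResult acceptedResult() {
+    return new ValidationResult() {
+      @Override
+      public boolean isValid() { return true; }
+      @Override
+      public String getDetail() { return "OK"; }
+    };
+  }
+}
+```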
diff --git a/versioned_docs/version-2.7.6/ambari-dev/admin-view-ambari-admin-development.md b/versioned_docs/version-2.7.6/ambari-dev/admin-view-ambari-admin-development.md
new file mode 100644
index 0000000..e8b230c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/admin-view-ambari-admin-development.md
@@ -0,0 +1,39 @@
+# Admin View (ambari-admin) Development
+
+## Frontend Development
+
+Follow the instructions here to ease frontend development for Admin View (ambari-admin module):
+
+1. Follow the Quick Start Guide to install and start Ambari Server (cluster need not be deployed).
+2. Follow the "Frontend Development" section in Quick Start Guide to check out the Ambari source using git. This makes the entire Ambari source available via /vagrant/ambari from the Vagrant VM.
+3. From the Ambari Server host:
+
+ ```bash
+ cd /var/lib/ambari-server/resources/views/work  # if this directory does not exist, you have not started ambari-server; run "ambari-server start" to start it
+ mv ADMIN_VIEW\{2.5.0.0\} /tmp
+ ln -s /vagrant/ambari/ambari-admin/src/main/resources/ui/admin-web/dist ADMIN_VIEW\{2.5.0.0\}
+ cp /tmp/ADMIN_VIEW\{2.5.0.0\}/view.xml ADMIN_VIEW\{2.5.0.0\}/
+ ambari-server restart
+ ```
+
+4. Now you can change the source code for Admin View and run gulp locally, and the changes are automatically reflected on the server.
+
+
+## Functional Tests
+
+To run end-to-end functional tests in the browser, execute:
+
+```bash
+npm run update-webdriver
+npm start   # starts an HTTP server on port 8000
+```
+
+Open another terminal at the same path and execute:
+
+```bash
+npm run protractor   # runs e2e tests in the browser (works on top of the Selenium jar)
+```
+
+## Unit Tests
+
+To run unit tests:
+
+```bash
+cd /ambari/ambari-admin/src/main/resources/ui/admin-web
+npm run test-single-run   # uses the PhantomJS headless browser (same as the ambari-web unit tests)
+```
+
+Note:
+The "npm test" command starts a karma server at [http://localhost:9876/](http://localhost:9876/) and runs the unit tests. This server remains up, automatically reloads any changes in the test code, and reruns the tests. This is useful while developing unit tests.
diff --git a/versioned_docs/version-2.7.6/ambari-dev/ambari-code-layout.md b/versioned_docs/version-2.7.6/ambari-dev/ambari-code-layout.md
new file mode 100644
index 0000000..6adf4ed
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/ambari-code-layout.md
@@ -0,0 +1,102 @@
+# Ambari Code Layout
+
+_Ambari code checkout and build instructions are available in Ambari Development page._
+_Ambari design and architecture is detailed in [Ambari Design](../ambari-design/index.md) page._
+_Understanding the architecture of Ambari is helpful in navigating code easily._
+
+Ambari's source has the following layout:
+
+```
+ambari/
+ ambari-agent/
+ ambari-common/
+ ambari-project/
+ ambari-server/
+ ambari-views/
+ ambari-web/
+ contrib/
+ docs/
+```
+
+Major components of Ambari reside in their own sub-folders under the root folder, to maintain clean separation of code.
+
+Folder | Components or Purpose
+------|---------------------
+ambari-server | Code for the main Ambari server which manages Hadoop through the agents installed on each node.
+ambari-agent | Code for the Ambari agents which run on each node that the server above manages.
+ambari-web | Code for Ambari Web UI which interacts with the Ambari server above.
+ambari-views | Code for Ambari Views, the framework for extending the Ambari Web UI.
+ambari-common | Any common code between Ambari Server and Agents.
+contrib | Code for any custom contributions Ambari makes to other third party software or libraries.
+docs | Basic Ambari documentation, including the Ambari REST API.
+
+Ambari Server and Agents interact with each other via an internal JSON protocol.
+Ambari Web UI interacts with Ambari Server through the documented Ambari REST APIs.
+
+## Ambari-Server
+
+## Ambari-Agent
+
+## Ambari-Views
+
+## Ambari-Web
+
+The Ambari Web UI is a purely browser-side JavaScript application based on the [Ember](http://emberjs.com/) JavaScript framework. A good understanding of [Ember](http://emberjs.com/) is necessary to easily understand the code and its layout.
+
+Being a pure JavaScript application, all UI is rendered locally in the browser, with data coming from the Ambari REST APIs provided by the Ambari Server.
+
+```
+ambari-web/
+ app/
+ config.coffee
+ package.json
+ pom.xml
+ test/
+ vendor/
+```
+
+Folder | Description
+------|---------------------
+app/ |The main application code. This contains Ember's views, templates, controllers, models, routes, etc. for rendering Ambari UI
+config.coffee |[Brunch](http://brunch.io/) application builder configuration file
+package.json |[npm](https://npmjs.org/) package manager configuration file
+test/ |Javascript test files testing functionality written in app/ folder
+vendor/ |Third party javascript libraries and stylesheets used. Full list of third party libraries is documented in /ambari/ambari-web/app/assets/licenses/NOTICE.txt
+
+Developers mainly work on JavaScript and other files in the app/ folder. Once that is done, the final JavaScript is built using Brunch (an HTML5 application assembler based on Node.js) into the /ambari/ambari-web/public/ folder. This folder contains the index.html which bootstraps the Ambari web application.
+
+While working, developers should use the
+
+```bash
+brunch w
+```
+
+command to launch Brunch in watch mode, where it regenerates the final application on any change. Similarly,
+
+```bash
+brunch watch --server (or use the shorthand: brunch w -s)
+```
+
+launches an HTTP server at http://localhost:3333 serving the final application. This is helpful for seeing the UI with mock data, without the entire Ambari Server deployed.
+
+Note: see "[Coding Guidelines for Ambari](./coding-guidelines-for-ambari.md)" for more details on building and running Ambari Web locally.
+
+**ambari-web/app**
+
+Since ambari-web/app/ folder is where developers spend a majority of time, the major files and their purpose is listed below.
+
+Folder or File | Description
+------|---------------------
+assets/ | Mock data under assets/data. Static files served via assets/font and assets/img.
+controllers/ | The C in MVC. Ember controllers for the main application controllers/main, installer controllers/wizard, and common controllers controllers/global
+data/ | Meta data for the application (UI metadata, server data metadata, etc.)
+mappers/ | Classes which map server side JSON data structures into client side Ember models.
+models/ | The M in MVC. [Ember Data](http://emberjs.com/guides/models/) models used. Clusters, Services, Hosts, Alerts, etc. models are defined here
+routes/ | [Ember routes](http://emberjs.com/guides/routing/) defining the various page redirections in the application. main.js contains the main application routes. installer.js contains installer routes. Others are routings in various wizards etc.
+styles/ | CSS stylesheets represented in the [less](http://lesscss.org/) format. This is compiled by Brunch into the ambari-web/public/stylesheets/app.css
+views/ | The V in MVC. Contains all the Ember views of the application. Main application views under views/main, installer views under views/installer, and common views under views/commons
+templates/ | The HTML templates used by the views above. Generally a view will have a template file. Sometimes views define the template content in themselves as strings
+app.js | The main Ember application
+config.js | Main configuration file for the JavaScript application. Developers can keep the application in test mode using the App.testMode property, etc.
+
+If a developer adds, removes, or renames a model, view, controller, template, or route, they should update the corresponding entry in the models.js, views.js, controllers.js, templates.js, or routes.js files.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-dev/apache-ambari-jira.md b/versioned_docs/version-2.7.6/ambari-dev/apache-ambari-jira.md
new file mode 100644
index 0000000..8097868
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/apache-ambari-jira.md
@@ -0,0 +1,36 @@
+# Apache Ambari JIRA
+
+The following page describes the [Apache Ambari JIRA](https://issues.apache.org/jira/browse/AMBARI) components for tasks, bugs and improvements across the core project + contributions.
+
+## Components
+
+Component | Description
+------|---------------------
+alerts | JIRAs related to Ambari Alerts system.
+ambari-admin | New component specifically for Ambari Admin.
+ambari-agent | JIRAs related to the Ambari Agent.
+ambari-client | JIRAs related to the Ambari Client.
+ambari-metrics| JIRAs related to Ambari Metrics system.
+ambari-server | JIRAs related to the Ambari Server.
+ambari-shell | New component specifically for Ambari Shell.
+ambari-views | JIRAs related to the [Ambari Views framework](../ambari-design/views/index.md). Specific Views that are built on the framework will be handled with labels.
+ambari-web | New component specifically for Ambari Web.
+blueprints | JIRAs related to [Ambari Blueprints](../ambari-design/blueprints/index.md).
+contrib | JIRAs related to contributions under "contrib", such as Ambari SCOM.
+documentation | JIRAs related to project documentation including the wiki.
+infra | JIRAs related to project infrastructure including builds, release mechanics, and automation.
+security | JIRAs related to Ambari security features, including Kerberos.
+site | JIRAs related to the project site http://ambari.apache.org/.
+stacks | JIRAs related to Ambari Stacks.
+test | JIRAs related to unit tests and test automation.
+
+## Use of Labels
+
+In certain cases, the component listed above might be "too broad" and you want to designate JIRAs to a specific area of that component. To handle these scenarios, use a combination of component + labels. Some examples:
+
+Feature Area | Description|Component|Label
+-------------|------------|----------|---------
+HDP Stack | These are specific Stack implementations for HDP. |stacks | HDP
+BigTop | This is a specific Stack implementation for BigTop. | stacks | BigTop
+Files View | This is a specific view implementation for Files. | ambari-views | Files
+Ambari SCOM | This is a specific contribution of a Management Pack for Microsoft System Center. | contrib |Ambari-SCOM
diff --git a/versioned_docs/version-2.7.6/ambari-dev/code-review-guidelines.md b/versioned_docs/version-2.7.6/ambari-dev/code-review-guidelines.md
new file mode 100644
index 0000000..8f079a6
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/code-review-guidelines.md
@@ -0,0 +1,20 @@
+# Code Review Guidelines
+
+Please refer to [How to Contribute](./how-to-contribute.md) for instructions on how to submit a code review to GitHub.
+
+**What makes a good code review?**
+
+- Authors should annotate source code before the review. This makes it easier for devs reviewing your code and may even help you spot bugs before they do.
+- Send small code-reviews if possible. Reviewing more than 400 lines per hour diminishes our ability to find defects.
+- Reviewing code for more than one hour also reduces our ability to find bugs.
+- If possible, try to break up large reviews into separate but functional stages. If you need to temporarily comment out unit tests, do so. Sending gigantic patches means your review will take longer since reviewers need to block out more time to go through it, and you may spend more time revving iterations and rebasing.
+
+We have a global community of committers, so please be mindful that you should **wait at least 24 hours** before merging your pull request even though you may already have the necessary +1.
+
+This encourages others to take an interest in your pull request and helps us find more bugs (it's ok to slow down in order to speed up).
+
+**Always include at least two committers that are familiar with that code area.**
+
+If you want to subscribe to code reviews for a particular area, [feel free to edit this section](https://cwiki.apache.org/confluence/display/AMBARI/Code+Review+Guidelines).
+
+
diff --git a/versioned_docs/version-2.7.6/ambari-dev/coding-guidelines-for-ambari.md b/versioned_docs/version-2.7.6/ambari-dev/coding-guidelines-for-ambari.md
new file mode 100644
index 0000000..48798ec
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/coding-guidelines-for-ambari.md
@@ -0,0 +1,189 @@
+# Coding Guidelines for Ambari
+
+## Ambari Web Frontend Development Environment
+
+### Application Assembler: Brunch
+
+* Brunch was used to create the application skeleton for Ambari Web.
+
+* Brunch builds and deploys code automatically in the background as you modify the source files. This lets you break up the application into a number of JS files for code organization and reuse without worrying about development turnaround or runtime load performance.
+
+* Brunch lets you run a Node.js-based web server with a single command so that you can easily run Ambari Web without setting up Ambari Server (you still need to run Ambari Server for true end-to-end testing).
+
+To check out Ambari Web from the Github repository and run it:
+
+* Install Node.js from [http://nodejs.org](http://nodejs.org)
+* Execute the following:
+
+```bash
+git clone https://git-wip-us.apache.org/repos/asf/ambari.git
+cd ambari/ambari-web
+sudo npm install -g brunch@1.7.20
+rm -rf node_modules public
+npm install
+brunch build
+```
+
+_Note: if you receive a "gyp + xcodebuild" error when running "npm install", confirm you have Xcode CLI tools installed (Xcode > Preferences > Downloads)_
+_Note: if you receive "Error: EMFILE, open" errors when running "brunch build", increase the ulimit for file descriptors (for example, "ulimit -n 10000")_
+
+To run the web server in isolation, without Ambari Server:
+
+```
+brunch watch --server (or use the shorthand: brunch w -s)
+```
+
+The above runs Ambari Web with a local test server at localhost:3333. The login/password is admin/admin.
+
+We highly recommend that all Ambari front-end developers use PhpStorm by JetBrains. JetBrains has kindly granted Apache Ambari an open-source license for PhpStorm and IntelliJ. These products are available to Ambari committers (if you are an Ambari committer, email [private@ambari.apache.org](mailto:private@ambari.apache.org) to request license keys). You can also use Eclipse if that is your preference.
+
+* IDE Plugins: Go to Preferences->Plugins->Browse repositories and install the "Node.js" and "Handlebars" plugins.
+
+### Coding Conventions
+
+All JavaScript/Handlebars/LESS files should be formatted with the IDE to maintain consistency.
+
+Also, the IDE will give warnings in the editor about implicit globals, etc. Fix these warnings before submitting patches.
+
+We will use all default settings for Code Style in the IDE, except for the following:
+
+```
+Go to Preferences
+Code Style->General
+Line separator (for new files): Unix
+Make sure "Use tab character" is NOT checked
+Tab size: 2
+Indent: 2
+Continuation indent: 2
+Code Style->JavaScript:
+Tabs and Indents
+Make sure "use tab character" is NOT checked
+Set Tab size, Indent, and Continuation indent to "2".
+
+Spaces->Other
+Turn on "After name-value property separator ':'"
+```
+
+In general, the following conventions should be followed for all JavaScript code: http://javascript.crockford.com/code.html
+
+Exceptions to the rule from the above:
+
+* We use 2 spaces instead of 4.
+
+* Variable Declarations:
+"It is preferred that each variable be given its own line and comment. They should be listed in alphabetical order."
+Comment only where it makes sense; there is no need to do alphabetical sorting.
+
+* "JavaScript does not have block scope, so defining variables in blocks can confuse programmers who are experienced with other C family languages. Define all variables at the top of the function." - This does not need to be followed.
+
+### Java Import Order
+
+Some IDEs define their default import order differently and this can cause a lot of problems when creating patches and merging commits to different branches. The following are the checkstyle rules which are applied while executing the test phase of the build. Your IDE of choice should be updated to match these settings:
+
+* The use of the wild card character, '*', should be avoided and all imports should be explicitly stated.
+
+* The following order should be used for all import statements:
+ - java
+ - javax
+ - org
+ - com
+ - other
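+
+For instance, a file following these rules might order its imports like so (the specific classes are illustrative):
+
+```java
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.commons.lang.StringUtils;
+
+import com.google.inject.Inject;
+
+import io.example.SomeOtherLibrary;  // "other" (illustrative)
+```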
+
+### UI Unit Tests
+
+All patches must be accompanied by unit tests ensuring good coverage. When unit tests are not applicable (e.g., stylistic or layout changes, etc.), you must explicitly state in the JIRA that unit tests are not applicable.
+
+Unit tests are written using Mocha and run with the PhantomJS headless browser.
+
+To run unit tests for ambari-web, run:
+
+```bash
+cd ambari-web
+mvn test
+```
+
+## Ambari Backend Development
+
+**The following points are borrowed from the Hadoop wiki:**
+
+* All public classes and methods should have informative Javadoc comments.
+
+* Do not use @author tags.
+
+* Code must be formatted according to Sun's conventions, with one exception:
+ - Indent two spaces per level, not four.
+
+* Contributions must pass existing unit tests.
+
+* The code changes must be accompanied by unit tests. In cases where unit tests are not possible or don't make sense an explanation should be provided on the jira.
+
+* New unit tests should be provided to demonstrate bugs and fixes. JUnit (JUnit 4) is our test framework; a minimal sketch follows this list:
+ - You must implement a class that uses @Test annotations for all test methods.
+
+* Define methods within your class whose names begin with test, and call JUnit's many assert methods to verify conditions. Please add meaningful messages to the assert statement to facilitate diagnostics.
+
+* By default, do not let tests write any temporary files to /tmp. Instead, the tests should write to the location specified by the test.build.data system property.
+
+* Logging levels should conform to Log4j levels.
+* Use slf4j instead of commons logging as the logging facade.
+
+* Logger name should be the class name as far as possible.
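+
+A minimal sketch of a JUnit 4 test following the conventions above (the class and values are illustrative):
+
+```java
+import static org.junit.Assert.assertEquals;
+
+import org.junit.Test;
+
+public class TrimBehaviorTest {
+
+  @Test
+  public void testTrimRemovesSurroundingWhitespace() {
+    // a meaningful message facilitates diagnostics when the assert fails
+    assertEquals("trim() should remove surrounding whitespace",
+        "abc", "  abc  ".trim());
+  }
+}
+```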
+
+
+**Unit tests**
+
+* Developers should always run full unit tests before submitting their patch for code review and before committing to Apache. From the top-level directory,
+
+```bash
+mvn clean test
+```
+
+Sometimes it is useful to run unit tests just for the feature you are working on (e.g., Kerberos, Rolling/Express Upgrade, Stack Inheritance, Alerts, etc.). For this purpose, you can run unit tests with a given profile.
+
+The profiles run all test classes/cases annotated with a given Category, E.g.,
+
+```java
+@Category({ category.AlertTest.class})
+```
+
+To run one of the profiles, look at the available names in the top-level pom.xml. E.g.,
+
+```bash
+mvn clean test -P AlertTests # Other options are AmbariUpgradeTests, BlueprintTests, KerberosTests, MetricsTests, StackUpgradeTests
+```
+
+
+After you're done testing just that suite, **you should run a full unit test using "mvn clean test".**
+* [http://wiki.apache.org/hadoop/HowToDevelopUnitTests](http://wiki.apache.org/hadoop/HowToDevelopUnitTests)
+* The tests should be named *Test.java
+* **Unit testing with databases**
+ - We should use JavaDB as the in-memory database for unit tests. The database layer/ORM should be configurable to use an in-memory database. Two things are important for the db in testing.
+
+ - Ability to bootstrap the db with any initial data dynamically.
+
+ - Ability to modify the db state out of band to simulate certain test cases. One way to achieve the above could be to implement a database access layer only for testing purposes, but it might cause inconsistency in ORM objects, which needs to be figured out.
+
+* **Stub Heartbeat handler**
+ - For testing purpose it may be a good idea to implement a stub heartbeat handler that only simulates interaction with the agents but doesn't interact with any real agent: It will expose an action queue similar to the real heartbeat handler, but will not send anything anywhere, will just periodically remove the action from the queue. It will expose an interface to inject artificial responses for each of the actions, which can be used in tests to simulate agent responses. It will also expose an interface to inject node state to simulate failure of nodes or lost heartbeats. Guice framework can be used to inject stub heartbeat handler in testing.
+
+* **EasyMock**
+ - EasyMock is our mocking framework of choice. It has been successfully used in Hadoop. A few important points: an example of a scenario where EasyMock is apt: suppose we are testing deployment of a service but want to bypass a service dependency or want to inject an artificial component dependency; the dependency tracker object can be mocked to simulate the desired dependency scenario. Ambari server is by and large a state-driven system. EasyMock can be used to bypass the state changes and test components narrowly. However, it is probably better to use the in-memory database to simulate state changes and use EasyMock only when certain behavior cannot be easily simulated. For example, consider testing the API implementation to get the status of a transaction. It can be tested by mocking the action manager object; alternatively, it can be tested by setting the state in the in-memory database. In this case, the latter is a more comprehensive test.
+ Avoid static methods and objects; EasyMock cannot mock these. Use configuration or dependency injection to initialize static objects if they are likely to be mocked.
+ EasyMock cannot mock final classes, so those should be avoided for classes likely to be mocked. Take a look at [http://www.easymock.org/EasyMock3_1_Documentation.html](http://www.easymock.org/EasyMock3_1_Documentation.html) for docs.
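+
+A minimal sketch of the EasyMock flow described above (the collaborator interface is hypothetical, for illustration only):
+
+```java
+import static org.easymock.EasyMock.createMock;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.replay;
+import static org.easymock.EasyMock.verify;
+
+import org.junit.Test;
+
+public class DependencyTrackerTest {
+
+  /** Hypothetical collaborator, for illustration only. */
+  interface DependencyTracker {
+    boolean isSatisfied(String componentName);
+  }
+
+  @Test
+  public void testWithMockedDependencyTracker() {
+    DependencyTracker tracker = createMock(DependencyTracker.class);
+    expect(tracker.isSatisfied("DATANODE")).andReturn(true);
+    replay(tracker);
+
+    // The code under test would consult the mocked tracker here.
+    tracker.isSatisfied("DATANODE");
+
+    verify(tracker); // fails if the expected calls were not made
+  }
+}
+```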
+
+**Guice**
+
+Guice is a dependency injection framework and can be used to dynamically inject pluggable components.
+Please refer to http://code.google.com/p/google-guice/wiki/Motivation. We can use Guice in the following scenarios:
+
+* Pluggable manifest generator: It may be desirable to have different implementation of manifest generator for non-puppet setup or for testing.
+
+* Injecting in-memory database (if possible) instead of a real persistent database for testing. It needs to be investigated how Guice fits with the ORM tools.
+
+* Injecting a stub implementation of heartbeat handler.
+
+* It may be a good idea to bind API implementations for management or monitoring via Guice. This will allow testing of APIs and the server independent of the implementation via mock implementations. For example, the management api implementation in coordinator can be mocked so that API definitions and URIs can be independently tested.
+
+* Injecting mock objects for dependency tracker, or stage planner for testing.
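+
+A minimal sketch of the stub-injection idea above (the handler interface and stub are hypothetical, for illustration only):
+
+```java
+import com.google.inject.AbstractModule;
+import com.google.inject.Guice;
+import com.google.inject.Injector;
+
+public class GuiceStubExample {
+
+  /** Hypothetical heartbeat handler abstraction. */
+  interface HeartbeatHandler {
+    void processHeartbeats();
+  }
+
+  /** Stub that simulates agent interaction without touching real agents. */
+  static class StubHeartbeatHandler implements HeartbeatHandler {
+    @Override
+    public void processHeartbeats() {
+      // periodically drain the action queue without sending anything
+    }
+  }
+
+  static class TestModule extends AbstractModule {
+    @Override
+    protected void configure() {
+      // bind the abstraction to the stub for testing
+      bind(HeartbeatHandler.class).to(StubHeartbeatHandler.class);
+    }
+  }
+
+  public static void main(String[] args) {
+    Injector injector = Guice.createInjector(new TestModule());
+    injector.getInstance(HeartbeatHandler.class).processHeartbeats();
+  }
+}
+```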
diff --git a/versioned_docs/version-2.7.6/ambari-dev/developer-tools.md b/versioned_docs/version-2.7.6/ambari-dev/developer-tools.md
new file mode 100644
index 0000000..e97b47a
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/developer-tools.md
@@ -0,0 +1,79 @@
+# Developer Tools
+
+## Diff and Merge tools
+
+Araxis has been kind enough to give us free licenses for Araxis Merge if you work on open source; just submit a request at http://www.araxis.com/buy/open-source
+
+Download from http://www.araxis.com/url/merge/download.uri
+
+You will be prompted for your serial number when you run the application for the first time. To enter a new serial number into an existing installation, click the Re-Register... button in the About window.
+
+### Integrating Araxis to Git as your Diff and Merge tool
+
+After installing Araxis Merge,
+
+On Mac OS X,
+
+- Drag Araxis across to your ~/Applications folder as normal
+- Copy the contents of the Utilities folder to (e.g.) /usr/local/araxis/bin
+- Add the path to your startup script: export PATH="$PATH:/usr/local/araxis/bin"
+
+In your .gitconfig file (tested on Mac OS X),
+
+```
+[diff]
+ tool = araxis
+[difftool]
+ prompt = false
+[merge]
+ tool = araxis_merge
+[mergetool "araxis_merge"]
+ cmd = araxisgitmerge "$PWD/$REMOTE" "$PWD/$BASE" "$PWD/$LOCAL" "$PWD/$MERGED"
+```
+
+## Git Best Practices
+
+This is just a personal preference, but it may be easier to create one Git branch per Jira/feature. E.g.,
+
+```bash
+git checkout trunk
+git checkout -b AMBARI12345 # create the branch and switch to it
+git branch --set-upstream-to=origin/trunk AMBARI12345 # set the upstream so that git pull --rebase will get the HEAD from trunk
+# Do work,
+git commit -m "AMBARI-12345. Foo (username)"
+# Do more work
+git commit --amend # edit the last commit
+git pull --rebase
+
+# If conflicts are detected, then run
+git mergetool # should be easy if you have Araxis Merge setup to do a 3-way merge
+git rebase --continue
+git push origin HEAD:trunk
+```
+
+## Useful Git Commands
+
+In your .gitconfig file,
+
+```bash
+[alias]
+ st = status
+ ci = commit
+ br = branch
+ co = checkout
+ dc = diff --cached
+ dtc = difftool --cached
+ lg = log -p
+ lsd = log --graph --decorate --pretty=oneline --abbrev-commit --all
+ slast = show --stat --oneline HEAD
+ pshow = show --no-prefix --format=format:%H --full-index
+ pconfig = config --list
+```
+
+Also, in your ~/.bashrc or ~/.profile file,
+
+```bash
+alias branchshow='for k in `git branch|perl -pe s/^..//`;do echo -e `git show --pretty=format:"%Cgreen%ci %Cblue%cr%Creset" $k|head -n 1`\\t$k;done|sort'
+```
+
+This command will show all of your branches sorted by the last commit times, which is useful if you develop one feature per branch.
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-dev/development-in-docker.md b/versioned_docs/version-2.7.6/ambari-dev/development-in-docker.md
new file mode 100644
index 0000000..1c8c14e
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/development-in-docker.md
@@ -0,0 +1,92 @@
+# Development in Docker
+
+## Overview
+
+This page describes how to develop, build and test Ambari on Docker.
+
+In order to build Ambari, there are quite a few steps to execute, which can be a bit cumbersome. You can build an environment in Docker and you are good to go!
+
+This is NOT meant for running production-level Ambari in Docker (though you can run Ambari and deploy Hadoop in a single Docker container for testing purposes).
+
+
+
+(This is not only about Jenkins slaves; think of it as your laptop.)
+
+First, we will make a Docker image that has all third party libraries Ambari requires.
+
+Second, prepare your code on the Docker host machine. It can be trunk or a branch, your code under development, or code with a patch applied. Note that your code does not reside inside the Docker container, but on the Docker host; we link it with a Docker volume (like a mount).
+
+And you are ready to go!
+
+### Source code
+
+This code has been migrated to Ambari trunk.
+
+https://github.com/apache/ambari/tree/trunk/dev-support/docker
+
+## Requirements
+
+There are a few system requirements if you want to play with this document.
+
+- Docker https://docs.docker.com/#installation-guides
+
+## Create Docker Image
+
+First things first: we have to build a Docker image for this solution. This will set up libraries, including ones from yum and maven dependencies. In my environment (a CentOS 6.5 VM with 8GB RAM and 4 CPUs) this takes 30 minutes. The good news is this is a one-time step.
+
+```bash
+git clone https://github.com/apache/ambari.git
+cd ambari
+docker build -t ambari/build ./dev-support/docker/docker
+```
+
+This builds an image named "ambari/build" from the configuration files under ./dev-support/docker/docker.
+
+## Unit Test
+
+For example, our unit test Jenkins job on trunk runs on Docker. If you want to replicate the environment, read this section.
+
+The basic command:
+
+```bash
+cd {ambari_root}
+docker run --privileged -h node1.mydomain.com -v $(pwd):/tmp/ambari ambari/build /tmp/ambari/dev-support/docker/docker/bin/ambaribuild.py test -b
+```
+
+- 'docker run' is the command to run a container from an image; in this case, the image 'ambari/build'.
+- -h sets a host name in the container.
+- -v mounts your Ambari code on the host to /tmp/ambari in the container. Make sure you are at the Ambari root directory.
+- ambaribuild.py runs some script to eventually run 'mvn test' for ambari.
+- the -b option rebuilds the entire source tree. If omitted, the tests run against the source as it exists on your host.
+
+## Deploy Hadoop
+
+You may want to run Ambari and Hadoop to test the improvements you have just coded on your host. Here is the way!
+
+
+```bash
+cd {ambari_root}
+docker run --privileged -t -p 80:80 -p 5005:5005 -p 8080:8080 -h node1.mydomain.com --name ambari1 -v $(pwd):/tmp/ambari ambari/build /tmp/ambari-build-docker/bin/ambaribuild.py deploy -b
+
+# once you are done
+docker kill ambari1 && docker rm ambari1
+```
+
+- --privileged is important, as ambari-server accesses /proc/??/exe
+- -p 80:80 ensures you can access the web UI from your host.
+- -p 5005 is the Java debug port.
+- 'deploy' builds, installs the rpms, starts ambari-server and ambari-agent, and deploys Hadoop through a blueprint.
+
+You can take a look at https://github.com/apache/ambari/tree/trunk/dev-support/docker/docker/blueprints to see what is actually deployed.
+
+There are a few other parameters you can play with.
+
+```bash
+cd {ambari_root}
+docker run --privileged -t -p 80:80 -p 5005:5005 -p 8080:8080 -h node1.mydomain.com --name ambari1 -v ${AMBARI_SRC:-$(pwd)}:/tmp/ambari ambari/build /tmp/ambari-build-docker/bin/ambaribuild.py [test|server|agent|deploy] [-b] [-s [HDP|BIGTOP|PHD]]
+```
+
+- test: mvn test
+- server: install and run ambari-server
+- agent: install and run ambari-server and ambari-agent
+- deploy: install and run ambari-server and ambari-agent, and deploy Hadoop
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-dev/development-process-for-new-major-features.md b/versioned_docs/version-2.7.6/ambari-dev/development-process-for-new-major-features.md
new file mode 100644
index 0000000..1bf5e5c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/development-process-for-new-major-features.md
@@ -0,0 +1,163 @@
+# Development Process for New Major Features
+
+
+## Goals
+
+* Make it clear to the community, at a high level, what new feature development is happening
+* Make it easier to correlate features with JIRAs
+* Make it easier to track progress for features in development
+* Make it easier to understand estimated release schedule for features in development
+
+## Process
+
+* Create a JIRA of type "Epic" for the new feature in [Apache Ambari JIRA](https://issues.apache.org/jira/browse/AMBARI)
+* Add the feature to the [Features + Roadmap](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30755705) wiki and link it to the Epic created
+* The Epic should contain a high-level description that is easy to understand
+* The Epic should also contain the initial, detailed design (this can be in the form of a shared Google Doc for ease of collaboration, a Word doc, PDF, etc.)
+* Once the initial design is posted, announce it to the dev mailing list to elicit feedback (Subject: [DISCUSS] _Epic Name_. Be sure to include a link to the Epic JIRA in the body). It is recommended to ask for review feedback to be given by a certain date so that the review process does not drag on.
+
+* Iterate on the design based on community feedback. Incorporate multiple review cycles as needed.
+
+* Once the design is finalized, break it down into Tasks that are linked to the Epic
+* (Nice to have) Once the Tasks are defined, schedule them into sprints using the Agile Board so that it's easy to see who is working on what/when, what tasks remain but unassigned so the community can pick up work from the backlog, etc.
+
+## Feature Branches
+
+The use of feature branches allows for the large, potentially destabilizing changes to be made without affecting the stability of the trunk.
+
+## Feature Flags
+
+* Sometimes, we want to give users the ability to experiment with a new feature, but not expose it as a general feature since it has not gone through rigorous testing. In other cases, we want to provide an escape hatch for certain edge-case scenarios that we may not want to expose in general, because using the escape hatch is potentially dangerous and should be reserved for special occasions. For these purposes, Ambari has a notion of **feature flags**. Make use of feature flags when adding new features that fall under these categories. [Feature Flags](./feature-flags.md) has more details on this.
+
+## Contribution Flow
+
+[https://docs.google.com/document/d/1hz7qjGKkNeckMibEs67ZmAa2kxjie0zkG6H_IiC2RgA/edit?pli=1](https://docs.google.com/document/d/1hz7qjGKkNeckMibEs67ZmAa2kxjie0zkG6H_IiC2RgA/edit?pli=1)
+
+## Git Feature Branches
+
+
+The Git feature branch workflow is a simple, yet powerful way to develop new features in an encapsulated environment while at the same time fostering collaboration within the community. The idea is to create short-lived branches where new development will take place and eventually merge the completed feature branch back into `trunk`. A short-lived branch could mean anywhere from several days to several months depending on the extent of the feature and how often the branch is merged back into `trunk`.
+
+Feature branches are also useful for changes which are not necessarily considered to be new features. They can be for proof-of-concept changes or architectural changes which have the likelihood of destabilizing `trunk`.
+
+### Benefits
+
+* Allows incremental work to proceed without destabilizing the main trunk of source control.
+
+* Smaller commits means smaller and clearer code reviews.
+
+* Each code review is not required to be fully functional allowing a more agile approach to gathering feedback on the progress of the feature.
+
+* Maintains Git history and allows for code to be backed out easily after merging.
+
+### Drawbacks
+
+* Requires frequent merges from `trunk` into your feature branch to keep merge conflicts to a minimum.
+
+* May require periodic merges of the feature branch back into trunk during development to help mitigate frequent merge conflicts.
+
+* No continuous integration coverage on feature branches, although this is not really a drawback since most feature branches will break some aspects of CI in the early stages of the feature.
+
+### Guidelines to Follow
+
+The following simple rules can help in keeping Ambari's approach to feature branch development simple and consistent.
+
+* When creating a feature branch, it should be given a meaningful name. Acceptable names include either the name of the feature or the name of the Ambari JIRA. The branch should also always start with the text `branch-feature-`. Some examples of properly named feature branches include:
+ - `branch-feature-patch-upgrades`
+ - `branch-feature-AMBARI-12345`
+
+* Every commit in your feature branch should have an associated `AMBARI-XXXXX` JIRA. This way, when your branch is merged back into trunk, the commit history follows Ambari's conventions.
+
+* Merge frequently from trunk into your branch to keep your branch up-to-date and lessen the number of potential merge conflicts.
+
+* Do **NOT** squash commits. Every commit in your feature branch must have an `AMBARI-XXXXX` association with it.
+
+
+* Once a feature has been completed and the branch has been merged into trunk, the branch can be safely removed. Feature branches should only exist while the work is still in progress.
+
+### Approach
+
+The following steps outline the lifecycle of a feature branch. You'll notice that once the feature has been completed and merged back into trunk, the feature branch is deleted. This is an important step to keep the git branch listing as clean as possible.
+
+```bash
+$ git checkout -b branch-feature-AMBARI-12345 trunk
+Switched to a new branch 'branch-feature-AMBARI-12345'
+
+$ git push -u origin branch-feature-AMBARI-12345
+Total 0 (delta 0), reused 0 (delta 0)
+To https://git-wip-us.apache.org/repos/asf/ambari.git
+ * [new branch] branch-feature-AMBARI-12345 -> branch-feature-AMBARI-12345
+Branch branch-feature-AMBARI-12345 set up to track remote branch branch-feature-AMBARI-12345 from origin by rebasing.
+
+```
+
+* Branch is correctly named
+* Branch is pushed to Apache so it can be visible to other developers
+
+```bash
+$ git checkout branch-feature-AMBARI-12345
+Switched to branch 'branch-feature-AMBARI-12345'
+
+$ git add <changed-files>
+$ git commit -m 'AMBARI-28375 - Some Change (me)'
+
+$ git add <changed-files>
+$ git commit -m 'AMBARI-28499 - Another Change (me)'
+
+$ git push
+```
+
+* Each commit to the feature branch has its own AMBARI-XXXXX JIRA
+* Multiple commits are allowed before pushing the changes to the feature branch
+
+```bash
+$ git checkout branch-feature-AMBARI-12345
+Switched to branch 'branch-feature-AMBARI-12345'
+
+$ git merge trunk
+Updating ed28ff4..3ab2a7c
+Fast-forward
+ ambari-server/include.xml | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 ambari-server/include.xml
+```
+
+* Merging trunk into the feature branch often (daily, hourly) allows for faster and easier merge conflict resolution
+* Fast-forwards are OK here since trunk is always the source of truth and we don't need extra "merge" commits in the feature branch
+
+```bash
+$ git checkout trunk
+Switched to branch 'trunk'
+
+$ git merge --no-ff branch-feature-AMBARI-12345
+Merge made by the 'recursive' strategy.
+ ambari-server/include.xml | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 ambari-server/include.xml
+```
+
+Notice that the `--no-ff` option was provided when merging back into `trunk`. This is to ensure that an additional "merge" commit is created which references all feature branch commits. With this single merge commit, the entire merge can be easily backed out if a problem was discovered which destabilized trunk.
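+
+For example, a minimal sketch of backing a feature merge out of `trunk` (assuming `abc1234` is the hash of the merge commit created above):
+
+```bash
+# Find the merge commit that brought in the feature branch
+git log --merges --oneline -5
+
+# Revert it; -m 1 keeps the first parent (trunk) as the mainline
+git revert -m 1 abc1234
+```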
+
+* The feature is merged successfully with a "merge" commit back to trunk
+* This can be done multiple times during the course of the feature development as long as the code merged back to trunk is stable
+
+```bash
+$ git checkout trunk
+Switched to branch 'trunk'
+
+$ git branch -d branch-feature-AMBARI-12345
+Deleted branch branch-feature-AMBARI-12345 (was ed28ff4).
+
+$ git push origin --delete branch-feature-AMBARI-12345
+To https://git-wip-us.apache.org/repos/asf/ambari.git
+ - [deleted] branch-feature-AMBARI-12345
+
+$ git remote update origin --prune
+Fetching origin
+From https://git-wip-us.apache.org/repos/asf/ambari
+ x [deleted]         (none)     -> branch-feature-AMBARI-12345
+```
+
+* Cleanup the branch when done, both locally and remotely
+* Prune your local branches which no longer track remote branches
+
diff --git a/versioned_docs/version-2.7.6/ambari-dev/feature-flags.md b/versioned_docs/version-2.7.6/ambari-dev/feature-flags.md
new file mode 100644
index 0000000..ed13903
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/feature-flags.md
@@ -0,0 +1,13 @@
+# Feature Flags
+
+* Sometimes, we want to give end users the ability to experiment with a new feature without exposing it as a general feature, since it has not gone through rigorous testing and use of the feature could result in problems. In other cases, we want to provide an escape hatch for certain edge-case scenarios that we may not want to expose in general, because using the escape hatch is potentially dangerous and should be reserved for special occasions. For these purposes, Ambari has a notion of **feature flags**.
+
+* Feature flags can be created as an attribute of the `App.supports` map in [https://github.com/apache/ambari/blob/trunk/ambari-web/app/config.js](https://github.com/apache/ambari/blob/trunk/ambari-web/app/config.js)
+* Those boolean flags are exposed in the Ambari Web UI via `<ambari-server-protocol>://<ambari-server-host>:<ambari-server-port>/#/experimental`
+ * The end user can go to the above URL to turn certain experimental features on.
+
+ 
+
+* In Ambari Web code, we should toggle experimental features on/off via the `App.supports` object.
+
+* You will see sample usage if you recursively grep for "App.supports" under the ambari-web project.
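+
+For example, a quick way to list the current feature-flag usages (trimming the output with `head` is optional):
+
+```bash
+grep -rn "App.supports" ambari-web/app | head
+```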
diff --git a/versioned_docs/version-2.7.6/ambari-dev/how-to-commit.md b/versioned_docs/version-2.7.6/ambari-dev/how-to-commit.md
new file mode 100644
index 0000000..1c6bb2f
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/how-to-commit.md
@@ -0,0 +1,14 @@
+# How to Commit
+
+This document describes how to commit changes to Ambari. It assumes a knowledge of Git. While it is for committers to use as a guide, it also provides contributors an idea of how the commit process actually works.
+
+In general we are very conservative about changing the Apache Ambari code base. It is ground truth for systems that use it, so we need to make sure that it is reliable. For this reason we use the [Review Then Commit (RTC)](http://www.apache.org/foundation/glossary.html#ReviewThenCommit) change policy.
+
+Except for some very rare cases, any change to the Apache Ambari code base starts off as a Jira. (In some cases a change may relate to more than one Jira. Also, there are cases when a Jira results in multiple commits.) Generally, the process of getting ready to commit begins when the Jira has a patch associated with it and the contributor decides that it is ready for review and marks it patch available.
+
+A committer must sign off on a patch. It is very helpful if the community also reviews the patch, but in the end a committer must take responsibility for the correctness of the patch. If the patch is simple enough and the committer feels confident in the review, a single +1 from a committer is sufficient to commit the patch. (Remember committers cannot review their own patch. If a committer submits a patch, they should make sure that another committer reviews it.)
+
+Follow the instructions in [How to Contribute](./how-to-contribute.md) guide to commit changes to Ambari.
+
+If the Jira is a bug fix you may also need to commit the patch to the latest branch in git (trunk).
+
diff --git a/versioned_docs/version-2.7.6/ambari-dev/how-to-contribute.md b/versioned_docs/version-2.7.6/ambari-dev/how-to-contribute.md
new file mode 100644
index 0000000..1e44e8c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/how-to-contribute.md
@@ -0,0 +1,127 @@
+# How to Contribute
+
+## Contributing code changes
+
+### Checkout source code
+
+* Fork the project from Github at https://github.com/apache/ambari if you haven't already
+* Clone this fork:
+
+```bash
+# Replace [forked-repository-url] with your git clone url
+git clone [forked-repository-url] ambari
+```
+
+* Set upstream remote:
+
+```bash
+cd ambari
+git remote add upstream https://github.com/apache/ambari.git
+```
+
+### Keep your Fork Up to Date
+
+```bash
+# Fetch from upstream remote
+git fetch upstream
+# Checkout the branch that needs to sync
+git checkout trunk
+# Merge with remote
+git merge upstream/trunk
+```
+
+Repeat these steps for all the branches that need to be synced with the remote, as sketched below.
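+
+A minimal sketch for syncing several branches in one pass (the branch names here are examples):
+
+```bash
+git fetch upstream
+for b in trunk branch-2.6 branch-2.7; do
+  git checkout "$b"
+  git merge "upstream/$b"
+done
+```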
+
+### JIRA
+
+Apache Ambari uses JIRA to track issues including bugs and improvements, and uses Github pull requests to manage code reviews and code merges. Major design changes are discussed in JIRA and implementation changes are discussed in pull requests after a pull request is created.
+
+* Find an existing Apache JIRA that the change pertains to
+ * Do not create a new JIRA if the change is minor and relates to an existing JIRA; add to the existing discussion and work instead
+ * Look for existing pull requests that are linked from the JIRA, to understand if someone is already working on the JIRA
+
+* If the change is new, then create a new JIRA:
+ * Provide a descriptive Title
+ * Write a detailed Description. For bug reports, this should ideally include a short reproduction of the problem. For new features, it may include a design document.
+ * Fill the required fields:
+ * Issue Type. Bug and Task are the most frequently used issue types in Ambari.
+ * Priority. Their meaning is roughly:
+ * Blocker: pointless to release without this change as the release would be unusable to a large minority of users
+ * Critical: a large minority of users are missing important functionality without this, and/or a workaround is difficult
+ * Major: a small minority of users are missing important functionality without this, and there is a workaround
+ * Minor: a niche use case is missing some support, but it does not affect usage or is easily worked around
+ * Trivial: a nice-to-have change but unlikely to be any problem in practice otherwise
+ * Component. Choose the components that are affected by this change. Choose from Ambari Components
+ * Affects Version. For Bugs, assign at least one version that is known to exhibit the problem or need the change
+ * Do not include a patch file; pull requests are used to propose the actual change.
+
+### Pull Request
+
+Apache Ambari uses [Github pull requests](https://github.com/apache/ambari/pulls) to review and merge changes to the source code. Before creating a pull request, one must have a fork of apache/ambari checked out. Follow instructions in step 1 to create a fork if you haven't already.
+
+#### Commit and Push changes
+
+- Create a branch AMBARI-XXXXX-branchName before starting to make any code changes. Ex: If the Fix Version of the JIRA you are working on is 2.6.2, then create a branch based on branch-2.6
+
+ ```bash
+ git checkout branch-2.6
+ git pull upstream branch-2.6
+ git checkout -b AMBARI-XXXXX-branch-2.6
+ ```
+
+- Mark the status of the related JIRA as "In Progress" to let others know that you have started working on the JIRA.
+- Make changes to the code and commit them to the newly created branch.
+- Run all the tests that are applicable and make sure that all unit tests pass
+- Push your changes. Provide your Github user id and [personal access token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/) when asked for user name and password
+
+ ```bash
+ git push origin AMBARI-XXXXX-branch-2.6
+ ```
+
+#### Create Pull Request
+
+Navigate to your fork in Github and [create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/). The pull request needs to be opened against the branch you want the patch to land.
+
+The pull request title should be of the form **[AMBARI-xxxx] Title**, where AMBARI-xxxx is the relevant JIRA number
+
+- If the pull request is still a work in progress, and so is not ready to be merged, but needs to be pushed to Github to facilitate review, then add **[WIP]** after the **AMBARI-XXXX**
+- Consider identifying committers or other contributors who have worked on the code being changed. Find the file(s) in Github and click “Blame” to see a line-by-line annotation of who changed the code last. You can add @username in the PR description or as a comment to request review from a developer.
+- Note: Contributors do not have access to edit or add reviewers in the "Reviewers" widget. Contributors can only @mention to get the attention of committers.
+- The related JIRA will automatically have a link to the PR as shown below. Mark the status of JIRA as "Patch Available" manually.
+
+
+
+#### Jenkins Job
+
+* A Jenkins Job is configured to be triggered every time a new pull request is created. The job is configured to perform the following tasks:
+ * Validate the merge
+ * Build Ambari
+ * Run unit tests
+* It reports the outcome of the build as an integrated check in the pull request as shown below.
+
+
+
+* It is the responsibility of the contributor of the pull request to make sure that the build passes. Pull requests should not be merged if the Jenkins job fails to validate the merge.
+* To re-trigger the build job, just comment "retest this please" in the PR. Visit this page to check the latest build jobs.
+
+#### Repeat
+
+Repeat the above steps for patches that need to land in multiple branches. For example, if a patch needs to be committed to both branch-2.6 and trunk, then you need to create two branches and open two pull requests by following the above steps (see the sketch below).
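+
+A minimal sketch of porting the same fix to trunk (assuming `abc1234` is the commit on your branch-2.6 PR branch):
+
+```bash
+git checkout trunk
+git pull upstream trunk
+git checkout -b AMBARI-XXXXX-trunk
+git cherry-pick abc1234
+git push origin AMBARI-XXXXX-trunk
+```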
+
+## Review Process
+
+Ambari uses Github for code reviews. All committers are required to follow the instructions in this [page](https://gitbox.apache.org/setup/) and link their github accounts with gitbox to gain Merge access to [apache/ambari](https://github.com/apache/ambari) in github.
+
+To try out the changes locally, you can checkout the pull request locally by following the instructions in this [guide](https://help.github.com/articles/checking-out-pull-requests-locally/).
+
+* Other reviewers, including committers can try out the changes locally and either approve or give their comments as suggestions on the pull request by submitting a review on the pull request. More help can be found here.
+* If more changes are required, reviewers are encouraged to leave their comments on the lines of code that require changes. The author of the pull request can then update the code and push another commit to the same branch to update the pull request and notify the committers.
+* The pull request can be merged if at least one committer has approved it or commented "LGTM" ("Looks Good To Me") and the Jenkins job validated the merge successfully. If you comment LGTM you will be expected to help with bugs or follow-up issues on the patch. (Remember committers cannot review their own patch. If a committer opens a PR, they should make sure that another committer reviews it.)
+* Sometimes, other changes might be merged which conflict with the pull request's changes. The PR can't be merged until the conflict is resolved. This can be resolved by running `git fetch upstream` followed by `git rebase upstream/[branch-name]`, resolving the conflicts by hand, and then pushing the result to your branch (see the sketch after this list).
+* If a PR is merged, promptly close the PR and resolve the JIRA as "Fixed".
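+
+A minimal sketch of the conflict-resolution flow mentioned above (the branch names are examples):
+
+```bash
+git fetch upstream
+git rebase upstream/branch-2.6      # rebase the PR branch onto the target branch
+# fix conflicts in your editor, then:
+git add <conflicted-files>
+git rebase --continue
+git push --force-with-lease origin AMBARI-XXXXX-branch-2.6
+```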
+
+## Apache Ambari Committers
+
+Please read more on Apache Committers at: http://www.apache.org/dev/committers.html
+
+In general a contributor that makes sustained, welcome contributions to the project may be invited to become a committer, though the exact timing of such invitations depends on many factors. Sustained contributions over 6 months are a welcome sign of a contributor showing interest in the project. A contributor receptive to feedback and following the development guidelines stated above is a good candidate for committership. We have seen contributors become committers after contributing 20-30 patches, but again this is very subjective and can vary with the patches submitted to the project. Ultimately it is the Ambari PMC that suggests and votes for committers in the project.
diff --git a/versioned_docs/version-2.7.6/ambari-dev/imgs/experimental-features .png b/versioned_docs/version-2.7.6/ambari-dev/imgs/experimental-features .png
new file mode 100644
index 0000000..211a596
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/imgs/experimental-features .png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-dev/imgs/jenkins-job.png b/versioned_docs/version-2.7.6/ambari-dev/imgs/jenkins-job.png
new file mode 100644
index 0000000..0e6476c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/imgs/jenkins-job.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-dev/imgs/pull-request.png b/versioned_docs/version-2.7.6/ambari-dev/imgs/pull-request.png
new file mode 100644
index 0000000..2a8b408
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/imgs/pull-request.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-dev/imgs/reviewers.png b/versioned_docs/version-2.7.6/ambari-dev/imgs/reviewers.png
new file mode 100644
index 0000000..c19354d
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/imgs/reviewers.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-dev/imgs/with-without-docker.png b/versioned_docs/version-2.7.6/ambari-dev/imgs/with-without-docker.png
new file mode 100644
index 0000000..2d6e38c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/imgs/with-without-docker.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-dev/index.md b/versioned_docs/version-2.7.6/ambari-dev/index.md
new file mode 100644
index 0000000..3dba30d
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/index.md
@@ -0,0 +1,311 @@
+# Ambari Development
+
+## Checking out Ambari source
+
+Follow the instructions under [Checkout source code](./how-to-contribute.md) section of "How to contribute" guide.
+
+We'll refer to the top-level "ambari" directory as `AMBARI_DIR` in this document.
+
+## Tools needed to build Ambari
+
+The following tools are needed to build Ambari from source.
+
+Alternatively, you can easily launch a VM that is preconfigured with all the tools that you need. See the **Pre-Configured Development Environment** section in the [Quick Start Guide](../quick-start/quick-start-guide.md).
+
+* xCode (if using Mac - free download from the apple store)
+* JDK 8 (Ambari 2.6 and earlier can be compiled with JDK 7; Ambari 2.7 and later require at least JDK 8)
+* [Apache Maven](http://maven.apache.org/download.html) 3.3.9 or later
+
+Tip: In order to persist your changes to the `JAVA_HOME` environment variable and add Maven to your path, create the following files:
+
+File: `~/.profile`
+
+```bash
+source ~/.bashrc
+```
+
+File: `~/.bashrc`
+
+```bash
+export PATH=/usr/local/apache-maven-3.3.9/bin:$PATH
+export JAVA_HOME=$(/usr/libexec/java_home)
+export _JAVA_OPTIONS="-Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true"
+```
+
+
+* Python 2.6 (Ambari 2.7 or later requires Python 2.7 as the minimum supported version)
+* Python setuptools:
+for Python 2.6: [download](http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c11-py2.6.egg#md5=bfa92100bd772d5a213eedd356d64086) setuptools and run:
+
+```bash
+sh setuptools-0.6c11-py2.6.egg
+```
+
+for Python 2.7: [download](https://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg#md5=fe1f997bc722265116870bc7919059ea) setuptools and run:
+
+```bash
+sh setuptools-0.6c11-py2.7.egg
+```
+
+
+* rpmbuild (rpm-build package)
+* g++ (gcc-c++ package)
+
+## Running Unit Tests
+
+* `mvn clean test`
+* Run unit tests in a single module:
+
+```bash
+mvn -pl ambari-server test
+```
+
+
+* Run only Java tests:
+
+```bash
+mvn -pl ambari-server -DskipPythonTests test
+```
+
+
+* Run only specific Java tests:
+
+```bash
+mvn -pl ambari-server -DskipPythonTests -Dtest=AgentHostInfoTest test
+```
+
+
+* Run only Python tests:
+
+```bash
+mvn -pl ambari-server -DskipSurefireTests test
+```
+
+
+* Run only specific Python tests:
+
+```bash
+mvn -pl ambari-server -DskipSurefireTests -Dpython.test.mask=TestUtils.py test
+```
+
+
+* Run only Checkstyle and RAT checks:
+
+```bash
+mvn -pl ambari-server -DskipTests test
+```
+
+
+
+NOTE: Please make sure you have npm in the path before running the unit tests.
+
+## Generating Findbugs Report
+
+* `mvn clean install`
+
+This will generate XML and HTML reports under `target/findbugs`. You can also add flags to skip unit tests to generate the report faster, as sketched below.
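+
+A sketch of the faster path (the module path in the `ls` is an example):
+
+```bash
+mvn clean install -DskipTests
+ls ambari-server/target/findbugs/
+```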
+
+## Building Ambari
+
+Note: if you get an error that too many files are open while building, then raise the limit, e.g. `ulimit -n 10000`.
+
+To build Ambari RPMs, run the following.
+
+Note: Replace ${AMBARI_VERSION} with a 4-digit version you want the artifacts to be (e.g., -DnewVersion=1.6.1.1)
+
+**Note**: If running into errors while compiling the ambari-metrics package due to missing the artifacts of jms, jmxri, jmxtools:
+
+```
+[ERROR] Failed to execute goal on project ambari-metrics-kafka-sink: Could not resolve dependencies for project org.apache.ambari:ambari-metrics-kafka-sink:jar:2.0.0-0: The following artifacts could not be resolved: javax.jms:jms:jar:1.1, com.sun.jdmk:jmxtools:jar:1.2.1, com.sun.jmx:jmxri:jar:1.2.1: Could not transfer artifact javax.jms:jms:jar:1.1 from/to java.net (https://maven-repository.dev.java.net/nonav/repository): No connector available to access repository java.net (https://maven-repository.dev.java.net/nonav/repository) of type legacy using the available factories WagonRepositoryConnectorFactory
+```
+
+The work around is to manually install the three missing artifacts:
+
+```bash
+mvn install:install-file -Dfile=jms-1.1.pom -DgroupId=javax.jms -DartifactId=jms -Dversion=1.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxtools-1.2.1.pom -DgroupId=com.sun.jdmk -DartifactId=jmxtools -Dversion=1.2.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxri-1.2.1.pom -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2.1 -Dpackaging=jar
+```
+
+If compilation seems stuck, and you've already increased the Java and Maven heap sizes, it could be that Ambari Views has a lot of artifacts and the rat-check is choking up. In this case, try running:
+
+```bash
+git clean -df  # removes untracked files and directories
+mvn clean package -DskipTests -Drat.ignoreErrors=true
+# or
+mvn clean package -DskipTests -Drat.skip
+```
+
+## Setting the Version Using Maven
+
+Ambari 2.8+ uses a newer method to update the version when building Ambari.
+
+**RHEL/CentOS 6**:
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package rpm:rpm -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+**SUSE/SLES 11**
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package rpm:rpm -DskipTests -Psuse11 -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+**Ubuntu 12**:
+
+```
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+mvn -B clean install package jdeb:jdeb -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl
+```
+
+The Ambari Server build will create the following packages:
+
+* RPM will be created under `AMBARI_DIR`/ambari-server/target/rpm/ambari-server/RPMS/noarch.
+
+* DEB will be created under `AMBARI_DIR`/ambari-server/target/
+
+The Ambari Agent build will create the following packages:
+
+* RPM will be created under `AMBARI_DIR`/ambari-agent/target/rpm/ambari-agent/RPMS/x86_64.
+
+* DEB will be created under `AMBARI_DIR`/ambari-agent/target
+
+Optional parameters:
+
+* `-X -e`: add these options for more verbose output by Maven. Useful when debugging Maven issues.
+* `-DdefaultStackVersion=STACK-VERSION`: sets the default stack and version to be used for installation (e.g., `-DdefaultStackVersion=HDP-1.3.0`)
+* `-DenableExperimental=true`: enables experimental features to be available via Ambari Web (default is false)
+* All views can be packaged in the RPM by adding the `-Dviews` parameter
+  - `mvn -B clean install package rpm:rpm -Dviews -DskipTests`
+* Specific views can be built by adding the `--projects` parameter together with `-Dviews`
+  - `mvn -B clean install package rpm:rpm --projects ambari-web,ambari-project,ambari-views,ambari-admin,contrib/views/files,contrib/views/pig,ambari-server,ambari-agent,ambari-client,ambari-shell -Dviews -DskipTests`
+
+
+_NOTE: Run everything as `root` below._
+
+## Building Ambari Metrics
+
+If you plan on installing the Ambari Metrics service, you will also need to build the Ambari Metrics project.
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-rpm -DskipTests
+```
+
+For Ubuntu:
+
+```bash
+cd ambari-metrics
+mvn clean package -Dbuild-deb -DskipTests
+```
+
+**Note:**
+
+The metrics RPMs will be found at `ambari-metrics-assembly/target/`. These are needed for installing the Ambari Metrics service.
+
+## Running the Ambari Server
+
+First, install the Ambari Server RPM.
+
+**On RHEL/CentOS:**
+
+```bash
+yum install ambari-server/target/rpm/ambari-server/RPMS/noarch/ambari-server-*.noarch.rpm
+```
+
+**On SUSE/SLES:**
+
+```bash
+zypper install ambari-server/target/rpm/ambari-server/RPMS/noarch/ambari-server-*.noarch.rpm
+```
+
+**On Ubuntu 12:**
+
+```bash
+dpkg --install ambari-server/target/ambari-server-*.deb # Will fail with missing dependencies errors
+apt-get update # Update locations of dependencies
+apt-get install -f # Install all failed dependencies
+dpkg --install ambari-server/target/ambari-server-*.deb # Will succeed
+```
+
+Initialize Ambari Server:
+
+```bash
+ambari-server setup
+```
+
+Start up Ambari Server:
+
+```
+ambari-server start
+```
+
+See Ambari Server log:
+
+```bash
+tail -f /var/log/ambari-server/ambari-server.log
+```
+
+To access Ambari, go to
+
+```
+http://{ambari-server-hostname}:8080
+```
+
+from your web browser and log in with username _admin_ and password _admin_.
+
+## Install and Start the Ambari Agent Manually on Each Host in the Cluster
+
+Install the Ambari Agent RPM.
+
+On RHEL/CentOS:
+
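+```bash
+# Assumed path, mirroring the zypper example below and the server RPM install earlier
+yum install ambari-agent/target/rpm/ambari-agent/RPMS/x86_64/ambari-agent-*.rpm
+```
+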
+SUSE/SLES:
+
+```bash
+zypper install ambari-agent/target/rpm/ambari-agent/RPMS/x86_64/ambari-agent-*.rpm
+```
+
+**On Ubuntu 12:**
+
+```bash
+dpkg --install ambari-agent/target/ambari-agent-*.deb
+```
+
+Edit the location of the Ambari Server in `/etc/ambari-agent/conf/ambari-agent.ini` by editing the _hostname_ line, as sketched below.
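+
+A one-line sketch (the host name `ambari.example.com` is hypothetical):
+
+```bash
+# Point the agent at the Ambari Server host in ambari-agent.ini
+sed -i 's/^hostname=.*/hostname=ambari.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
+```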
+
+Start Ambari Agent:
+
+```
+ambari-agent start
+```
+
+See Ambari Agent log:
+
+```bash
+tail -f /var/log/ambari-agent/ambari-agent.log
+```
+
+## Setting up Ambari in Eclipse
+
+```
+$ mvn clean eclipse:eclipse
+```
+
+After doing the above you should be able to import the project via Eclipse "Import > Maven > Existing Maven Project". Choose the root directory where you cloned the git repository. You should be able to see the following projects on eclipse:
+
+```
+ambari
+|
+|- ambari-project
+|- ambari-server
+|- ambari-agent
+|- ambari-web
+```
+
+Select the top-level "ambari pom.xml" and click Finish.
diff --git a/versioned_docs/version-2.7.6/ambari-dev/releasing-ambari.md b/versioned_docs/version-2.7.6/ambari-dev/releasing-ambari.md
new file mode 100644
index 0000000..d041946
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/releasing-ambari.md
@@ -0,0 +1,401 @@
+# Releasing Ambari
+
+## Useful Links
+
+### [Publishing Maven Artifacts](http://apache.org/dev/publishing-maven-artifacts.html)
+
+* Setting up release signing keys
+* Uploading artifacts to staging and release repositories
+
+### [Apache Release Guidelines](http://www.apache.org/legal/release-policy.html)
+
+* Release requirements
+* Process for staging
+
+## Preparing for release
+
+Setup for first time release managers
+
+If you are being a release manager for the first time, you will need to run the following additional steps so that you are not blocked during the actual release process.
+
+**Configure SSH/SFTP to home.apache.org**
+
+SFTP to home.apache.org supports only Key-Based SSH Logins
+
+```bash
+# Generate RSA Keys
+ mkdir ~/.ssh
+ chmod 700 ~/.ssh
+ ssh-keygen -t rsa -b 4096
+
+# Note: ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub will be generated
+
+# Upload Public RSA Key:
+#   Login at https://id.apache.org
+#   Add the public SSH key from ~/.ssh/id_rsa.pub to your profile
+#   under "SSH Key (authorized_keys line)", then submit the changes
+
+# Verify SSH to minotaur.apache.org works
+ssh -i ~/.ssh/id_rsa {username}@minotaur.apache.org
+
+# SFTP to home.apache.org
+sftp {username}@home.apache.org
+mkdir public_html
+cd public_html
+put test #This test file is a sample empty file present in current working directory from which you sftp.
+
+Verify URL http://home.apache.org/{username}/test
+```
+
+**Generate OpenGPG Key**
+
+You should get a signing key, keep it in a safe place, upload the public key to apache, and build a web of trust.
+
+Ref: http://zacharyvoase.com/2009/08/20/openpgp/
+
+```bash
+gpg2 --gen-key
+gpg2 --keyserver pgp.mit.edu --send-key {key}
+gpg2 --armor --export {username}@apache.org > {username}.asc
+# Copy {username}.asc to {username}@home.apache.org:public_html/{username}.asc
+# Verify URL http://home.apache.org/~{username}/{username}.asc
+# Query the PGP keyserver: http://pgp.mit.edu:11371/pks/lookup?search=0x{key}&op=vindex
+
+# Web of Trust: request others to sign your PGP key.
+
+# Login at https://id.apache.org
+#   Add your OpenPGP fingerprint to your profile
+#   (OpenPGP Public Key Primary Fingerprint: XXXX YYYY ZZZZ ...), then submit the changes
+# Verify that the public PGP key is exported to http://home.apache.org/keys/committer/{username}.asc
+```
+
+**Email dev@ambari.apache.org mailing list notifying that you will be creating the release branch at least one week in advance**
+
+```
+Subject: Preparing Ambari X.Y.Z branch
+
+Hi developers and PMCs,
+
+I am proposing cutting a new branch branch-X.Y for Ambari X.Y.Z on __________ as per the outlined tasks in the Ambari Feature + Roadmap page (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=30755705).
+
+After making the branch, we (i.e., development community) should only accept blocker or critical bug fixes into the branch and harden it until it meets a high enough quality bar.
+
+If you have a bug fix, it should first be committed to trunk, and after ensuring that it does not break any tests (including smoke tests), then it should be integrated to the Ambari branch-X.Y
+If you have any doubts whether a fix should be committed into branch-X.Y, please email me for input at ____________
+Stay tuned for updates on the release process.
+
+Thanks
+```
+
+**Create the release branch**
+
+Create a branch for a release using branch-X.Y (ex: branch-2.1) as the name of the branch.
+
+Note: Going forward, we should be creating branch-{majorVersion}.{minorVersion}, so that the same branch can be used for maintenance releases.
+
+**Checkout the release branch**
+
+```bash
+git checkout branch-X.Y
+```
+
+**Update Ambari REST API docs**
+
+Starting with Ambari 2.8, the release manager should generate documentation from the existing source code on the `trunk` branch. The documentation should be checked back into the branch before performing the release.
+
+```bash
+# Generate the following artifacts:
+# - Configuration markdown at docs/configuration/index.md
+# - swagger.json and index.html at docs/api/generated/
+cd ambari-server/
+mvn clean compile exec:java@configuration-markdown test -Drat.skip -Dcheckstyle.skip -DskipTests -Dgenerate.swagger.resources
+
+# Review and Commit the changes to branch-X.Y
+git commit
+```
+
+**Update release version**
+
+Once the branch has been created, the release version must be set and committed. The changes should be committed to the release branch.
+
+**Ambari 2.8+**
+
+Starting with Ambari 2.8, the build process relies on Maven 3.5+, which allows the [use of the `${revision}` tag](https://maven.apache.org/maven-ci-friendly.html). This means that the release version can be defined once in the root `pom.xml` and then inherited by all submodules. In order to build Ambari with a specific build number, there are two methods:
+
+* Passing the version on the command line:
+
+```bash
+mvn -Drevision=2.8.0.0.0 ...
+```
+
+* Editing the root `pom.xml` to include the new build number:
+
+```
+<revision>2.8.0.0-SNAPSHOT</revision>
+```
+
+To be consistent with prior releases, the `pom.xml` should be updated in order to contain the new version.
+
+**Steps followed for 2.8.0 release as a reference**
+
+```bash
+# Update the revision property to the release version
+mvn versions:set-property -Dproperty=revision -DnewVersion=2.8.0.0.0
+
+# Remove .versionsBackup files
+git clean -f -x
+
+# Review and commit the changes to branch-X.Y
+git commit
+```
+:::danger
+Ambari 2.7 and Earlier Releases (Deprecated)
+:::
+
+Older Ambari branches still required that you update every `pom.xml` manually through the below process:
+
+**Steps followed for 2.2.0 release as a reference**
+
+```bash
+# Update the release version
+mvn versions:set -DnewVersion=2.2.0.0.0
+pushd ambari-metrics
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd contrib/ambari-log4j
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd contrib/ambari-scom
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+pushd docs
+mvn versions:set -DnewVersion=2.2.0.0.0
+popd
+
+# Update the ambari.version properties in all pom.xml
+$ find . -name "pom.xml" | xargs grep "ambari\.version"
+
+./contrib/ambari-scom/ambari-scom-server/pom.xml: 2.1.0-SNAPSHOT
+./contrib/ambari-scom/ambari-scom-server/pom.xml: ${ambari.version}
+./contrib/views/hive/pom.xml: 2.1.0.0.0
+./contrib/views/jobs/pom.xml: ${ambari.version}
+./contrib/views/pig/pom.xml: 2.1.0.0.0
+./contrib/views/pom.xml: 2.1.0.0.0
+./contrib/views/storm/pom.xml: ${ambari.version}
+./contrib/views/tez/pom.xml: ${ambari.version}
+./docs/pom.xml: 2.1.0
+./docs/pom.xml: ${project.artifactId}-${ambari.version}
+
+# Update any 2.1.0-SNAPSHOT references in pom.xml
+$ grep -r --include "pom.xml" "2.1.0-SNAPSHOT" .
+
+# Remove .versionsBackup files
+git clean -f -x -d
+
+# Review and commit the changes to branch-X.Y
+git commit
+```
+
+**Update KEYS**
+
+If this is the first time you have taken release management responsibilities, make sure to update the KEYS file and commit the updated KEYS in both the ambari trunk branch and the release branch. In addition to updating the KEYS file in the tree, you also need to push the KEYS file to [https://dist.apache.org/repos/dist/release/ambari/](https://dist.apache.org/repos/dist/release/ambari/)
+
+```bash
+gpg2 --list-keys jluniya@apache.org >> KEYS
+gpg2 --armor --export jluniya@apache.org >> KEYS
+
+# commit the changes to both trunk and new release branch
+git commit
+
+# push the updated KEYS file to https://dist.apache.org/repos/dist/release/ambari/.
+
+# Only PMCs members can do this 'svn' step.
+
+svn co https://dist.apache.org/repos/dist/release/ambari ambari_svn
+cp {path_to_keys_file}/KEYS ambari_svn/KEYS
+svn update KEYS
+svn commit -m "Updating KEYS for Ambari"
+```
+
+**Setup Build**
+
+Setup Jenkins Job for the new branch on http://builds.apache.org
+
+## Creating Release Candidate
+
+:::info
+The first release candidate is rc0. The following documented process assumes rc0, but replace it with the appropriate rc number as required.
+:::
+
+**Checkout the release branch**
+
+```
+git checkout branch-X.Y
+```
+
+**Create a Release Tag from the release branch**
+
+```bash
+git tag -a release-X.Y.Z-rc0 -m 'Ambari X.Y.Z RC0'
+git push origin release-X.Y.Z-rc0
+```
+
+**Create a tarball**
+
+```bash
+# create a clean copy of the source
+ cd ambari-git-X.Y.Z
+ git clean -f -x -d
+ cd ..
+
+ cp -R ambari-git-X.Y.Z apache-ambari-X.Y.Z-src
+
+ # create ambari-web/public by running the build instructions per https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Development
+ # once ambari-web/public is created, copy it as ambari-web/public-static
+ cp -R ambari-git-X.Y.Z/ambari-web/public apache-ambari-X.Y.Z-src/ambari-web/public-static
+
+ # make sure apache rat tool runs successfully
+ cp -R apache-ambari-X.Y.Z-src apache-ambari-X.Y.Z-ratcheck
+ cd apache-ambari-X.Y.Z-ratcheck
+ mvn clean apache-rat:check
+ cd ..
+
+ # if rat check fails, file JIRAs and fix them before proceeding.
+
+ # tar it up, but exclude git artifacts
+ tar --exclude=.git --exclude=.gitignore --exclude=.gitattributes -zcvf apache-ambari-X.Y.Z-src.tar.gz apache-ambari-X.Y.Z-src
+```
+
+**Sign the tarball**
+
+```bash
+gpg2 --armor --output apache-ambari-X.Y.Z-src.tar.gz.asc --detach-sig apache-ambari-X.Y.Z-src.tar.gz
+```
+
+**Generate SHA512 checksums:**
+
+```
+sha512sum apache-ambari-X.Y.Z-src.tar.gz > apache-ambari-X.Y.Z-src.tar.gz.sha512
+```
+
+or
+
+```
+openssl sha512 apache-ambari-X.Y.Z-src.tar.gz > apache-ambari-X.Y.Z-src.tar.gz.sha512
+```
+
+**Upload the artifacts to your apache home:**
+
+The artifacts then need to be copied over (SFTP) to
+
+```
+public_html/apache-ambari-X.Y.Z-rc0
+```
+
+## Voting on Release Candidate
+
+**Call for a vote on the dev@ambari.apache.org mailing list with something like this:**
+
+I have created an ambari-** release candidate.
+
+GIT source tag (r***)
+
+```
+https://git-wip-us.apache.org/repos/asf/ambari/repo?p=ambari.git;a=log;h=refs/tags/release-x.y.z-rc0
+```
+
+Staging site: http://home.apache.org/user_name/apache-ambari-X.Y.Z-rc0
+
+Vote will be open for 72 hours.
+
+```
+[ ] +1 approve
+[ ] +0 no opinion
+[ ] -1 disapprove (and reason why)
+```
+
+Once the vote passes or fails, send out an email with a subject like "[RESULT] [VOTE] Apache Ambari x.y.z rc0" to dev@ambari.apache.org. For the vote to pass, 3 +1 votes are required. If the vote does not pass, another release candidate will need to be created after addressing the feedback from the community.
+
+## Publishing and Announcement
+
+* Login to [https://id.apache.org](https://id.apache.org) and verify the fingerprint of PGP key used to sign above is provided. (gpg --fingerprint)
+* Upload your PGP public key only to _/home/_
+
+Publish the release as below:
+
+```bash
+svn co https://dist.apache.org/repos/dist/release/ambari ambari
+
+# Note : Only PMCs members can do this 'svn' step.
+
+cd ambari
+mkdir ambari-X.Y.Z
+scp ~/public_html/apache-ambari-X.Y.Z-rc0/* ambari-X.Y.Z
+svn add ambari-X.Y.Z
+svn rm ambari-A.B.C # Remove the older release from the mirror. Only the latest version should appear in dist.
+
+svn commit -m "Committing Release X.Y.Z"
+```
+
+Create the release tag:
+
+```bash
+git tag -a release-X.Y.Z -m 'Ambari X.Y.Z'
+git push origin release-X.Y.Z
+```
+
+Note that it takes 24 hours for the changes to propagate to the mirrors.
+
+Wait 24 hours and verify that the bits are available in the mirrors before sending an announcement.
+
+**Update Ambari Website and Wiki**
+
+http://ambari.apache.org is checked in Git in `/ambari/docs/src/site` folder.
+
+```bash
+cd docs
+mvn versions:set -DnewVersion=X.Y.Z
+
+# Make necessary changes, typically to pom.xml, site.xml, index.apt, and whats-new.apt
+mvn clean site
+```
+
+Examine the changes under _/ambari/docs/target_ folder.
+
+Update the wiki to add pages for installation of the new version. _Usually you can copy the pages for the last release and make the URL changes to point to new repo/tarball location._
+
+**Send out Announcement to dev@ambari.apache.org and user@ambari.apache.org.**
+
+Subject: [ANNOUNCE] Apache Ambari X.Y.Z.
+
+The Apache Ambari team is proud to announce Apache Ambari version X.Y.Z
+
+Apache Ambari is a tool for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari consists of a set of RESTful APIs and a browser-based management console UI.
+
+The release bits are at: http://www.apache.org/dyn/closer.cgi/ambari/ambari-X.Y.Z.
+
+To use the released bits please use the following documentation:
+
+https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+X.Y.Z
+
+We would like to thank all the contributors that made the release possible.
+
+Regards,
+
+The Ambari Team
+
+**Submit release data to Apache reporter database.**
+
+This step can be done only by a project PMC member. If the release manager is not an Ambari PMC member, please reach out to an existing Ambari PMC member or contact the Ambari PMC chair to complete this step.
+
+- Login to https://reporter.apache.org/addrelease.html?ambari with apache credentials.
+- Fill out the fields:
+ - Committee: ambari
+ - Full version name: 2.2.0
+ - Date of release (YYYY-MM-DD): 2015-12-19
+- Submit the data
+- Verify that the submitted data is reflected at https://reporter.apache.org/?ambari
+
+Performing this step keeps [https://reporter.apache.org/?ambari](https://reporter.apache.org/?ambari) site updated and people using the Apache Reporter Service will be able to see the latest release data for Ambari.
+
+## Publish Ambari artifacts to Maven central
+
+Please use the following [document](https://docs.google.com/document/d/1RjWQOaTUne6t8DPJorPhOMWAfOb6Xou6sAdHk96CHDw/edit) to publish Ambari artifacts to Maven central.
diff --git a/versioned_docs/version-2.7.6/ambari-dev/unit-test-reports.md b/versioned_docs/version-2.7.6/ambari-dev/unit-test-reports.md
new file mode 100644
index 0000000..9fadf3f
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/unit-test-reports.md
@@ -0,0 +1,6 @@
+# Unit Test Reports
+
+Branch | Unit Test Report URL
+-------|-------------
+trunk | https://builds.apache.org/job/Ambari-trunk-Commit/
+branch-2.2 | https://builds.apache.org/job/Ambari-branch-2.2/
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-dev/verifying-release-candidate.md b/versioned_docs/version-2.7.6/ambari-dev/verifying-release-candidate.md
new file mode 100644
index 0000000..707db13
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-dev/verifying-release-candidate.md
@@ -0,0 +1,107 @@
+# Verifying Release Candidate
+
+[Apache Release Process](http://www.apache.org/dev/release-publishing)
+
+The steps are based on what is needed on a fresh centos6 VM created based on [Quick Start Guide](../quick-start/quick-start-guide.md)
+
+## Verify hashes and signature
+
+```bash
+mkdir -p /usr/work/ambari
+pushd /usr/work/ambari
+```
+
+_Download the src tarball, asc signature, and md5/sha1 hashes._
+
+Verify the hashes
+
+```
+openssl md5 apache-ambari-2.4.1-src.tar.gz | diff apache-ambari-2.4.1-src.tar.gz.md5 -
+openssl sha1 apache-ambari-2.4.1-src.tar.gz | diff apache-ambari-2.4.1-src.tar.gz.sha1 -
+```
+
+Verify the signature
+
+```bash
+gpg --keyserver pgpkeys.mit.edu --recv-key <key ID>
+gpg apache-ambari-2.4.1-src.tar.gz.asc
+```
+
+## Compiling the code
+
+If you are verifying the release on a clean machine (e.g. a freshly installed VM) then you need to run several preparatory steps.
+
+### Install mvn
+
+```bash
+mkdir /usr/local/apache-maven
+cd /usr/local/apache-maven
+wget http://mirror.olnevhost.net/pub/apache/maven/binaries/apache-maven-3.2.1-bin.tar.gz
+tar -xvf apache-maven-3.2.1-bin.tar.gz
+export M2_HOME=/usr/local/apache-maven/apache-maven-3.2.1
+export M2=$M2_HOME/bin
+export PATH=$M2:$PATH
+```
+
+### Install java
+
+```bash
+mkdir /usr/jdk
+cd /usr/jdk
+cp "FROM SOURCE"/jdk-7u67-linux-x64.tar.gz . (or download the latest)
+tar -xvf jdk-7u67-linux-x64.tar.gz
+export PATH=$PATH:/usr/jdk/jdk1.7.0_67/bin
+export JAVA_HOME=/usr/jdk/jdk1.7.0_67
+export _JAVA_OPTIONS="-Xmx2048m -XX:MaxPermSize=1024m -Djava.awt.headless=true"
+```
+
+### Install packages
+
+```bash
+yum install -y git
+curl --silent --location https://rpm.nodesource.com/setup | bash -
+yum install -y nodejs
+yum install -y gcc-c++ make
+npm install -g brunch@1.7.20
+yum install -y libfreetype.so.6
+yum install -y freetype
+yum install -y fontconfig
+yum install -y python-devel
+yum install -y rpm-build
+```
+
+### Install python tools
+
+```bash
+wget http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c11-py2.6.egg --no-check-certificate
+
+sh setuptools-0.6c11-py2.6.egg
+```
+
+### Additional steps
+
+These steps may not be needed in every environment. You can either perform these steps before the build, or after it if you encounter specific errors.
+
+_Install pom files needed by ambari-metrics-kafka-sink_
+
+
+```bash
+mkdir /tmp/pom-files
+pushd /tmp/pom-files
+cp "FROM SOURCE"/jms-1.1.pom .
+cp "FROM SOURCE"/jmxri-1.2.1.pom .
+cp "FROM SOURCE"/jmxtools-1.2.1.pom .
+mvn install:install-file -Dfile=jmxri-1.2.1.pom -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jms-1.1.pom -DgroupId=javax.jms -DartifactId=jms -Dversion=1.1 -Dpackaging=jar
+mvn install:install-file -Dfile=jmxtools-1.2.1.pom -DgroupId=com.sun.jdmk -DartifactId=jmxtools -Dversion=1.2.1 -Dpackaging=jar
+popd
+```
+
+### Compile the code
+
+```bash
+pushd /usr/work/ambari
+tar -xvf apache-ambari-2.4.1-src.tar.gz
+cd apache-ambari-2.4.1-src
+mvn clean install -DskipTests
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step1.png b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step1.png
new file mode 100644
index 0000000..814b402
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step1.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step2.png b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step2.png
new file mode 100644
index 0000000..c903003
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step2.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step3.png b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step3.png
new file mode 100644
index 0000000..a0548b5
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step3.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step4.png b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step4.png
new file mode 100644
index 0000000..5f1eb47
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step4.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step5.png b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step5.png
new file mode 100644
index 0000000..68e4ca6
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step5.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step6.png b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step6.png
new file mode 100644
index 0000000..65a83b3
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/imgs/step6.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/index.md b/versioned_docs/version-2.7.6/ambari-plugin-contribution/index.md
new file mode 100644
index 0000000..fe9d0b3
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/index.md
@@ -0,0 +1,9 @@
+# Ambari Plugin Contributions
+
+These are independent extensions that are contributed to the Ambari codebase.
+
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+
+<DocCardList />
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg
new file mode 100644
index 0000000..0414dd4
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom-arch.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png
new file mode 100644
index 0000000..365b9f3
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom-msi2.png
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg
new file mode 100644
index 0000000..5cd861d
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/imgs/ambari-scom.jpg
Binary files differ
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/index.md b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/index.md
new file mode 100644
index 0000000..efe224f
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/index.md
@@ -0,0 +1,63 @@
+# Ambari SCOM Management Pack
+
+This information is intended for **Apache Hadoop** and **Microsoft System Center Operations Manager** users who install the **Ambari SCOM Management Pack**.
+
+## Introduction
+
+### Versions
+
+Ambari SCOM Version | Ambari Server Version | Version
+--------------------|------------------------|---------
+2.0.0 | 1.5.1 | 1.5.1.2.0.0.0-673
+1.0.0 | 1.4.4 | 1.4.4.1.0.0.1-472
+0.9.0 | 1.2.5 | 1.2.5.0.9.0.0-60
+
+The Ambari SCOM contribution can be found in the Apache Ambari project:
+
+- https://github.com/apache/ambari/tree/trunk/contrib/ambari-scom
+
+### Useful Resources
+
+The following links connect you to information about common tasks that are associated with System Center Management Packs:
+
+- [Administering the Management Pack Life Cycle](http://go.microsoft.com/fwlink/?LinkId=211463)
+- [How to Import a Management Pack in Operations Manager 2007](http://go.microsoft.com/fwlink/?LinkID=142351)
+- [How to Monitor Using Overrides](http://go.microsoft.com/fwlink/?LinkID=117777)
+- [How to Create a Run As Account in Operations Manager 2007](http://technet.microsoft.com/en-us/library/hh321655.aspx)
+- [How to Modify an Existing Run As Profile](http://go.microsoft.com/fwlink/?LinkID=165412)
+- [How to Export Management Pack Customizations](http://go.microsoft.com/fwlink/?LinkId=209940)
+- [How to Remove a Management Pack](http://go.microsoft.com/fwlink/?LinkId=209941)
+
+For questions about Operations Manager and monitoring packs, see the [System Center Operations Manager community forum](http://social.technet.microsoft.com/Forums/systemcenter/en-US/home?category=systemcenteroperationsmanager).
+
+A useful resource is the [System Center Operations Manager Unleashed blog](http://opsmgrunleashed.wordpress.com/), which contains "By Example" posts for specific monitoring packs.
+
+## Get Started
+
+### Overview
+
+**Ambari SCOM** extends the functionality of **Microsoft System Center Operations Manager** to monitor Apache Hadoop clusters, and leverages Ambari (and the Ambari REST API) to obtain Hadoop metrics. The Ambari SCOM Management Pack will:
+
+- Automatically discover all nodes within a Hadoop cluster(s).
+- Proactively monitor the availability and capacity.
+- Proactively notify when the health is critical.
+- Intuitively and efficiently visualize the health of Hadoop cluster via dashboards
+
+
+
+### Architecture
+
+Ambari SCOM is made up of the following components
+
+Component | Description
+----------|------------
+Ambari SCOM Management Pack | The Ambari SCOM Management Pack extends the functionality of Microsoft System Center Operations Manager to monitor Hadoop clusters.
+Ambari SCOM Server | The Ambari SCOM Server component connects to the Hadoop cluster components and exposes a REST API for the Ambari SCOM Management Pack.
+ResourceProvider | An Ambari ResourceProvider is a pluggable interface in Ambari that enables the customization of the Ambari SCOM Server.
+ClusterLayout ResourceProvider | An Ambari ResourceProvider implementation that supplies information on cluster layout (i.e. where Hadoop master and slave components are installed) to the Ambari SCOM Server. This allows Ambari to know how and where to access components of the Hadoop cluster.
+Property ResourceProvider | An Ambari ResourceProvider implementation that integrates with the SQL Server database instance for retrieving stored Hadoop metrics.
+SQL Server | A SQL Server instance that stores the metrics emitted from Hadoop via the SqlServerSink and the Hadoop Metrics2 interface.
+SqlServerSink | This is a Hadoop Metrics2 sink designed to consume metrics emitted from the Hadoop cluster. Ambari SCOM provides a SQL Server implementation.
+
+
+
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/installation.md b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/installation.md
new file mode 100644
index 0000000..6032c1d
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/scom/installation.md
@@ -0,0 +1,340 @@
+# Installation
+
+## Prerequisite Software
+
+Setting up Ambari SCOM assumes the following prerequisite software:
+
+* Ambari SCOM 1.0
+  - Apache Hadoop 1.x cluster (HDFS and MapReduce) [1]
+* Ambari SCOM 2.0
+  - Apache Hadoop 2.x cluster (HDFS and YARN/MapReduce) [2]
+* JDK 1.7
+* Microsoft SQL Server 2012
+* Microsoft JDBC Driver 4.0 for SQL Server [3]
+* Microsoft System Center Operations Manager (SCOM) 2012 SP1 or later
+* System Center Monitoring Agent installed on **Watcher Node** [4]
+
+[1] _Ambari SCOM_ 1.0 has been tested with a Hadoop cluster based on **Hortonworks Data Platform 1.3 for Windows** ("[HDP 1.3 for Windows](http://hortonworks.com/products/releases/hdp-1-3-for-windows/)")
+
+[2] _Ambari SCOM_ 2.0 has been tested with a Hadoop cluster based on **Hortonworks Data Platform 2.1 for Windows** ("[HDP 2.1 for Windows](http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-Win-latest/bk_installing_hdp_for_windows/content/win-getting-ready.html)")
+
+[3] Obtain the _Microsoft JDBC Driver 4.0 for SQL Server_ JAR file (`sqljdbc4.jar`) at [http://technet.microsoft.com/en-us/library/ms378749.aspx](http://technet.microsoft.com/en-us/library/ms378749.aspx)
+
+[4] See the Microsoft TechNet topic on [Managing Discovery and Agents](http://technet.microsoft.com/en-us/library/hh212772.aspx). Minimum Agent requirements: _.NET 4_ and _PowerShell 2.0 + 3.0_
+
+## Package Contents
+
+```
+ambari-scom-{version}.zip
+├── README.md
+├── server.zip
+├── metrics-sink.zip
+├── mp.zip
+└── ambari-scom.msi
+```
+
+File | Name | Description
+-----|------|-------------
+server.zip | Server Package | Contains the required software for configuring the Ambari SCOM Server software.
+metrics-sink.zip | Metrics Sink Package | Contains the required software for manually configuring SQL Server and the Hadoop Metrics Sink.
+ambari-scom.msi | MSI Installer | The Ambari SCOM MSI Installer for configuring the Ambari SCOM Server and Hadoop Metrics Sink
+mp.zip | Management Pack Package | Contains the Ambari SCOM Management Pack software.
+
+## Ambari SCOM Server Installation
+
+:::caution
+The **Ambari SCOM Management Pack** must connect to an Ambari SCOM Server to retrieve cluster metrics. Therefore, you need to have an Ambari SCOM Server running in your cluster. If you have already installed your Hadoop cluster (including the Ganglia Service) with Ambari (minimum **Ambari 1.5.1 for SCOM 2.0.0**) and have an Ambari Server already running and managing your Hadoop 1.x cluster, you can use that Ambari Server and point the **Management Pack** at that host. You can proceed directly to [Installing Ambari SCOM Management Pack](#id-2installation-mgmtpack) and skip these steps to install an Ambari SCOM Server. If you do not have an Ambari Server running and managing your cluster, you **must** install an Ambari SCOM Server using one of the methods described below.
+:::
+
+The following methods are available for installing Ambari SCOM Server:
+
+* **Manual Installation** - This installation method requires you to configure the SQL Server database, setup the Ambari SCOM Server and configure the Hadoop Metrics Sink. This provides the most flexible install option based on your environment.
+* **MSI Installation** - This installation method installs the Ambari SCOM Server and configures the Hadoop Metrics Sink on all hosts in the cluster automatically using an MSI Installer. After launching the MSI, you provide information about your SQL Server database and the cluster for the installer to handle configuration.
+
+## Manual Installation
+
+### Configuring SQL Server
+
+1. Configure an existing SQL Server instance for "mixed mode" authentication.
+
+2. Confirm SQL Server is installed with TCP/IP active and enabled. (default port: 1433)
+3. Create a user and password. Remember this user and password as this will be the account used by the Hadoop metrics interface for capturing metrics. (default user: sa)
+4. Extract the contents of the `metrics-sink.zip` package to obtain the DDL script.
+
+5. Create the Ambari SCOM database schema by running the `Hadoop-Metrics-SQLServer-CREATE.ddl` script.
+
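+A minimal sketch of running the DDL script with `sqlcmd` (the server address, user, and password are examples drawn from the defaults mentioned above):
+
+```bash
+sqlcmd -S localhost,1433 -U sa -P BigData1 -i Hadoop-Metrics-SQLServer-CREATE.ddl
+```
+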
+:::info
+The Hadoop Metrics DDL script will create a database called "HadoopMetrics".
+:::
+
+### Configuring Hadoop Metrics Sink
+
+#### Preparing the Metrics Sink
+
+1. Extract the contents of the `metrics-sink.zip` package to obtain the `metrics-sink-<version>.jar` file.
+
+2. Obtain the _Microsoft JDBC Driver 4.0 for SQL Server_ `sqljdbc4.jar` file.
+
+3. Copy `sqljdbc4.jar` and `metrics-sink-<version>.jar` to each host in the cluster. For example, copy to `C:\Ambari\metrics-sink-<version>.jar` and `C:\Ambari\sqljdbc4.jar` on each host.
+
+#### Setup Hadoop Metrics2 Interface
+
+1. On each host in the cluster, setup the Hadoop metrics2 interface to use the `SQLServerSink`.
+
+Edit the `hadoop-metrics2.properties` file (located in the `{C:\hadoop\install\dir}\bin` folder of each host in the cluster):
+
+```
+*.sink.sql.class=org.apache.hadoop.metrics2.sink.SqlServerSink
+
+namenode.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+datanode.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+jobtracker.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+tasktracker.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+maptask.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+reducetask.sink.sql.databaseUrl=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+```
+
+:::info
+_Where:_
+
+* _server = the SQL Server hostname_
+* _port = the SQL Server port (for example, 1433)_
+* _user = the SQL Server user (for example, sa)_
+* _password = the SQL Server password (for example, BigData1)_
+:::
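+
+For instance, substituting the example values above (with a hypothetical SQL Server host named `sqlhost1`), the NameNode line would read:
+
+```
+namenode.sink.sql.databaseUrl=jdbc:sqlserver://sqlhost1:1433;databaseName=HadoopMetrics;user=sa;password=BigData1
+```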
+
+2. Update the Java classpath for each Hadoop service to include the `metrics-sink-<version>.jar` and `sqljdbc4.jar` files.
+
+
+ - Example: Updating the Java classpath for _HDP for Windows_ clusters
+
+ The `service.xml` files will be located in the `C:\hadoop\install\dir\bin` folder of each host in the cluster. The Java classpath is specified for each service in the `<arguments>` element of the `service.xml` file. For example, to update the Java classpath for the `NameNode` component, edit the `C:\hadoop\bin\namenode.xml` file.
+
+ ```
+ ...
+
+ ... -classpath ...;C:\Ambari\metrics-sink-1.5.1.2.0.0.0-673.jar;C:\Ambari\sqljdbc4.jar ...
+
+ ...
+
+ ```
+
+3. Restart Hadoop for these changes to take effect.
+
+#### Verify Metrics Collection
+
+1. Confirm metrics are being captured in the SQL Server database by querying the `MetricRecord` table:
+
+```sql
+select * from HadoopMetrics.dbo.MetricRecord
+```
+:::info
+In the above SQL statement, `HadoopMetrics` is the database name.
+:::
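+
+If you prefer the command line over SQL Server Management Studio, `sqlcmd` can run the same check; a sketch using the example values from this page (the host name `sqlhost1` is a placeholder):
+
+```bash
+sqlcmd -S sqlhost1,1433 -U sa -P BigData1 -Q "SELECT TOP 10 * FROM HadoopMetrics.dbo.MetricRecord"
+```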
+
+### Installing and Configuring Ambari SCOM Server
+
+#### Running the Server
+
+1. Designate a machine in the cluster to run the Ambari SCOM Server.
+
+2. Extract the contents of the `server.zip` package to obtain the Ambari SCOM Server packages.
+
+```
+├── ambari-scom-server-<version>-conf.zip
+├── ambari-scom-server-<version>-lib.zip
+└── ambari-scom-server-<version>.jar
+```
+
+3. Extract the contents of the `ambari-scom-server-<version>-lib.zip` package to obtain the Ambari SCOM dependencies.
+
+4. Extract the contents of the `ambari-scom-server-<version>-conf.zip` package to obtain the Ambari SCOM configuration files.
+
+5. From the configuration files, edit the `ambari.properties` file:
+
+```
+scom.sink.db.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
+scom.sink.db.url=jdbc:sqlserver://[server]:[port];databaseName=HadoopMetrics;user=[user];password=[password]
+```
+
+:::info
+_Where:_
+ - _server = the SQL Server hostname_
+ - _port = the SQL Server port (for example, 1433)_
+ - _user = the SQL Server user (for example, sa)_
+ - _password = the SQL Server password (for example, BigData1)_
+:::
+
+6. Run the `org.apache.ambari.scom.AmbariServer` class from the Java command line to start the Ambari SCOM Server.
+
+:::info
+Be sure to include the following in the classpath:
+ - the `ambari-scom-server-<version>.jar` file
+ - the configuration folder containing the Ambari SCOM configuration files
+ - the lib folder containing the Ambari SCOM dependencies
+ - the folder containing the `clusterproperties.txt` file from the Hadoop install (for example, `c:\hadoop\install\dir`)
+ - the `sqljdbc4.jar` SQL Server JDBC Driver file
+:::
+
+For example:
+
+```bash
+java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Xms512m -Xmx2048m -cp "c:\ambari-scom\server\conf;c:\ambari-scom\server\lib\*;c:\jdbc\sqljdbc4.jar;c:\hadoop\install\dir;c:\ambari-scom\server\ambari-scom-server-1.5.1.2.0.0.0-673.jar" org.apache.ambari.scom.AmbariServer
+```
+
+:::info
+In the above command, be sure to substitute the actual Ambari SCOM version in the `ambari-scom-server-<version>.jar` file name, and replace `c:\hadoop\install\dir` with the folder containing the `clusterproperties.txt` file.
+:::
+
+#### Verify the Server API
+
+1. From a browser, access the API:
+
+```
+http://[ambari-scom-server]:8080/api/v1/clusters
+```
+2. Verify that metrics are being reported:
+
+```
+http://[ambari-scom-server]:8080/api/v1/clusters/ambari/services/HDFS/components/NAMENODE
+```
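+
+You can also exercise these endpoints from a command line with `curl`; a minimal sketch, assuming the default **admin**/**admin** credentials (the host name below is a placeholder for your Ambari SCOM Server):
+
+```bash
+AMBARI_HOST=scomserver1.example.com
+curl -u admin:admin "http://${AMBARI_HOST}:8080/api/v1/clusters"
+curl -u admin:admin "http://${AMBARI_HOST}:8080/api/v1/clusters/ambari/services/HDFS/components/NAMENODE"
+```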
+
+## MSI Installation
+
+### Configuring SQL Server
+
+1. Configure an existing SQL Server instance for "mixed mode" authentication.
+
+2. Confirm SQL Server is installed with TCP/IP active and enabled. (default port: 1433)
+3. Create a user and password. (default user: sa)
+
+### Running the MSI Installer
+
+1. Designate a machine in the cluster to run the Ambari SCOM Server.
+
+2. Extract the contents of the `server.zip` package to obtain the Ambari SCOM Server packages.
+
+3. Run the `ambari-scom.msi` installer. The "Ambari SCOM Setup" dialog appears:
+
+ 
+
+4. Provide the following information:
+
+Field | Description
+------|------------
+Ambari SCOM package directory | The directory where the installer will place the Ambari SCOM Server packages. For example: C:\Ambari
+SQL Server hostname | The hostname of the SQL Server instance for Ambari SCOM Server to use to store Hadoop metrics.
+SQL Server port | The port of the SQL Server instance.
+SQL Server login | The login username.
+SQL Server password | The login password.
+Path to SQL Server JDBC Driver (sqljdbc4.jar) | The path to the JDBC Driver JAR file.
+Path to the cluster layout file (clusterproperties.txt) | The path to the cluster layout properties file.
+
+5. Optionally, select Start Services.
+6. Click Install.
+7. After completion, links are created on the desktop to "Start Ambari SCOM Server", "Browse Ambari API", and "Browse Ambari API Metrics". After starting the Ambari SCOM Server, browse the API and metrics to confirm the server is working properly.
+
+:::info
+The MSI installer installation log can be found at `C:\AmbariInstallFiles\AmbariSetupTools\ambari.winpkg.install.log`
+:::
+
+### Installing Ambari SCOM Management Pack
+
+:::info
+Before installing the Management pack, be sure to install the Ambari SCOM Server using the Ambari SCOM Server Installation instructions.
+:::
+
+#### Import the Management Pack
+
+Perform the following to import the Ambari SCOM Management Pack into System Center Operations Manager.
+
+1. Extract the contents of the `mp.zip` package to obtain the Ambari SCOM management pack (`.mpb`) files.
+
+2. Ensure the machine is running Windows Server 2012 with SCOM and SQL Server (with full-text search) installed.
+
+3. Open System Center Operations Manager.
+
+4. Go to Administration -> Management Packs.
+
+5. From the Tasks panel, select Import Management Packs...
+
+6. In the Import Management Packs dialog, select Add -> Add from disk...
+
+7. You are prompted to search the Online Catalog. Click "No".
+
+8. Browse for the Ambari SCOM management pack files.
+
+9. Select the following files:
+
+```
+Ambari.SCOM.Monitoring.mpb
+Ambari.SCOM.Management.mpb
+Ambari.SCOM.Presentation.mpb
+```
+10. Click "Open".
+11. Review the Import list and click "Install".
+
+12. The Ambari SCOM Management Pack installation will start.
+
+:::info
+The Ambari SCOM package also includes `AmbariSCOMManagementPack.msi`, which is an alternative packaging of `mp.zip`. This MSI is provided in **beta** form in this release.
+:::
+
+#### Create Run As Account
+
+Perform the following to configure an account for the Ambari SCOM Management Pack to use when talking to the Ambari SCOM Server.
+
+1. After the Management Pack import is complete, go to Administration -> Run As Configuration -> Accounts.
+
+2. In the Tasks panel, select "Create Run as Account..."
+3. You are presented with the Create Run As Account Wizard.
+
+4. Go through the wizard, selecting the Run As account type "Basic Authentication".
+
+5. Give the account a Display name and click "Next".
+
+6. Enter the account name and password for the Ambari SCOM Server. This account will be used to connect to the Ambari SCOM Server and access the Ambari REST API. The default account name is "admin" and the default password is "admin".
+
+7. Click "Next".
+8. Select the "Less secure" distribution security option.
+
+9. Click "Next" and complete the wizard.
+
+#### Configure the Management Pack
+
+Perform the following to configure the Ambari SCOM Management Pack to talk to the Ambari SCOM Server.
+
+1. Go to Authoring -> Management Pack Templates -> Ambari SCOM
+2. In the Tasks panel, select "Add Monitoring Wizard".
+
+3. Select monitoring type "Ambari SCOM"
+4. Provide a name and select the destination management pack.
+
+5. Provide the Ambari URI, which is the address of the Ambari SCOM Server, in the format:
+
+```
+http://[ambari-scom-server]:8080/api/
+```
+
+:::info
+In the above Ambari URI, `ambari-scom-server` is the Ambari SCOM Server.
+:::
+
+6. Select the Run As Account that you created in Create Run As Account.
+
+7. Select "Watcher Node". If the node is not listed, click "Add" and browse for the node. Click "Next".
+
+8. Complete the Add Monitoring Wizard, and proceed to Monitoring Scenarios for information on using the management pack.
+
+#### Best Practice: Create Management Pack for Customizations
+
+By default, Operations Manager saves all customizations such as overrides to the **Default Management Pack**. As a best practice, you should instead create a separate management pack for each sealed management pack you want to customize.
+
+When you create a management pack for the purpose of storing customized settings for a sealed management pack, it is helpful to base the name of the new management pack on the name of the management pack that it is customizing, such as **Ambari SCOM Customizations**.
+
+Creating a new management pack for storing customizations of each sealed management pack makes it easier to export the customizations from a test environment to a production environment. It also makes it easier to delete a management pack, because you must delete any dependencies before you can delete a management pack. If customizations for all management packs are saved in the **Default Management Pack** and you need to delete a single management pack, you must first delete the **Default Management Pack**, which also deletes customizations to other management packs.
+
+## Monitoring Scenarios
+
+[Monitoring Scenarios](https://cwiki.apache.org/confluence/display/AMBARI/3.+Monitoring+Scenarios)
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/ambari-plugin-contribution/step-by-step.md b/versioned_docs/version-2.7.6/ambari-plugin-contribution/step-by-step.md
new file mode 100644
index 0000000..41d96d2
--- /dev/null
+++ b/versioned_docs/version-2.7.6/ambari-plugin-contribution/step-by-step.md
@@ -0,0 +1,52 @@
+# Step-by-step guide on adding a dashboard widget for a host
+
+## Create your own dashboard widget for hosts
+
+Requirements:
+
+- Jmxtrans
+  - Jmxtrans is the application chosen to process rrd files in order to produce graphing data for Ganglia: [https://github.com/jmxtrans/jmxtrans](https://github.com/jmxtrans/jmxtrans)
+- .rrd files
+  - All the Ganglia rrd files are stored in the `/var/lib/rrds` directory on the host machine where the Ganglia server is installed.
+  - In this example I'll be using the "**Nimbus_JVM_Memory_Heap_used.rrd**" file as the data source for my custom widget.
+
+**Step 1**:
+
+First we need to add the rrd file to the "**ganglia_properties.json**" file, which is located in the `ambari\ambari-server\src\main\resources` directory of your Ambari source code. This is necessary so that the Ambari server can retrieve your rrd file's data from Ganglia via the Ambari API.
+
+
+
+Line 108: Create the path for the metrics to be included in the API.
+
+Line 109: Specify the rrd file.
+
+**Step 2**:
+
+Now we are going to add the API path created in Step 1 (line 108) to the "**update_controller.js**" file, located in the `ambari\ambari-web\app\controllers\global` directory, so that our graph data is updated frequently.
+
+
+
+**Step 3**:
+
+Create a JavaScript view file for our custom widget's template and save it in the `ambari\ambari-web\app\views\main\host\metrics` directory of your Ambari source code. In this case I saved my file as "**nimbus.js**".
+
+
+
+**Step 4**:
+
+Add the JavaScript file you created in the previous step into the “**views.js**” file located in the `ambari\ambari-web\app` directory.
+
+
+
+**Step 5**:
+
+Reference the .js view file created in Step 3 in the "**metrics.hbs**" template file, located in the `ambari\ambari-web\app\templates\main\host` directory.
+
+
+
+**Step 6**:
+
+Add the API call to the “**ajax.js**” file located in the `ambari\ambari-web\app\utils` directory.
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.7.6/introduction.md b/versioned_docs/version-2.7.6/introduction.md
new file mode 100644
index 0000000..58a8611
--- /dev/null
+++ b/versioned_docs/version-2.7.6/introduction.md
@@ -0,0 +1,48 @@
+---
+sidebar_position: 1
+---
+
+# Introduction
+
+The Apache Ambari project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs.
+
+Ambari enables System Administrators to:
+
+* Provision a Hadoop Cluster
+ - Ambari provides a step-by-step wizard for installing Hadoop services across any number of hosts.
+
+ - Ambari handles configuration of Hadoop services for the cluster.
+
+* Manage a Hadoop Cluster
+ - Ambari provides central management for starting, stopping, and reconfiguring Hadoop services across the entire cluster.
+
+* Monitor a Hadoop Cluster
+ - Ambari provides a dashboard for monitoring health and status of the Hadoop cluster.
+
+ - Ambari leverages [Ambari Metrics System](https://issues.apache.org/jira/browse/AMBARI-5707) for metrics collection.
+
+ - Ambari leverages [Ambari Alert Framework](https://issues.apache.org/jira/browse/AMBARI-6354) for system alerting and will notify you when your attention is needed (e.g., a node goes down, remaining disk space is low, etc.).
+
+Ambari enables Application Developers and System Integrators to:
+
+* Easily integrate Hadoop provisioning, management, and monitoring capabilities into their own applications with the [Ambari REST APIs](https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md).
+
+## Getting Started with Ambari
+
+Follow the [installation guide for Ambari 2.7.6](https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+2.7.6).
+
+Note: Ambari currently supports the 64-bit version of the following Operating Systems:
+
+* RHEL (Red Hat Enterprise Linux) 7.4, 7.3, 7.2
+* CentOS 7.4, 7.3, 7.2
+* OEL (Oracle Enterprise Linux) 7.4, 7.3, 7.2
+* Amazon Linux 2
+* SLES (SuSE Linux Enterprise Server) 12 SP3, 12 SP2
+* Ubuntu 14 and 16
+* Debian 9
+
+## Get Involved
+
+Visit the [Ambari Wiki](https://cwiki.apache.org/confluence/display/AMBARI/Ambari) for design documents, roadmap, development guidelines, etc.
+
+[Join the Ambari User Meetup Group](http://www.meetup.com/Apache-Ambari-User-Group). You can see the slides from [April 2, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/109316812/), [June 25, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/119184782/), and [September 25, 2013](http://www.meetup.com/Apache-Ambari-User-Group/events/134373312/) meetups.
diff --git a/versioned_docs/version-2.7.6/quick-start/_category_.json b/versioned_docs/version-2.7.6/quick-start/_category_.json
new file mode 100644
index 0000000..b1ce0cc
--- /dev/null
+++ b/versioned_docs/version-2.7.6/quick-start/_category_.json
@@ -0,0 +1,8 @@
+{
+ "label": "Quick Start",
+ "position": 2,
+ "link": {
+ "type": "generated-index",
+ "description": "Ambari Quick Start"
+ }
+}
diff --git a/versioned_docs/version-2.7.6/quick-start/quick-start-for-new-vm-users.md b/versioned_docs/version-2.7.6/quick-start/quick-start-for-new-vm-users.md
new file mode 100644
index 0000000..6b7469c
--- /dev/null
+++ b/versioned_docs/version-2.7.6/quick-start/quick-start-for-new-vm-users.md
@@ -0,0 +1,410 @@
+---
+sidebar_position: 2
+---
+
+# Quick Start for New VM Users
+
+This Quick Start guide is for readers who are new to the use of virtual machines, Apache Ambari, and/or the Apache Hadoop component stack, who would like to install and use a small local Hadoop cluster. The instructions are for a local host machine running OS X.
+
+The following instructions cover four main steps for installing Ambari and HDP using VirtualBox and Vagrant:
+
+1. Install VirtualBox and Vagrant.
+
+2. Install and start one or more Linux virtual machines. Each machine represents a node in a cluster.
+
+3. On one of the virtual machines, download, install, and deploy the version of Ambari you wish to use.
+
+4. Using Ambari, deploy the version of HDP you wish to use.
+
+When you complete the example in this Quick Start, you should have a three-node cluster of virtual machines running Ambari 2.5.1.0 and HDP 2.5.0 (unless you specify different repository versions).
+
+Once VirtualBox and Vagrant are installed, steps 2 through 4 can be done multiple times to change versions, create a larger cluster, and so on. There is no need to repeat step 1 unless you want to upgrade VirtualBox and/or Vagrant later.
+
+Note: these steps were most recently tested on OS X 10.11.6 (El Capitan).
+
+
+## Terminology
+
+A _virtual machine_, or _VM_, is a software program that exhibits the behavior of a separate computer and is capable of running applications and programs within its own environment.
+
+A virtual machine is often called a _guest_, because it runs within another computing environment--usually known as the _host_. For example, if you install three Linux VMs on a Mac, the Mac is the host machine; the three Linux VMs are guests.
+
+Multiple virtual machines can exist within a single host at one time. In the following examples, one or more virtual machines run on a _host_ machine running OS X. OS X is the primary operating system. The virtual machines (guests) are installed under OS X. The virtual machines run Linux in separate environments on OS X. Thus, your Mac is the "host" machine, and the virtual machines that run Ambari and Hadoop are called "guest" machines.
+
+## Prerequisites
+
+You will need the following resources for this Quick Start:
+
+* A solid internet connection, preferably with at least 5 MB available download bandwidth.
+
+* If you are installing the VMs on a Mac, at least 16 GB of memory (assuming 3 GB per VM)
+
+## Install VirtualBox and Vagrant
+
+VirtualBox is a software virtualization package that installs on an operating system as an application. It allows you to run multiple virtual machines at the same time. In this Quick Start you will use it to run Linux nodes on OS X:
+
+
+
+Vagrant is a tool that makes it easier to work with virtual machines. It helps automate the work of setting up, running, and removing virtual machine environments. Using Vagrant, you can install and run a preconfigured cluster environment with Ambari and the HDP stack.
+
+1. Download and install VirtualBox from [https://www.virtualbox.org/wiki/Downloads](https://www.virtualbox.org/wiki/Downloads). This Quick Start has been tested on version 5.1.6.
+
+2. Download and install Vagrant from [https://www.vagrantup.com/downloads.html](https://www.vagrantup.com/downloads.html).
+3. Clone the `ambari-vagrant` GitHub repository into a convenient folder on your Mac. Navigate to the folder, and enter the following command from the terminal:
+
+```bash
+git clone https://github.com/u39kun/ambari-vagrant.git
+```
+
+The repository contains scripts for setting up Ambari virtual machines on several Linux distributions.
+
+4. Add virtual machine hostnames and addresses to the `/etc/hosts` file on your computer. The following command appends a set of host names and addresses from `ambari-vagrant/append-to-etc-hosts.txt` to the end of the `/etc/hosts` file:
+
+```bash
+sudo -s 'cat ambari-vagrant/append-to-etc-hosts.txt >> /etc/hosts'
+```
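+
+The appended entries map each VM's FQDN to a private address. For CentOS 7.0 they look similar to the following (an illustrative excerpt; the actual contents come from `append-to-etc-hosts.txt`):
+
+```
+192.168.70.101 c7001.ambari.apache.org c7001
+192.168.70.102 c7002.ambari.apache.org c7002
+192.168.70.103 c7003.ambari.apache.org c7003
+```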
+5. Use the `vagrant` command to create a private key to use with Ambari:
+
+```bash
+vagrant
+```
+
+The `vagrant` command displays Vagrant command information, and then it creates a private key in the file `~/.vagrant.d/insecure_private_key`.
+
+## Start Linux Virtual Machines
+
+The `ambari-vagrant` directory (cloned from GitHub) contains several subdirectories, each for a specific Linux distribution. Each subdirectory has scripts and configuration files for running Ambari and HDP on that version of Linux.
+
+To start one or more virtual machines:
+
+1. Change your current directory to `ambari-vagrant`:
+
+```bash
+cd ambari-vagrant
+```
+
+If you run an `ls` command on the `ambari-vagrant` directory, you will see subdirectories for several different operating systems and operating system versions.
+
+2. `cd` into the OS subdirectory for the OS you wish to use. CentOS is recommended, because it is quicker to launch than other operating systems.
+
+The remainder of this example uses CentOS 7.0. (To install and use a different version or distribution of Linux, specify the other directory name in place of `centos7.0`.)
+
+```bash
+cd centos7.0
+```
+
+**Important**: All VM `vagrant` commands operate within your current directory. Be sure to run them from the local (Mac) subdirectory associated with the VM operating system that you have chosen to use. If you attempt to run a `vagrant` command from another directory, it will not find the VM.
+
+Copy the private key into the directory associated with the chosen operating system.
+
+For this example, which uses `centos7.0`, issue the following command:
+
+```bash
+cp ~/.vagrant.d/insecure_private_key .
+```
+3. (Optional) If you have at least 16 GB of memory on your Mac, consider increasing the amount of memory allocated to the VMs.
+
+Edit the following line in `Vagrantfile`, increasing the allocated memory from 3072 to 4096 or more; for example:
+
+```bash
+vb.customize ["modifyvm", :id, "--memory", 4096] # RAM allocated to each VM
+```
+4. Every virtual machine will have a directory called `/vagrant` inside the VM. This corresponds to the `ambari-vagrant/<os>` directory on your local computer, making it easy to transfer files back and forth between your host Mac and the virtual machine. If you have any files to access from within the VM, you can place them in this shared directory, as in the sketch below.
+
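+For example, a quick check of the shared directory (the file name here is arbitrary):
+
+```bash
+# On the Mac, from the ambari-vagrant/centos7.0 directory:
+echo "hello from the host" > hello.txt
+
+# Inside the VM, after `vagrant ssh c7001`:
+cat /vagrant/hello.txt
+```
+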
+5. Start one or more VMs, using the `./up.sh` command. Each VM will run one HDP node. Recommendation: if you have at least 16 GB of RAM on your Mac and wish to run a small cluster, start with three nodes.
+
+```bash
+./up.sh
+```
+
+For example, the following command starts 3 VMs: `./up.sh 3`
+
+On an early 2013 MacBook Pro, 2.7 GHz core i7 and 16 GB RAM, this step takes five minutes. For CentOS 7.0, the hostnames are `c7001`, `c7002`, and `c7003`.
+
+Additional notes:
+- If you previously ran the VMs and used `vagrant destroy` to remove them, this is the step at which you would recreate and start the VMs.
+
+- The default `Vagrantfile` (in each OS subdirectory) can create up to 10 virtual machines.
+
+- The `up.sh 3` command is equivalent to `vagrant up c700{1..3}`.
+
+- The fully-qualified domain name (FQDN) for each VM has the format `<os-code>[01-10].ambari.apache.org`, where `<os-code>` is `c59` (CentOS 5.9), `c64` (CentOS 6.4), and so on. For example, `c5901.ambari.apache.org` will be the FQDN for node 01 running CentOS 5.9.
+
+- The IP address for each VM has the format `192.168.<os-subnet>.1[01-10]`, where `<os-subnet>` is `64` for CentOS 6.4, `70` for CentOS 7.0, and so on. For example, `192.168.70.101` will be the IP address for CentOS 7.0 node `c7001`.
+
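+Before checking VM status, you can first confirm from your Mac that the hostname mappings added to `/etc/hosts` earlier resolve to these addresses; for example:
+
+```bash
+ping -c 1 c7001.ambari.apache.org
+```
+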
+6. Check the status of your VM(s), and review any errors. The following example shows `vagrant status` output after running `./up.sh 3` on CentOS 7.0:
+
+```
+LMBP:centos7.0 lkg$ vagrant status
+
+Current machine states:
+c7001 running (virtualbox)
+c7002 running (virtualbox)
+c7003 running (virtualbox)
+c7004 not created (virtualbox)
+c7005 not created (virtualbox)
+c7006 not created (virtualbox)
+c7007 not created (virtualbox)
+c7008 not created (virtualbox)
+c7009 not created (virtualbox)
+c7010 not created (virtualbox)
+```
+
+In the preceding list, three virtual machines are installed and running.
+
+7. At this point, you can snapshot the VMs to have a fresh set of running machines to reuse if desired. This is especially helpful when installing Apache Ambari and the HDP stack for the first time; it allows you to back out to fresh VMs and reinstall Ambari and HDP if you encounter errors. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Access Virtual Machines
+
+Use the following steps when you want to access a running virtual machine:
+
+1. To log on to a virtual machine, use the `vagrant ssh` command on your host machine, and specify the hostname; for example:
+
+```
+LMBP:centos7.0 lkg$ vagrant ssh c7001
+
+Last login: Tue Jan 12 11:20:28 2016
+[vagrant@c7001 ~]$
+```
+
+From this point onward, this terminal window is connected to the virtual machine until you exit the virtual machine. All commands go to the VM, not to your Mac.
+
+_**Recommendation**_: Open a second terminal window for your Mac. This is useful when accessing the Ambari Web UI. To distinguish between the two, terminal windows typically list the computer name or VM hostname on each command-line prompt and at the top of the terminal window.
+
+2. When you first access the VM you will be logged in as user `vagrant`. Switch to the `root` user; be sure to include the space between "su" and "-":
+
+```
+[vagrant@c7001 ~]$ sudo su -
+
+Last login: Sun Sep 25 01:34:28 AEST 2016 on pts/0
+root@c7001:~#
+```
+
+If at any time you wish to return the terminal window to your host machine:
+
+ 1. Use the `logout` command to log out of root.
+ 2. Use the `exit` command to return to your host machine (Mac).
+
+At this point, the VMs are still running in the background. You can re-issue the `vagrant ssh` command later to reconnect, or you can stop the virtual machines. For more information, see "Basic Vagrant Commands," later in this Quick Start.
+
+## Install Ambari on the Virtual Machines
+
+**Prerequisites**: Before installing Ambari, the following software packages must be installed on your VM:
+
+* rpm
+* curl
+* wget
+* pdsh
+
+On CentOS: to check if a package is installed, run `yum info <package-name>`. To install a package, run `yum install <package-name>`.
+
+To install Ambari, you can build it yourself from source (see [Ambari Development](../ambari-dev/index.md)), or you can use published binaries.
+
+As this is a Quick Start Guide to get you going quickly, ready-made, publicly available binaries are referenced here. These binaries were built and made publicly available by Hortonworks, a commercial vendor for Hadoop, and using them makes HDP, Hortonworks' distribution, available for installation via Apache Ambari. The instructions still work with Ambari binaries from any other vendor, organization, or individual; only the repo URLs need to change. They also work if you have built the binaries yourself. (Contributions that expand this guide to cover other ready-made, publicly accessible binaries are welcome.)
+
+From the terminal window on the VM where you want to run the main Ambari service, download the Ambari repository. The following commands download Ambari version 2.5.1.0 and install `ambari-server`. To install a different version of Ambari, specify the appropriate repo URL. Choose the commands appropriate for the operating system on your VMs:
+
+```bash
+# CentOS 6 (for CentOS 7, replace centos6 with centos7 in the repo URL)
+#
+# to test public release 2.5.1
+wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo
+yum install ambari-server -y
+
+# Ubuntu 14 (for Ubuntu 16, replace ubuntu14 with ubuntu16 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.5.1.0/ambari.list
+apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
+apt-get update
+apt-get install ambari-server -y
+
+# SUSE 11 (for SUSE 12, replace suse11 with suse12 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/zypp/repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.5.1.0/ambari.repo
+zypper install ambari-server -y
+```
+
+On an early 2013 MacBook Pro, 2.7 GHz core i7 and 16 GB RAM, this step takes seven minutes. Timing also depends on internet download speeds.
+
+To install Ambari with default settings, set up and start `ambari-server`:
+
+```bash
+ambari-server setup -s
+ambari-server start
+```
+
+To check Ambari Server status, issue the following command: `ambari-server status`
+
+
+After Ambari Server has started, launch a browser on your host machine (Mac). Access the Ambari Web UI at `http://<hostname>.ambari.apache.org:8080`, where `<hostname>` specifies the VM on which you installed Ambari; for example:
+
+```
+http://c7001.ambari.apache.org:8080
+```
+
+**Note**: The Ambari Server can take some time to launch and be ready to accept connections. Keep trying the URL until you see the login page.
+
+
+At this point, you can snapshot the VMs to have a cluster with Ambari installed, to rerun later if desired. This is especially helpful when installing Apache Ambari and the HDP stack for the first time; it allows you to back out to fresh VMs running Ambari, and reinstall a fresh HDP stack if you encounter errors. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Install the HDP Stack
+
+The following instructions describe basic steps for using Ambari to install HDP components.
+
+1. On the Ambari screen, log in using the default username `admin` and password `admin`.
+
+2. On the welcome page, choose "Launch Install Wizard."
+3. Specify a name for your cluster, and then click Next.
+
+4. On the Select Version page, choose which version of HDP to install, and then click Next.
+
+5. On the Install Options page, complete the following steps:
+ 1. List the FQDNs of the virtual machines. For example:
+
+```
+c7001.ambari.apache.org
+c7002.ambari.apache.org
+c7003.ambari.apache.org
+```
+
+Alternatively, you can use a range expression:
+
+```
+c70[01-03].ambari.apache.org
+```
+ 2. Upload the `insecure_private_key` file that you created earlier: browse to the `ambari-vagrant` directory, navigate to the operating system folder for your VMs, and choose the key file.
+
+ 3. Change the SSH User Account to `vagrant`.
+
+ 4. Click "Register and Confirm."
+
+6. On the Confirm Hosts page, Ambari displays installation status.
+
+If you see a yellow banner with the following message, click on the link to review warnings:
+
+See the Troubleshooting section (later on this page) for more information.
+
+7. When all host checks pass, close the warning window:
+
+
+
+8. Click Next to continue:
+9. On the Choose Services page, uncheck any components that you do not expect to use. If any are required for selected components, Ambari will request to add them back in.
+
+10. On the Assign Masters screen, choose hosts or simply click Next to use default values.
+
+11. On the Assign Slaves and Clients screen, choose hosts or simply click Next to use default values.
+
+12. On the Customize Services screen:
+ 1. Review services with warning notes, such as Hive and Ambari Metrics in the following image:
+
+ 2. Specify missing property values (such as admin passwords) as directed by the installation wizard. When all configurations have been addressed, click Next.
+
+13. On the Review screen, review the service definitions, and then click Next.
+
+14. The Install, Start and Test page shows deployment status. This step takes a while; on an early 2013 MacBook Pro, 2.7 GHz core i7 and 16 GB RAM, this step takes 45 minutes.
+
+15. When the cluster installs successfully, you can snapshot the VMs to have a fresh cluster with Ambari and HDP installed, to rerun later if desired. This allows you to experiment with the cluster and quickly restore back to a previous state if you wish. For more information about snapshots, see the `vagrant snapshot` command in "Basic Vagrant Commands," later in this Quick Start.
+
+## Troubleshooting
+
+This subsection describes a few error conditions that might occur during Ambari installation and HDP cluster deployment:
+
+### Confirm Hosts
+
+If you see an error similar to the following on the Confirm Hosts page of the Ambari installation wizard, click the link to see the warnings:
+"Some warnings were encountered while performing checks against the 3 registered hosts above. Click here to see the warnings."
+
+**`ntpd` Error**
+
+On the Host Checks window, the following warning indicates that you need to start `ntpd` on each host:
+
+
+
+To start the service, open a terminal window on each VM (from your Mac: `vagrant ssh <vm-name>`) and issue the following commands:
+
+`service ntpd start` followed by `service ntpd status`
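+
+On CentOS 7 VMs, `ntpd` is managed by systemd; a sketch of the equivalent commands, assuming the `ntp` package is installed:
+
+```bash
+systemctl start ntpd
+systemctl enable ntpd
+systemctl status ntpd
+```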
+
+You should see messages confirming that `ntpd` is running. Navigate back to the Host Checks window of the Ambari installation wizard and click Rerun Checks. When all checks complete successfully, click Close to continue the installation process.
+
+### Install, Start and Test
+
+If the Install, Start and Test step fails with the following error during DataNode deployment:
+
+```
+Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
+Requires: snappy(x86-64) = 1.0.5-1.el6
+Installed: snappy-1.1.0-3.el7.x86_64 (@anaconda/7.2)
+```
+
+Run the following commands under the root account on each VM:
+
+```bash
+yum remove -y snappy-1.1.0-3.el7.x86_64
+yum install snappy-devel -y
+```
+
+### Stopping and Restarting Virtual Machines
+
+Hadoop is a complex ecosystem with a lot of status checks and cross-component messages. This can make it challenging to halt and restart several VMs and restore them later without warnings or errors.
+
+**Recommendations**
+
+If you would like to save cluster state for a period of time and you plan to stop using your Mac during that time, you can simply sleep the Mac; the cluster should continue from where it left off after you wake the Mac.
+
+If you don't need to save cluster state when stopping a set of VMs, it can be helpful to stop all services first, stop ambari-server (`ambari-server stop`), and then issue a Vagrant `halt` or `suspend` command.
+
+When restarting a cluster after halting or taking a snapshot, check Ambari server status and restart it if necessary:
+
+```bash
+ambari-server status
+ambari-server start
+```
+
+After logging into the Ambari Web UI, expect to see alert warnings or errors due to timeout conditions. Check the associated messages to determine whether they might affect your use of the virtual cluster. If so, it can be helpful to stop and restart one or more associated components.
+
+## Reference: Basic Vagrant Commands
+
+The following list describes several common Vagrant commands. For more information, see the Vagrant [Command-Line Interface](https://www.vagrantup.com/docs/cli/) documentation.
+
+**vagrant up**
+
+Create and configure guest machines. Example: `vagrant up c6406`
+
+`up.sh` is a wrapper for this call. You can use this command to start more VMs after you called `up.sh`.
+
+Note: if you do not specify the `<vm-name>` parameter, Vagrant will attempt to start ten VMs.
+
+**vagrant suspend [vm-name]**
+
+Save the current running state of a VM and stop the VM. A suspend effectively saves the _exact point-in-time state_ of a machine. When you issue a `resume` command, the VM begins running immediately from that point, rather than doing a full boot.
+
+When you are ready to begin working with it again, run `vagrant up`. The machine will resume where you left off. The main benefit of `suspend` is that it is very fast; it usually takes only 5 to 10 seconds to stop and resume your work. The downside is that the VM continues to consume disk space, plus additional disk space to store the VM's running state (the contents of its RAM).
+
+Optional: Specify a specific VM to suspend.
+
+**vagrant halt [vm-name]**
+
+Gracefully shut down the guest operating system and power down the guest machine. Disk state is preserved, but the RAM used by the VM is reclaimed. When you are ready to begin working with it again, run `vagrant up`; the machine performs a full boot, which takes longer than resuming from a `suspend`. Optional: Specify a specific VM to halt.
+
+**vagrant destroy -f [vm-name]**
+
+Remove all traces of the guest machine from your system. The `destroy` command stops the guest machine, powers it down, and removes all guest hard disks. When you are ready to begin working with it again, run `vagrant up`. The benefit is that all disk space and RAM consumed by the guest machine are reclaimed and your host machine is left clean. The downside is that the `vagrant up` operation will take extra time; rebuilding the environment takes the longest (compared with `suspend` and `halt`) because it re-imports and re-provisions the machine.
+
+Optional: Specify a specific VM to destroy.
+
+**vagrant ssh [vm-name]**
+
+Start an SSH session to the specified VM.
+
+Example: `vagrant ssh c6401`
+
+**vagrant status [vm-name]**
+
+Show which VMs are running, suspended, not created, and so on. Optional: Specify a specific VM to check.
+
+**vagrant snapshot**
+
+A Vagrant snapshot saves the current state of a VM so that you can restart the VM from the same point at a future time. Commands include push, pop, save, restore, list, and delete. For more information, see [https://www.vagrantup.com/docs/cli/snapshot.html](https://www.vagrantup.com/docs/cli/snapshot.html).
+
+Note: Upon resuming a snapshot, you may find that time-sensitive services such as the HBase RegionServer are down. If this happens, you will need to restart those services.
+
+**vagrant --help**
+
+Display help for the `vagrant` command and its subcommands.
+
+**Recommendation**: After you start the VMs, but before you run anything on them, save a snapshot. This allows you to restore the initial state of your VMs. This process is much faster than starting the VMs from scratch and then reinstalling Ambari and HDP. You can return to the initial state without destroying other named snapshots that you create later.
+
+More information: [https://www.vagrantup.com/docs/getting-started/teardown.html](https://www.vagrantup.com/docs/getting-started/teardown.html)
+
diff --git a/versioned_docs/version-2.7.6/quick-start/quick-start-guide.md b/versioned_docs/version-2.7.6/quick-start/quick-start-guide.md
new file mode 100644
index 0000000..ddab91f
--- /dev/null
+++ b/versioned_docs/version-2.7.6/quick-start/quick-start-guide.md
@@ -0,0 +1,252 @@
+---
+sidebar_position: 1
+---
+
+# Quick Start Guide
+
+This document shows how to quickly set up a cluster using Ambari on your local machine using virtual machines.
+
+This utilizes [VirtualBox](https://www.virtualbox.org/) and [Vagrant](https://www.vagrantup.com/) so you will need to install both.
+
+Note that the steps were tested on MacOS 10.8.4 / 10.8.5.
+
+After you have installed VirtualBox and Vagrant on your computer, check out the "ambari-vagrant" repo on GitHub:
+
+```bash
+git clone https://github.com/u39kun/ambari-vagrant.git
+```
+
+Edit your **/etc/hosts** on your computer so that you will be able to resolve hostnames for the VMs:
+
+```bash
+sudo -s 'cat ambari-vagrant/append-to-etc-hosts.txt >> /etc/hosts'
+```
+
+Copy the private key to your home directory (or some place convenient for you) so that it's easily accessible for uploading via Ambari Web:
+
+```bash
+vagrant
+```
+
+The above command shows the command usage and also creates a private key as ~/.vagrant.d/insecure_private_key. This key will be used in the following steps.
+
+First, change directory to ambari-vagrant:
+
+```bash
+cd ambari-vagrant
+```
+
+You will see subdirectories for different OS's. "cd" into the OS that you want to test. **centos6.8** is recommended as this is quicker to launch than other OS's.
+
+Now you can start VMs with the following command:
+
+```bash
+cd centos6.8
+cp ~/.vagrant.d/insecure_private_key .
+./up.sh
+```
+
+For example, **up.sh 3** starts 3 VMs. Three seems to be a good number with 16GB of RAM without taxing the system too much.
+
+With the default **Vagrantfile**, you can specify up to 10 (if your computer can handle it; you can even add more).
+
+VMs will have the FQDN `<os-code>[01-10].ambari.apache.org`, where `<os-code>` depends on the OS directory you chose (for **centos6.8**, `c68`); for example, `c6801.ambari.apache.org`.
+
+If it is your first time running a vagrant command, run:
+
+```bash
+vagrant init
+```
+
+Log into the VM:
+
+```bash
+vagrant ssh c6801
+```
+
+Then make yourself root:
+
+```bash
+sudo su -
+```
+
+To install Ambari, you can build it yourself from source (see [Ambari Development](../ambari-dev/index.md)), or you can use published binaries.
+
+As this is a Quick Start Guide to get you going quickly, ready-made, publicly available binaries are referenced in the steps below.
+
+Note that these binaries were built and made publicly available by Hortonworks, a commercial vendor for Hadoop, and using them makes HDP, Hortonworks' distribution, available for installation via Apache Ambari. The instructions still work with Ambari binaries from any other vendor, organization, or individual; only the repo URLs need to change. They also work if you have built the binaries yourself. (Contributions that expand this guide to cover other ready-made, publicly accessible binaries are welcome.)
+
+```bash
+# CentOS 6 (for CentOS 7, replace centos6 with centos7 in the repo URL)
+#
+# to test public release 2.5.1
+wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0/ambari.repo
+yum install ambari-server -y
+
+# Ubuntu 14 (for Ubuntu 16, replace ubuntu14 with ubuntu16 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.5.1.0/ambari.list
+apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
+apt-get update
+apt-get install ambari-server -y
+
+# SUSE 11 (for SUSE 12, replace suse11 with suse12 in the repo URL)
+# to test public release 2.5.1
+wget -O /etc/zypp/repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.5.1.0/ambari.repo
+zypper install ambari-server -y
+```
+
+Ambari offers many installation options (see [Ambari User Guides](https://cwiki.apache.org/confluence/display/AMBARI/Ambari+User+Guides)), but to get up and running quickly with default settings, you can run the following to set up and start ambari-server.
+
+```bash
+ambari-server setup -s
+ambari-server start
+```
+
+_For frontend developers only: see Frontend Development section below for extra setup instructions._
+
+Once Ambari Server is started, hit [http://c6801.ambari.apache.org:8080](http://c6801.ambari.apache.org:8080) (the URL depends on the OS being tested) from your browser on your local computer.
+
+Note that Ambari Server can take some time to fully come up and be ready to accept connections. Keep hitting the URL until you get the login page.
+
+Once you are at the login page, login with the default username **admin** and password **admin**.
+
+On the Install Options page, use the FQDNs of the VMs. For example:
+
+```
+c6801.ambari.apache.org
+c6802.ambari.apache.org
+c6803.ambari.apache.org
+```
+
+Alternatively, you can use a range expression:
+
+```
+c68[01-03].ambari.apache.org
+```
+
+Specify the non-root SSH user **vagrant**, and upload the **insecure_private_key** file that you copied earlier as the private key.
+
+Follow the onscreen instructions to install your cluster.
+
+When done testing, run **vagrant destroy -f** to purge the VMs.
+
+**vagrant up**
+Starts a specific VM. up.sh is a wrapper for this call.
+
+Note: if you don't specify the VM name, Vagrant will attempt to start all the VMs defined in the Vagrantfile.
+
+**vagrant destroy -f**
+Destroys all VMs launched from the current directory (deletes them from disk as well)
+You can optionally specify a specific VM to destroy
+
+**vagrant suspend**
+Suspends (snapshot) all VMs launched from the current directory so that you can resume them later
+You can optionally specify a specific VM to suspend
+
+**vagrant resume**
+Resumes all suspended VMs launched from the current directory
+You can optionally specify a specific VM to resume
+
+**vagrant status**
+Shows which VMs are running, suspended, etc.
+
+## Modifying RAM for the VMs
+
+Each VM is allocated 2GB of RAM. To change the RAM allocation, edit the following line in **Vagrantfile**:
+
+```bash
+vb.customize ["modifyvm", :id, "--memory", 2048]
+```
+
+## Taking Snapshots
+
+Vagrant makes it easy to take snapshots of the entire cluster.
+
+First, install the snapshot plugin:
+
+```bash
+vagrant plugin install vagrant-vbox-snapshot --plugin-version=0.0.2
+```
+
+This enables the "vagrant snapshot" command. Note that the above installs version 0.0.2; the latest version, 0.0.3, does not allow taking a snapshot of the whole cluster at once (you have to specify a VM name).
+
+Run **vagrant snapshot** to see the syntax.
+
+Note that the plugin tries to take a snapshot of all VMs configured in Vagrantfile. If you are always using 3 VMs, for example, you can comment out c68[04-10] in Vagrantfile so that the snapshot commands only operate on c68[01-03].
+
+Note: Upon resuming a snapshot, you may find that time-sensitive services are down (e.g., the HBase RegionServer).
+
+Tip: After starting the VMs but before you do anything on the VMs, run "vagrant snapshot take init". This way, you can go back to the initial state of the VMs by running "vagrant snapshot go init"; this only takes seconds (much faster than starting the VMs up from scratch by using up.sh or "vagrant up"). Another advantage of this is that you can always go back to the initial state without destroying other named snapshots that you created.
+
+## Misc
+
+All VMs launched will have a directory called **/vagrant** inside the VM. This maps to the **ambari-vagrant/** directory on your local computer. You can use this shared directory mapping to push files, etc.
+
+If you want to test OS's other than what's currently in the **ambari-vagrant** repo, please see [http://www.vagrantbox.es/](http://www.vagrantbox.es/) for all the readily available OS images you can test. Note that Ambari currently works on RHEL 5/6, CentOS 5/6, Oracle Linux 5/6, SUSE 11, and SLES 11. Ubuntu support is work in progress.
+
+## Kerberos Support
+
+Ambari supports adding Kerberos security to an existing Ambari-installed cluster. First set up any one host as the KDC, as follows:
+
+Install the Kerberos server on the chosen host, e.g., for CentOS/Red Hat:
+
+```bash
+yum install krb5-server krb5-libs krb5-auth-dialog rng-tools -y
+```
+
+Create the Kerberos database.
+
+```bash
+rngd -r /dev/urandom -o /dev/random
+/usr/sbin/kdb5_util create -s
+```
+
+Update `/etc/krb5.conf` on the KDC host. For example, if your realm is `EXAMPLE.COM` and the KDC host is `c6801.ambari.apache.org`:
+
+```bash
+[realms]
+ EXAMPLE.COM = {
+ admin_server = c6801.ambari.apache.org
+ kdc = c6801.ambari.apache.org
+ }
+```
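+
+You will typically also want to set the default realm in the `[libdefaults]` section of the same file; a minimal sketch for this example realm:
+
+```bash
+[libdefaults]
+    default_realm = EXAMPLE.COM
+```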
+
+Restart the Kerberos services, e.g., for CentOS/Red Hat:
+
+```bash
+/etc/rc.d/init.d/krb5kdc restart
+/etc/rc.d/init.d/kadmin restart
+```
+
+Then create an admin principal:
+
+```bash
+$ sudo kadmin.local
+kadmin.local: add_principal admin/admin@EXAMPLE.COM
+WARNING: no policy specified for admin/admin@EXAMPLE.COM; defaulting to no policy
+Enter password for principal "admin/admin@EXAMPLE.COM":
+Re-enter password for principal "admin/admin@EXAMPLE.COM":
+Principal "admin/admin@EXAMPLE.COM" created.
+
+```
+
+Remember the password for this principal; the Ambari Kerberos Wizard will request it later. Distribute the updated `/etc/krb5.conf` file to the remaining hosts in the cluster; see the sketch below.
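+
+For example, a minimal sketch of pushing the file out from the KDC host, assuming root SSH access and the hypothetical host names below:
+
+```bash
+for host in c6802.ambari.apache.org c6803.ambari.apache.org; do
+  scp /etc/krb5.conf root@${host}:/etc/krb5.conf
+done
+```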
+
+Navigate to _Ambari Dashboard —> Admin —> Kerberos_ to launch the Kerberos Wizard and follow the wizard steps. If you run into errors, the Ambari server logs can be found at `/var/log/ambari-server/ambari-server.log`.
+
+## Pre-Configured Development Environment
+
+Simply edit **Vagrantfile** to launch a VM with all the tools necessary to build Ambari from source.
+
+```bash
+cd ambari-vagrant/centos6.8
+vi Vagrantfile
+```
+
+## Frontend Development
+
+You can use this setup to develop and test out Ambari Web frontend code against a real Ambari Server on a multi-node environment.
+
+You need to first fork the apache/ambari repository if you haven't already. Read the How to Contribute guide for instructions on how to fork.
+
+On the host machine:
+
+```bash
+cd ambari-vagrant/centos6.8
+# Replace the [forked-repository-url] with your fork's clone url
+git clone [forked-repository-url] ambari
+cd ambari/ambari-web
+npm install
+brunch w
+```
+
+On c6801 (where Ambari Server is installed):
+
+```bash
+cd /usr/lib/ambari-server
+mv web web-orig
+ln -s /vagrant/ambari/ambari-web/public web
+ambari-server restart
+```
+
+With this setup, whenever you change the content of ambari-web files (under ambari-vagrant/ambari/) on the host machine, brunch will pick up the changes in the background and update ambari-vagrant/ambari/ambari-web/public. Because of the symbolic link, the changes are automatically picked up by Ambari Server. All you have to do is hit refresh on the browser to see the frontend code changes reflected.
+
+Not seeing code changes as expected? If you have run the maven command to build Ambari previously, you will see files called app.js.gz and vendor.js.gz under the public folder. You need to delete these files for the frontend code changes to be effective, as the app.js.gz and vendor.js.gz files take precedence over app.js and vendor.js, respectively.
diff --git a/versioned_sidebars/version-2.7.5-sidebars.json b/versioned_sidebars/version-2.7.5-sidebars.json
new file mode 100644
index 0000000..202f413
--- /dev/null
+++ b/versioned_sidebars/version-2.7.5-sidebars.json
@@ -0,0 +1,158 @@
+{
+ "ambariSidebar": [
+ "introduction",
+ {
+ "type": "category",
+ "label": "Quick Start",
+ "collapsible": true,
+ "collapsed": false,
+ "items": [
+ "quick-start/quick-start-guide",
+ "quick-start/quick-start-for-new-vm-users"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Ambari Design",
+ "collapsible": true,
+ "collapsed": false,
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/index"
+ },
+ "items": [
+ "ambari-design/alerts",
+ {
+ "type": "category",
+ "label": "Automated Kerberizaton",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/kerberos/index"
+ },
+ "items": [
+ "ambari-design/kerberos/kerberos_descriptor",
+ "ambari-design/kerberos/kerberos_service",
+ "ambari-design/kerberos/enabling_kerberos"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Blueprints",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/blueprints/index"
+ },
+ "items": [
+ "ambari-design/blueprints/blueprint-ha",
+ "ambari-design/blueprints/blueprint-ranger"
+ ]
+ },
+ "ambari-design/enhanced-configs/index",
+ "ambari-design/service-dashboard/index",
+ {
+ "type": "category",
+ "label": "Metrics",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/metrics/index"
+ },
+ "items": [
+ "ambari-design/metrics/metrics-collector-api-specification",
+ "ambari-design/metrics/configuration",
+ "ambari-design/metrics/operations",
+ "ambari-design/metrics/troubleshooting",
+ "ambari-design/metrics/metrics-api-specification",
+ "ambari-design/metrics/stack-defined-metrics",
+ "ambari-design/metrics/upgrading-ambari-metrics-system",
+ "ambari-design/metrics/ambari-server-metrics",
+ "ambari-design/metrics/ambari-metrics-whitelisting"
+ ]
+ },
+ "ambari-design/quick-links",
+ {
+ "type": "category",
+ "label": "Stacks and Services",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/stack-and-services/index"
+ },
+ "items": [
+ "ambari-design/stack-and-services/overview",
+ "ambari-design/stack-and-services/custom-services",
+ "ambari-design/stack-and-services/defining-a-custom-stack-and-services",
+ "ambari-design/stack-and-services/extensions",
+ "ambari-design/stack-and-services/how-to-define-stacks-and-services",
+ "ambari-design/stack-and-services/management-packs",
+ "ambari-design/stack-and-services/stack-inheritance",
+ "ambari-design/stack-and-services/stack-properties",
+ "ambari-design/stack-and-services/writing-metainfo",
+ "ambari-design/stack-and-services/faq",
+ "ambari-design/stack-and-services/hooks",
+ "ambari-design/stack-and-services/version-functions-conf-select-and-stack-select"
+ ]
+ },
+ "ambari-design/technology-stack",
+ {
+ "type": "category",
+ "label": "Views",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/views/index"
+ },
+ "items": [
+ "ambari-design/views/framework-services",
+ "ambari-design/views/view-api",
+ "ambari-design/views/view-definition"
+ ]
+ }
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Ambari Development",
+ "link": {
+ "type": "doc",
+ "id": "ambari-dev/index"
+ },
+ "items": [
+ "ambari-dev/development-process-for-new-major-features",
+ "ambari-dev/ambari-code-layout",
+ "ambari-dev/apache-ambari-jira",
+ "ambari-dev/coding-guidelines-for-ambari",
+ "ambari-dev/how-to-commit",
+ "ambari-dev/how-to-contribute",
+ "ambari-dev/code-review-guidelines",
+ "ambari-dev/releasing-ambari",
+ "ambari-dev/admin-view-ambari-admin-development",
+ "ambari-dev/unit-test-reports",
+ "ambari-dev/development-in-docker",
+ "ambari-dev/developer-tools",
+ "ambari-dev/feature-flags",
+ "ambari-dev/verifying-release-candidate"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Ambari Plugin Contributions",
+ "link": {
+ "type": "doc",
+ "id": "ambari-plugin-contribution/index"
+ },
+ "items": [
+ {
+ "type": "category",
+ "label": "Ambari SCOM Management Pack",
+ "link": {
+ "type": "doc",
+ "id": "ambari-plugin-contribution/scom/index"
+ },
+ "items": [
+ "ambari-plugin-contribution/scom/installation"
+ ]
+ },
+ "ambari-plugin-contribution/step-by-step"
+ ]
+ },
+ "ambari-alerts"
+ ]
+}
diff --git a/versioned_sidebars/version-2.7.6-sidebars.json b/versioned_sidebars/version-2.7.6-sidebars.json
new file mode 100644
index 0000000..202f413
--- /dev/null
+++ b/versioned_sidebars/version-2.7.6-sidebars.json
@@ -0,0 +1,158 @@
+{
+ "ambariSidebar": [
+ "introduction",
+ {
+ "type": "category",
+ "label": "Quick Start",
+ "collapsible": true,
+ "collapsed": false,
+ "items": [
+ "quick-start/quick-start-guide",
+ "quick-start/quick-start-for-new-vm-users"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Ambari Design",
+ "collapsible": true,
+ "collapsed": false,
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/index"
+ },
+ "items": [
+ "ambari-design/alerts",
+ {
+ "type": "category",
+ "label": "Automated Kerberizaton",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/kerberos/index"
+ },
+ "items": [
+ "ambari-design/kerberos/kerberos_descriptor",
+ "ambari-design/kerberos/kerberos_service",
+ "ambari-design/kerberos/enabling_kerberos"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Blueprints",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/blueprints/index"
+ },
+ "items": [
+ "ambari-design/blueprints/blueprint-ha",
+ "ambari-design/blueprints/blueprint-ranger"
+ ]
+ },
+ "ambari-design/enhanced-configs/index",
+ "ambari-design/service-dashboard/index",
+ {
+ "type": "category",
+ "label": "Metrics",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/metrics/index"
+ },
+ "items": [
+ "ambari-design/metrics/metrics-collector-api-specification",
+ "ambari-design/metrics/configuration",
+ "ambari-design/metrics/operations",
+ "ambari-design/metrics/troubleshooting",
+ "ambari-design/metrics/metrics-api-specification",
+ "ambari-design/metrics/stack-defined-metrics",
+ "ambari-design/metrics/upgrading-ambari-metrics-system",
+ "ambari-design/metrics/ambari-server-metrics",
+ "ambari-design/metrics/ambari-metrics-whitelisting"
+ ]
+ },
+ "ambari-design/quick-links",
+ {
+ "type": "category",
+ "label": "Stacks and Services",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/stack-and-services/index"
+ },
+ "items": [
+ "ambari-design/stack-and-services/overview",
+ "ambari-design/stack-and-services/custom-services",
+ "ambari-design/stack-and-services/defining-a-custom-stack-and-services",
+ "ambari-design/stack-and-services/extensions",
+ "ambari-design/stack-and-services/how-to-define-stacks-and-services",
+ "ambari-design/stack-and-services/management-packs",
+ "ambari-design/stack-and-services/stack-inheritance",
+ "ambari-design/stack-and-services/stack-properties",
+ "ambari-design/stack-and-services/writing-metainfo",
+ "ambari-design/stack-and-services/faq",
+ "ambari-design/stack-and-services/hooks",
+ "ambari-design/stack-and-services/version-functions-conf-select-and-stack-select"
+ ]
+ },
+ "ambari-design/technology-stack",
+ {
+ "type": "category",
+ "label": "Views",
+ "link": {
+ "type": "doc",
+ "id": "ambari-design/views/index"
+ },
+ "items": [
+ "ambari-design/views/framework-services",
+ "ambari-design/views/view-api",
+ "ambari-design/views/view-definition"
+ ]
+ }
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Ambari Development",
+ "link": {
+ "type": "doc",
+ "id": "ambari-dev/index"
+ },
+ "items": [
+ "ambari-dev/development-process-for-new-major-features",
+ "ambari-dev/ambari-code-layout",
+ "ambari-dev/apache-ambari-jira",
+ "ambari-dev/coding-guidelines-for-ambari",
+ "ambari-dev/how-to-commit",
+ "ambari-dev/how-to-contribute",
+ "ambari-dev/code-review-guidelines",
+ "ambari-dev/releasing-ambari",
+ "ambari-dev/admin-view-ambari-admin-development",
+ "ambari-dev/unit-test-reports",
+ "ambari-dev/development-in-docker",
+ "ambari-dev/developer-tools",
+ "ambari-dev/feature-flags",
+ "ambari-dev/verifying-release-candidate"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Ambari Plugin Contributions",
+ "link": {
+ "type": "doc",
+ "id": "ambari-plugin-contribution/index"
+ },
+ "items": [
+ {
+ "type": "category",
+ "label": "Ambari SCOM Management Pack",
+ "link": {
+ "type": "doc",
+ "id": "ambari-plugin-contribution/scom/index"
+ },
+ "items": [
+ "ambari-plugin-contribution/scom/installation"
+ ]
+ },
+ "ambari-plugin-contribution/step-by-step"
+ ]
+ },
+ "ambari-alerts"
+ ]
+}
diff --git a/versions.json b/versions.json
new file mode 100644
index 0000000..1d73b34
--- /dev/null
+++ b/versions.json
@@ -0,0 +1,4 @@
+[
+ "2.7.6",
+ "2.7.5"
+]
diff --git a/yarn.lock b/yarn.lock
new file mode 100644
index 0000000..ce5295a
--- /dev/null
+++ b/yarn.lock
@@ -0,0 +1,9207 @@
+# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
+# yarn lockfile v1
+
+
+"@algolia/autocomplete-core@1.7.1":
+ version "1.7.1"
+ resolved "https://registry.yarnpkg.com/@algolia/autocomplete-core/-/autocomplete-core-1.7.1.tgz#025538b8a9564a9f3dd5bcf8a236d6951c76c7d1"
+ integrity sha512-eiZw+fxMzNQn01S8dA/hcCpoWCOCwcIIEUtHHdzN5TGB3IpzLbuhqFeTfh2OUhhgkE8Uo17+wH+QJ/wYyQmmzg==
+ dependencies:
+ "@algolia/autocomplete-shared" "1.7.1"
+
+"@algolia/autocomplete-preset-algolia@1.7.1":
+ version "1.7.1"
+ resolved "https://registry.yarnpkg.com/@algolia/autocomplete-preset-algolia/-/autocomplete-preset-algolia-1.7.1.tgz#7dadc5607097766478014ae2e9e1c9c4b3f957c8"
+ integrity sha512-pJwmIxeJCymU1M6cGujnaIYcY3QPOVYZOXhFkWVM7IxKzy272BwCvMFMyc5NpG/QmiObBxjo7myd060OeTNJXg==
+ dependencies:
+ "@algolia/autocomplete-shared" "1.7.1"
+
+"@algolia/autocomplete-shared@1.7.1":
+ version "1.7.1"
+ resolved "https://registry.yarnpkg.com/@algolia/autocomplete-shared/-/autocomplete-shared-1.7.1.tgz#95c3a0b4b78858fed730cf9c755b7d1cd0c82c74"
+ integrity sha512-eTmGVqY3GeyBTT8IWiB2K5EuURAqhnumfktAEoHxfDY2o7vg2rSnO16ZtIG0fMgt3py28Vwgq42/bVEuaQV7pg==
+
+"@algolia/cache-browser-local-storage@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/cache-browser-local-storage/-/cache-browser-local-storage-4.14.2.tgz#d5b1b90130ca87c6321de876e167df9ec6524936"
+ integrity sha512-FRweBkK/ywO+GKYfAWbrepewQsPTIEirhi1BdykX9mxvBPtGNKccYAxvGdDCumU1jL4r3cayio4psfzKMejBlA==
+ dependencies:
+ "@algolia/cache-common" "4.14.2"
+
+"@algolia/cache-common@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/cache-common/-/cache-common-4.14.2.tgz#b946b6103c922f0c06006fb6929163ed2c67d598"
+ integrity sha512-SbvAlG9VqNanCErr44q6lEKD2qoK4XtFNx9Qn8FK26ePCI8I9yU7pYB+eM/cZdS9SzQCRJBbHUumVr4bsQ4uxg==
+
+"@algolia/cache-in-memory@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/cache-in-memory/-/cache-in-memory-4.14.2.tgz#88e4a21474f9ac05331c2fa3ceb929684a395a24"
+ integrity sha512-HrOukWoop9XB/VFojPv1R5SVXowgI56T9pmezd/djh2JnVN/vXswhXV51RKy4nCpqxyHt/aGFSq2qkDvj6KiuQ==
+ dependencies:
+ "@algolia/cache-common" "4.14.2"
+
+"@algolia/client-account@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/client-account/-/client-account-4.14.2.tgz#b76ac1ba9ea71e8c3f77a1805b48350dc0728a16"
+ integrity sha512-WHtriQqGyibbb/Rx71YY43T0cXqyelEU0lB2QMBRXvD2X0iyeGl4qMxocgEIcbHyK7uqE7hKgjT8aBrHqhgc1w==
+ dependencies:
+ "@algolia/client-common" "4.14.2"
+ "@algolia/client-search" "4.14.2"
+ "@algolia/transporter" "4.14.2"
+
+"@algolia/client-analytics@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/client-analytics/-/client-analytics-4.14.2.tgz#ca04dcaf9a78ee5c92c5cb5e9c74cf031eb2f1fb"
+ integrity sha512-yBvBv2mw+HX5a+aeR0dkvUbFZsiC4FKSnfqk9rrfX+QrlNOKEhCG0tJzjiOggRW4EcNqRmaTULIYvIzQVL2KYQ==
+ dependencies:
+ "@algolia/client-common" "4.14.2"
+ "@algolia/client-search" "4.14.2"
+ "@algolia/requester-common" "4.14.2"
+ "@algolia/transporter" "4.14.2"
+
+"@algolia/client-common@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/client-common/-/client-common-4.14.2.tgz#e1324e167ffa8af60f3e8bcd122110fd0bfd1300"
+ integrity sha512-43o4fslNLcktgtDMVaT5XwlzsDPzlqvqesRi4MjQz2x4/Sxm7zYg5LRYFol1BIhG6EwxKvSUq8HcC/KxJu3J0Q==
+ dependencies:
+ "@algolia/requester-common" "4.14.2"
+ "@algolia/transporter" "4.14.2"
+
+"@algolia/client-personalization@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/client-personalization/-/client-personalization-4.14.2.tgz#656bbb6157a3dd1a4be7de65e457fda136c404ec"
+ integrity sha512-ACCoLi0cL8CBZ1W/2juehSltrw2iqsQBnfiu/Rbl9W2yE6o2ZUb97+sqN/jBqYNQBS+o0ekTMKNkQjHHAcEXNw==
+ dependencies:
+ "@algolia/client-common" "4.14.2"
+ "@algolia/requester-common" "4.14.2"
+ "@algolia/transporter" "4.14.2"
+
+"@algolia/client-search@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/client-search/-/client-search-4.14.2.tgz#357bdb7e640163f0e33bad231dfcc21f67dc2e92"
+ integrity sha512-L5zScdOmcZ6NGiVbLKTvP02UbxZ0njd5Vq9nJAmPFtjffUSOGEp11BmD2oMJ5QvARgx2XbX4KzTTNS5ECYIMWw==
+ dependencies:
+ "@algolia/client-common" "4.14.2"
+ "@algolia/requester-common" "4.14.2"
+ "@algolia/transporter" "4.14.2"
+
+"@algolia/events@^4.0.1":
+ version "4.0.1"
+ resolved "https://registry.yarnpkg.com/@algolia/events/-/events-4.0.1.tgz#fd39e7477e7bc703d7f893b556f676c032af3950"
+ integrity sha512-FQzvOCgoFXAbf5Y6mYozw2aj5KCJoA3m4heImceldzPSMbdyS4atVjJzXKMsfX3wnZTFYwkkt8/z8UesLHlSBQ==
+
+"@algolia/logger-common@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/logger-common/-/logger-common-4.14.2.tgz#b74b3a92431f92665519d95942c246793ec390ee"
+ integrity sha512-/JGlYvdV++IcMHBnVFsqEisTiOeEr6cUJtpjz8zc0A9c31JrtLm318Njc72p14Pnkw3A/5lHHh+QxpJ6WFTmsA==
+
+"@algolia/logger-console@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/logger-console/-/logger-console-4.14.2.tgz#ec49cb47408f5811d4792598683923a800abce7b"
+ integrity sha512-8S2PlpdshbkwlLCSAB5f8c91xyc84VM9Ar9EdfE9UmX+NrKNYnWR1maXXVDQQoto07G1Ol/tYFnFVhUZq0xV/g==
+ dependencies:
+ "@algolia/logger-common" "4.14.2"
+
+"@algolia/requester-browser-xhr@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/requester-browser-xhr/-/requester-browser-xhr-4.14.2.tgz#a2cd4d9d8d90d53109cc7f3682dc6ebf20f798f2"
+ integrity sha512-CEh//xYz/WfxHFh7pcMjQNWgpl4wFB85lUMRyVwaDPibNzQRVcV33YS+63fShFWc2+42YEipFGH2iPzlpszmDw==
+ dependencies:
+ "@algolia/requester-common" "4.14.2"
+
+"@algolia/requester-common@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/requester-common/-/requester-common-4.14.2.tgz#bc4e9e5ee16c953c0ecacbfb334a33c30c28b1a1"
+ integrity sha512-73YQsBOKa5fvVV3My7iZHu1sUqmjjfs9TteFWwPwDmnad7T0VTCopttcsM3OjLxZFtBnX61Xxl2T2gmG2O4ehg==
+
+"@algolia/requester-node-http@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/requester-node-http/-/requester-node-http-4.14.2.tgz#7c1223a1785decaab1def64c83dade6bea45e115"
+ integrity sha512-oDbb02kd1o5GTEld4pETlPZLY0e+gOSWjWMJHWTgDXbv9rm/o2cF7japO6Vj1ENnrqWvLBmW1OzV9g6FUFhFXg==
+ dependencies:
+ "@algolia/requester-common" "4.14.2"
+
+"@algolia/transporter@4.14.2":
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/@algolia/transporter/-/transporter-4.14.2.tgz#77c069047fb1a4359ee6a51f51829508e44a1e3d"
+ integrity sha512-t89dfQb2T9MFQHidjHcfhh6iGMNwvuKUvojAj+JsrHAGbuSy7yE4BylhLX6R0Q1xYRoC4Vvv+O5qIw/LdnQfsQ==
+ dependencies:
+ "@algolia/cache-common" "4.14.2"
+ "@algolia/logger-common" "4.14.2"
+ "@algolia/requester-common" "4.14.2"
+
+"@ampproject/remapping@^2.1.0":
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/@ampproject/remapping/-/remapping-2.2.0.tgz#56c133824780de3174aed5ab6834f3026790154d"
+ integrity sha512-qRmjj8nj9qmLTQXXmaR1cck3UXSRMPrbsLJAasZpF+t3riI71BXed5ebIOYwQntykeZuhjsdweEc9BxH5Jc26w==
+ dependencies:
+ "@jridgewell/gen-mapping" "^0.1.0"
+ "@jridgewell/trace-mapping" "^0.3.9"
+
+"@babel/code-frame@^7.0.0", "@babel/code-frame@^7.10.4", "@babel/code-frame@^7.16.0", "@babel/code-frame@^7.18.6", "@babel/code-frame@^7.8.3":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/code-frame/-/code-frame-7.18.6.tgz#3b25d38c89600baa2dcc219edfa88a74eb2c427a"
+ integrity sha512-TDCmlK5eOvH+eH7cdAFlNXeVJqWIQ7gW9tY1GJIpUtFb6CmjVyq2VM3u71bOyR8CRihcCgMUYoDNyLXao3+70Q==
+ dependencies:
+ "@babel/highlight" "^7.18.6"
+
+"@babel/compat-data@^7.17.7", "@babel/compat-data@^7.18.8", "@babel/compat-data@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/compat-data/-/compat-data-7.19.0.tgz#2a592fd89bacb1fcde68de31bee4f2f2dacb0e86"
+ integrity sha512-y5rqgTTPTmaF5e2nVhOxw+Ur9HDJLsWb6U/KpgUzRZEdPfE6VOubXBKLdbcUTijzRptednSBDQbYZBOSqJxpJw==
+
+"@babel/core@7.12.9":
+ version "7.12.9"
+ resolved "https://registry.yarnpkg.com/@babel/core/-/core-7.12.9.tgz#fd450c4ec10cdbb980e2928b7aa7a28484593fc8"
+ integrity sha512-gTXYh3M5wb7FRXQy+FErKFAv90BnlOuNn1QkCK2lREoPAjrQCO49+HVSrFoe5uakFAF5eenS75KbO2vQiLrTMQ==
+ dependencies:
+ "@babel/code-frame" "^7.10.4"
+ "@babel/generator" "^7.12.5"
+ "@babel/helper-module-transforms" "^7.12.1"
+ "@babel/helpers" "^7.12.5"
+ "@babel/parser" "^7.12.7"
+ "@babel/template" "^7.12.7"
+ "@babel/traverse" "^7.12.9"
+ "@babel/types" "^7.12.7"
+ convert-source-map "^1.7.0"
+ debug "^4.1.0"
+ gensync "^1.0.0-beta.1"
+ json5 "^2.1.2"
+ lodash "^4.17.19"
+ resolve "^1.3.2"
+ semver "^5.4.1"
+ source-map "^0.5.0"
+
+"@babel/core@^7.18.5", "@babel/core@^7.18.6":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/core/-/core-7.19.0.tgz#d2f5f4f2033c00de8096be3c9f45772563e150c3"
+ integrity sha512-reM4+U7B9ss148rh2n1Qs9ASS+w94irYXga7c2jaQv9RVzpS7Mv1a9rnYYwuDa45G+DkORt9g6An2k/V4d9LbQ==
+ dependencies:
+ "@ampproject/remapping" "^2.1.0"
+ "@babel/code-frame" "^7.18.6"
+ "@babel/generator" "^7.19.0"
+ "@babel/helper-compilation-targets" "^7.19.0"
+ "@babel/helper-module-transforms" "^7.19.0"
+ "@babel/helpers" "^7.19.0"
+ "@babel/parser" "^7.19.0"
+ "@babel/template" "^7.18.10"
+ "@babel/traverse" "^7.19.0"
+ "@babel/types" "^7.19.0"
+ convert-source-map "^1.7.0"
+ debug "^4.1.0"
+ gensync "^1.0.0-beta.2"
+ json5 "^2.2.1"
+ semver "^6.3.0"
+
+"@babel/generator@^7.12.5", "@babel/generator@^7.18.7", "@babel/generator@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/generator/-/generator-7.19.0.tgz#785596c06425e59334df2ccee63ab166b738419a"
+ integrity sha512-S1ahxf1gZ2dpoiFgA+ohK9DIpz50bJ0CWs7Zlzb54Z4sG8qmdIrGrVqmy1sAtTVRb+9CU6U8VqT9L0Zj7hxHVg==
+ dependencies:
+ "@babel/types" "^7.19.0"
+ "@jridgewell/gen-mapping" "^0.3.2"
+ jsesc "^2.5.1"
+
+"@babel/helper-annotate-as-pure@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.18.6.tgz#eaa49f6f80d5a33f9a5dd2276e6d6e451be0a6bb"
+ integrity sha512-duORpUiYrEpzKIop6iNbjnwKLAKnJ47csTyRACyEmWj0QdUrm5aqNJGHSSEQSUAvNW0ojX0dOmK9dZduvkfeXA==
+ dependencies:
+ "@babel/types" "^7.18.6"
+
+"@babel/helper-builder-binary-assignment-operator-visitor@^7.18.6":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/helper-builder-binary-assignment-operator-visitor/-/helper-builder-binary-assignment-operator-visitor-7.18.9.tgz#acd4edfd7a566d1d51ea975dff38fd52906981bb"
+ integrity sha512-yFQ0YCHoIqarl8BCRwBL8ulYUaZpz3bNsA7oFepAzee+8/+ImtADXNOmO5vJvsPff3qi+hvpkY/NYBTrBQgdNw==
+ dependencies:
+ "@babel/helper-explode-assignable-expression" "^7.18.6"
+ "@babel/types" "^7.18.9"
+
+"@babel/helper-compilation-targets@^7.17.7", "@babel/helper-compilation-targets@^7.18.9", "@babel/helper-compilation-targets@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helper-compilation-targets/-/helper-compilation-targets-7.19.0.tgz#537ec8339d53e806ed422f1e06c8f17d55b96bb0"
+ integrity sha512-Ai5bNWXIvwDvWM7njqsG3feMlL9hCVQsPYXodsZyLwshYkZVJt59Gftau4VrE8S9IT9asd2uSP1hG6wCNw+sXA==
+ dependencies:
+ "@babel/compat-data" "^7.19.0"
+ "@babel/helper-validator-option" "^7.18.6"
+ browserslist "^4.20.2"
+ semver "^6.3.0"
+
+"@babel/helper-create-class-features-plugin@^7.18.6", "@babel/helper-create-class-features-plugin@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.19.0.tgz#bfd6904620df4e46470bae4850d66be1054c404b"
+ integrity sha512-NRz8DwF4jT3UfrmUoZjd0Uph9HQnP30t7Ash+weACcyNkiYTywpIjDBgReJMKgr+n86sn2nPVVmJ28Dm053Kqw==
+ dependencies:
+ "@babel/helper-annotate-as-pure" "^7.18.6"
+ "@babel/helper-environment-visitor" "^7.18.9"
+ "@babel/helper-function-name" "^7.19.0"
+ "@babel/helper-member-expression-to-functions" "^7.18.9"
+ "@babel/helper-optimise-call-expression" "^7.18.6"
+ "@babel/helper-replace-supers" "^7.18.9"
+ "@babel/helper-split-export-declaration" "^7.18.6"
+
+"@babel/helper-create-regexp-features-plugin@^7.18.6", "@babel/helper-create-regexp-features-plugin@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helper-create-regexp-features-plugin/-/helper-create-regexp-features-plugin-7.19.0.tgz#7976aca61c0984202baca73d84e2337a5424a41b"
+ integrity sha512-htnV+mHX32DF81amCDrwIDr8nrp1PTm+3wfBN9/v8QJOLEioOCOG7qNyq0nHeFiWbT3Eb7gsPwEmV64UCQ1jzw==
+ dependencies:
+ "@babel/helper-annotate-as-pure" "^7.18.6"
+ regexpu-core "^5.1.0"
+
+"@babel/helper-define-polyfill-provider@^0.3.2", "@babel/helper-define-polyfill-provider@^0.3.3":
+ version "0.3.3"
+ resolved "https://registry.yarnpkg.com/@babel/helper-define-polyfill-provider/-/helper-define-polyfill-provider-0.3.3.tgz#8612e55be5d51f0cd1f36b4a5a83924e89884b7a"
+ integrity sha512-z5aQKU4IzbqCC1XH0nAqfsFLMVSo22SBKUc0BxGrLkolTdPTructy0ToNnlO2zA4j9Q/7pjMZf0DSY+DSTYzww==
+ dependencies:
+ "@babel/helper-compilation-targets" "^7.17.7"
+ "@babel/helper-plugin-utils" "^7.16.7"
+ debug "^4.1.1"
+ lodash.debounce "^4.0.8"
+ resolve "^1.14.2"
+ semver "^6.1.2"
+
+"@babel/helper-environment-visitor@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/helper-environment-visitor/-/helper-environment-visitor-7.18.9.tgz#0c0cee9b35d2ca190478756865bb3528422f51be"
+ integrity sha512-3r/aACDJ3fhQ/EVgFy0hpj8oHyHpQc+LPtJoY9SzTThAsStm4Ptegq92vqKoE3vD706ZVFWITnMnxucw+S9Ipg==
+
+"@babel/helper-explode-assignable-expression@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-explode-assignable-expression/-/helper-explode-assignable-expression-7.18.6.tgz#41f8228ef0a6f1a036b8dfdfec7ce94f9a6bc096"
+ integrity sha512-eyAYAsQmB80jNfg4baAtLeWAQHfHFiR483rzFK+BhETlGZaQC9bsfrugfXDCbRHLQbIA7U5NxhhOxN7p/dWIcg==
+ dependencies:
+ "@babel/types" "^7.18.6"
+
+"@babel/helper-function-name@^7.18.9", "@babel/helper-function-name@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helper-function-name/-/helper-function-name-7.19.0.tgz#941574ed5390682e872e52d3f38ce9d1bef4648c"
+ integrity sha512-WAwHBINyrpqywkUH0nTnNgI5ina5TFn85HKS0pbPDfxFfhyR/aNQEn4hGi1P1JyT//I0t4OgXUlofzWILRvS5w==
+ dependencies:
+ "@babel/template" "^7.18.10"
+ "@babel/types" "^7.19.0"
+
+"@babel/helper-hoist-variables@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-hoist-variables/-/helper-hoist-variables-7.18.6.tgz#d4d2c8fb4baeaa5c68b99cc8245c56554f926678"
+ integrity sha512-UlJQPkFqFULIcyW5sbzgbkxn2FKRgwWiRexcuaR8RNJRy8+LLveqPjwZV/bwrLZCN0eUHD/x8D0heK1ozuoo6Q==
+ dependencies:
+ "@babel/types" "^7.18.6"
+
+"@babel/helper-member-expression-to-functions@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.18.9.tgz#1531661e8375af843ad37ac692c132841e2fd815"
+ integrity sha512-RxifAh2ZoVU67PyKIO4AMi1wTenGfMR/O/ae0CCRqwgBAt5v7xjdtRw7UoSbsreKrQn5t7r89eruK/9JjYHuDg==
+ dependencies:
+ "@babel/types" "^7.18.9"
+
+"@babel/helper-module-imports@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-module-imports/-/helper-module-imports-7.18.6.tgz#1e3ebdbbd08aad1437b428c50204db13c5a3ca6e"
+ integrity sha512-0NFvs3VkuSYbFi1x2Vd6tKrywq+z/cLeYC/RJNFrIX/30Bf5aiGYbtvGXolEktzJH8o5E5KJ3tT+nkxuuZFVlA==
+ dependencies:
+ "@babel/types" "^7.18.6"
+
+"@babel/helper-module-transforms@^7.12.1", "@babel/helper-module-transforms@^7.18.6", "@babel/helper-module-transforms@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helper-module-transforms/-/helper-module-transforms-7.19.0.tgz#309b230f04e22c58c6a2c0c0c7e50b216d350c30"
+ integrity sha512-3HBZ377Fe14RbLIA+ac3sY4PTgpxHVkFrESaWhoI5PuyXPBBX8+C34qblV9G89ZtycGJCmCI/Ut+VUDK4bltNQ==
+ dependencies:
+ "@babel/helper-environment-visitor" "^7.18.9"
+ "@babel/helper-module-imports" "^7.18.6"
+ "@babel/helper-simple-access" "^7.18.6"
+ "@babel/helper-split-export-declaration" "^7.18.6"
+ "@babel/helper-validator-identifier" "^7.18.6"
+ "@babel/template" "^7.18.10"
+ "@babel/traverse" "^7.19.0"
+ "@babel/types" "^7.19.0"
+
+"@babel/helper-optimise-call-expression@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.18.6.tgz#9369aa943ee7da47edab2cb4e838acf09d290ffe"
+ integrity sha512-HP59oD9/fEHQkdcbgFCnbmgH5vIQTJbxh2yf+CdM89/glUNnuzr87Q8GIjGEnOktTROemO0Pe0iPAYbqZuOUiA==
+ dependencies:
+ "@babel/types" "^7.18.6"
+
+"@babel/helper-plugin-utils@7.10.4":
+ version "7.10.4"
+ resolved "https://registry.yarnpkg.com/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.4.tgz#2f75a831269d4f677de49986dff59927533cf375"
+ integrity sha512-O4KCvQA6lLiMU9l2eawBPMf1xPP8xPfB3iEQw150hOVTqj/rfXz0ThTb4HEzqQfs2Bmo5Ay8BzxfzVtBrr9dVg==
+
+"@babel/helper-plugin-utils@^7.0.0", "@babel/helper-plugin-utils@^7.10.4", "@babel/helper-plugin-utils@^7.12.13", "@babel/helper-plugin-utils@^7.14.5", "@babel/helper-plugin-utils@^7.16.7", "@babel/helper-plugin-utils@^7.18.6", "@babel/helper-plugin-utils@^7.18.9", "@babel/helper-plugin-utils@^7.19.0", "@babel/helper-plugin-utils@^7.8.0", "@babel/helper-plugin-utils@^7.8.3":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helper-plugin-utils/-/helper-plugin-utils-7.19.0.tgz#4796bb14961521f0f8715990bee2fb6e51ce21bf"
+ integrity sha512-40Ryx7I8mT+0gaNxm8JGTZFUITNqdLAgdg0hXzeVZxVD6nFsdhQvip6v8dqkRHzsz1VFpFAaOCHNn0vKBL7Czw==
+
+"@babel/helper-remap-async-to-generator@^7.18.6", "@babel/helper-remap-async-to-generator@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/helper-remap-async-to-generator/-/helper-remap-async-to-generator-7.18.9.tgz#997458a0e3357080e54e1d79ec347f8a8cd28519"
+ integrity sha512-dI7q50YKd8BAv3VEfgg7PS7yD3Rtbi2J1XMXaalXO0W0164hYLnh8zpjRS0mte9MfVp/tltvr/cfdXPvJr1opA==
+ dependencies:
+ "@babel/helper-annotate-as-pure" "^7.18.6"
+ "@babel/helper-environment-visitor" "^7.18.9"
+ "@babel/helper-wrap-function" "^7.18.9"
+ "@babel/types" "^7.18.9"
+
+"@babel/helper-replace-supers@^7.18.6", "@babel/helper-replace-supers@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/helper-replace-supers/-/helper-replace-supers-7.18.9.tgz#1092e002feca980fbbb0bd4d51b74a65c6a500e6"
+ integrity sha512-dNsWibVI4lNT6HiuOIBr1oyxo40HvIVmbwPUm3XZ7wMh4k2WxrxTqZwSqw/eEmXDS9np0ey5M2bz9tBmO9c+YQ==
+ dependencies:
+ "@babel/helper-environment-visitor" "^7.18.9"
+ "@babel/helper-member-expression-to-functions" "^7.18.9"
+ "@babel/helper-optimise-call-expression" "^7.18.6"
+ "@babel/traverse" "^7.18.9"
+ "@babel/types" "^7.18.9"
+
+"@babel/helper-simple-access@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-simple-access/-/helper-simple-access-7.18.6.tgz#d6d8f51f4ac2978068df934b569f08f29788c7ea"
+ integrity sha512-iNpIgTgyAvDQpDj76POqg+YEt8fPxx3yaNBg3S30dxNKm2SWfYhD0TGrK/Eu9wHpUW63VQU894TsTg+GLbUa1g==
+ dependencies:
+ "@babel/types" "^7.18.6"
+
+"@babel/helper-skip-transparent-expression-wrappers@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/helper-skip-transparent-expression-wrappers/-/helper-skip-transparent-expression-wrappers-7.18.9.tgz#778d87b3a758d90b471e7b9918f34a9a02eb5818"
+ integrity sha512-imytd2gHi3cJPsybLRbmFrF7u5BIEuI2cNheyKi3/iOBC63kNn3q8Crn2xVuESli0aM4KYsyEqKyS7lFL8YVtw==
+ dependencies:
+ "@babel/types" "^7.18.9"
+
+"@babel/helper-split-export-declaration@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.18.6.tgz#7367949bc75b20c6d5a5d4a97bba2824ae8ef075"
+ integrity sha512-bde1etTx6ZyTmobl9LLMMQsaizFVZrquTEHOqKeQESMKo4PlObf+8+JA25ZsIpZhT/WEd39+vOdLXAFG/nELpA==
+ dependencies:
+ "@babel/types" "^7.18.6"
+
+"@babel/helper-string-parser@^7.18.10":
+ version "7.18.10"
+ resolved "https://registry.yarnpkg.com/@babel/helper-string-parser/-/helper-string-parser-7.18.10.tgz#181f22d28ebe1b3857fa575f5c290b1aaf659b56"
+ integrity sha512-XtIfWmeNY3i4t7t4D2t02q50HvqHybPqW2ki1kosnvWCwuCMeo81Jf0gwr85jy/neUdg5XDdeFE/80DXiO+njw==
+
+"@babel/helper-validator-identifier@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.18.6.tgz#9c97e30d31b2b8c72a1d08984f2ca9b574d7a076"
+ integrity sha512-MmetCkz9ej86nJQV+sFCxoGGrUbU3q02kgLciwkrt9QqEB7cP39oKEY0PakknEO0Gu20SskMRi+AYZ3b1TpN9g==
+
+"@babel/helper-validator-option@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/helper-validator-option/-/helper-validator-option-7.18.6.tgz#bf0d2b5a509b1f336099e4ff36e1a63aa5db4db8"
+ integrity sha512-XO7gESt5ouv/LRJdrVjkShckw6STTaB7l9BrpBaAHDeF5YZT+01PCwmR0SJHnkW6i8OwW/EVWRShfi4j2x+KQw==
+
+"@babel/helper-wrap-function@^7.18.9":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helper-wrap-function/-/helper-wrap-function-7.19.0.tgz#89f18335cff1152373222f76a4b37799636ae8b1"
+ integrity sha512-txX8aN8CZyYGTwcLhlk87KRqncAzhh5TpQamZUa0/u3an36NtDpUP6bQgBCBcLeBs09R/OwQu3OjK0k/HwfNDg==
+ dependencies:
+ "@babel/helper-function-name" "^7.19.0"
+ "@babel/template" "^7.18.10"
+ "@babel/traverse" "^7.19.0"
+ "@babel/types" "^7.19.0"
+
+"@babel/helpers@^7.12.5", "@babel/helpers@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/helpers/-/helpers-7.19.0.tgz#f30534657faf246ae96551d88dd31e9d1fa1fc18"
+ integrity sha512-DRBCKGwIEdqY3+rPJgG/dKfQy9+08rHIAJx8q2p+HSWP87s2HCrQmaAMMyMll2kIXKCW0cO1RdQskx15Xakftg==
+ dependencies:
+ "@babel/template" "^7.18.10"
+ "@babel/traverse" "^7.19.0"
+ "@babel/types" "^7.19.0"
+
+"@babel/highlight@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/highlight/-/highlight-7.18.6.tgz#81158601e93e2563795adcbfbdf5d64be3f2ecdf"
+ integrity sha512-u7stbOuYjaPezCuLj29hNW1v64M2Md2qupEKP1fHc7WdOA3DgLh37suiSrZYY7haUB7iBeQZ9P1uiRF359do3g==
+ dependencies:
+ "@babel/helper-validator-identifier" "^7.18.6"
+ chalk "^2.0.0"
+ js-tokens "^4.0.0"
+
+"@babel/parser@^7.12.7", "@babel/parser@^7.18.10", "@babel/parser@^7.18.8", "@babel/parser@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/parser/-/parser-7.19.0.tgz#497fcafb1d5b61376959c1c338745ef0577aa02c"
+ integrity sha512-74bEXKX2h+8rrfQUfsBfuZZHzsEs6Eql4pqy/T4Nn6Y9wNPggQOqD6z6pn5Bl8ZfysKouFZT/UXEH94ummEeQw==
+
+"@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression/-/plugin-bugfix-safari-id-destructuring-collision-in-function-expression-7.18.6.tgz#da5b8f9a580acdfbe53494dba45ea389fb09a4d2"
+ integrity sha512-Dgxsyg54Fx1d4Nge8UnvTrED63vrwOdPmyvPzlNN/boaliRP54pm3pGzZD1SJUwrBA+Cs/xdG8kXX6Mn/RfISQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-bugfix-v8-spread-parameters-in-optional-chaining@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-bugfix-v8-spread-parameters-in-optional-chaining/-/plugin-bugfix-v8-spread-parameters-in-optional-chaining-7.18.9.tgz#a11af19aa373d68d561f08e0a57242350ed0ec50"
+ integrity sha512-AHrP9jadvH7qlOj6PINbgSuphjQUAK7AOT7DPjBo9EHoLhQTnnK5u45e1Hd4DbSQEO9nqPWtQ89r+XEOWFScKg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+ "@babel/helper-skip-transparent-expression-wrappers" "^7.18.9"
+ "@babel/plugin-proposal-optional-chaining" "^7.18.9"
+
+"@babel/plugin-proposal-async-generator-functions@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-async-generator-functions/-/plugin-proposal-async-generator-functions-7.19.0.tgz#cf5740194f170467df20581712400487efc79ff1"
+ integrity sha512-nhEByMUTx3uZueJ/QkJuSlCfN4FGg+xy+vRsfGQGzSauq5ks2Deid2+05Q3KhfaUjvec1IGhw/Zm3cFm8JigTQ==
+ dependencies:
+ "@babel/helper-environment-visitor" "^7.18.9"
+ "@babel/helper-plugin-utils" "^7.19.0"
+ "@babel/helper-remap-async-to-generator" "^7.18.9"
+ "@babel/plugin-syntax-async-generators" "^7.8.4"
+
+"@babel/plugin-proposal-class-properties@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-class-properties/-/plugin-proposal-class-properties-7.18.6.tgz#b110f59741895f7ec21a6fff696ec46265c446a3"
+ integrity sha512-cumfXOF0+nzZrrN8Rf0t7M+tF6sZc7vhQwYQck9q1/5w2OExlD+b4v4RpMJFaV1Z7WcDRgO6FqvxqxGlwo+RHQ==
+ dependencies:
+ "@babel/helper-create-class-features-plugin" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-proposal-class-static-block@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-class-static-block/-/plugin-proposal-class-static-block-7.18.6.tgz#8aa81d403ab72d3962fc06c26e222dacfc9b9020"
+ integrity sha512-+I3oIiNxrCpup3Gi8n5IGMwj0gOCAjcJUSQEcotNnCCPMEnixawOQ+KeJPlgfjzx+FKQ1QSyZOWe7wmoJp7vhw==
+ dependencies:
+ "@babel/helper-create-class-features-plugin" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/plugin-syntax-class-static-block" "^7.14.5"
+
+"@babel/plugin-proposal-dynamic-import@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-dynamic-import/-/plugin-proposal-dynamic-import-7.18.6.tgz#72bcf8d408799f547d759298c3c27c7e7faa4d94"
+ integrity sha512-1auuwmK+Rz13SJj36R+jqFPMJWyKEDd7lLSdOj4oJK0UTgGueSAtkrCvz9ewmgyU/P941Rv2fQwZJN8s6QruXw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/plugin-syntax-dynamic-import" "^7.8.3"
+
+"@babel/plugin-proposal-export-namespace-from@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-export-namespace-from/-/plugin-proposal-export-namespace-from-7.18.9.tgz#5f7313ab348cdb19d590145f9247540e94761203"
+ integrity sha512-k1NtHyOMvlDDFeb9G5PhUXuGj8m/wiwojgQVEhJ/fsVsMCpLyOP4h0uGEjYJKrRI+EVPlb5Jk+Gt9P97lOGwtA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+ "@babel/plugin-syntax-export-namespace-from" "^7.8.3"
+
+"@babel/plugin-proposal-json-strings@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-json-strings/-/plugin-proposal-json-strings-7.18.6.tgz#7e8788c1811c393aff762817e7dbf1ebd0c05f0b"
+ integrity sha512-lr1peyn9kOdbYc0xr0OdHTZ5FMqS6Di+H0Fz2I/JwMzGmzJETNeOFq2pBySw6X/KFL5EWDjlJuMsUGRFb8fQgQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/plugin-syntax-json-strings" "^7.8.3"
+
+"@babel/plugin-proposal-logical-assignment-operators@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-logical-assignment-operators/-/plugin-proposal-logical-assignment-operators-7.18.9.tgz#8148cbb350483bf6220af06fa6db3690e14b2e23"
+ integrity sha512-128YbMpjCrP35IOExw2Fq+x55LMP42DzhOhX2aNNIdI9avSWl2PI0yuBWarr3RYpZBSPtabfadkH2yeRiMD61Q==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+ "@babel/plugin-syntax-logical-assignment-operators" "^7.10.4"
+
+"@babel/plugin-proposal-nullish-coalescing-operator@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-nullish-coalescing-operator/-/plugin-proposal-nullish-coalescing-operator-7.18.6.tgz#fdd940a99a740e577d6c753ab6fbb43fdb9467e1"
+ integrity sha512-wQxQzxYeJqHcfppzBDnm1yAY0jSRkUXR2z8RePZYrKwMKgMlE8+Z6LUno+bd6LvbGh8Gltvy74+9pIYkr+XkKA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/plugin-syntax-nullish-coalescing-operator" "^7.8.3"
+
+"@babel/plugin-proposal-numeric-separator@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-numeric-separator/-/plugin-proposal-numeric-separator-7.18.6.tgz#899b14fbafe87f053d2c5ff05b36029c62e13c75"
+ integrity sha512-ozlZFogPqoLm8WBr5Z8UckIoE4YQ5KESVcNudyXOR8uqIkliTEgJ3RoketfG6pmzLdeZF0H/wjE9/cCEitBl7Q==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/plugin-syntax-numeric-separator" "^7.10.4"
+
+"@babel/plugin-proposal-object-rest-spread@7.12.1":
+ version "7.12.1"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-object-rest-spread/-/plugin-proposal-object-rest-spread-7.12.1.tgz#def9bd03cea0f9b72283dac0ec22d289c7691069"
+ integrity sha512-s6SowJIjzlhx8o7lsFx5zmY4At6CTtDvgNQDdPzkBQucle58A6b/TTeEBYtyDgmcXjUTM+vE8YOGHZzzbc/ioA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.10.4"
+ "@babel/plugin-syntax-object-rest-spread" "^7.8.0"
+ "@babel/plugin-transform-parameters" "^7.12.1"
+
+"@babel/plugin-proposal-object-rest-spread@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-object-rest-spread/-/plugin-proposal-object-rest-spread-7.18.9.tgz#f9434f6beb2c8cae9dfcf97d2a5941bbbf9ad4e7"
+ integrity sha512-kDDHQ5rflIeY5xl69CEqGEZ0KY369ehsCIEbTGb4siHG5BE9sga/T0r0OUwyZNLMmZE79E1kbsqAjwFCW4ds6Q==
+ dependencies:
+ "@babel/compat-data" "^7.18.8"
+ "@babel/helper-compilation-targets" "^7.18.9"
+ "@babel/helper-plugin-utils" "^7.18.9"
+ "@babel/plugin-syntax-object-rest-spread" "^7.8.3"
+ "@babel/plugin-transform-parameters" "^7.18.8"
+
+"@babel/plugin-proposal-optional-catch-binding@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-optional-catch-binding/-/plugin-proposal-optional-catch-binding-7.18.6.tgz#f9400d0e6a3ea93ba9ef70b09e72dd6da638a2cb"
+ integrity sha512-Q40HEhs9DJQyaZfUjjn6vE8Cv4GmMHCYuMGIWUnlxH6400VGxOuwWsPt4FxXxJkC/5eOzgn0z21M9gMT4MOhbw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/plugin-syntax-optional-catch-binding" "^7.8.3"
+
+"@babel/plugin-proposal-optional-chaining@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-optional-chaining/-/plugin-proposal-optional-chaining-7.18.9.tgz#e8e8fe0723f2563960e4bf5e9690933691915993"
+ integrity sha512-v5nwt4IqBXihxGsW2QmCWMDS3B3bzGIk/EQVZz2ei7f3NJl8NzAJVvUmpDW5q1CRNY+Beb/k58UAH1Km1N411w==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+ "@babel/helper-skip-transparent-expression-wrappers" "^7.18.9"
+ "@babel/plugin-syntax-optional-chaining" "^7.8.3"
+
+"@babel/plugin-proposal-private-methods@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-private-methods/-/plugin-proposal-private-methods-7.18.6.tgz#5209de7d213457548a98436fa2882f52f4be6bea"
+ integrity sha512-nutsvktDItsNn4rpGItSNV2sz1XwS+nfU0Rg8aCx3W3NOKVzdMjJRu0O5OkgDp3ZGICSTbgRpxZoWsxoKRvbeA==
+ dependencies:
+ "@babel/helper-create-class-features-plugin" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-proposal-private-property-in-object@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-private-property-in-object/-/plugin-proposal-private-property-in-object-7.18.6.tgz#a64137b232f0aca3733a67eb1a144c192389c503"
+ integrity sha512-9Rysx7FOctvT5ouj5JODjAFAkgGoudQuLPamZb0v1TGLpapdNaftzifU8NTWQm0IRjqoYypdrSmyWgkocDQ8Dw==
+ dependencies:
+ "@babel/helper-annotate-as-pure" "^7.18.6"
+ "@babel/helper-create-class-features-plugin" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/plugin-syntax-private-property-in-object" "^7.14.5"
+
+"@babel/plugin-proposal-unicode-property-regex@^7.18.6", "@babel/plugin-proposal-unicode-property-regex@^7.4.4":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-proposal-unicode-property-regex/-/plugin-proposal-unicode-property-regex-7.18.6.tgz#af613d2cd5e643643b65cded64207b15c85cb78e"
+ integrity sha512-2BShG/d5yoZyXZfVePH91urL5wTG6ASZU9M4o03lKK8u8UW1y08OMttBSOADTcJrnPMpvDXRG3G8fyLh4ovs8w==
+ dependencies:
+ "@babel/helper-create-regexp-features-plugin" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-syntax-async-generators@^7.8.4":
+ version "7.8.4"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-async-generators/-/plugin-syntax-async-generators-7.8.4.tgz#a983fb1aeb2ec3f6ed042a210f640e90e786fe0d"
+ integrity sha512-tycmZxkGfZaxhMRbXlPXuVFpdWlXpir2W4AMhSJgRKzk/eDlIXOhb2LHWoLpDF7TEHylV5zNhykX6KAgHJmTNw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.0"
+
+"@babel/plugin-syntax-class-properties@^7.12.13":
+ version "7.12.13"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-class-properties/-/plugin-syntax-class-properties-7.12.13.tgz#b5c987274c4a3a82b89714796931a6b53544ae10"
+ integrity sha512-fm4idjKla0YahUNgFNLCB0qySdsoPiZP3iQE3rky0mBUtMZ23yDJ9SJdg6dXTSDnulOVqiF3Hgr9nbXvXTQZYA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.12.13"
+
+"@babel/plugin-syntax-class-static-block@^7.14.5":
+ version "7.14.5"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-class-static-block/-/plugin-syntax-class-static-block-7.14.5.tgz#195df89b146b4b78b3bf897fd7a257c84659d406"
+ integrity sha512-b+YyPmr6ldyNnM6sqYeMWE+bgJcJpO6yS4QD7ymxgH34GBPNDM/THBh8iunyvKIZztiwLH4CJZ0RxTk9emgpjw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.14.5"
+
+"@babel/plugin-syntax-dynamic-import@^7.8.3":
+ version "7.8.3"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-dynamic-import/-/plugin-syntax-dynamic-import-7.8.3.tgz#62bf98b2da3cd21d626154fc96ee5b3cb68eacb3"
+ integrity sha512-5gdGbFon+PszYzqs83S3E5mpi7/y/8M9eC90MRTZfduQOYW76ig6SOSPNe41IG5LoP3FGBn2N0RjVDSQiS94kQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.0"
+
+"@babel/plugin-syntax-export-namespace-from@^7.8.3":
+ version "7.8.3"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-export-namespace-from/-/plugin-syntax-export-namespace-from-7.8.3.tgz#028964a9ba80dbc094c915c487ad7c4e7a66465a"
+ integrity sha512-MXf5laXo6c1IbEbegDmzGPwGNTsHZmEy6QGznu5Sh2UCWvueywb2ee+CCE4zQiZstxU9BMoQO9i6zUFSY0Kj0Q==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.3"
+
+"@babel/plugin-syntax-import-assertions@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-import-assertions/-/plugin-syntax-import-assertions-7.18.6.tgz#cd6190500a4fa2fe31990a963ffab4b63e4505e4"
+ integrity sha512-/DU3RXad9+bZwrgWJQKbr39gYbJpLJHezqEzRzi/BHRlJ9zsQb4CK2CA/5apllXNomwA1qHwzvHl+AdEmC5krQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-syntax-json-strings@^7.8.3":
+ version "7.8.3"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-json-strings/-/plugin-syntax-json-strings-7.8.3.tgz#01ca21b668cd8218c9e640cb6dd88c5412b2c96a"
+ integrity sha512-lY6kdGpWHvjoe2vk4WrAapEuBR69EMxZl+RoGRhrFGNYVK8mOPAW8VfbT/ZgrFbXlDNiiaxQnAtgVCZ6jv30EA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.0"
+
+"@babel/plugin-syntax-jsx@7.12.1":
+ version "7.12.1"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.12.1.tgz#9d9d357cc818aa7ae7935917c1257f67677a0926"
+ integrity sha512-1yRi7yAtB0ETgxdY9ti/p2TivUxJkTdhu/ZbF9MshVGqOx1TdB3b7xCXs49Fupgg50N45KcAsRP/ZqWjs9SRjg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.10.4"
+
+"@babel/plugin-syntax-jsx@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.18.6.tgz#a8feef63b010150abd97f1649ec296e849943ca0"
+ integrity sha512-6mmljtAedFGTWu2p/8WIORGwy+61PLgOMPOdazc7YoJ9ZCWUyFy3A6CpPkRKLKD1ToAesxX8KGEViAiLo9N+7Q==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-syntax-logical-assignment-operators@^7.10.4":
+ version "7.10.4"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-logical-assignment-operators/-/plugin-syntax-logical-assignment-operators-7.10.4.tgz#ca91ef46303530448b906652bac2e9fe9941f699"
+ integrity sha512-d8waShlpFDinQ5MtvGU9xDAOzKH47+FFoney2baFIoMr952hKOLp1HR7VszoZvOsV/4+RRszNY7D17ba0te0ig==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.10.4"
+
+"@babel/plugin-syntax-nullish-coalescing-operator@^7.8.3":
+ version "7.8.3"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-nullish-coalescing-operator/-/plugin-syntax-nullish-coalescing-operator-7.8.3.tgz#167ed70368886081f74b5c36c65a88c03b66d1a9"
+ integrity sha512-aSff4zPII1u2QD7y+F8oDsz19ew4IGEJg9SVW+bqwpwtfFleiQDMdzA/R+UlWDzfnHFCxxleFT0PMIrR36XLNQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.0"
+
+"@babel/plugin-syntax-numeric-separator@^7.10.4":
+ version "7.10.4"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-numeric-separator/-/plugin-syntax-numeric-separator-7.10.4.tgz#b9b070b3e33570cd9fd07ba7fa91c0dd37b9af97"
+ integrity sha512-9H6YdfkcK/uOnY/K7/aA2xpzaAgkQn37yzWUMRK7OaPOqOpGS1+n0H5hxT9AUw9EsSjPW8SVyMJwYRtWs3X3ug==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.10.4"
+
+"@babel/plugin-syntax-object-rest-spread@7.8.3", "@babel/plugin-syntax-object-rest-spread@^7.8.0", "@babel/plugin-syntax-object-rest-spread@^7.8.3":
+ version "7.8.3"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz#60e225edcbd98a640332a2e72dd3e66f1af55871"
+ integrity sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.0"
+
+"@babel/plugin-syntax-optional-catch-binding@^7.8.3":
+ version "7.8.3"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-optional-catch-binding/-/plugin-syntax-optional-catch-binding-7.8.3.tgz#6111a265bcfb020eb9efd0fdfd7d26402b9ed6c1"
+ integrity sha512-6VPD0Pc1lpTqw0aKoeRTMiB+kWhAoT24PA+ksWSBrFtl5SIRVpZlwN3NNPQjehA2E/91FV3RjLWoVTglWcSV3Q==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.0"
+
+"@babel/plugin-syntax-optional-chaining@^7.8.3":
+ version "7.8.3"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-optional-chaining/-/plugin-syntax-optional-chaining-7.8.3.tgz#4f69c2ab95167e0180cd5336613f8c5788f7d48a"
+ integrity sha512-KoK9ErH1MBlCPxV0VANkXW2/dw4vlbGDrFgz8bmUsBGYkFRcbRwMh6cIJubdPrkxRwuGdtCk0v/wPTKbQgBjkg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.8.0"
+
+"@babel/plugin-syntax-private-property-in-object@^7.14.5":
+ version "7.14.5"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-private-property-in-object/-/plugin-syntax-private-property-in-object-7.14.5.tgz#0dc6671ec0ea22b6e94a1114f857970cd39de1ad"
+ integrity sha512-0wVnp9dxJ72ZUJDV27ZfbSj6iHLoytYZmh3rFcxNnvsJF3ktkzLDZPy/mA17HGsaQT3/DQsWYX1f1QGWkCoVUg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.14.5"
+
+"@babel/plugin-syntax-top-level-await@^7.14.5":
+ version "7.14.5"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-top-level-await/-/plugin-syntax-top-level-await-7.14.5.tgz#c1cfdadc35a646240001f06138247b741c34d94c"
+ integrity sha512-hx++upLv5U1rgYfwe1xBQUhRmU41NEvpUvrp8jkrSCdvGSnM5/qdRMtylJ6PG5OFkBaHkbTAKTnd3/YyESRHFw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.14.5"
+
+"@babel/plugin-syntax-typescript@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-syntax-typescript/-/plugin-syntax-typescript-7.18.6.tgz#1c09cd25795c7c2b8a4ba9ae49394576d4133285"
+ integrity sha512-mAWAuq4rvOepWCBid55JuRNvpTNf2UGVgoz4JV0fXEKolsVZDzsa4NqCef758WZJj/GDu0gVGItjKFiClTAmZA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-arrow-functions@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-arrow-functions/-/plugin-transform-arrow-functions-7.18.6.tgz#19063fcf8771ec7b31d742339dac62433d0611fe"
+ integrity sha512-9S9X9RUefzrsHZmKMbDXxweEH+YlE8JJEuat9FdvW9Qh1cw7W64jELCtWNkPBPX5En45uy28KGvA/AySqUh8CQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-async-to-generator@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-async-to-generator/-/plugin-transform-async-to-generator-7.18.6.tgz#ccda3d1ab9d5ced5265fdb13f1882d5476c71615"
+ integrity sha512-ARE5wZLKnTgPW7/1ftQmSi1CmkqqHo2DNmtztFhvgtOWSDfq0Cq9/9L+KnZNYSNrydBekhW3rwShduf59RoXag==
+ dependencies:
+ "@babel/helper-module-imports" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/helper-remap-async-to-generator" "^7.18.6"
+
+"@babel/plugin-transform-block-scoped-functions@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-block-scoped-functions/-/plugin-transform-block-scoped-functions-7.18.6.tgz#9187bf4ba302635b9d70d986ad70f038726216a8"
+ integrity sha512-ExUcOqpPWnliRcPqves5HJcJOvHvIIWfuS4sroBUenPuMdmW+SMHDakmtS7qOo13sVppmUijqeTv7qqGsvURpQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-block-scoping@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-block-scoping/-/plugin-transform-block-scoping-7.18.9.tgz#f9b7e018ac3f373c81452d6ada8bd5a18928926d"
+ integrity sha512-5sDIJRV1KtQVEbt/EIBwGy4T01uYIo4KRB3VUqzkhrAIOGx7AoctL9+Ux88btY0zXdDyPJ9mW+bg+v+XEkGmtw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-classes@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-classes/-/plugin-transform-classes-7.19.0.tgz#0e61ec257fba409c41372175e7c1e606dc79bb20"
+ integrity sha512-YfeEE9kCjqTS9IitkgfJuxjcEtLUHMqa8yUJ6zdz8vR7hKuo6mOy2C05P0F1tdMmDCeuyidKnlrw/iTppHcr2A==
+ dependencies:
+ "@babel/helper-annotate-as-pure" "^7.18.6"
+ "@babel/helper-compilation-targets" "^7.19.0"
+ "@babel/helper-environment-visitor" "^7.18.9"
+ "@babel/helper-function-name" "^7.19.0"
+ "@babel/helper-optimise-call-expression" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.19.0"
+ "@babel/helper-replace-supers" "^7.18.9"
+ "@babel/helper-split-export-declaration" "^7.18.6"
+ globals "^11.1.0"
+
+"@babel/plugin-transform-computed-properties@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-computed-properties/-/plugin-transform-computed-properties-7.18.9.tgz#2357a8224d402dad623caf6259b611e56aec746e"
+ integrity sha512-+i0ZU1bCDymKakLxn5srGHrsAPRELC2WIbzwjLhHW9SIE1cPYkLCL0NlnXMZaM1vhfgA2+M7hySk42VBvrkBRw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-destructuring@^7.18.13":
+ version "7.18.13"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-destructuring/-/plugin-transform-destructuring-7.18.13.tgz#9e03bc4a94475d62b7f4114938e6c5c33372cbf5"
+ integrity sha512-TodpQ29XekIsex2A+YJPj5ax2plkGa8YYY6mFjCohk/IG9IY42Rtuj1FuDeemfg2ipxIFLzPeA83SIBnlhSIow==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-dotall-regex@^7.18.6", "@babel/plugin-transform-dotall-regex@^7.4.4":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-dotall-regex/-/plugin-transform-dotall-regex-7.18.6.tgz#b286b3e7aae6c7b861e45bed0a2fafd6b1a4fef8"
+ integrity sha512-6S3jpun1eEbAxq7TdjLotAsl4WpQI9DxfkycRcKrjhQYzU87qpXdknpBg/e+TdcMehqGnLFi7tnFUBR02Vq6wg==
+ dependencies:
+ "@babel/helper-create-regexp-features-plugin" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-duplicate-keys@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-duplicate-keys/-/plugin-transform-duplicate-keys-7.18.9.tgz#687f15ee3cdad6d85191eb2a372c4528eaa0ae0e"
+ integrity sha512-d2bmXCtZXYc59/0SanQKbiWINadaJXqtvIQIzd4+hNwkWBgyCd5F/2t1kXoUdvPMrxzPvhK6EMQRROxsue+mfw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-exponentiation-operator@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-exponentiation-operator/-/plugin-transform-exponentiation-operator-7.18.6.tgz#421c705f4521888c65e91fdd1af951bfefd4dacd"
+ integrity sha512-wzEtc0+2c88FVR34aQmiz56dxEkxr2g8DQb/KfaFa1JYXOFVsbhvAonFN6PwVWj++fKmku8NP80plJ5Et4wqHw==
+ dependencies:
+ "@babel/helper-builder-binary-assignment-operator-visitor" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-for-of@^7.18.8":
+ version "7.18.8"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-for-of/-/plugin-transform-for-of-7.18.8.tgz#6ef8a50b244eb6a0bdbad0c7c61877e4e30097c1"
+ integrity sha512-yEfTRnjuskWYo0k1mHUqrVWaZwrdq8AYbfrpqULOJOaucGSp4mNMVps+YtA8byoevxS/urwU75vyhQIxcCgiBQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-function-name@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-function-name/-/plugin-transform-function-name-7.18.9.tgz#cc354f8234e62968946c61a46d6365440fc764e0"
+ integrity sha512-WvIBoRPaJQ5yVHzcnJFor7oS5Ls0PYixlTYE63lCj2RtdQEl15M68FXQlxnG6wdraJIXRdR7KI+hQ7q/9QjrCQ==
+ dependencies:
+ "@babel/helper-compilation-targets" "^7.18.9"
+ "@babel/helper-function-name" "^7.18.9"
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-literals@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-literals/-/plugin-transform-literals-7.18.9.tgz#72796fdbef80e56fba3c6a699d54f0de557444bc"
+ integrity sha512-IFQDSRoTPnrAIrI5zoZv73IFeZu2dhu6irxQjY9rNjTT53VmKg9fenjvoiOWOkJ6mm4jKVPtdMzBY98Fp4Z4cg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-member-expression-literals@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-member-expression-literals/-/plugin-transform-member-expression-literals-7.18.6.tgz#ac9fdc1a118620ac49b7e7a5d2dc177a1bfee88e"
+ integrity sha512-qSF1ihLGO3q+/g48k85tUjD033C29TNTVB2paCwZPVmOsjn9pClvYYrM2VeJpBY2bcNkuny0YUyTNRyRxJ54KA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-modules-amd@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-modules-amd/-/plugin-transform-modules-amd-7.18.6.tgz#8c91f8c5115d2202f277549848874027d7172d21"
+ integrity sha512-Pra5aXsmTsOnjM3IajS8rTaLCy++nGM4v3YR4esk5PCsyg9z8NA5oQLwxzMUtDBd8F+UmVza3VxoAaWCbzH1rg==
+ dependencies:
+ "@babel/helper-module-transforms" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+ babel-plugin-dynamic-import-node "^2.3.3"
+
+"@babel/plugin-transform-modules-commonjs@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-modules-commonjs/-/plugin-transform-modules-commonjs-7.18.6.tgz#afd243afba166cca69892e24a8fd8c9f2ca87883"
+ integrity sha512-Qfv2ZOWikpvmedXQJDSbxNqy7Xr/j2Y8/KfijM0iJyKkBTmWuvCA1yeH1yDM7NJhBW/2aXxeucLj6i80/LAJ/Q==
+ dependencies:
+ "@babel/helper-module-transforms" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/helper-simple-access" "^7.18.6"
+ babel-plugin-dynamic-import-node "^2.3.3"
+
+"@babel/plugin-transform-modules-systemjs@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-modules-systemjs/-/plugin-transform-modules-systemjs-7.19.0.tgz#5f20b471284430f02d9c5059d9b9a16d4b085a1f"
+ integrity sha512-x9aiR0WXAWmOWsqcsnrzGR+ieaTMVyGyffPVA7F8cXAGt/UxefYv6uSHZLkAFChN5M5Iy1+wjE+xJuPt22H39A==
+ dependencies:
+ "@babel/helper-hoist-variables" "^7.18.6"
+ "@babel/helper-module-transforms" "^7.19.0"
+ "@babel/helper-plugin-utils" "^7.19.0"
+ "@babel/helper-validator-identifier" "^7.18.6"
+ babel-plugin-dynamic-import-node "^2.3.3"
+
+"@babel/plugin-transform-modules-umd@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-modules-umd/-/plugin-transform-modules-umd-7.18.6.tgz#81d3832d6034b75b54e62821ba58f28ed0aab4b9"
+ integrity sha512-dcegErExVeXcRqNtkRU/z8WlBLnvD4MRnHgNs3MytRO1Mn1sHRyhbcpYbVMGclAqOjdW+9cfkdZno9dFdfKLfQ==
+ dependencies:
+ "@babel/helper-module-transforms" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-named-capturing-groups-regex@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-named-capturing-groups-regex/-/plugin-transform-named-capturing-groups-regex-7.19.0.tgz#58c52422e4f91a381727faed7d513c89d7f41ada"
+ integrity sha512-HDSuqOQzkU//kfGdiHBt71/hkDTApw4U/cMVgKgX7PqfB3LOaK+2GtCEsBu1dL9CkswDm0Gwehht1dCr421ULQ==
+ dependencies:
+ "@babel/helper-create-regexp-features-plugin" "^7.19.0"
+ "@babel/helper-plugin-utils" "^7.19.0"
+
+"@babel/plugin-transform-new-target@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-new-target/-/plugin-transform-new-target-7.18.6.tgz#d128f376ae200477f37c4ddfcc722a8a1b3246a8"
+ integrity sha512-DjwFA/9Iu3Z+vrAn+8pBUGcjhxKguSMlsFqeCKbhb9BAV756v0krzVK04CRDi/4aqmk8BsHb4a/gFcaA5joXRw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-object-super@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-object-super/-/plugin-transform-object-super-7.18.6.tgz#fb3c6ccdd15939b6ff7939944b51971ddc35912c"
+ integrity sha512-uvGz6zk+pZoS1aTZrOvrbj6Pp/kK2mp45t2B+bTDre2UgsZZ8EZLSJtUg7m/no0zOJUWgFONpB7Zv9W2tSaFlA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/helper-replace-supers" "^7.18.6"
+
+"@babel/plugin-transform-parameters@^7.12.1", "@babel/plugin-transform-parameters@^7.18.8":
+ version "7.18.8"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-parameters/-/plugin-transform-parameters-7.18.8.tgz#ee9f1a0ce6d78af58d0956a9378ea3427cccb48a"
+ integrity sha512-ivfbE3X2Ss+Fj8nnXvKJS6sjRG4gzwPMsP+taZC+ZzEGjAYlvENixmt1sZ5Ca6tWls+BlKSGKPJ6OOXvXCbkFg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-property-literals@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-property-literals/-/plugin-transform-property-literals-7.18.6.tgz#e22498903a483448e94e032e9bbb9c5ccbfc93a3"
+ integrity sha512-cYcs6qlgafTud3PAzrrRNbQtfpQ8+y/+M5tKmksS9+M1ckbH6kzY8MrexEM9mcA6JDsukE19iIRvAyYl463sMg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-react-constant-elements@^7.17.12":
+ version "7.18.12"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-react-constant-elements/-/plugin-transform-react-constant-elements-7.18.12.tgz#edf3bec47eb98f14e84fa0af137fcc6aad8e0443"
+ integrity sha512-Q99U9/ttiu+LMnRU8psd23HhvwXmKWDQIpocm0JKaICcZHnw+mdQbHm6xnSy7dOl8I5PELakYtNBubNQlBXbZw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-react-display-name@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-react-display-name/-/plugin-transform-react-display-name-7.18.6.tgz#8b1125f919ef36ebdfff061d664e266c666b9415"
+ integrity sha512-TV4sQ+T013n61uMoygyMRm+xf04Bd5oqFpv2jAEQwSZ8NwQA7zeRPg1LMVg2PWi3zWBz+CLKD+v5bcpZ/BS0aA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-react-jsx-development@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-react-jsx-development/-/plugin-transform-react-jsx-development-7.18.6.tgz#dbe5c972811e49c7405b630e4d0d2e1380c0ddc5"
+ integrity sha512-SA6HEjwYFKF7WDjWcMcMGUimmw/nhNRDWxr+KaLSCrkD/LMDBvWRmHAYgE1HDeF8KUuI8OAu+RT6EOtKxSW2qA==
+ dependencies:
+ "@babel/plugin-transform-react-jsx" "^7.18.6"
+
+"@babel/plugin-transform-react-jsx@^7.18.6":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-react-jsx/-/plugin-transform-react-jsx-7.19.0.tgz#b3cbb7c3a00b92ec8ae1027910e331ba5c500eb9"
+ integrity sha512-UVEvX3tXie3Szm3emi1+G63jyw1w5IcMY0FSKM+CRnKRI5Mr1YbCNgsSTwoTwKphQEG9P+QqmuRFneJPZuHNhg==
+ dependencies:
+ "@babel/helper-annotate-as-pure" "^7.18.6"
+ "@babel/helper-module-imports" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.19.0"
+ "@babel/plugin-syntax-jsx" "^7.18.6"
+ "@babel/types" "^7.19.0"
+
+"@babel/plugin-transform-react-pure-annotations@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-react-pure-annotations/-/plugin-transform-react-pure-annotations-7.18.6.tgz#561af267f19f3e5d59291f9950fd7b9663d0d844"
+ integrity sha512-I8VfEPg9r2TRDdvnHgPepTKvuRomzA8+u+nhY7qSI1fR2hRNebasZEETLyM5mAUr0Ku56OkXJ0I7NHJnO6cJiQ==
+ dependencies:
+ "@babel/helper-annotate-as-pure" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-regenerator@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-regenerator/-/plugin-transform-regenerator-7.18.6.tgz#585c66cb84d4b4bf72519a34cfce761b8676ca73"
+ integrity sha512-poqRI2+qiSdeldcz4wTSTXBRryoq3Gc70ye7m7UD5Ww0nE29IXqMl6r7Nd15WBgRd74vloEMlShtH6CKxVzfmQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ regenerator-transform "^0.15.0"
+
+"@babel/plugin-transform-reserved-words@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-reserved-words/-/plugin-transform-reserved-words-7.18.6.tgz#b1abd8ebf8edaa5f7fe6bbb8d2133d23b6a6f76a"
+ integrity sha512-oX/4MyMoypzHjFrT1CdivfKZ+XvIPMFXwwxHp/r0Ddy2Vuomt4HDFGmft1TAY2yiTKiNSsh3kjBAzcM8kSdsjA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-runtime@^7.18.6":
+ version "7.18.10"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-runtime/-/plugin-transform-runtime-7.18.10.tgz#37d14d1fa810a368fd635d4d1476c0154144a96f"
+ integrity sha512-q5mMeYAdfEbpBAgzl7tBre/la3LeCxmDO1+wMXRdPWbcoMjR3GiXlCLk7JBZVVye0bqTGNMbt0yYVXX1B1jEWQ==
+ dependencies:
+ "@babel/helper-module-imports" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.9"
+ babel-plugin-polyfill-corejs2 "^0.3.2"
+ babel-plugin-polyfill-corejs3 "^0.5.3"
+ babel-plugin-polyfill-regenerator "^0.4.0"
+ semver "^6.3.0"
+
+"@babel/plugin-transform-shorthand-properties@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-shorthand-properties/-/plugin-transform-shorthand-properties-7.18.6.tgz#6d6df7983d67b195289be24909e3f12a8f664dc9"
+ integrity sha512-eCLXXJqv8okzg86ywZJbRn19YJHU4XUa55oz2wbHhaQVn/MM+XhukiT7SYqp/7o00dg52Rj51Ny+Ecw4oyoygw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-spread@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-spread/-/plugin-transform-spread-7.19.0.tgz#dd60b4620c2fec806d60cfaae364ec2188d593b6"
+ integrity sha512-RsuMk7j6n+r752EtzyScnWkQyuJdli6LdO5Klv8Yx0OfPVTcQkIUfS8clx5e9yHXzlnhOZF3CbQ8C2uP5j074w==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.19.0"
+ "@babel/helper-skip-transparent-expression-wrappers" "^7.18.9"
+
+"@babel/plugin-transform-sticky-regex@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-sticky-regex/-/plugin-transform-sticky-regex-7.18.6.tgz#c6706eb2b1524028e317720339583ad0f444adcc"
+ integrity sha512-kfiDrDQ+PBsQDO85yj1icueWMfGfJFKN1KCkndygtu/C9+XUfydLC8Iv5UYJqRwy4zk8EcplRxEOeLyjq1gm6Q==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/plugin-transform-template-literals@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-template-literals/-/plugin-transform-template-literals-7.18.9.tgz#04ec6f10acdaa81846689d63fae117dd9c243a5e"
+ integrity sha512-S8cOWfT82gTezpYOiVaGHrCbhlHgKhQt8XH5ES46P2XWmX92yisoZywf5km75wv5sYcXDUCLMmMxOLCtthDgMA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-typeof-symbol@^7.18.9":
+ version "7.18.9"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-typeof-symbol/-/plugin-transform-typeof-symbol-7.18.9.tgz#c8cea68263e45addcd6afc9091429f80925762c0"
+ integrity sha512-SRfwTtF11G2aemAZWivL7PD+C9z52v9EvMqH9BuYbabyPuKUvSWks3oCg6041pT925L4zVFqaVBeECwsmlguEw==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-typescript@^7.18.6":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-typescript/-/plugin-transform-typescript-7.19.0.tgz#50c3a68ec8efd5e040bde2cd764e8e16bc0cbeaf"
+ integrity sha512-DOOIywxPpkQHXijXv+s9MDAyZcLp12oYRl3CMWZ6u7TjSoCBq/KqHR/nNFR3+i2xqheZxoF0H2XyL7B6xeSRuA==
+ dependencies:
+ "@babel/helper-create-class-features-plugin" "^7.19.0"
+ "@babel/helper-plugin-utils" "^7.19.0"
+ "@babel/plugin-syntax-typescript" "^7.18.6"
+
+"@babel/plugin-transform-unicode-escapes@^7.18.10":
+ version "7.18.10"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-unicode-escapes/-/plugin-transform-unicode-escapes-7.18.10.tgz#1ecfb0eda83d09bbcb77c09970c2dd55832aa246"
+ integrity sha512-kKAdAI+YzPgGY/ftStBFXTI1LZFju38rYThnfMykS+IXy8BVx+res7s2fxf1l8I35DV2T97ezo6+SGrXz6B3iQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.9"
+
+"@babel/plugin-transform-unicode-regex@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/plugin-transform-unicode-regex/-/plugin-transform-unicode-regex-7.18.6.tgz#194317225d8c201bbae103364ffe9e2cea36cdca"
+ integrity sha512-gE7A6Lt7YLnNOL3Pb9BNeZvi+d8l7tcRrG4+pwJjK9hD2xX4mEvjlQW60G9EEmfXVYRPv9VRQcyegIVHCql/AA==
+ dependencies:
+ "@babel/helper-create-regexp-features-plugin" "^7.18.6"
+ "@babel/helper-plugin-utils" "^7.18.6"
+
+"@babel/preset-env@^7.18.2", "@babel/preset-env@^7.18.6":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/preset-env/-/preset-env-7.19.0.tgz#fd18caf499a67d6411b9ded68dc70d01ed1e5da7"
+ integrity sha512-1YUju1TAFuzjIQqNM9WsF4U6VbD/8t3wEAlw3LFYuuEr+ywqLRcSXxFKz4DCEj+sN94l/XTDiUXYRrsvMpz9WQ==
+ dependencies:
+ "@babel/compat-data" "^7.19.0"
+ "@babel/helper-compilation-targets" "^7.19.0"
+ "@babel/helper-plugin-utils" "^7.19.0"
+ "@babel/helper-validator-option" "^7.18.6"
+ "@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression" "^7.18.6"
+ "@babel/plugin-bugfix-v8-spread-parameters-in-optional-chaining" "^7.18.9"
+ "@babel/plugin-proposal-async-generator-functions" "^7.19.0"
+ "@babel/plugin-proposal-class-properties" "^7.18.6"
+ "@babel/plugin-proposal-class-static-block" "^7.18.6"
+ "@babel/plugin-proposal-dynamic-import" "^7.18.6"
+ "@babel/plugin-proposal-export-namespace-from" "^7.18.9"
+ "@babel/plugin-proposal-json-strings" "^7.18.6"
+ "@babel/plugin-proposal-logical-assignment-operators" "^7.18.9"
+ "@babel/plugin-proposal-nullish-coalescing-operator" "^7.18.6"
+ "@babel/plugin-proposal-numeric-separator" "^7.18.6"
+ "@babel/plugin-proposal-object-rest-spread" "^7.18.9"
+ "@babel/plugin-proposal-optional-catch-binding" "^7.18.6"
+ "@babel/plugin-proposal-optional-chaining" "^7.18.9"
+ "@babel/plugin-proposal-private-methods" "^7.18.6"
+ "@babel/plugin-proposal-private-property-in-object" "^7.18.6"
+ "@babel/plugin-proposal-unicode-property-regex" "^7.18.6"
+ "@babel/plugin-syntax-async-generators" "^7.8.4"
+ "@babel/plugin-syntax-class-properties" "^7.12.13"
+ "@babel/plugin-syntax-class-static-block" "^7.14.5"
+ "@babel/plugin-syntax-dynamic-import" "^7.8.3"
+ "@babel/plugin-syntax-export-namespace-from" "^7.8.3"
+ "@babel/plugin-syntax-import-assertions" "^7.18.6"
+ "@babel/plugin-syntax-json-strings" "^7.8.3"
+ "@babel/plugin-syntax-logical-assignment-operators" "^7.10.4"
+ "@babel/plugin-syntax-nullish-coalescing-operator" "^7.8.3"
+ "@babel/plugin-syntax-numeric-separator" "^7.10.4"
+ "@babel/plugin-syntax-object-rest-spread" "^7.8.3"
+ "@babel/plugin-syntax-optional-catch-binding" "^7.8.3"
+ "@babel/plugin-syntax-optional-chaining" "^7.8.3"
+ "@babel/plugin-syntax-private-property-in-object" "^7.14.5"
+ "@babel/plugin-syntax-top-level-await" "^7.14.5"
+ "@babel/plugin-transform-arrow-functions" "^7.18.6"
+ "@babel/plugin-transform-async-to-generator" "^7.18.6"
+ "@babel/plugin-transform-block-scoped-functions" "^7.18.6"
+ "@babel/plugin-transform-block-scoping" "^7.18.9"
+ "@babel/plugin-transform-classes" "^7.19.0"
+ "@babel/plugin-transform-computed-properties" "^7.18.9"
+ "@babel/plugin-transform-destructuring" "^7.18.13"
+ "@babel/plugin-transform-dotall-regex" "^7.18.6"
+ "@babel/plugin-transform-duplicate-keys" "^7.18.9"
+ "@babel/plugin-transform-exponentiation-operator" "^7.18.6"
+ "@babel/plugin-transform-for-of" "^7.18.8"
+ "@babel/plugin-transform-function-name" "^7.18.9"
+ "@babel/plugin-transform-literals" "^7.18.9"
+ "@babel/plugin-transform-member-expression-literals" "^7.18.6"
+ "@babel/plugin-transform-modules-amd" "^7.18.6"
+ "@babel/plugin-transform-modules-commonjs" "^7.18.6"
+ "@babel/plugin-transform-modules-systemjs" "^7.19.0"
+ "@babel/plugin-transform-modules-umd" "^7.18.6"
+ "@babel/plugin-transform-named-capturing-groups-regex" "^7.19.0"
+ "@babel/plugin-transform-new-target" "^7.18.6"
+ "@babel/plugin-transform-object-super" "^7.18.6"
+ "@babel/plugin-transform-parameters" "^7.18.8"
+ "@babel/plugin-transform-property-literals" "^7.18.6"
+ "@babel/plugin-transform-regenerator" "^7.18.6"
+ "@babel/plugin-transform-reserved-words" "^7.18.6"
+ "@babel/plugin-transform-shorthand-properties" "^7.18.6"
+ "@babel/plugin-transform-spread" "^7.19.0"
+ "@babel/plugin-transform-sticky-regex" "^7.18.6"
+ "@babel/plugin-transform-template-literals" "^7.18.9"
+ "@babel/plugin-transform-typeof-symbol" "^7.18.9"
+ "@babel/plugin-transform-unicode-escapes" "^7.18.10"
+ "@babel/plugin-transform-unicode-regex" "^7.18.6"
+ "@babel/preset-modules" "^0.1.5"
+ "@babel/types" "^7.19.0"
+ babel-plugin-polyfill-corejs2 "^0.3.2"
+ babel-plugin-polyfill-corejs3 "^0.5.3"
+ babel-plugin-polyfill-regenerator "^0.4.0"
+ core-js-compat "^3.22.1"
+ semver "^6.3.0"
+
+"@babel/preset-modules@^0.1.5":
+ version "0.1.5"
+ resolved "https://registry.yarnpkg.com/@babel/preset-modules/-/preset-modules-0.1.5.tgz#ef939d6e7f268827e1841638dc6ff95515e115d9"
+ integrity sha512-A57th6YRG7oR3cq/yt/Y84MvGgE0eJG2F1JLhKuyG+jFxEgrd/HAMJatiFtmOiZurz+0DkrvbheCLaV5f2JfjA==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.0.0"
+ "@babel/plugin-proposal-unicode-property-regex" "^7.4.4"
+ "@babel/plugin-transform-dotall-regex" "^7.4.4"
+ "@babel/types" "^7.4.4"
+ esutils "^2.0.2"
+
+"@babel/preset-react@^7.17.12", "@babel/preset-react@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/preset-react/-/preset-react-7.18.6.tgz#979f76d6277048dc19094c217b507f3ad517dd2d"
+ integrity sha512-zXr6atUmyYdiWRVLOZahakYmOBHtWc2WGCkP8PYTgZi0iJXDY2CN180TdrIW4OGOAdLc7TifzDIvtx6izaRIzg==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/helper-validator-option" "^7.18.6"
+ "@babel/plugin-transform-react-display-name" "^7.18.6"
+ "@babel/plugin-transform-react-jsx" "^7.18.6"
+ "@babel/plugin-transform-react-jsx-development" "^7.18.6"
+ "@babel/plugin-transform-react-pure-annotations" "^7.18.6"
+
+"@babel/preset-typescript@^7.17.12", "@babel/preset-typescript@^7.18.6":
+ version "7.18.6"
+ resolved "https://registry.yarnpkg.com/@babel/preset-typescript/-/preset-typescript-7.18.6.tgz#ce64be3e63eddc44240c6358daefac17b3186399"
+ integrity sha512-s9ik86kXBAnD760aybBucdpnLsAt0jK1xqJn2juOn9lkOvSHV60os5hxoVJsPzMQxvnUJFAlkont2DvvaYEBtQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "^7.18.6"
+ "@babel/helper-validator-option" "^7.18.6"
+ "@babel/plugin-transform-typescript" "^7.18.6"
+
+"@babel/runtime-corejs3@^7.18.6":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/runtime-corejs3/-/runtime-corejs3-7.19.0.tgz#0df75cb8e5ecba3ca9e658898694e5326d52397f"
+ integrity sha512-JyXXoCu1N8GLuKc2ii8y5RGma5FMpFeO2nAQIe0Yzrbq+rQnN+sFj47auLblR5ka6aHNGPDgv8G/iI2Grb0ldQ==
+ dependencies:
+ core-js-pure "^3.20.2"
+ regenerator-runtime "^0.13.4"
+
+"@babel/runtime@^7.1.2", "@babel/runtime@^7.10.2", "@babel/runtime@^7.10.3", "@babel/runtime@^7.12.1", "@babel/runtime@^7.12.13", "@babel/runtime@^7.12.5", "@babel/runtime@^7.18.6", "@babel/runtime@^7.8.4":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/runtime/-/runtime-7.19.0.tgz#22b11c037b094d27a8a2504ea4dcff00f50e2259"
+ integrity sha512-eR8Lo9hnDS7tqkO7NsV+mKvCmv5boaXFSZ70DnfhcgiEne8hv9oCEd36Klw74EtizEqLsy4YnW8UWwpBVolHZA==
+ dependencies:
+ regenerator-runtime "^0.13.4"
+
+"@babel/template@^7.12.7", "@babel/template@^7.18.10":
+ version "7.18.10"
+ resolved "https://registry.yarnpkg.com/@babel/template/-/template-7.18.10.tgz#6f9134835970d1dbf0835c0d100c9f38de0c5e71"
+ integrity sha512-TI+rCtooWHr3QJ27kJxfjutghu44DLnasDMwpDqCXVTal9RLp3RSYNh4NdBrRP2cQAoG9A8juOQl6P6oZG4JxA==
+ dependencies:
+ "@babel/code-frame" "^7.18.6"
+ "@babel/parser" "^7.18.10"
+ "@babel/types" "^7.18.10"
+
+"@babel/traverse@^7.12.9", "@babel/traverse@^7.18.8", "@babel/traverse@^7.18.9", "@babel/traverse@^7.19.0":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/traverse/-/traverse-7.19.0.tgz#eb9c561c7360005c592cc645abafe0c3c4548eed"
+ integrity sha512-4pKpFRDh+utd2mbRC8JLnlsMUii3PMHjpL6a0SZ4NMZy7YFP9aXORxEhdMVOc9CpWtDF09IkciQLEhK7Ml7gRA==
+ dependencies:
+ "@babel/code-frame" "^7.18.6"
+ "@babel/generator" "^7.19.0"
+ "@babel/helper-environment-visitor" "^7.18.9"
+ "@babel/helper-function-name" "^7.19.0"
+ "@babel/helper-hoist-variables" "^7.18.6"
+ "@babel/helper-split-export-declaration" "^7.18.6"
+ "@babel/parser" "^7.19.0"
+ "@babel/types" "^7.19.0"
+ debug "^4.1.0"
+ globals "^11.1.0"
+
+"@babel/types@^7.12.7", "@babel/types@^7.18.10", "@babel/types@^7.18.4", "@babel/types@^7.18.6", "@babel/types@^7.18.9", "@babel/types@^7.19.0", "@babel/types@^7.4.4":
+ version "7.19.0"
+ resolved "https://registry.yarnpkg.com/@babel/types/-/types-7.19.0.tgz#75f21d73d73dc0351f3368d28db73465f4814600"
+ integrity sha512-YuGopBq3ke25BVSiS6fgF49Ul9gH1x70Bcr6bqRLjWCkcX8Hre1/5+z+IiWOIerRMSSEfGZVB9z9kyq7wVs9YA==
+ dependencies:
+ "@babel/helper-string-parser" "^7.18.10"
+ "@babel/helper-validator-identifier" "^7.18.6"
+ to-fast-properties "^2.0.0"
+
+"@colors/colors@1.5.0":
+ version "1.5.0"
+ resolved "https://registry.yarnpkg.com/@colors/colors/-/colors-1.5.0.tgz#bb504579c1cae923e6576a4f5da43d25f97bdbd9"
+ integrity sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ==
+
+"@commitlint/config-conventional@^17.1.0":
+ version "17.1.0"
+ resolved "https://registry.yarnpkg.com/@commitlint/config-conventional/-/config-conventional-17.1.0.tgz#9bd852766e08842bfe0fe4deb40e152eb718ec1b"
+ integrity sha512-WU2p0c9/jLi8k2q2YrDV96Y8XVswQOceIQ/wyJvQxawJSCasLdRB3kUIYdNjOCJsxkpoUlV/b90ZPxp1MYZDiA==
+ dependencies:
+ conventional-changelog-conventionalcommits "^5.0.0"
+
+"@commitlint/config-validator@^17.1.0":
+ version "17.1.0"
+ resolved "https://registry.yarnpkg.com/@commitlint/config-validator/-/config-validator-17.1.0.tgz#51d09ca53d7a0d19736abf34eb18a66efce0f97a"
+ integrity sha512-Q1rRRSU09ngrTgeTXHq6ePJs2KrI+axPTgkNYDWSJIuS1Op4w3J30vUfSXjwn5YEJHklK3fSqWNHmBhmTR7Vdg==
+ dependencies:
+ "@commitlint/types" "^17.0.0"
+ ajv "^8.11.0"
+
+"@commitlint/execute-rule@^17.0.0":
+ version "17.0.0"
+ resolved "https://registry.yarnpkg.com/@commitlint/execute-rule/-/execute-rule-17.0.0.tgz#186e9261fd36733922ae617497888c4bdb6e5c92"
+ integrity sha512-nVjL/w/zuqjCqSJm8UfpNaw66V9WzuJtQvEnCrK4jDw6qKTmZB+1JQ8m6BQVZbNBcwfYdDNKnhIhqI0Rk7lgpQ==
+
+"@commitlint/load@>6.1.1":
+ version "17.1.2"
+ resolved "https://registry.yarnpkg.com/@commitlint/load/-/load-17.1.2.tgz#19c88be570d8666bbd32f9b3d81925a08328bc13"
+ integrity sha512-sk2p/jFYAWLChIfOIp/MGSIn/WzZ0vkc3afw+l4X8hGEYkvDe4gQUUAVxjl/6xMRn0HgnSLMZ04xXh5pkTsmgg==
+ dependencies:
+ "@commitlint/config-validator" "^17.1.0"
+ "@commitlint/execute-rule" "^17.0.0"
+ "@commitlint/resolve-extends" "^17.1.0"
+ "@commitlint/types" "^17.0.0"
+ "@types/node" "^14.0.0"
+ chalk "^4.1.0"
+ cosmiconfig "^7.0.0"
+ cosmiconfig-typescript-loader "^4.0.0"
+ lodash "^4.17.19"
+ resolve-from "^5.0.0"
+ ts-node "^10.8.1"
+ typescript "^4.6.4"
+
+"@commitlint/resolve-extends@^17.1.0":
+ version "17.1.0"
+ resolved "https://registry.yarnpkg.com/@commitlint/resolve-extends/-/resolve-extends-17.1.0.tgz#7cf04fa13096c8a6544a4af13321fdf8d0d50694"
+ integrity sha512-jqKm00LJ59T0O8O4bH4oMa4XyJVEOK4GzH8Qye9XKji+Q1FxhZznxMV/bDLyYkzbTodBt9sL0WLql8wMtRTbqQ==
+ dependencies:
+ "@commitlint/config-validator" "^17.1.0"
+ "@commitlint/types" "^17.0.0"
+ import-fresh "^3.0.0"
+ lodash "^4.17.19"
+ resolve-from "^5.0.0"
+ resolve-global "^1.0.0"
+
+"@commitlint/types@^17.0.0":
+ version "17.0.0"
+ resolved "https://registry.yarnpkg.com/@commitlint/types/-/types-17.0.0.tgz#3b4604c1a0f06c340ce976e6c6903d4f56e3e690"
+ integrity sha512-hBAw6U+SkAT5h47zDMeOu3HSiD0SODw4Aq7rRNh1ceUmL7GyLKYhPbUvlRWqZ65XjBLPHZhFyQlRaPNz8qvUyQ==
+ dependencies:
+ chalk "^4.1.0"
+
+"@cspotcode/source-map-support@^0.8.0":
+ version "0.8.1"
+ resolved "https://registry.yarnpkg.com/@cspotcode/source-map-support/-/source-map-support-0.8.1.tgz#00629c35a688e05a88b1cda684fb9d5e73f000a1"
+ integrity sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw==
+ dependencies:
+ "@jridgewell/trace-mapping" "0.3.9"
+
+"@docsearch/css@3.2.1":
+ version "3.2.1"
+ resolved "https://registry.yarnpkg.com/@docsearch/css/-/css-3.2.1.tgz#c05d7818b0e43b42f9efa2d82a11c36606b37b27"
+ integrity sha512-gaP6TxxwQC+K8D6TRx5WULUWKrcbzECOPA2KCVMuI+6C7dNiGUk5yXXzVhc5sld79XKYLnO9DRTI4mjXDYkh+g==
+
+"@docsearch/react@^3.1.1":
+ version "3.2.1"
+ resolved "https://registry.yarnpkg.com/@docsearch/react/-/react-3.2.1.tgz#112ad88db07367fa6fd933d67d58421d8d8289aa"
+ integrity sha512-EzTQ/y82s14IQC5XVestiK/kFFMe2aagoYFuTAIfIb/e+4FU7kSMKonRtLwsCiLQHmjvNQq+HO+33giJ5YVtaQ==
+ dependencies:
+ "@algolia/autocomplete-core" "1.7.1"
+ "@algolia/autocomplete-preset-algolia" "1.7.1"
+ "@docsearch/css" "3.2.1"
+ algoliasearch "^4.0.0"
+
+"@docusaurus/core@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/core/-/core-2.1.0.tgz#4aedc306f4c4cd2e0491b641bf78941d4b480ab6"
+ integrity sha512-/ZJ6xmm+VB9Izbn0/s6h6289cbPy2k4iYFwWDhjiLsVqwa/Y0YBBcXvStfaHccudUC3OfP+26hMk7UCjc50J6Q==
+ dependencies:
+ "@babel/core" "^7.18.6"
+ "@babel/generator" "^7.18.7"
+ "@babel/plugin-syntax-dynamic-import" "^7.8.3"
+ "@babel/plugin-transform-runtime" "^7.18.6"
+ "@babel/preset-env" "^7.18.6"
+ "@babel/preset-react" "^7.18.6"
+ "@babel/preset-typescript" "^7.18.6"
+ "@babel/runtime" "^7.18.6"
+ "@babel/runtime-corejs3" "^7.18.6"
+ "@babel/traverse" "^7.18.8"
+ "@docusaurus/cssnano-preset" "2.1.0"
+ "@docusaurus/logger" "2.1.0"
+ "@docusaurus/mdx-loader" "2.1.0"
+ "@docusaurus/react-loadable" "5.5.2"
+ "@docusaurus/utils" "2.1.0"
+ "@docusaurus/utils-common" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ "@slorber/static-site-generator-webpack-plugin" "^4.0.7"
+ "@svgr/webpack" "^6.2.1"
+ autoprefixer "^10.4.7"
+ babel-loader "^8.2.5"
+ babel-plugin-dynamic-import-node "^2.3.3"
+ boxen "^6.2.1"
+ chalk "^4.1.2"
+ chokidar "^3.5.3"
+ clean-css "^5.3.0"
+ cli-table3 "^0.6.2"
+ combine-promises "^1.1.0"
+ commander "^5.1.0"
+ copy-webpack-plugin "^11.0.0"
+ core-js "^3.23.3"
+ css-loader "^6.7.1"
+ css-minimizer-webpack-plugin "^4.0.0"
+ cssnano "^5.1.12"
+ del "^6.1.1"
+ detect-port "^1.3.0"
+ escape-html "^1.0.3"
+ eta "^1.12.3"
+ file-loader "^6.2.0"
+ fs-extra "^10.1.0"
+ html-minifier-terser "^6.1.0"
+ html-tags "^3.2.0"
+ html-webpack-plugin "^5.5.0"
+ import-fresh "^3.3.0"
+ leven "^3.1.0"
+ lodash "^4.17.21"
+ mini-css-extract-plugin "^2.6.1"
+ postcss "^8.4.14"
+ postcss-loader "^7.0.0"
+ prompts "^2.4.2"
+ react-dev-utils "^12.0.1"
+ react-helmet-async "^1.3.0"
+ react-loadable "npm:@docusaurus/react-loadable@5.5.2"
+ react-loadable-ssr-addon-v5-slorber "^1.0.1"
+ react-router "^5.3.3"
+ react-router-config "^5.1.1"
+ react-router-dom "^5.3.3"
+ rtl-detect "^1.0.4"
+ semver "^7.3.7"
+ serve-handler "^6.1.3"
+ shelljs "^0.8.5"
+ terser-webpack-plugin "^5.3.3"
+ tslib "^2.4.0"
+ update-notifier "^5.1.0"
+ url-loader "^4.1.1"
+ wait-on "^6.0.1"
+ webpack "^5.73.0"
+ webpack-bundle-analyzer "^4.5.0"
+ webpack-dev-server "^4.9.3"
+ webpack-merge "^5.8.0"
+ webpackbar "^5.0.2"
+
+"@docusaurus/cssnano-preset@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/cssnano-preset/-/cssnano-preset-2.1.0.tgz#5b42107769b7cbc61655496090bc262d7788d6ab"
+ integrity sha512-pRLewcgGhOies6pzsUROfmPStDRdFw+FgV5sMtLr5+4Luv2rty5+b/eSIMMetqUsmg3A9r9bcxHk9bKAKvx3zQ==
+ dependencies:
+ cssnano-preset-advanced "^5.3.8"
+ postcss "^8.4.14"
+ postcss-sort-media-queries "^4.2.1"
+ tslib "^2.4.0"
+
+"@docusaurus/logger@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/logger/-/logger-2.1.0.tgz#86c97e948f578814d3e61fc2b2ad283043cbe87a"
+ integrity sha512-uuJx2T6hDBg82joFeyobywPjSOIfeq05GfyKGHThVoXuXsu1KAzMDYcjoDxarb9CoHCI/Dor8R2MoL6zII8x1Q==
+ dependencies:
+ chalk "^4.1.2"
+ tslib "^2.4.0"
+
+"@docusaurus/mdx-loader@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/mdx-loader/-/mdx-loader-2.1.0.tgz#3fca9576cc73a22f8e7d9941985590b9e47a8526"
+ integrity sha512-i97hi7hbQjsD3/8OSFhLy7dbKGH8ryjEzOfyhQIn2CFBYOY3ko0vMVEf3IY9nD3Ld7amYzsZ8153RPkcnXA+Lg==
+ dependencies:
+ "@babel/parser" "^7.18.8"
+ "@babel/traverse" "^7.18.8"
+ "@docusaurus/logger" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@mdx-js/mdx" "^1.6.22"
+ escape-html "^1.0.3"
+ file-loader "^6.2.0"
+ fs-extra "^10.1.0"
+ image-size "^1.0.1"
+ mdast-util-to-string "^2.0.0"
+ remark-emoji "^2.2.0"
+ stringify-object "^3.3.0"
+ tslib "^2.4.0"
+ unified "^9.2.2"
+ unist-util-visit "^2.0.3"
+ url-loader "^4.1.1"
+ webpack "^5.73.0"
+
+"@docusaurus/module-type-aliases@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/module-type-aliases/-/module-type-aliases-2.1.0.tgz#322f8fd5b436af2154c0dddfa173435730e66261"
+ integrity sha512-Z8WZaK5cis3xEtyfOT817u9xgGUauT0PuuVo85ysnFRX8n7qLN1lTPCkC+aCmFm/UcV8h/W5T4NtIsst94UntQ==
+ dependencies:
+ "@docusaurus/react-loadable" "5.5.2"
+ "@docusaurus/types" "2.1.0"
+ "@types/history" "^4.7.11"
+ "@types/react" "*"
+ "@types/react-router-config" "*"
+ "@types/react-router-dom" "*"
+ react-helmet-async "*"
+ react-loadable "npm:@docusaurus/react-loadable@5.5.2"
+
+"@docusaurus/plugin-content-blog@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.1.0.tgz#32b1a7cd4b0026f4a76fce4edc5cfdd0edb1ec42"
+ integrity sha512-xEp6jlu92HMNUmyRBEeJ4mCW1s77aAEQO4Keez94cUY/Ap7G/r0Awa6xSLff7HL0Fjg8KK1bEbDy7q9voIavdg==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/logger" "2.1.0"
+ "@docusaurus/mdx-loader" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@docusaurus/utils-common" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ cheerio "^1.0.0-rc.12"
+ feed "^4.2.2"
+ fs-extra "^10.1.0"
+ lodash "^4.17.21"
+ reading-time "^1.5.0"
+ tslib "^2.4.0"
+ unist-util-visit "^2.0.3"
+ utility-types "^3.10.0"
+ webpack "^5.73.0"
+
+"@docusaurus/plugin-content-docs@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-docs/-/plugin-content-docs-2.1.0.tgz#3fcdf258c13dde27268ce7108a102b74ca4c279b"
+ integrity sha512-Rup5pqXrXlKGIC4VgwvioIhGWF7E/NNSlxv+JAxRYpik8VKlWsk9ysrdHIlpX+KJUCO9irnY21kQh2814mlp/Q==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/logger" "2.1.0"
+ "@docusaurus/mdx-loader" "2.1.0"
+ "@docusaurus/module-type-aliases" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ "@types/react-router-config" "^5.0.6"
+ combine-promises "^1.1.0"
+ fs-extra "^10.1.0"
+ import-fresh "^3.3.0"
+ js-yaml "^4.1.0"
+ lodash "^4.17.21"
+ tslib "^2.4.0"
+ utility-types "^3.10.0"
+ webpack "^5.73.0"
+
+"@docusaurus/plugin-content-pages@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-pages/-/plugin-content-pages-2.1.0.tgz#714d24f71d49dbfed888f50c15e975c2154c3ce8"
+ integrity sha512-SwZdDZRlObHNKXTnFo7W2aF6U5ZqNVI55Nw2GCBryL7oKQSLeI0lsrMlMXdzn+fS7OuBTd3MJBO1T4Zpz0i/+g==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/mdx-loader" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ fs-extra "^10.1.0"
+ tslib "^2.4.0"
+ webpack "^5.73.0"
+
+"@docusaurus/plugin-debug@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/plugin-debug/-/plugin-debug-2.1.0.tgz#b3145affb40e25cf342174638952a5928ddaf7dc"
+ integrity sha512-8wsDq3OIfiy6440KLlp/qT5uk+WRHQXIXklNHEeZcar+Of0TZxCNe2FBpv+bzb/0qcdP45ia5i5WmR5OjN6DPw==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ fs-extra "^10.1.0"
+ react-json-view "^1.21.3"
+ tslib "^2.4.0"
+
+"@docusaurus/plugin-google-analytics@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-analytics/-/plugin-google-analytics-2.1.0.tgz#c9a7269817b38e43484d38fad9996e39aac4196c"
+ integrity sha512-4cgeqIly/wcFVbbWP03y1QJJBgH8W+Bv6AVbWnsXNOZa1yB3AO6hf3ZdeQH9x20v9T2pREogVgAH0rSoVnNsgg==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ tslib "^2.4.0"
+
+"@docusaurus/plugin-google-gtag@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-gtag/-/plugin-google-gtag-2.1.0.tgz#e4f351dcd98b933538d55bb742650a2a36ca9a32"
+ integrity sha512-/3aDlv2dMoCeiX2e+DTGvvrdTA+v3cKQV3DbmfsF4ENhvc5nKV23nth04Z3Vq0Ci1ui6Sn80TkhGk/tiCMW2AA==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ tslib "^2.4.0"
+
+"@docusaurus/plugin-sitemap@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/plugin-sitemap/-/plugin-sitemap-2.1.0.tgz#b316bb9a42a1717845e26bd4e2d3071748a54b47"
+ integrity sha512-2Y6Br8drlrZ/jN9MwMBl0aoi9GAjpfyfMBYpaQZXimbK+e9VjYnujXlvQ4SxtM60ASDgtHIAzfVFBkSR/MwRUw==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/logger" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@docusaurus/utils-common" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ fs-extra "^10.1.0"
+ sitemap "^7.1.1"
+ tslib "^2.4.0"
+
+"@docusaurus/preset-classic@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/preset-classic/-/preset-classic-2.1.0.tgz#45b23c8ec10c96ded9ece128fac3a39b10bcbc56"
+ integrity sha512-NQMnaq974K4BcSMXFSJBQ5itniw6RSyW+VT+6i90kGZzTwiuKZmsp0r9lC6BYAvvVMQUNJQwrETmlu7y2XKW7w==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/plugin-content-blog" "2.1.0"
+ "@docusaurus/plugin-content-docs" "2.1.0"
+ "@docusaurus/plugin-content-pages" "2.1.0"
+ "@docusaurus/plugin-debug" "2.1.0"
+ "@docusaurus/plugin-google-analytics" "2.1.0"
+ "@docusaurus/plugin-google-gtag" "2.1.0"
+ "@docusaurus/plugin-sitemap" "2.1.0"
+ "@docusaurus/theme-classic" "2.1.0"
+ "@docusaurus/theme-common" "2.1.0"
+ "@docusaurus/theme-search-algolia" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+
+"@docusaurus/react-loadable@5.5.2", "react-loadable@npm:@docusaurus/react-loadable@5.5.2":
+ version "5.5.2"
+ resolved "https://registry.yarnpkg.com/@docusaurus/react-loadable/-/react-loadable-5.5.2.tgz#81aae0db81ecafbdaee3651f12804580868fa6ce"
+ integrity sha512-A3dYjdBGuy0IGT+wyLIGIKLRE+sAk1iNk0f1HjNDysO7u8lhL4N3VEm+FAubmJbAztn94F7MxBTPmnixbiyFdQ==
+ dependencies:
+ "@types/react" "*"
+ prop-types "^15.6.2"
+
+"@docusaurus/theme-classic@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/theme-classic/-/theme-classic-2.1.0.tgz#d957a907ea8dd035c1cf911d0fbe91d8f24aef3f"
+ integrity sha512-xn8ZfNMsf7gaSy9+ClFnUu71o7oKgMo5noYSS1hy3svNifRTkrBp6+MReLDsmIaj3mLf2e7+JCBYKBFbaGzQng==
+ dependencies:
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/mdx-loader" "2.1.0"
+ "@docusaurus/module-type-aliases" "2.1.0"
+ "@docusaurus/plugin-content-blog" "2.1.0"
+ "@docusaurus/plugin-content-docs" "2.1.0"
+ "@docusaurus/plugin-content-pages" "2.1.0"
+ "@docusaurus/theme-common" "2.1.0"
+ "@docusaurus/theme-translations" "2.1.0"
+ "@docusaurus/types" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@docusaurus/utils-common" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ "@mdx-js/react" "^1.6.22"
+ clsx "^1.2.1"
+ copy-text-to-clipboard "^3.0.1"
+ infima "0.2.0-alpha.42"
+ lodash "^4.17.21"
+ nprogress "^0.2.0"
+ postcss "^8.4.14"
+ prism-react-renderer "^1.3.5"
+ prismjs "^1.28.0"
+ react-router-dom "^5.3.3"
+ rtlcss "^3.5.0"
+ tslib "^2.4.0"
+ utility-types "^3.10.0"
+
+"@docusaurus/theme-common@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/theme-common/-/theme-common-2.1.0.tgz#dff4d5d1e29efc06125dc06f7b259f689bb3f24d"
+ integrity sha512-vT1otpVPbKux90YpZUnvknsn5zvpLf+AW1W0EDcpE9up4cDrPqfsh0QoxGHFJnobE2/qftsBFC19BneN4BH8Ag==
+ dependencies:
+ "@docusaurus/mdx-loader" "2.1.0"
+ "@docusaurus/module-type-aliases" "2.1.0"
+ "@docusaurus/plugin-content-blog" "2.1.0"
+ "@docusaurus/plugin-content-docs" "2.1.0"
+ "@docusaurus/plugin-content-pages" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@types/history" "^4.7.11"
+ "@types/react" "*"
+ "@types/react-router-config" "*"
+ clsx "^1.2.1"
+ parse-numeric-range "^1.3.0"
+ prism-react-renderer "^1.3.5"
+ tslib "^2.4.0"
+ utility-types "^3.10.0"
+
+"@docusaurus/theme-search-algolia@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/theme-search-algolia/-/theme-search-algolia-2.1.0.tgz#e7cdf64b6f7a15b07c6dcf652fd308cfdaabb0ee"
+ integrity sha512-rNBvi35VvENhucslEeVPOtbAzBdZY/9j55gdsweGV5bYoAXy4mHB6zTGjealcB4pJ6lJY4a5g75fXXMOlUqPfg==
+ dependencies:
+ "@docsearch/react" "^3.1.1"
+ "@docusaurus/core" "2.1.0"
+ "@docusaurus/logger" "2.1.0"
+ "@docusaurus/plugin-content-docs" "2.1.0"
+ "@docusaurus/theme-common" "2.1.0"
+ "@docusaurus/theme-translations" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ "@docusaurus/utils-validation" "2.1.0"
+ algoliasearch "^4.13.1"
+ algoliasearch-helper "^3.10.0"
+ clsx "^1.2.1"
+ eta "^1.12.3"
+ fs-extra "^10.1.0"
+ lodash "^4.17.21"
+ tslib "^2.4.0"
+ utility-types "^3.10.0"
+
+"@docusaurus/theme-translations@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/theme-translations/-/theme-translations-2.1.0.tgz#ce9a2955afd49bff364cfdfd4492b226f6dd3b6e"
+ integrity sha512-07n2akf2nqWvtJeMy3A+7oSGMuu5F673AovXVwY0aGAux1afzGCiqIFlYW3EP0CujvDJAEFSQi/Tetfh+95JNg==
+ dependencies:
+ fs-extra "^10.1.0"
+ tslib "^2.4.0"
+
+"@docusaurus/types@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/types/-/types-2.1.0.tgz#01e13cd9adb268fffe87b49eb90302d5dc3edd6b"
+ integrity sha512-BS1ebpJZnGG6esKqsjtEC9U9qSaPylPwlO7cQ1GaIE7J/kMZI3FITnNn0otXXu7c7ZTqhb6+8dOrG6fZn6fqzQ==
+ dependencies:
+ "@types/history" "^4.7.11"
+ "@types/react" "*"
+ commander "^5.1.0"
+ joi "^17.6.0"
+ react-helmet-async "^1.3.0"
+ utility-types "^3.10.0"
+ webpack "^5.73.0"
+ webpack-merge "^5.8.0"
+
+"@docusaurus/utils-common@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/utils-common/-/utils-common-2.1.0.tgz#248434751096f8c6c644ed65eed2a5a070a227f8"
+ integrity sha512-F2vgmt4yRFgRQR2vyEFGTWeyAdmgKbtmu3sjHObF0tjjx/pN0Iw/c6eCopaH34E6tc9nO0nvp01pwW+/86d1fg==
+ dependencies:
+ tslib "^2.4.0"
+
+"@docusaurus/utils-validation@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/utils-validation/-/utils-validation-2.1.0.tgz#c8cf1d8454d924d9a564fefa86436268f43308e3"
+ integrity sha512-AMJzWYKL3b7FLltKtDXNLO9Y649V2BXvrnRdnW2AA+PpBnYV78zKLSCz135cuWwRj1ajNtP4onbXdlnyvCijGQ==
+ dependencies:
+ "@docusaurus/logger" "2.1.0"
+ "@docusaurus/utils" "2.1.0"
+ joi "^17.6.0"
+ js-yaml "^4.1.0"
+ tslib "^2.4.0"
+
+"@docusaurus/utils@2.1.0":
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/@docusaurus/utils/-/utils-2.1.0.tgz#b77b45b22e61eb6c2dcad8a7e96f6db0409b655f"
+ integrity sha512-fPvrfmAuC54n8MjZuG4IysaMdmvN5A/qr7iFLbSGSyDrsbP4fnui6KdZZIa/YOLIPLec8vjZ8RIITJqF18mx4A==
+ dependencies:
+ "@docusaurus/logger" "2.1.0"
+ "@svgr/webpack" "^6.2.1"
+ file-loader "^6.2.0"
+ fs-extra "^10.1.0"
+ github-slugger "^1.4.0"
+ globby "^11.1.0"
+ gray-matter "^4.0.3"
+ js-yaml "^4.1.0"
+ lodash "^4.17.21"
+ micromatch "^4.0.5"
+ resolve-pathname "^3.0.0"
+ shelljs "^0.8.5"
+ tslib "^2.4.0"
+ url-loader "^4.1.1"
+ webpack "^5.73.0"
+
+"@hapi/hoek@^9.0.0":
+ version "9.3.0"
+ resolved "https://registry.yarnpkg.com/@hapi/hoek/-/hoek-9.3.0.tgz#8368869dcb735be2e7f5cb7647de78e167a251fb"
+ integrity sha512-/c6rf4UJlmHlC9b5BaNvzAcFv7HZ2QHaV0D4/HNlBdvFnvQq8RI4kYdhyPCl7Xj+oWvTWQ8ujhqS53LIgAe6KQ==
+
+"@hapi/topo@^5.0.0":
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/@hapi/topo/-/topo-5.1.0.tgz#dc448e332c6c6e37a4dc02fd84ba8d44b9afb012"
+ integrity sha512-foQZKJig7Ob0BMAYBfcJk8d77QtOe7Wo4ox7ff1lQYoNNAb6jwcY1ncdoy2e9wQZzvNy7ODZCYJkK8kzmcAnAg==
+ dependencies:
+ "@hapi/hoek" "^9.0.0"
+
+"@hutson/parse-repository-url@^3.0.0":
+ version "3.0.2"
+ resolved "https://registry.yarnpkg.com/@hutson/parse-repository-url/-/parse-repository-url-3.0.2.tgz#98c23c950a3d9b6c8f0daed06da6c3af06981340"
+ integrity sha512-H9XAx3hc0BQHY6l+IFSWHDySypcXsvsuLhgYLUGywmJ5pswRVQJUHpOsobnLYp2ZUaUlKiKDrgWWhosOwAEM8Q==
+
+"@jridgewell/gen-mapping@^0.1.0":
+ version "0.1.1"
+ resolved "https://registry.yarnpkg.com/@jridgewell/gen-mapping/-/gen-mapping-0.1.1.tgz#e5d2e450306a9491e3bd77e323e38d7aff315996"
+ integrity sha512-sQXCasFk+U8lWYEe66WxRDOE9PjVz4vSM51fTu3Hw+ClTpUSQb718772vH3pyS5pShp6lvQM7SxgIDXXXmOX7w==
+ dependencies:
+ "@jridgewell/set-array" "^1.0.0"
+ "@jridgewell/sourcemap-codec" "^1.4.10"
+
+"@jridgewell/gen-mapping@^0.3.0", "@jridgewell/gen-mapping@^0.3.2":
+ version "0.3.2"
+ resolved "https://registry.yarnpkg.com/@jridgewell/gen-mapping/-/gen-mapping-0.3.2.tgz#c1aedc61e853f2bb9f5dfe6d4442d3b565b253b9"
+ integrity sha512-mh65xKQAzI6iBcFzwv28KVWSmCkdRBWoOh+bYQGW3+6OZvbbN3TqMGo5hqYxQniRcH9F2VZIoJCm4pa3BPDK/A==
+ dependencies:
+ "@jridgewell/set-array" "^1.0.1"
+ "@jridgewell/sourcemap-codec" "^1.4.10"
+ "@jridgewell/trace-mapping" "^0.3.9"
+
+"@jridgewell/resolve-uri@^3.0.3":
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/@jridgewell/resolve-uri/-/resolve-uri-3.1.0.tgz#2203b118c157721addfe69d47b70465463066d78"
+ integrity sha512-F2msla3tad+Mfht5cJq7LSXcdudKTWCVYUgw6pLFOOHSTtZlj6SWNYAp+AhuqLmWdBO2X5hPrLcu8cVP8fy28w==
+
+"@jridgewell/set-array@^1.0.0", "@jridgewell/set-array@^1.0.1":
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/@jridgewell/set-array/-/set-array-1.1.2.tgz#7c6cf998d6d20b914c0a55a91ae928ff25965e72"
+ integrity sha512-xnkseuNADM0gt2bs+BvhO0p78Mk762YnZdsuzFV018NoG1Sj1SCQvpSqa7XUaTam5vAGasABV9qXASMKnFMwMw==
+
+"@jridgewell/source-map@^0.3.2":
+ version "0.3.2"
+ resolved "https://registry.yarnpkg.com/@jridgewell/source-map/-/source-map-0.3.2.tgz#f45351aaed4527a298512ec72f81040c998580fb"
+ integrity sha512-m7O9o2uR8k2ObDysZYzdfhb08VuEml5oWGiosa1VdaPZ/A6QyPkAJuwN0Q1lhULOf6B7MtQmHENS743hWtCrgw==
+ dependencies:
+ "@jridgewell/gen-mapping" "^0.3.0"
+ "@jridgewell/trace-mapping" "^0.3.9"
+
+"@jridgewell/sourcemap-codec@^1.4.10":
+ version "1.4.14"
+ resolved "https://registry.yarnpkg.com/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.4.14.tgz#add4c98d341472a289190b424efbdb096991bb24"
+ integrity sha512-XPSJHWmi394fuUuzDnGz1wiKqWfo1yXecHQMRf2l6hztTO+nPru658AyDngaBe7isIxEkRsPR3FZh+s7iVa4Uw==
+
+"@jridgewell/trace-mapping@0.3.9":
+ version "0.3.9"
+ resolved "https://registry.yarnpkg.com/@jridgewell/trace-mapping/-/trace-mapping-0.3.9.tgz#6534fd5933a53ba7cbf3a17615e273a0d1273ff9"
+ integrity sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ==
+ dependencies:
+ "@jridgewell/resolve-uri" "^3.0.3"
+ "@jridgewell/sourcemap-codec" "^1.4.10"
+
+"@jridgewell/trace-mapping@^0.3.14", "@jridgewell/trace-mapping@^0.3.9":
+ version "0.3.15"
+ resolved "https://registry.yarnpkg.com/@jridgewell/trace-mapping/-/trace-mapping-0.3.15.tgz#aba35c48a38d3fd84b37e66c9c0423f9744f9774"
+ integrity sha512-oWZNOULl+UbhsgB51uuZzglikfIKSUBO/M9W2OfEjn7cmqoAiCgmv9lyACTUacZwBz0ITnJ2NqjU8Tx0DHL88g==
+ dependencies:
+ "@jridgewell/resolve-uri" "^3.0.3"
+ "@jridgewell/sourcemap-codec" "^1.4.10"
+
+"@leichtgewicht/ip-codec@^2.0.1":
+ version "2.0.4"
+ resolved "https://registry.yarnpkg.com/@leichtgewicht/ip-codec/-/ip-codec-2.0.4.tgz#b2ac626d6cb9c8718ab459166d4bb405b8ffa78b"
+ integrity sha512-Hcv+nVC0kZnQ3tD9GVu5xSMR4VVYOteQIr/hwFPVEvPdlXqgGEuRjiheChHgdM+JyqdgNcmzZOX/tnl0JOiI7A==
+
+"@mdx-js/mdx@^1.6.22":
+ version "1.6.22"
+ resolved "https://registry.yarnpkg.com/@mdx-js/mdx/-/mdx-1.6.22.tgz#8a723157bf90e78f17dc0f27995398e6c731f1ba"
+ integrity sha512-AMxuLxPz2j5/6TpF/XSdKpQP1NlG0z11dFOlq+2IP/lSgl11GY8ji6S/rgsViN/L0BDvHvUMruRb7ub+24LUYA==
+ dependencies:
+ "@babel/core" "7.12.9"
+ "@babel/plugin-syntax-jsx" "7.12.1"
+ "@babel/plugin-syntax-object-rest-spread" "7.8.3"
+ "@mdx-js/util" "1.6.22"
+ babel-plugin-apply-mdx-type-prop "1.6.22"
+ babel-plugin-extract-import-names "1.6.22"
+ camelcase-css "2.0.1"
+ detab "2.0.4"
+ hast-util-raw "6.0.1"
+ lodash.uniq "4.5.0"
+ mdast-util-to-hast "10.0.1"
+ remark-footnotes "2.0.0"
+ remark-mdx "1.6.22"
+ remark-parse "8.0.3"
+ remark-squeeze-paragraphs "4.0.0"
+ style-to-object "0.3.0"
+ unified "9.2.0"
+ unist-builder "2.0.3"
+ unist-util-visit "2.0.3"
+
+"@mdx-js/react@^1.6.22":
+ version "1.6.22"
+ resolved "https://registry.yarnpkg.com/@mdx-js/react/-/react-1.6.22.tgz#ae09b4744fddc74714ee9f9d6f17a66e77c43573"
+ integrity sha512-TDoPum4SHdfPiGSAaRBw7ECyI8VaHpK8GJugbJIJuqyh6kzw9ZLJZW3HGL3NNrJGxcAixUvqROm+YuQOo5eXtg==
+
+"@mdx-js/util@1.6.22":
+ version "1.6.22"
+ resolved "https://registry.yarnpkg.com/@mdx-js/util/-/util-1.6.22.tgz#219dfd89ae5b97a8801f015323ffa4b62f45718b"
+ integrity sha512-H1rQc1ZOHANWBvPcW+JpGwr+juXSxM8Q8YCkm3GhZd8REu1fHR3z99CErO1p9pkcfcxZnMdIZdIsXkOHY0NilA==
+
+"@nodelib/fs.scandir@2.1.5":
+ version "2.1.5"
+ resolved "https://registry.yarnpkg.com/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz#7619c2eb21b25483f6d167548b4cfd5a7488c3d5"
+ integrity sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==
+ dependencies:
+ "@nodelib/fs.stat" "2.0.5"
+ run-parallel "^1.1.9"
+
+"@nodelib/fs.stat@2.0.5", "@nodelib/fs.stat@^2.0.2":
+ version "2.0.5"
+ resolved "https://registry.yarnpkg.com/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz#5bd262af94e9d25bd1e71b05deed44876a222e8b"
+ integrity sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==
+
+"@nodelib/fs.walk@^1.2.3":
+ version "1.2.8"
+ resolved "https://registry.yarnpkg.com/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz#e95737e8bb6746ddedf69c556953494f196fe69a"
+ integrity sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==
+ dependencies:
+ "@nodelib/fs.scandir" "2.1.5"
+ fastq "^1.6.0"
+
+"@polka/url@^1.0.0-next.20":
+ version "1.0.0-next.21"
+ resolved "https://registry.yarnpkg.com/@polka/url/-/url-1.0.0-next.21.tgz#5de5a2385a35309427f6011992b544514d559aa1"
+ integrity sha512-a5Sab1C4/icpTZVzZc5Ghpz88yQtGOyNqYXcZgOssB2uuAr+wF/MvN6bgtW32q7HHrvBki+BsZ0OuNv6EV3K9g==
+
+"@sideway/address@^4.1.3":
+ version "4.1.4"
+ resolved "https://registry.yarnpkg.com/@sideway/address/-/address-4.1.4.tgz#03dccebc6ea47fdc226f7d3d1ad512955d4783f0"
+ integrity sha512-7vwq+rOHVWjyXxVlR76Agnvhy8I9rpzjosTESvmhNeXOXdZZB15Fl+TI9x1SiHZH5Jv2wTGduSxFDIaq0m3DUw==
+ dependencies:
+ "@hapi/hoek" "^9.0.0"
+
+"@sideway/formula@^3.0.0":
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/@sideway/formula/-/formula-3.0.0.tgz#fe158aee32e6bd5de85044be615bc08478a0a13c"
+ integrity sha512-vHe7wZ4NOXVfkoRb8T5otiENVlT7a3IAiw7H5M2+GO+9CDgcVUUsX1zalAztCmwyOr2RUTGJdgB+ZvSVqmdHmg==
+
+"@sideway/pinpoint@^2.0.0":
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/@sideway/pinpoint/-/pinpoint-2.0.0.tgz#cff8ffadc372ad29fd3f78277aeb29e632cc70df"
+ integrity sha512-RNiOoTPkptFtSVzQevY/yWtZwf/RxyVnPy/OcA9HBM3MlGDnBEYL5B41H0MTn0Uec8Hi+2qUtTfG2WWZBmMejQ==
+
+"@sindresorhus/is@^0.14.0":
+ version "0.14.0"
+ resolved "https://registry.yarnpkg.com/@sindresorhus/is/-/is-0.14.0.tgz#9fb3a3cf3132328151f353de4632e01e52102bea"
+ integrity sha512-9NET910DNaIPngYnLLPeg+Ogzqsi9uM4mSboU5y6p8S5DzMTVEsJZrawi+BoDNUVBa2DhJqQYUFvMDfgU062LQ==
+
+"@slorber/static-site-generator-webpack-plugin@^4.0.7":
+ version "4.0.7"
+ resolved "https://registry.yarnpkg.com/@slorber/static-site-generator-webpack-plugin/-/static-site-generator-webpack-plugin-4.0.7.tgz#fc1678bddefab014e2145cbe25b3ce4e1cfc36f3"
+ integrity sha512-Ug7x6z5lwrz0WqdnNFOMYrDQNTPAprvHLSh6+/fmml3qUiz6l5eq+2MzLKWtn/q5K5NpSiFsZTP/fck/3vjSxA==
+ dependencies:
+ eval "^0.1.8"
+ p-map "^4.0.0"
+ webpack-sources "^3.2.2"
+
+"@svgr/babel-plugin-add-jsx-attribute@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-add-jsx-attribute/-/babel-plugin-add-jsx-attribute-6.3.1.tgz#b9a5d84902be75a05ede92e70b338d28ab63fa74"
+ integrity sha512-jDBKArXYO1u0B1dmd2Nf8Oy6aTF5vLDfLoO9Oon/GLkqZ/NiggYWZA+a2HpUMH4ITwNqS3z43k8LWApB8S583w==
+
+"@svgr/babel-plugin-remove-jsx-attribute@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-remove-jsx-attribute/-/babel-plugin-remove-jsx-attribute-6.3.1.tgz#4877995452efc997b36777abe1fde9705ef78e8b"
+ integrity sha512-dQzyJ4prwjcFd929T43Z8vSYiTlTu8eafV40Z2gO7zy/SV5GT+ogxRJRBIKWomPBOiaVXFg3jY4S5hyEN3IBjQ==
+
+"@svgr/babel-plugin-remove-jsx-empty-expression@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-remove-jsx-empty-expression/-/babel-plugin-remove-jsx-empty-expression-6.3.1.tgz#2d67a0e92904c9be149a5b22d3a3797ce4d7b514"
+ integrity sha512-HBOUc1XwSU67fU26V5Sfb8MQsT0HvUyxru7d0oBJ4rA2s4HW3PhyAPC7fV/mdsSGpAvOdd8Wpvkjsr0fWPUO7A==
+
+"@svgr/babel-plugin-replace-jsx-attribute-value@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-replace-jsx-attribute-value/-/babel-plugin-replace-jsx-attribute-value-6.3.1.tgz#306f5247139c53af70d1778f2719647c747998ee"
+ integrity sha512-C12e6aN4BXAolRrI601gPn5MDFCRHO7C4TM8Kks+rDtl8eEq+NN1sak0eAzJu363x3TmHXdZn7+Efd2nr9I5dA==
+
+"@svgr/babel-plugin-svg-dynamic-title@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-svg-dynamic-title/-/babel-plugin-svg-dynamic-title-6.3.1.tgz#6ce26d34cbc93eb81737ef528528907c292e7aa2"
+ integrity sha512-6NU55Mmh3M5u2CfCCt6TX29/pPneutrkJnnDCHbKZnjukZmmgUAZLtZ2g6ZoSPdarowaQmAiBRgAHqHmG0vuqA==
+
+"@svgr/babel-plugin-svg-em-dimensions@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-svg-em-dimensions/-/babel-plugin-svg-em-dimensions-6.3.1.tgz#5ade2a724b290873c30529d1d8cd23523856287a"
+ integrity sha512-HV1NGHYTTe1vCNKlBgq/gKuCSfaRlKcHIADn7P8w8U3Zvujdw1rmusutghJ1pZJV7pDt3Gt8ws+SVrqHnBO/Qw==
+
+"@svgr/babel-plugin-transform-react-native-svg@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-transform-react-native-svg/-/babel-plugin-transform-react-native-svg-6.3.1.tgz#d654f509d692c3a09dfb475757a44bd9f6ad7ddf"
+ integrity sha512-2wZhSHvTolFNeKDAN/ZmIeSz2O9JSw72XD+o2bNp2QAaWqa8KGpn5Yk5WHso6xqfSAiRzAE+GXlsrBO4UP9LLw==
+
+"@svgr/babel-plugin-transform-svg-component@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-plugin-transform-svg-component/-/babel-plugin-transform-svg-component-6.3.1.tgz#21a285dbffdce9567c437ebf0d081bf9210807e6"
+ integrity sha512-cZ8Tr6ZAWNUFfDeCKn/pGi976iWSkS8ijmEYKosP+6ktdZ7lW9HVLHojyusPw3w0j8PI4VBeWAXAmi/2G7owxw==
+
+"@svgr/babel-preset@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/babel-preset/-/babel-preset-6.3.1.tgz#8bd1ead79637d395e9362b01dd37cfd59702e152"
+ integrity sha512-tQtWtzuMMQ3opH7je+MpwfuRA1Hf3cKdSgTtAYwOBDfmhabP7rcTfBi3E7V3MuwJNy/Y02/7/RutvwS1W4Qv9g==
+ dependencies:
+ "@svgr/babel-plugin-add-jsx-attribute" "^6.3.1"
+ "@svgr/babel-plugin-remove-jsx-attribute" "^6.3.1"
+ "@svgr/babel-plugin-remove-jsx-empty-expression" "^6.3.1"
+ "@svgr/babel-plugin-replace-jsx-attribute-value" "^6.3.1"
+ "@svgr/babel-plugin-svg-dynamic-title" "^6.3.1"
+ "@svgr/babel-plugin-svg-em-dimensions" "^6.3.1"
+ "@svgr/babel-plugin-transform-react-native-svg" "^6.3.1"
+ "@svgr/babel-plugin-transform-svg-component" "^6.3.1"
+
+"@svgr/core@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/core/-/core-6.3.1.tgz#752adf49d8d5473b15d76ca741961de093f715bd"
+ integrity sha512-Sm3/7OdXbQreemf9aO25keerZSbnKMpGEfmH90EyYpj1e8wMD4TuwJIb3THDSgRMWk1kYJfSRulELBy4gVgZUA==
+ dependencies:
+ "@svgr/plugin-jsx" "^6.3.1"
+ camelcase "^6.2.0"
+ cosmiconfig "^7.0.1"
+
+"@svgr/hast-util-to-babel-ast@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/hast-util-to-babel-ast/-/hast-util-to-babel-ast-6.3.1.tgz#59614e24d2a4a28010e02089213b3448d905769d"
+ integrity sha512-NgyCbiTQIwe3wHe/VWOUjyxmpUmsrBjdoIxKpXt3Nqc3TN30BpJG22OxBvVzsAh9jqep0w0/h8Ywvdk3D9niNQ==
+ dependencies:
+ "@babel/types" "^7.18.4"
+ entities "^4.3.0"
+
+"@svgr/plugin-jsx@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/plugin-jsx/-/plugin-jsx-6.3.1.tgz#de7b2de824296b836d6b874d498377896e367f50"
+ integrity sha512-r9+0mYG3hD4nNtUgsTXWGYJomv/bNd7kC16zvsM70I/bGeoCi/3lhTmYqeN6ChWX317OtQCSZZbH4wq9WwoXbw==
+ dependencies:
+ "@babel/core" "^7.18.5"
+ "@svgr/babel-preset" "^6.3.1"
+ "@svgr/hast-util-to-babel-ast" "^6.3.1"
+ svg-parser "^2.0.4"
+
+"@svgr/plugin-svgo@^6.3.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/plugin-svgo/-/plugin-svgo-6.3.1.tgz#3c1ff2efaed10e5c5d35a6cae7bacaedc18b5d4a"
+ integrity sha512-yJIjTDKPYqzFVjmsbH5EdIwEsmKxjxdXSGJVLeUgwZOZPAkNQmD1v7LDbOdOKbR44FG8465Du+zWPdbYGnbMbw==
+ dependencies:
+ cosmiconfig "^7.0.1"
+ deepmerge "^4.2.2"
+ svgo "^2.8.0"
+
+"@svgr/webpack@^6.2.1":
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/@svgr/webpack/-/webpack-6.3.1.tgz#001d03236ebb03bf47c0a4b92d5423e05095ebe6"
+ integrity sha512-eODxwIUShLxSMaRjzJtrj9wg89D75JLczvWg9SaB5W+OtVTkiC1vdGd8+t+pf5fTlBOy4RRXAq7x1E3DUl3D0A==
+ dependencies:
+ "@babel/core" "^7.18.5"
+ "@babel/plugin-transform-react-constant-elements" "^7.17.12"
+ "@babel/preset-env" "^7.18.2"
+ "@babel/preset-react" "^7.17.12"
+ "@babel/preset-typescript" "^7.17.12"
+ "@svgr/core" "^6.3.1"
+ "@svgr/plugin-jsx" "^6.3.1"
+ "@svgr/plugin-svgo" "^6.3.1"
+
+"@szmarczak/http-timer@^1.1.2":
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/@szmarczak/http-timer/-/http-timer-1.1.2.tgz#b1665e2c461a2cd92f4c1bbf50d5454de0d4b421"
+ integrity sha512-XIB2XbzHTN6ieIjfIMV9hlVcfPU26s2vafYWQcZHWXHOxiaRZYEDKEwdl129Zyg50+foYV2jCgtrqSA6qNuNSA==
+ dependencies:
+ defer-to-connect "^1.0.1"
+
+"@trysound/sax@0.2.0":
+ version "0.2.0"
+ resolved "https://registry.yarnpkg.com/@trysound/sax/-/sax-0.2.0.tgz#cccaab758af56761eb7bf37af6f03f326dd798ad"
+ integrity sha512-L7z9BgrNEcYyUYtF+HaEfiS5ebkh9jXqbszz7pC0hRBPaatV0XjSD3+eHrpqFemQfgwiFF0QPIarnIihIDn7OA==
+
+"@tsconfig/docusaurus@^1.0.5":
+ version "1.0.6"
+ resolved "https://registry.yarnpkg.com/@tsconfig/docusaurus/-/docusaurus-1.0.6.tgz#7305a7fa590decc0d5968500234e95fd68788978"
+ integrity sha512-1QxDaP54hpzM6bq9E+yFEo4F9WbWHhsDe4vktZXF/iDlc9FqGr9qlg+3X/nuKQXx8QxHV7ue8NXFazzajsxFBA==
+
+"@tsconfig/node10@^1.0.7":
+ version "1.0.9"
+ resolved "https://registry.yarnpkg.com/@tsconfig/node10/-/node10-1.0.9.tgz#df4907fc07a886922637b15e02d4cebc4c0021b2"
+ integrity sha512-jNsYVVxU8v5g43Erja32laIDHXeoNvFEpX33OK4d6hljo3jDhCBDhx5dhCCTMWUojscpAagGiRkBKxpdl9fxqA==
+
+"@tsconfig/node12@^1.0.7":
+ version "1.0.11"
+ resolved "https://registry.yarnpkg.com/@tsconfig/node12/-/node12-1.0.11.tgz#ee3def1f27d9ed66dac6e46a295cffb0152e058d"
+ integrity sha512-cqefuRsh12pWyGsIoBKJA9luFu3mRxCA+ORZvA4ktLSzIuCUtWVxGIuXigEwO5/ywWFMZ2QEGKWvkZG1zDMTag==
+
+"@tsconfig/node14@^1.0.0":
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/@tsconfig/node14/-/node14-1.0.3.tgz#e4386316284f00b98435bf40f72f75a09dabf6c1"
+ integrity sha512-ysT8mhdixWK6Hw3i1V2AeRqZ5WfXg1G43mqoYlM2nc6388Fq5jcXyr5mRsqViLx/GJYdoL0bfXD8nmF+Zn/Iow==
+
+"@tsconfig/node16@^1.0.2":
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/@tsconfig/node16/-/node16-1.0.3.tgz#472eaab5f15c1ffdd7f8628bd4c4f753995ec79e"
+ integrity sha512-yOlFc+7UtL/89t2ZhjPvvB/DeAr3r+Dq58IgzsFkOAvVC6NMJXmCGjbptdXdR9qsX7pKcTL+s87FtYREi2dEEQ==
+
+"@types/body-parser@*":
+ version "1.19.2"
+ resolved "https://registry.yarnpkg.com/@types/body-parser/-/body-parser-1.19.2.tgz#aea2059e28b7658639081347ac4fab3de166e6f0"
+ integrity sha512-ALYone6pm6QmwZoAgeyNksccT9Q4AWZQ6PvfwR37GT6r6FWUPguq6sUmNGSMV2Wr761oQoBxwGGa6DR5o1DC9g==
+ dependencies:
+ "@types/connect" "*"
+ "@types/node" "*"
+
+"@types/bonjour@^3.5.9":
+ version "3.5.10"
+ resolved "https://registry.yarnpkg.com/@types/bonjour/-/bonjour-3.5.10.tgz#0f6aadfe00ea414edc86f5d106357cda9701e275"
+ integrity sha512-p7ienRMiS41Nu2/igbJxxLDWrSZ0WxM8UQgCeO9KhoVF7cOVFkrKsiDr1EsJIla8vV3oEEjGcz11jc5yimhzZw==
+ dependencies:
+ "@types/node" "*"
+
+"@types/connect-history-api-fallback@^1.3.5":
+ version "1.3.5"
+ resolved "https://registry.yarnpkg.com/@types/connect-history-api-fallback/-/connect-history-api-fallback-1.3.5.tgz#d1f7a8a09d0ed5a57aee5ae9c18ab9b803205dae"
+ integrity sha512-h8QJa8xSb1WD4fpKBDcATDNGXghFj6/3GRWG6dhmRcu0RX1Ubasur2Uvx5aeEwlf0MwblEC2bMzzMQntxnw/Cw==
+ dependencies:
+ "@types/express-serve-static-core" "*"
+ "@types/node" "*"
+
+"@types/connect@*":
+ version "3.4.35"
+ resolved "https://registry.yarnpkg.com/@types/connect/-/connect-3.4.35.tgz#5fcf6ae445e4021d1fc2219a4873cc73a3bb2ad1"
+ integrity sha512-cdeYyv4KWoEgpBISTxWvqYsVy444DOqehiF3fM3ne10AmJ62RSyNkUnxMJXHQWRQQX2eR94m5y1IZyDwBjV9FQ==
+ dependencies:
+ "@types/node" "*"
+
+"@types/eslint-scope@^3.7.3":
+ version "3.7.4"
+ resolved "https://registry.yarnpkg.com/@types/eslint-scope/-/eslint-scope-3.7.4.tgz#37fc1223f0786c39627068a12e94d6e6fc61de16"
+ integrity sha512-9K4zoImiZc3HlIp6AVUDE4CWYx22a+lhSZMYNpbjW04+YF0KWj4pJXnEMjdnFTiQibFFmElcsasJXDbdI/EPhA==
+ dependencies:
+ "@types/eslint" "*"
+ "@types/estree" "*"
+
+"@types/eslint@*":
+ version "8.4.6"
+ resolved "https://registry.yarnpkg.com/@types/eslint/-/eslint-8.4.6.tgz#7976f054c1bccfcf514bff0564c0c41df5c08207"
+ integrity sha512-/fqTbjxyFUaYNO7VcW5g+4npmqVACz1bB7RTHYuLj+PRjw9hrCwrUXVQFpChUS0JsyEFvMZ7U/PfmvWgxJhI9g==
+ dependencies:
+ "@types/estree" "*"
+ "@types/json-schema" "*"
+
+"@types/estree@*":
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/@types/estree/-/estree-1.0.0.tgz#5fb2e536c1ae9bf35366eed879e827fa59ca41c2"
+ integrity sha512-WulqXMDUTYAXCjZnk6JtIHPigp55cVtDgDrO2gHRwhyJto21+1zbVCtOYB2L1F9w4qCQ0rOGWBnBe0FNTiEJIQ==
+
+"@types/estree@^0.0.51":
+ version "0.0.51"
+ resolved "https://registry.yarnpkg.com/@types/estree/-/estree-0.0.51.tgz#cfd70924a25a3fd32b218e5e420e6897e1ac4f40"
+ integrity sha512-CuPgU6f3eT/XgKKPqKd/gLZV1Xmvf1a2R5POBOGQa6uv82xpls89HU5zKeVoyR8XzHd1RGNOlQlvUe3CFkjWNQ==
+
+"@types/express-serve-static-core@*", "@types/express-serve-static-core@^4.17.18":
+ version "4.17.31"
+ resolved "https://registry.yarnpkg.com/@types/express-serve-static-core/-/express-serve-static-core-4.17.31.tgz#a1139efeab4e7323834bb0226e62ac019f474b2f"
+ integrity sha512-DxMhY+NAsTwMMFHBTtJFNp5qiHKJ7TeqOo23zVEM9alT1Ml27Q3xcTH0xwxn7Q0BbMcVEJOs/7aQtUWupUQN3Q==
+ dependencies:
+ "@types/node" "*"
+ "@types/qs" "*"
+ "@types/range-parser" "*"
+
+"@types/express@*", "@types/express@^4.17.13":
+ version "4.17.14"
+ resolved "https://registry.yarnpkg.com/@types/express/-/express-4.17.14.tgz#143ea0557249bc1b3b54f15db4c81c3d4eb3569c"
+ integrity sha512-TEbt+vaPFQ+xpxFLFssxUDXj5cWCxZJjIcB7Yg0k0GMHGtgtQgpvx/MUQUeAkNbA9AAGrwkAsoeItdTgS7FMyg==
+ dependencies:
+ "@types/body-parser" "*"
+ "@types/express-serve-static-core" "^4.17.18"
+ "@types/qs" "*"
+ "@types/serve-static" "*"
+
+"@types/hast@^2.0.0":
+ version "2.3.4"
+ resolved "https://registry.yarnpkg.com/@types/hast/-/hast-2.3.4.tgz#8aa5ef92c117d20d974a82bdfb6a648b08c0bafc"
+ integrity sha512-wLEm0QvaoawEDoTRwzTXp4b4jpwiJDvR5KMnFnVodm3scufTlBOWRD6N1OBf9TZMhjlNsSfcO5V+7AF4+Vy+9g==
+ dependencies:
+ "@types/unist" "*"
+
+"@types/history@^4.7.11":
+ version "4.7.11"
+ resolved "https://registry.yarnpkg.com/@types/history/-/history-4.7.11.tgz#56588b17ae8f50c53983a524fc3cc47437969d64"
+ integrity sha512-qjDJRrmvBMiTx+jyLxvLfJU7UznFuokDv4f3WRuriHKERccVpFU+8XMQUAbDzoiJCsmexxRExQeMwwCdamSKDA==
+
+"@types/html-minifier-terser@^6.0.0":
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/@types/html-minifier-terser/-/html-minifier-terser-6.1.0.tgz#4fc33a00c1d0c16987b1a20cf92d20614c55ac35"
+ integrity sha512-oh/6byDPnL1zeNXFrDXFLyZjkr1MsBG667IM792caf1L2UPOOMf65NFzjUH/ltyfwjAGfs1rsX1eftK0jC/KIg==
+
+"@types/http-proxy@^1.17.8":
+ version "1.17.9"
+ resolved "https://registry.yarnpkg.com/@types/http-proxy/-/http-proxy-1.17.9.tgz#7f0e7931343761efde1e2bf48c40f02f3f75705a"
+ integrity sha512-QsbSjA/fSk7xB+UXlCT3wHBy5ai9wOcNDWwZAtud+jXhwOM3l+EYZh8Lng4+/6n8uar0J7xILzqftJdJ/Wdfkw==
+ dependencies:
+ "@types/node" "*"
+
+"@types/json-schema@*", "@types/json-schema@^7.0.4", "@types/json-schema@^7.0.5", "@types/json-schema@^7.0.8", "@types/json-schema@^7.0.9":
+ version "7.0.11"
+ resolved "https://registry.yarnpkg.com/@types/json-schema/-/json-schema-7.0.11.tgz#d421b6c527a3037f7c84433fd2c4229e016863d3"
+ integrity sha512-wOuvG1SN4Us4rez+tylwwwCV1psiNVOkJeM3AUWUNWg/jDQY2+HE/444y5gc+jBmRqASOm2Oeh5c1axHobwRKQ==
+
+"@types/mdast@^3.0.0":
+ version "3.0.10"
+ resolved "https://registry.yarnpkg.com/@types/mdast/-/mdast-3.0.10.tgz#4724244a82a4598884cbbe9bcfd73dff927ee8af"
+ integrity sha512-W864tg/Osz1+9f4lrGTZpCSO5/z4608eUp19tbozkq2HJK6i3z1kT0H9tlADXuYIb1YYOBByU4Jsqkk75q48qA==
+ dependencies:
+ "@types/unist" "*"
+
+"@types/mime@*":
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/@types/mime/-/mime-3.0.1.tgz#5f8f2bca0a5863cb69bc0b0acd88c96cb1d4ae10"
+ integrity sha512-Y4XFY5VJAuw0FgAqPNd6NNoV44jbq9Bz2L7Rh/J6jLTiHBSBJa9fxqQIvkIld4GsoDOcCbvzOUAbLPsSKKg+uA==
+
+"@types/minimist@^1.2.0":
+ version "1.2.2"
+ resolved "https://registry.yarnpkg.com/@types/minimist/-/minimist-1.2.2.tgz#ee771e2ba4b3dc5b372935d549fd9617bf345b8c"
+ integrity sha512-jhuKLIRrhvCPLqwPcx6INqmKeiA5EWrsCOPhrlFSrbrmU4ZMPjj5Ul/oLCMDO98XRUIwVm78xICz4EPCektzeQ==
+
+"@types/node@*":
+ version "18.7.18"
+ resolved "https://registry.yarnpkg.com/@types/node/-/node-18.7.18.tgz#633184f55c322e4fb08612307c274ee6d5ed3154"
+ integrity sha512-m+6nTEOadJZuTPkKR/SYK3A2d7FZrgElol9UP1Kae90VVU4a6mxnPuLiIW1m4Cq4gZ/nWb9GrdVXJCoCazDAbg==
+
+"@types/node@^14.0.0":
+ version "14.18.29"
+ resolved "https://registry.yarnpkg.com/@types/node/-/node-14.18.29.tgz#a0c58d67a42f8953c13d32f0acda47ed26dfce40"
+ integrity sha512-LhF+9fbIX4iPzhsRLpK5H7iPdvW8L4IwGciXQIOEcuF62+9nw/VQVsOViAOOGxY3OlOKGLFv0sWwJXdwQeTn6A==
+
+"@types/node@^17.0.5":
+ version "17.0.45"
+ resolved "https://registry.yarnpkg.com/@types/node/-/node-17.0.45.tgz#2c0fafd78705e7a18b7906b5201a522719dc5190"
+ integrity sha512-w+tIMs3rq2afQdsPJlODhoUEKzFP1ayaoyl1CcnwtIlsVe7K7bA1NGm4s3PraqTLlXnbIN84zuBlxBWo1u9BLw==
+
+"@types/normalize-package-data@^2.4.0":
+ version "2.4.1"
+ resolved "https://registry.yarnpkg.com/@types/normalize-package-data/-/normalize-package-data-2.4.1.tgz#d3357479a0fdfdd5907fe67e17e0a85c906e1301"
+ integrity sha512-Gj7cI7z+98M282Tqmp2K5EIsoouUEzbBJhQQzDE3jSIRk6r9gsz0oUokqIUR4u1R3dMHo0pDHM7sNOHyhulypw==
+
+"@types/parse-json@^4.0.0":
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/@types/parse-json/-/parse-json-4.0.0.tgz#2f8bb441434d163b35fb8ffdccd7138927ffb8c0"
+ integrity sha512-//oorEZjL6sbPcKUaCdIGlIUeH26mgzimjBB77G6XRgnDl/L5wOnpyBGRe/Mmf5CVW3PwEBE1NjiMZ/ssFh4wA==
+
+"@types/parse5@^5.0.0":
+ version "5.0.3"
+ resolved "https://registry.yarnpkg.com/@types/parse5/-/parse5-5.0.3.tgz#e7b5aebbac150f8b5fdd4a46e7f0bd8e65e19109"
+ integrity sha512-kUNnecmtkunAoQ3CnjmMkzNU/gtxG8guhi+Fk2U/kOpIKjIMKnXGp4IJCgQJrXSgMsWYimYG4TGjz/UzbGEBTw==
+
+"@types/prop-types@*":
+ version "15.7.5"
+ resolved "https://registry.yarnpkg.com/@types/prop-types/-/prop-types-15.7.5.tgz#5f19d2b85a98e9558036f6a3cacc8819420f05cf"
+ integrity sha512-JCB8C6SnDoQf0cNycqd/35A7MjcnK+ZTqE7judS6o7utxUCg6imJg3QK2qzHKszlTjcj2cn+NwMB2i96ubpj7w==
+
+"@types/qs@*":
+ version "6.9.7"
+ resolved "https://registry.yarnpkg.com/@types/qs/-/qs-6.9.7.tgz#63bb7d067db107cc1e457c303bc25d511febf6cb"
+ integrity sha512-FGa1F62FT09qcrueBA6qYTrJPVDzah9a+493+o2PCXsesWHIn27G98TsSMs3WPNbZIEj4+VJf6saSFpvD+3Zsw==
+
+"@types/range-parser@*":
+ version "1.2.4"
+ resolved "https://registry.yarnpkg.com/@types/range-parser/-/range-parser-1.2.4.tgz#cd667bcfdd025213aafb7ca5915a932590acdcdc"
+ integrity sha512-EEhsLsD6UsDM1yFhAvy0Cjr6VwmpMWqFBCb9w07wVugF7w9nfajxLuVmngTIpgS6svCnm6Vaw+MZhoDCKnOfsw==
+
+"@types/react-router-config@*", "@types/react-router-config@^5.0.6":
+ version "5.0.6"
+ resolved "https://registry.yarnpkg.com/@types/react-router-config/-/react-router-config-5.0.6.tgz#87c5c57e72d241db900d9734512c50ccec062451"
+ integrity sha512-db1mx37a1EJDf1XeX8jJN7R3PZABmJQXR8r28yUjVMFSjkmnQo6X6pOEEmNl+Tp2gYQOGPdYbFIipBtdElZ3Yg==
+ dependencies:
+ "@types/history" "^4.7.11"
+ "@types/react" "*"
+ "@types/react-router" "*"
+
+"@types/react-router-dom@*":
+ version "5.3.3"
+ resolved "https://registry.yarnpkg.com/@types/react-router-dom/-/react-router-dom-5.3.3.tgz#e9d6b4a66fcdbd651a5f106c2656a30088cc1e83"
+ integrity sha512-kpqnYK4wcdm5UaWI3fLcELopqLrHgLqNsdpHauzlQktfkHL3npOSwtj1Uz9oKBAzs7lFtVkV8j83voAz2D8fhw==
+ dependencies:
+ "@types/history" "^4.7.11"
+ "@types/react" "*"
+ "@types/react-router" "*"
+
+"@types/react-router@*":
+ version "5.1.19"
+ resolved "https://registry.yarnpkg.com/@types/react-router/-/react-router-5.1.19.tgz#9b404246fba7f91474d7008a3d48c17b6e075ad6"
+ integrity sha512-Fv/5kb2STAEMT3wHzdKQK2z8xKq38EDIGVrutYLmQVVLe+4orDFquU52hQrULnEHinMKv9FSA6lf9+uNT1ITtA==
+ dependencies:
+ "@types/history" "^4.7.11"
+ "@types/react" "*"
+
+"@types/react@*":
+ version "18.0.20"
+ resolved "https://registry.yarnpkg.com/@types/react/-/react-18.0.20.tgz#e4c36be3a55eb5b456ecf501bd4a00fd4fd0c9ab"
+ integrity sha512-MWul1teSPxujEHVwZl4a5HxQ9vVNsjTchVA+xRqv/VYGCuKGAU6UhfrTdF5aBefwD1BHUD8i/zq+O/vyCm/FrA==
+ dependencies:
+ "@types/prop-types" "*"
+ "@types/scheduler" "*"
+ csstype "^3.0.2"
+
+"@types/retry@0.12.0":
+ version "0.12.0"
+ resolved "https://registry.yarnpkg.com/@types/retry/-/retry-0.12.0.tgz#2b35eccfcee7d38cd72ad99232fbd58bffb3c84d"
+ integrity sha512-wWKOClTTiizcZhXnPY4wikVAwmdYHp8q6DmC+EJUzAMsycb7HB32Kh9RN4+0gExjmPmZSAQjgURXIGATPegAvA==
+
+"@types/sax@^1.2.1":
+ version "1.2.4"
+ resolved "https://registry.yarnpkg.com/@types/sax/-/sax-1.2.4.tgz#8221affa7f4f3cb21abd22f244cfabfa63e6a69e"
+ integrity sha512-pSAff4IAxJjfAXUG6tFkO7dsSbTmf8CtUpfhhZ5VhkRpC4628tJhh3+V6H1E+/Gs9piSzYKT5yzHO5M4GG9jkw==
+ dependencies:
+ "@types/node" "*"
+
+"@types/scheduler@*":
+ version "0.16.2"
+ resolved "https://registry.yarnpkg.com/@types/scheduler/-/scheduler-0.16.2.tgz#1a62f89525723dde24ba1b01b092bf5df8ad4d39"
+ integrity sha512-hppQEBDmlwhFAXKJX2KnWLYu5yMfi91yazPb2l+lbJiwW+wdo1gNeRA+3RgNSO39WYX2euey41KEwnqesU2Jew==
+
+"@types/serve-index@^1.9.1":
+ version "1.9.1"
+ resolved "https://registry.yarnpkg.com/@types/serve-index/-/serve-index-1.9.1.tgz#1b5e85370a192c01ec6cec4735cf2917337a6278"
+ integrity sha512-d/Hs3nWDxNL2xAczmOVZNj92YZCS6RGxfBPjKzuu/XirCgXdpKEb88dYNbrYGint6IVWLNP+yonwVAuRC0T2Dg==
+ dependencies:
+ "@types/express" "*"
+
+"@types/serve-static@*", "@types/serve-static@^1.13.10":
+ version "1.15.0"
+ resolved "https://registry.yarnpkg.com/@types/serve-static/-/serve-static-1.15.0.tgz#c7930ff61afb334e121a9da780aac0d9b8f34155"
+ integrity sha512-z5xyF6uh8CbjAu9760KDKsH2FcDxZ2tFCsA4HIMWE6IkiYMXfVoa+4f9KX+FN0ZLsaMw1WNG2ETLA6N+/YA+cg==
+ dependencies:
+ "@types/mime" "*"
+ "@types/node" "*"
+
+"@types/sockjs@^0.3.33":
+ version "0.3.33"
+ resolved "https://registry.yarnpkg.com/@types/sockjs/-/sockjs-0.3.33.tgz#570d3a0b99ac995360e3136fd6045113b1bd236f"
+ integrity sha512-f0KEEe05NvUnat+boPTZ0dgaLZ4SfSouXUgv5noUiefG2ajgKjmETo9ZJyuqsl7dfl2aHlLJUiki6B4ZYldiiw==
+ dependencies:
+ "@types/node" "*"
+
+"@types/unist@*", "@types/unist@^2.0.0", "@types/unist@^2.0.2", "@types/unist@^2.0.3":
+ version "2.0.6"
+ resolved "https://registry.yarnpkg.com/@types/unist/-/unist-2.0.6.tgz#250a7b16c3b91f672a24552ec64678eeb1d3a08d"
+ integrity sha512-PBjIUxZHOuj0R15/xuwJYjFi+KZdNFrehocChv4g5hu6aFroHue8m0lBP0POdK2nKzbw0cgV1mws8+V/JAcEkQ==
+
+"@types/ws@^8.5.1":
+ version "8.5.3"
+ resolved "https://registry.yarnpkg.com/@types/ws/-/ws-8.5.3.tgz#7d25a1ffbecd3c4f2d35068d0b283c037003274d"
+ integrity sha512-6YOoWjruKj1uLf3INHH7D3qTXwFfEsg1kf3c0uDdSBJwfa/llkwIjrAGV7j7mVgGNbzTQ3HiHKKDXl6bJPD97w==
+ dependencies:
+ "@types/node" "*"
+
+"@webassemblyjs/ast@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/ast/-/ast-1.11.1.tgz#2bfd767eae1a6996f432ff7e8d7fc75679c0b6a7"
+ integrity sha512-ukBh14qFLjxTQNTXocdyksN5QdM28S1CxHt2rdskFyL+xFV7VremuBLVbmCePj+URalXBENx/9Lm7lnhihtCSw==
+ dependencies:
+ "@webassemblyjs/helper-numbers" "1.11.1"
+ "@webassemblyjs/helper-wasm-bytecode" "1.11.1"
+
+"@webassemblyjs/floating-point-hex-parser@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.11.1.tgz#f6c61a705f0fd7a6aecaa4e8198f23d9dc179e4f"
+ integrity sha512-iGRfyc5Bq+NnNuX8b5hwBrRjzf0ocrJPI6GWFodBFzmFnyvrQ83SHKhmilCU/8Jv67i4GJZBMhEzltxzcNagtQ==
+
+"@webassemblyjs/helper-api-error@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/helper-api-error/-/helper-api-error-1.11.1.tgz#1a63192d8788e5c012800ba6a7a46c705288fd16"
+ integrity sha512-RlhS8CBCXfRUR/cwo2ho9bkheSXG0+NwooXcc3PAILALf2QLdFyj7KGsKRbVc95hZnhnERon4kW/D3SZpp6Tcg==
+
+"@webassemblyjs/helper-buffer@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/helper-buffer/-/helper-buffer-1.11.1.tgz#832a900eb444884cde9a7cad467f81500f5e5ab5"
+ integrity sha512-gwikF65aDNeeXa8JxXa2BAk+REjSyhrNC9ZwdT0f8jc4dQQeDQ7G4m0f2QCLPJiMTTO6wfDmRmj/pW0PsUvIcA==
+
+"@webassemblyjs/helper-numbers@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/helper-numbers/-/helper-numbers-1.11.1.tgz#64d81da219fbbba1e3bd1bfc74f6e8c4e10a62ae"
+ integrity sha512-vDkbxiB8zfnPdNK9Rajcey5C0w+QJugEglN0of+kmO8l7lDb77AnlKYQF7aarZuCrv+l0UvqL+68gSDr3k9LPQ==
+ dependencies:
+ "@webassemblyjs/floating-point-hex-parser" "1.11.1"
+ "@webassemblyjs/helper-api-error" "1.11.1"
+ "@xtuc/long" "4.2.2"
+
+"@webassemblyjs/helper-wasm-bytecode@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.11.1.tgz#f328241e41e7b199d0b20c18e88429c4433295e1"
+ integrity sha512-PvpoOGiJwXeTrSf/qfudJhwlvDQxFgelbMqtq52WWiXC6Xgg1IREdngmPN3bs4RoO83PnL/nFrxucXj1+BX62Q==
+
+"@webassemblyjs/helper-wasm-section@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.11.1.tgz#21ee065a7b635f319e738f0dd73bfbda281c097a"
+ integrity sha512-10P9No29rYX1j7F3EVPX3JvGPQPae+AomuSTPiF9eBQeChHI6iqjMIwR9JmOJXwpnn/oVGDk7I5IlskuMwU/pg==
+ dependencies:
+ "@webassemblyjs/ast" "1.11.1"
+ "@webassemblyjs/helper-buffer" "1.11.1"
+ "@webassemblyjs/helper-wasm-bytecode" "1.11.1"
+ "@webassemblyjs/wasm-gen" "1.11.1"
+
+"@webassemblyjs/ieee754@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/ieee754/-/ieee754-1.11.1.tgz#963929e9bbd05709e7e12243a099180812992614"
+ integrity sha512-hJ87QIPtAMKbFq6CGTkZYJivEwZDbQUgYd3qKSadTNOhVY7p+gfP6Sr0lLRVTaG1JjFj+r3YchoqRYxNH3M0GQ==
+ dependencies:
+ "@xtuc/ieee754" "^1.2.0"
+
+"@webassemblyjs/leb128@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/leb128/-/leb128-1.11.1.tgz#ce814b45574e93d76bae1fb2644ab9cdd9527aa5"
+ integrity sha512-BJ2P0hNZ0u+Th1YZXJpzW6miwqQUGcIHT1G/sf72gLVD9DZ5AdYTqPNbHZh6K1M5VmKvFXwGSWZADz+qBWxeRw==
+ dependencies:
+ "@xtuc/long" "4.2.2"
+
+"@webassemblyjs/utf8@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/utf8/-/utf8-1.11.1.tgz#d1f8b764369e7c6e6bae350e854dec9a59f0a3ff"
+ integrity sha512-9kqcxAEdMhiwQkHpkNiorZzqpGrodQQ2IGrHHxCy+Ozng0ofyMA0lTqiLkVs1uzTRejX+/O0EOT7KxqVPuXosQ==
+
+"@webassemblyjs/wasm-edit@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/wasm-edit/-/wasm-edit-1.11.1.tgz#ad206ebf4bf95a058ce9880a8c092c5dec8193d6"
+ integrity sha512-g+RsupUC1aTHfR8CDgnsVRVZFJqdkFHpsHMfJuWQzWU3tvnLC07UqHICfP+4XyL2tnr1amvl1Sdp06TnYCmVkA==
+ dependencies:
+ "@webassemblyjs/ast" "1.11.1"
+ "@webassemblyjs/helper-buffer" "1.11.1"
+ "@webassemblyjs/helper-wasm-bytecode" "1.11.1"
+ "@webassemblyjs/helper-wasm-section" "1.11.1"
+ "@webassemblyjs/wasm-gen" "1.11.1"
+ "@webassemblyjs/wasm-opt" "1.11.1"
+ "@webassemblyjs/wasm-parser" "1.11.1"
+ "@webassemblyjs/wast-printer" "1.11.1"
+
+"@webassemblyjs/wasm-gen@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/wasm-gen/-/wasm-gen-1.11.1.tgz#86c5ea304849759b7d88c47a32f4f039ae3c8f76"
+ integrity sha512-F7QqKXwwNlMmsulj6+O7r4mmtAlCWfO/0HdgOxSklZfQcDu0TpLiD1mRt/zF25Bk59FIjEuGAIyn5ei4yMfLhA==
+ dependencies:
+ "@webassemblyjs/ast" "1.11.1"
+ "@webassemblyjs/helper-wasm-bytecode" "1.11.1"
+ "@webassemblyjs/ieee754" "1.11.1"
+ "@webassemblyjs/leb128" "1.11.1"
+ "@webassemblyjs/utf8" "1.11.1"
+
+"@webassemblyjs/wasm-opt@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/wasm-opt/-/wasm-opt-1.11.1.tgz#657b4c2202f4cf3b345f8a4c6461c8c2418985f2"
+ integrity sha512-VqnkNqnZlU5EB64pp1l7hdm3hmQw7Vgqa0KF/KCNO9sIpI6Fk6brDEiX+iCOYrvMuBWDws0NkTOxYEb85XQHHw==
+ dependencies:
+ "@webassemblyjs/ast" "1.11.1"
+ "@webassemblyjs/helper-buffer" "1.11.1"
+ "@webassemblyjs/wasm-gen" "1.11.1"
+ "@webassemblyjs/wasm-parser" "1.11.1"
+
+"@webassemblyjs/wasm-parser@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/wasm-parser/-/wasm-parser-1.11.1.tgz#86ca734534f417e9bd3c67c7a1c75d8be41fb199"
+ integrity sha512-rrBujw+dJu32gYB7/Lup6UhdkPx9S9SnobZzRVL7VcBH9Bt9bCBLEuX/YXOOtBsOZ4NQrRykKhffRWHvigQvOA==
+ dependencies:
+ "@webassemblyjs/ast" "1.11.1"
+ "@webassemblyjs/helper-api-error" "1.11.1"
+ "@webassemblyjs/helper-wasm-bytecode" "1.11.1"
+ "@webassemblyjs/ieee754" "1.11.1"
+ "@webassemblyjs/leb128" "1.11.1"
+ "@webassemblyjs/utf8" "1.11.1"
+
+"@webassemblyjs/wast-printer@1.11.1":
+ version "1.11.1"
+ resolved "https://registry.yarnpkg.com/@webassemblyjs/wast-printer/-/wast-printer-1.11.1.tgz#d0c73beda8eec5426f10ae8ef55cee5e7084c2f0"
+ integrity sha512-IQboUWM4eKzWW+N/jij2sRatKMh99QEelo3Eb2q0qXkvPRISAj8Qxtmw5itwqK+TTkBuUIE45AxYPToqPtL5gg==
+ dependencies:
+ "@webassemblyjs/ast" "1.11.1"
+ "@xtuc/long" "4.2.2"
+
+"@xtuc/ieee754@^1.2.0":
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/@xtuc/ieee754/-/ieee754-1.2.0.tgz#eef014a3145ae477a1cbc00cd1e552336dceb790"
+ integrity sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==
+
+"@xtuc/long@4.2.2":
+ version "4.2.2"
+ resolved "https://registry.yarnpkg.com/@xtuc/long/-/long-4.2.2.tgz#d291c6a4e97989b5c61d9acf396ae4fe133a718d"
+ integrity sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==
+
+JSONStream@^1.0.4:
+ version "1.3.5"
+ resolved "https://registry.yarnpkg.com/JSONStream/-/JSONStream-1.3.5.tgz#3208c1f08d3a4d99261ab64f92302bc15e111ca0"
+ integrity sha512-E+iruNOY8VV9s4JEbe1aNEm6MiszPRr/UfcHMz0TQh1BXSxHK+ASV1R6W4HpjBhSeS+54PIsAMCBmwD06LLsqQ==
+ dependencies:
+ jsonparse "^1.2.0"
+ through ">=2.2.7 <3"
+
+accepts@~1.3.4, accepts@~1.3.5, accepts@~1.3.8:
+ version "1.3.8"
+ resolved "https://registry.yarnpkg.com/accepts/-/accepts-1.3.8.tgz#0bf0be125b67014adcb0b0921e62db7bffe16b2e"
+ integrity sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==
+ dependencies:
+ mime-types "~2.1.34"
+ negotiator "0.6.3"
+
+acorn-import-assertions@^1.7.6:
+ version "1.8.0"
+ resolved "https://registry.yarnpkg.com/acorn-import-assertions/-/acorn-import-assertions-1.8.0.tgz#ba2b5939ce62c238db6d93d81c9b111b29b855e9"
+ integrity sha512-m7VZ3jwz4eK6A4Vtt8Ew1/mNbP24u0FhdyfA7fSvnJR6LMdfOYnmuIrrJAgrYfYJ10F/otaHTtrtrtmHdMNzEw==
+
+acorn-walk@^8.0.0, acorn-walk@^8.1.1:
+ version "8.2.0"
+ resolved "https://registry.yarnpkg.com/acorn-walk/-/acorn-walk-8.2.0.tgz#741210f2e2426454508853a2f44d0ab83b7f69c1"
+ integrity sha512-k+iyHEuPgSw6SbuDpGQM+06HQUa04DZ3o+F6CSzXMvvI5KMvnaEqXe+YVe555R9nn6GPt404fos4wcgpw12SDA==
+
+acorn@^8.0.4, acorn@^8.4.1, acorn@^8.5.0, acorn@^8.7.1:
+ version "8.8.0"
+ resolved "https://registry.yarnpkg.com/acorn/-/acorn-8.8.0.tgz#88c0187620435c7f6015803f5539dae05a9dbea8"
+ integrity sha512-QOxyigPVrpZ2GXT+PFyZTl6TtOFc5egxHIP9IlQ+RbupQuX4RkT/Bee4/kQuC02Xkzg84JcT7oLYtDIQxp+v7w==
+
+add-stream@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/add-stream/-/add-stream-1.0.0.tgz#6a7990437ca736d5e1288db92bd3266d5f5cb2aa"
+ integrity sha512-qQLMr+8o0WC4FZGQTcJiKBVC59JylcPSrTtk6usvmIDFUOCKegapy1VHQwRbFMOFyb/inzUVqHs+eMYKDM1YeQ==
+
+address@^1.0.1, address@^1.1.2:
+ version "1.2.1"
+ resolved "https://registry.yarnpkg.com/address/-/address-1.2.1.tgz#25bb61095b7522d65b357baa11bc05492d4c8acd"
+ integrity sha512-B+6bi5D34+fDYENiH5qOlA0cV2rAGKuWZ9LeyUUehbXy8e0VS9e498yO0Jeeh+iM+6KbfudHTFjXw2MmJD4QRA==
+
+aggregate-error@^3.0.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/aggregate-error/-/aggregate-error-3.1.0.tgz#92670ff50f5359bdb7a3e0d40d0ec30c5737687a"
+ integrity sha512-4I7Td01quW/RpocfNayFdFVk1qSuoh0E7JrbRJ16nH01HhKFQ88INq9Sd+nd72zqRySlr9BmDA8xlEJ6vJMrYA==
+ dependencies:
+ clean-stack "^2.0.0"
+ indent-string "^4.0.0"
+
+ajv-formats@^2.1.1:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/ajv-formats/-/ajv-formats-2.1.1.tgz#6e669400659eb74973bbf2e33327180a0996b520"
+ integrity sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==
+ dependencies:
+ ajv "^8.0.0"
+
+ajv-keywords@^3.4.1, ajv-keywords@^3.5.2:
+ version "3.5.2"
+ resolved "https://registry.yarnpkg.com/ajv-keywords/-/ajv-keywords-3.5.2.tgz#31f29da5ab6e00d1c2d329acf7b5929614d5014d"
+ integrity sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==
+
+ajv-keywords@^5.0.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/ajv-keywords/-/ajv-keywords-5.1.0.tgz#69d4d385a4733cdbeab44964a1170a88f87f0e16"
+ integrity sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==
+ dependencies:
+ fast-deep-equal "^3.1.3"
+
+ajv@^6.12.2, ajv@^6.12.4, ajv@^6.12.5:
+ version "6.12.6"
+ resolved "https://registry.yarnpkg.com/ajv/-/ajv-6.12.6.tgz#baf5a62e802b07d977034586f8c3baf5adf26df4"
+ integrity sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==
+ dependencies:
+ fast-deep-equal "^3.1.1"
+ fast-json-stable-stringify "^2.0.0"
+ json-schema-traverse "^0.4.1"
+ uri-js "^4.2.2"
+
+ajv@^8.0.0, ajv@^8.11.0, ajv@^8.8.0:
+ version "8.11.0"
+ resolved "https://registry.yarnpkg.com/ajv/-/ajv-8.11.0.tgz#977e91dd96ca669f54a11e23e378e33b884a565f"
+ integrity sha512-wGgprdCvMalC0BztXvitD2hC04YffAvtsUn93JbGXYLAtCUO4xd17mCCZQxUOItiBwZvJScWo8NIvQMQ71rdpg==
+ dependencies:
+ fast-deep-equal "^3.1.1"
+ json-schema-traverse "^1.0.0"
+ require-from-string "^2.0.2"
+ uri-js "^4.2.2"
+
+algoliasearch-helper@^3.10.0:
+ version "3.11.1"
+ resolved "https://registry.yarnpkg.com/algoliasearch-helper/-/algoliasearch-helper-3.11.1.tgz#d83ab7f1a2a374440686ef7a144b3c288b01188a"
+ integrity sha512-mvsPN3eK4E0bZG0/WlWJjeqe/bUD2KOEVOl0GyL/TGXn6wcpZU8NOuztGHCUKXkyg5gq6YzUakVTmnmSSO5Yiw==
+ dependencies:
+ "@algolia/events" "^4.0.1"
+
+algoliasearch@^4.0.0, algoliasearch@^4.13.1:
+ version "4.14.2"
+ resolved "https://registry.yarnpkg.com/algoliasearch/-/algoliasearch-4.14.2.tgz#63f142583bfc3a9bd3cd4a1b098bf6fe58e56f6c"
+ integrity sha512-ngbEQonGEmf8dyEh5f+uOIihv4176dgbuOZspiuhmTTBRBuzWu3KCGHre6uHj5YyuC7pNvQGzB6ZNJyZi0z+Sg==
+ dependencies:
+ "@algolia/cache-browser-local-storage" "4.14.2"
+ "@algolia/cache-common" "4.14.2"
+ "@algolia/cache-in-memory" "4.14.2"
+ "@algolia/client-account" "4.14.2"
+ "@algolia/client-analytics" "4.14.2"
+ "@algolia/client-common" "4.14.2"
+ "@algolia/client-personalization" "4.14.2"
+ "@algolia/client-search" "4.14.2"
+ "@algolia/logger-common" "4.14.2"
+ "@algolia/logger-console" "4.14.2"
+ "@algolia/requester-browser-xhr" "4.14.2"
+ "@algolia/requester-common" "4.14.2"
+ "@algolia/requester-node-http" "4.14.2"
+ "@algolia/transporter" "4.14.2"
+
+ansi-align@^3.0.0, ansi-align@^3.0.1:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/ansi-align/-/ansi-align-3.0.1.tgz#0cdf12e111ace773a86e9a1fad1225c43cb19a59"
+ integrity sha512-IOfwwBF5iczOjp/WeY4YxyjqAFMQoZufdQWDd19SEExbVLNXqvpzSJ/M7Za4/sCPmQ0+GRquoA7bGcINcxew6w==
+ dependencies:
+ string-width "^4.1.0"
+
+ansi-escapes@^4.2.1:
+ version "4.3.2"
+ resolved "https://registry.yarnpkg.com/ansi-escapes/-/ansi-escapes-4.3.2.tgz#6b2291d1db7d98b6521d5f1efa42d0f3a9feb65e"
+ integrity sha512-gKXj5ALrKWQLsYG9jlTRmR/xKluxHV+Z9QEwNIgCfM1/uwPMCuzVVnh5mwTd+OuBZcwSIMbqssNWRm1lE51QaQ==
+ dependencies:
+ type-fest "^0.21.3"
+
+ansi-html-community@^0.0.8:
+ version "0.0.8"
+ resolved "https://registry.yarnpkg.com/ansi-html-community/-/ansi-html-community-0.0.8.tgz#69fbc4d6ccbe383f9736934ae34c3f8290f1bf41"
+ integrity sha512-1APHAyr3+PCamwNw3bXCPp4HFLONZt/yIH0sZp0/469KWNTEy+qN5jQ3GVX6DMZ1UXAi34yVwtTeaG/HpBuuzw==
+
+ansi-regex@^5.0.1:
+ version "5.0.1"
+ resolved "https://registry.yarnpkg.com/ansi-regex/-/ansi-regex-5.0.1.tgz#082cb2c89c9fe8659a311a53bd6a4dc5301db304"
+ integrity sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==
+
+ansi-regex@^6.0.1:
+ version "6.0.1"
+ resolved "https://registry.yarnpkg.com/ansi-regex/-/ansi-regex-6.0.1.tgz#3183e38fae9a65d7cb5e53945cd5897d0260a06a"
+ integrity sha512-n5M855fKb2SsfMIiFFoVrABHJC8QtHwVx+mHWP3QcEqBHYienj5dHSgjbxtC0WEZXYt4wcD6zrQElDPhFuZgfA==
+
+ansi-styles@^3.2.1:
+ version "3.2.1"
+ resolved "https://registry.yarnpkg.com/ansi-styles/-/ansi-styles-3.2.1.tgz#41fbb20243e50b12be0f04b8dedbf07520ce841d"
+ integrity sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==
+ dependencies:
+ color-convert "^1.9.0"
+
+ansi-styles@^4.0.0, ansi-styles@^4.1.0:
+ version "4.3.0"
+ resolved "https://registry.yarnpkg.com/ansi-styles/-/ansi-styles-4.3.0.tgz#edd803628ae71c04c85ae7a0906edad34b648937"
+ integrity sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==
+ dependencies:
+ color-convert "^2.0.1"
+
+ansi-styles@^6.1.0:
+ version "6.1.1"
+ resolved "https://registry.yarnpkg.com/ansi-styles/-/ansi-styles-6.1.1.tgz#63cd61c72283a71cb30bd881dbb60adada74bc70"
+ integrity sha512-qDOv24WjnYuL+wbwHdlsYZFy+cgPtrYw0Tn7GLORicQp9BkQLzrgI3Pm4VyR9ERZ41YTn7KlMPuL1n05WdZvmg==
+
+anymatch@~3.1.2:
+ version "3.1.2"
+ resolved "https://registry.yarnpkg.com/anymatch/-/anymatch-3.1.2.tgz#c0557c096af32f106198f4f4e2a383537e378716"
+ integrity sha512-P43ePfOAIupkguHUycrc4qJ9kz8ZiuOUijaETwX7THt0Y/GNK7v0aa8rY816xWjZ7rJdA5XdMcpVFTKMq+RvWg==
+ dependencies:
+ normalize-path "^3.0.0"
+ picomatch "^2.0.4"
+
+arg@^4.1.0:
+ version "4.1.3"
+ resolved "https://registry.yarnpkg.com/arg/-/arg-4.1.3.tgz#269fc7ad5b8e42cb63c896d5666017261c144089"
+ integrity sha512-58S9QDqG0Xx27YwPSt9fJxivjYl432YCwfDMfZ+71RAqUrZef7LrKQZ3LHLOwCS4FLNBplP533Zx895SeOCHvA==
+
+arg@^5.0.0:
+ version "5.0.2"
+ resolved "https://registry.yarnpkg.com/arg/-/arg-5.0.2.tgz#c81433cc427c92c4dcf4865142dbca6f15acd59c"
+ integrity sha512-PYjyFOLKQ9y57JvQ6QLo8dAgNqswh8M1RMJYdQduT6xbWSgK36P/Z/v+p888pM69jMMfS8Xd8F6I1kQ/I9HUGg==
+
+argparse@^1.0.7:
+ version "1.0.10"
+ resolved "https://registry.yarnpkg.com/argparse/-/argparse-1.0.10.tgz#bcd6791ea5ae09725e17e5ad988134cd40b3d911"
+ integrity sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==
+ dependencies:
+ sprintf-js "~1.0.2"
+
+argparse@^2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/argparse/-/argparse-2.0.1.tgz#246f50f3ca78a3240f6c997e8a9bd1eac49e4b38"
+ integrity sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==
+
+array-flatten@1.1.1:
+ version "1.1.1"
+ resolved "https://registry.yarnpkg.com/array-flatten/-/array-flatten-1.1.1.tgz#9a5f699051b1e7073328f2a008968b64ea2955d2"
+ integrity sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==
+
+array-flatten@^2.1.2:
+ version "2.1.2"
+ resolved "https://registry.yarnpkg.com/array-flatten/-/array-flatten-2.1.2.tgz#24ef80a28c1a893617e2149b0c6d0d788293b099"
+ integrity sha512-hNfzcOV8W4NdualtqBFPyVO+54DSJuZGY9qT4pRroB6S9e3iiido2ISIC5h9R2sPJ8H3FHCIiEnsv1lPXO3KtQ==
+
+array-ify@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/array-ify/-/array-ify-1.0.0.tgz#9e528762b4a9066ad163a6962a364418e9626ece"
+ integrity sha512-c5AMf34bKdvPhQ7tBGhqkgKNUzMr4WUs+WDtC2ZUGOUncbxKMTvqxYctiseW3+L4bA8ec+GcZ6/A/FW4m8ukng==
+
+array-union@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/array-union/-/array-union-2.1.0.tgz#b798420adbeb1de828d84acd8a2e23d3efe85e8d"
+ integrity sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw==
+
+arrify@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/arrify/-/arrify-1.0.1.tgz#898508da2226f380df904728456849c1501a4b0d"
+ integrity sha512-3CYzex9M9FGQjCGMGyi6/31c8GJbgb0qGyrx5HWxPd0aCwh4cB2YjMb2Xf9UuoogrMrlO9cTqnB5rI5GHZTcUA==
+
+asap@~2.0.3:
+ version "2.0.6"
+ resolved "https://registry.yarnpkg.com/asap/-/asap-2.0.6.tgz#e50347611d7e690943208bbdafebcbc2fb866d46"
+ integrity sha512-BSHWgDSAiKs50o2Re8ppvp3seVHXSRM44cdSsT9FfNEUUZLOGWVCsiWaRPWM1Znn+mqZ1OfVZ3z3DWEzSp7hRA==
+
+at-least-node@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/at-least-node/-/at-least-node-1.0.0.tgz#602cd4b46e844ad4effc92a8011a3c46e0238dc2"
+ integrity sha512-+q/t7Ekv1EDY2l6Gda6LLiX14rU9TV20Wa3ofeQmwPFZbOMo9DXrLbOjFaaclkXKWidIaopwAObQDqwWtGUjqg==
+
+autoprefixer@^10.3.7, autoprefixer@^10.4.7:
+ version "10.4.10"
+ resolved "https://registry.yarnpkg.com/autoprefixer/-/autoprefixer-10.4.10.tgz#a1d8891d1516155eb13a772b1289efdc61de14ef"
+ integrity sha512-nMaiDARyp1e74c8IeAXkr+BmFKa8By4Zak7tyaNPF09Iu39WFpNXOWrVirmXjKr+5cOyERwvtbMOLYz6iBJYgQ==
+ dependencies:
+ browserslist "^4.21.3"
+ caniuse-lite "^1.0.30001399"
+ fraction.js "^4.2.0"
+ normalize-range "^0.1.2"
+ picocolors "^1.0.0"
+ postcss-value-parser "^4.2.0"
+
+axios@^0.25.0:
+ version "0.25.0"
+ resolved "https://registry.yarnpkg.com/axios/-/axios-0.25.0.tgz#349cfbb31331a9b4453190791760a8d35b093e0a"
+ integrity sha512-cD8FOb0tRH3uuEe6+evtAbgJtfxr7ly3fQjYcMcuPlgkwVS9xboaVIpcDV+cYQe+yGykgwZCs1pzjntcGa6l5g==
+ dependencies:
+ follow-redirects "^1.14.7"
+
+babel-loader@^8.2.5:
+ version "8.2.5"
+ resolved "https://registry.yarnpkg.com/babel-loader/-/babel-loader-8.2.5.tgz#d45f585e654d5a5d90f5350a779d7647c5ed512e"
+ integrity sha512-OSiFfH89LrEMiWd4pLNqGz4CwJDtbs2ZVc+iGu2HrkRfPxId9F2anQj38IxWpmRfsUY0aBZYi1EFcd3mhtRMLQ==
+ dependencies:
+ find-cache-dir "^3.3.1"
+ loader-utils "^2.0.0"
+ make-dir "^3.1.0"
+ schema-utils "^2.6.5"
+
+babel-plugin-apply-mdx-type-prop@1.6.22:
+ version "1.6.22"
+ resolved "https://registry.yarnpkg.com/babel-plugin-apply-mdx-type-prop/-/babel-plugin-apply-mdx-type-prop-1.6.22.tgz#d216e8fd0de91de3f1478ef3231e05446bc8705b"
+ integrity sha512-VefL+8o+F/DfK24lPZMtJctrCVOfgbqLAGZSkxwhazQv4VxPg3Za/i40fu22KR2m8eEda+IfSOlPLUSIiLcnCQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "7.10.4"
+ "@mdx-js/util" "1.6.22"
+
+babel-plugin-dynamic-import-node@^2.3.3:
+ version "2.3.3"
+ resolved "https://registry.yarnpkg.com/babel-plugin-dynamic-import-node/-/babel-plugin-dynamic-import-node-2.3.3.tgz#84fda19c976ec5c6defef57f9427b3def66e17a3"
+ integrity sha512-jZVI+s9Zg3IqA/kdi0i6UDCybUI3aSBLnglhYbSSjKlV7yF1F/5LWv8MakQmvYpnbJDS6fcBL2KzHSxNCMtWSQ==
+ dependencies:
+ object.assign "^4.1.0"
+
+babel-plugin-extract-import-names@1.6.22:
+ version "1.6.22"
+ resolved "https://registry.yarnpkg.com/babel-plugin-extract-import-names/-/babel-plugin-extract-import-names-1.6.22.tgz#de5f9a28eb12f3eb2578bf74472204e66d1a13dc"
+ integrity sha512-yJ9BsJaISua7d8zNT7oRG1ZLBJCIdZ4PZqmH8qa9N5AK01ifk3fnkc98AXhtzE7UkfCsEumvoQWgoYLhOnJ7jQ==
+ dependencies:
+ "@babel/helper-plugin-utils" "7.10.4"
+
+babel-plugin-polyfill-corejs2@^0.3.2:
+ version "0.3.3"
+ resolved "https://registry.yarnpkg.com/babel-plugin-polyfill-corejs2/-/babel-plugin-polyfill-corejs2-0.3.3.tgz#5d1bd3836d0a19e1b84bbf2d9640ccb6f951c122"
+ integrity sha512-8hOdmFYFSZhqg2C/JgLUQ+t52o5nirNwaWM2B9LWteozwIvM14VSwdsCAUET10qT+kmySAlseadmfeeSWFCy+Q==
+ dependencies:
+ "@babel/compat-data" "^7.17.7"
+ "@babel/helper-define-polyfill-provider" "^0.3.3"
+ semver "^6.1.1"
+
+babel-plugin-polyfill-corejs3@^0.5.3:
+ version "0.5.3"
+ resolved "https://registry.yarnpkg.com/babel-plugin-polyfill-corejs3/-/babel-plugin-polyfill-corejs3-0.5.3.tgz#d7e09c9a899079d71a8b670c6181af56ec19c5c7"
+ integrity sha512-zKsXDh0XjnrUEW0mxIHLfjBfnXSMr5Q/goMe/fxpQnLm07mcOZiIZHBNWCMx60HmdvjxfXcalac0tfFg0wqxyw==
+ dependencies:
+ "@babel/helper-define-polyfill-provider" "^0.3.2"
+ core-js-compat "^3.21.0"
+
+babel-plugin-polyfill-regenerator@^0.4.0:
+ version "0.4.1"
+ resolved "https://registry.yarnpkg.com/babel-plugin-polyfill-regenerator/-/babel-plugin-polyfill-regenerator-0.4.1.tgz#390f91c38d90473592ed43351e801a9d3e0fd747"
+ integrity sha512-NtQGmyQDXjQqQ+IzRkBVwEOz9lQ4zxAQZgoAYEtU9dJjnl1Oc98qnN7jcp+bE7O7aYzVpavXE3/VKXNzUbh7aw==
+ dependencies:
+ "@babel/helper-define-polyfill-provider" "^0.3.3"
+
+bail@^1.0.0:
+ version "1.0.5"
+ resolved "https://registry.yarnpkg.com/bail/-/bail-1.0.5.tgz#b6fa133404a392cbc1f8c4bf63f5953351e7a776"
+ integrity sha512-xFbRxM1tahm08yHBP16MMjVUAvDaBMD38zsM9EMAUN61omwLmKlOpB/Zku5QkjZ8TZ4vn53pj+t518cH0S03RQ==
+
+balanced-match@^1.0.0:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/balanced-match/-/balanced-match-1.0.2.tgz#e83e3a7e3f300b34cb9d87f615fa0cbf357690ee"
+ integrity sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==
+
+base16@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/base16/-/base16-1.0.0.tgz#e297f60d7ec1014a7a971a39ebc8a98c0b681e70"
+ integrity sha512-pNdYkNPiJUnEhnfXV56+sQy8+AaPcG3POZAUnwr4EeqCUZFz4u2PePbo3e5Gj4ziYPCWGUZT9RHisvJKnwFuBQ==
+
+base64-js@^1.3.1:
+ version "1.5.1"
+ resolved "https://registry.yarnpkg.com/base64-js/-/base64-js-1.5.1.tgz#1b1b440160a5bf7ad40b650f095963481903930a"
+ integrity sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==
+
+batch@0.6.1:
+ version "0.6.1"
+ resolved "https://registry.yarnpkg.com/batch/-/batch-0.6.1.tgz#dc34314f4e679318093fc760272525f94bf25c16"
+ integrity sha512-x+VAiMRL6UPkx+kudNvxTl6hB2XNNCG2r+7wixVfIYwu/2HKRXimwQyaumLjMveWvT2Hkd/cAJw+QBMfJ/EKVw==
+
+big.js@^5.2.2:
+ version "5.2.2"
+ resolved "https://registry.yarnpkg.com/big.js/-/big.js-5.2.2.tgz#65f0af382f578bcdc742bd9c281e9cb2d7768328"
+ integrity sha512-vyL2OymJxmarO8gxMr0mhChsO9QGwhynfuu4+MHTAW6czfq9humCB7rKpUjDd9YUiDPU4mzpyupFSvOClAwbmQ==
+
+binary-extensions@^2.0.0:
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/binary-extensions/-/binary-extensions-2.2.0.tgz#75f502eeaf9ffde42fc98829645be4ea76bd9e2d"
+ integrity sha512-jDctJ/IVQbZoJykoeHbhXpOlNBqGNcwXJKJog42E5HDPUwQTSdjCHdihjj0DlnheQ7blbT6dHOafNAiS8ooQKA==
+
+bl@^4.1.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/bl/-/bl-4.1.0.tgz#451535264182bec2fbbc83a62ab98cf11d9f7b3a"
+ integrity sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==
+ dependencies:
+ buffer "^5.5.0"
+ inherits "^2.0.4"
+ readable-stream "^3.4.0"
+
+body-parser@1.20.0:
+ version "1.20.0"
+ resolved "https://registry.yarnpkg.com/body-parser/-/body-parser-1.20.0.tgz#3de69bd89011c11573d7bfee6a64f11b6bd27cc5"
+ integrity sha512-DfJ+q6EPcGKZD1QWUjSpqp+Q7bDQTsQIF4zfUAtZ6qk+H/3/QRhg9CEp39ss+/T2vw0+HaidC0ecJj/DRLIaKg==
+ dependencies:
+ bytes "3.1.2"
+ content-type "~1.0.4"
+ debug "2.6.9"
+ depd "2.0.0"
+ destroy "1.2.0"
+ http-errors "2.0.0"
+ iconv-lite "0.4.24"
+ on-finished "2.4.1"
+ qs "6.10.3"
+ raw-body "2.5.1"
+ type-is "~1.6.18"
+ unpipe "1.0.0"
+
+bonjour-service@^1.0.11:
+ version "1.0.14"
+ resolved "https://registry.yarnpkg.com/bonjour-service/-/bonjour-service-1.0.14.tgz#c346f5bc84e87802d08f8d5a60b93f758e514ee7"
+ integrity sha512-HIMbgLnk1Vqvs6B4Wq5ep7mxvj9sGz5d1JJyDNSGNIdA/w2MCz6GTjWTdjqOJV1bEPj+6IkxDvWNFKEBxNt4kQ==
+ dependencies:
+ array-flatten "^2.1.2"
+ dns-equal "^1.0.0"
+ fast-deep-equal "^3.1.3"
+ multicast-dns "^7.2.5"
+
+boolbase@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/boolbase/-/boolbase-1.0.0.tgz#68dff5fbe60c51eb37725ea9e3ed310dcc1e776e"
+ integrity sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww==
+
+boxen@^5.0.0:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/boxen/-/boxen-5.1.2.tgz#788cb686fc83c1f486dfa8a40c68fc2b831d2b50"
+ integrity sha512-9gYgQKXx+1nP8mP7CzFyaUARhg7D3n1dF/FnErWmu9l6JvGpNUN278h0aSb+QjoiKSWG+iZ3uHrcqk0qrY9RQQ==
+ dependencies:
+ ansi-align "^3.0.0"
+ camelcase "^6.2.0"
+ chalk "^4.1.0"
+ cli-boxes "^2.2.1"
+ string-width "^4.2.2"
+ type-fest "^0.20.2"
+ widest-line "^3.1.0"
+ wrap-ansi "^7.0.0"
+
+boxen@^6.2.1:
+ version "6.2.1"
+ resolved "https://registry.yarnpkg.com/boxen/-/boxen-6.2.1.tgz#b098a2278b2cd2845deef2dff2efc38d329b434d"
+ integrity sha512-H4PEsJXfFI/Pt8sjDWbHlQPx4zL/bvSQjcilJmaulGt5mLDorHOHpmdXAJcBcmru7PhYSp/cDMWRko4ZUMFkSw==
+ dependencies:
+ ansi-align "^3.0.1"
+ camelcase "^6.2.0"
+ chalk "^4.1.2"
+ cli-boxes "^3.0.0"
+ string-width "^5.0.1"
+ type-fest "^2.5.0"
+ widest-line "^4.0.1"
+ wrap-ansi "^8.0.1"
+
+brace-expansion@^1.1.7:
+ version "1.1.11"
+ resolved "https://registry.yarnpkg.com/brace-expansion/-/brace-expansion-1.1.11.tgz#3c7fcbf529d87226f3d2f52b966ff5271eb441dd"
+ integrity sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==
+ dependencies:
+ balanced-match "^1.0.0"
+ concat-map "0.0.1"
+
+braces@^3.0.2, braces@~3.0.2:
+ version "3.0.2"
+ resolved "https://registry.yarnpkg.com/braces/-/braces-3.0.2.tgz#3454e1a462ee8d599e236df336cd9ea4f8afe107"
+ integrity sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==
+ dependencies:
+ fill-range "^7.0.1"
+
+browserslist@^4.0.0, browserslist@^4.14.5, browserslist@^4.16.6, browserslist@^4.18.1, browserslist@^4.20.2, browserslist@^4.20.3, browserslist@^4.21.3:
+ version "4.21.3"
+ resolved "https://registry.yarnpkg.com/browserslist/-/browserslist-4.21.3.tgz#5df277694eb3c48bc5c4b05af3e8b7e09c5a6d1a"
+ integrity sha512-898rgRXLAyRkM1GryrrBHGkqA5hlpkV5MhtZwg9QXeiyLUYs2k00Un05aX5l2/yJIOObYKOpS2JNo8nJDE7fWQ==
+ dependencies:
+ caniuse-lite "^1.0.30001370"
+ electron-to-chromium "^1.4.202"
+ node-releases "^2.0.6"
+ update-browserslist-db "^1.0.5"
+
+buffer-from@^1.0.0:
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/buffer-from/-/buffer-from-1.1.2.tgz#2b146a6fd72e80b4f55d255f35ed59a3a9a41bd5"
+ integrity sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==
+
+buffer@^5.5.0:
+ version "5.7.1"
+ resolved "https://registry.yarnpkg.com/buffer/-/buffer-5.7.1.tgz#ba62e7c13133053582197160851a8f648e99eed0"
+ integrity sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ==
+ dependencies:
+ base64-js "^1.3.1"
+ ieee754 "^1.1.13"
+
+bytes@3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/bytes/-/bytes-3.0.0.tgz#d32815404d689699f85a4ea4fa8755dd13a96048"
+ integrity sha512-pMhOfFDPiv9t5jjIXkHosWmkSyQbvsgEVNkz0ERHbuLh2T/7j4Mqqpz523Fe8MVY89KC6Sh/QfS2sM+SjgFDcw==
+
+bytes@3.1.2:
+ version "3.1.2"
+ resolved "https://registry.yarnpkg.com/bytes/-/bytes-3.1.2.tgz#8b0beeb98605adf1b128fa4386403c009e0221a5"
+ integrity sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==
+
+cacheable-request@^6.0.0:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/cacheable-request/-/cacheable-request-6.1.0.tgz#20ffb8bd162ba4be11e9567d823db651052ca912"
+ integrity sha512-Oj3cAGPCqOZX7Rz64Uny2GYAZNliQSqfbePrgAQ1wKAihYmCUnraBtJtKcGR4xz7wF+LoJC+ssFZvv5BgF9Igg==
+ dependencies:
+ clone-response "^1.0.2"
+ get-stream "^5.1.0"
+ http-cache-semantics "^4.0.0"
+ keyv "^3.0.0"
+ lowercase-keys "^2.0.0"
+ normalize-url "^4.1.0"
+ responselike "^1.0.2"
+
+cachedir@2.3.0:
+ version "2.3.0"
+ resolved "https://registry.yarnpkg.com/cachedir/-/cachedir-2.3.0.tgz#0c75892a052198f0b21c7c1804d8331edfcae0e8"
+ integrity sha512-A+Fezp4zxnit6FanDmv9EqXNAi3vt9DWp51/71UEhXukb7QUuvtv9344h91dyAxuTLoSYJFU299qzR3tzwPAhw==
+
+call-bind@^1.0.0, call-bind@^1.0.2:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/call-bind/-/call-bind-1.0.2.tgz#b1d4e89e688119c3c9a903ad30abb2f6a919be3c"
+ integrity sha512-7O+FbCihrB5WGbFYesctwmTKae6rOiIzmz1icreWJ+0aA7LJfuqhEso2T9ncpcFtzMQtzXf2QGGueWJGTYsqrA==
+ dependencies:
+ function-bind "^1.1.1"
+ get-intrinsic "^1.0.2"
+
+callsites@^3.0.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/callsites/-/callsites-3.1.0.tgz#b3630abd8943432f54b3f0519238e33cd7df2f73"
+ integrity sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ==
+
+camel-case@^4.1.2:
+ version "4.1.2"
+ resolved "https://registry.yarnpkg.com/camel-case/-/camel-case-4.1.2.tgz#9728072a954f805228225a6deea6b38461e1bd5a"
+ integrity sha512-gxGWBrTT1JuMx6R+o5PTXMmUnhnVzLQ9SNutD4YqKtI6ap897t3tKECYla6gCWEkplXnlNybEkZg9GEGxKFCgw==
+ dependencies:
+ pascal-case "^3.1.2"
+ tslib "^2.0.3"
+
+camelcase-css@2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/camelcase-css/-/camelcase-css-2.0.1.tgz#ee978f6947914cc30c6b44741b6ed1df7f043fd5"
+ integrity sha512-QOSvevhslijgYwRx6Rv7zKdMF8lbRmx+uQGx2+vDc+KI/eBnsy9kit5aj23AgGu3pa4t9AgwbnXWqS+iOY+2aA==
+
+camelcase-keys@^6.2.2:
+ version "6.2.2"
+ resolved "https://registry.yarnpkg.com/camelcase-keys/-/camelcase-keys-6.2.2.tgz#5e755d6ba51aa223ec7d3d52f25778210f9dc3c0"
+ integrity sha512-YrwaA0vEKazPBkn0ipTiMpSajYDSe+KjQfrjhcBMxJt/znbvlHd8Pw/Vamaz5EB4Wfhs3SUR3Z9mwRu/P3s3Yg==
+ dependencies:
+ camelcase "^5.3.1"
+ map-obj "^4.0.0"
+ quick-lru "^4.0.1"
+
+camelcase@^5.3.1:
+ version "5.3.1"
+ resolved "https://registry.yarnpkg.com/camelcase/-/camelcase-5.3.1.tgz#e3c9b31569e106811df242f715725a1f4c494320"
+ integrity sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==
+
+camelcase@^6.2.0:
+ version "6.3.0"
+ resolved "https://registry.yarnpkg.com/camelcase/-/camelcase-6.3.0.tgz#5685b95eb209ac9c0c177467778c9c84df58ba9a"
+ integrity sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==
+
+caniuse-api@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/caniuse-api/-/caniuse-api-3.0.0.tgz#5e4d90e2274961d46291997df599e3ed008ee4c0"
+ integrity sha512-bsTwuIg/BZZK/vreVTYYbSWoe2F+71P7K5QGEX+pT250DZbfU1MQ5prOKpPR+LL6uWKK3KMwMCAS74QB3Um1uw==
+ dependencies:
+ browserslist "^4.0.0"
+ caniuse-lite "^1.0.0"
+ lodash.memoize "^4.1.2"
+ lodash.uniq "^4.5.0"
+
+caniuse-lite@^1.0.0, caniuse-lite@^1.0.30001370, caniuse-lite@^1.0.30001399:
+ version "1.0.30001399"
+ resolved "https://registry.yarnpkg.com/caniuse-lite/-/caniuse-lite-1.0.30001399.tgz#1bf994ca375d7f33f8d01ce03b7d5139e8587873"
+ integrity sha512-4vQ90tMKS+FkvuVWS5/QY1+d805ODxZiKFzsU8o/RsVJz49ZSRR8EjykLJbqhzdPgadbX6wB538wOzle3JniRA==
+
+ccount@^1.0.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/ccount/-/ccount-1.1.0.tgz#246687debb6014735131be8abab2d93898f8d043"
+ integrity sha512-vlNK021QdI7PNeiUh/lKkC/mNHHfV0m/Ad5JoI0TYtlBnJAslM/JIkm/tGC88bkLIwO6OQ5uV6ztS6kVAtCDlg==
+
+chalk@^2.0.0, chalk@^2.4.1, chalk@^2.4.2:
+ version "2.4.2"
+ resolved "https://registry.yarnpkg.com/chalk/-/chalk-2.4.2.tgz#cd42541677a54333cf541a49108c1432b44c9424"
+ integrity sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==
+ dependencies:
+ ansi-styles "^3.2.1"
+ escape-string-regexp "^1.0.5"
+ supports-color "^5.3.0"
+
+chalk@^4.1.0, chalk@^4.1.1, chalk@^4.1.2:
+ version "4.1.2"
+ resolved "https://registry.yarnpkg.com/chalk/-/chalk-4.1.2.tgz#aac4e2b7734a740867aeb16bf02aad556a1e7a01"
+ integrity sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==
+ dependencies:
+ ansi-styles "^4.1.0"
+ supports-color "^7.1.0"
+
+character-entities-legacy@^1.0.0:
+ version "1.1.4"
+ resolved "https://registry.yarnpkg.com/character-entities-legacy/-/character-entities-legacy-1.1.4.tgz#94bc1845dce70a5bb9d2ecc748725661293d8fc1"
+ integrity sha512-3Xnr+7ZFS1uxeiUDvV02wQ+QDbc55o97tIV5zHScSPJpcLm/r0DFPcoY3tYRp+VZukxuMeKgXYmsXQHO05zQeA==
+
+character-entities@^1.0.0:
+ version "1.2.4"
+ resolved "https://registry.yarnpkg.com/character-entities/-/character-entities-1.2.4.tgz#e12c3939b7eaf4e5b15e7ad4c5e28e1d48c5b16b"
+ integrity sha512-iBMyeEHxfVnIakwOuDXpVkc54HijNgCyQB2w0VfGQThle6NXn50zU6V/u+LDhxHcDUPojn6Kpga3PTAD8W1bQw==
+
+character-reference-invalid@^1.0.0:
+ version "1.1.4"
+ resolved "https://registry.yarnpkg.com/character-reference-invalid/-/character-reference-invalid-1.1.4.tgz#083329cda0eae272ab3dbbf37e9a382c13af1560"
+ integrity sha512-mKKUkUbhPpQlCOfIuZkvSEgktjPFIsZKRRbC6KWVEMvlzblj3i3asQv5ODsrwt0N3pHAEvjP8KTQPHkp0+6jOg==
+
+chardet@^0.7.0:
+ version "0.7.0"
+ resolved "https://registry.yarnpkg.com/chardet/-/chardet-0.7.0.tgz#90094849f0937f2eedc2425d0d28a9e5f0cbad9e"
+ integrity sha512-mT8iDcrh03qDGRRmoA2hmBJnxpllMR+0/0qlzjqZES6NdiWDcZkCNAk4rPFZ9Q85r27unkiNNg8ZOiwZXBHwcA==
+
+cheerio-select@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/cheerio-select/-/cheerio-select-2.1.0.tgz#4d8673286b8126ca2a8e42740d5e3c4884ae21b4"
+ integrity sha512-9v9kG0LvzrlcungtnJtpGNxY+fzECQKhK4EGJX2vByejiMX84MFNQw4UxPJl3bFbTMw+Dfs37XaIkCwTZfLh4g==
+ dependencies:
+ boolbase "^1.0.0"
+ css-select "^5.1.0"
+ css-what "^6.1.0"
+ domelementtype "^2.3.0"
+ domhandler "^5.0.3"
+ domutils "^3.0.1"
+
+cheerio@^1.0.0-rc.12:
+ version "1.0.0-rc.12"
+ resolved "https://registry.yarnpkg.com/cheerio/-/cheerio-1.0.0-rc.12.tgz#788bf7466506b1c6bf5fae51d24a2c4d62e47683"
+ integrity sha512-VqR8m68vM46BNnuZ5NtnGBKIE/DfN0cRIzg9n40EIq9NOv90ayxLBXA8fXC5gquFRGJSTRqBq25Jt2ECLR431Q==
+ dependencies:
+ cheerio-select "^2.1.0"
+ dom-serializer "^2.0.0"
+ domhandler "^5.0.3"
+ domutils "^3.0.1"
+ htmlparser2 "^8.0.1"
+ parse5 "^7.0.0"
+ parse5-htmlparser2-tree-adapter "^7.0.0"
+
+chokidar@^3.4.2, chokidar@^3.5.3:
+ version "3.5.3"
+ resolved "https://registry.yarnpkg.com/chokidar/-/chokidar-3.5.3.tgz#1cf37c8707b932bd1af1ae22c0432e2acd1903bd"
+ integrity sha512-Dr3sfKRP6oTcjf2JmUmFJfeVMvXBdegxB0iVQ5eb2V10uFJUCAS8OByZdVAyVb8xXNz3GjjTgj9kLWsZTqE6kw==
+ dependencies:
+ anymatch "~3.1.2"
+ braces "~3.0.2"
+ glob-parent "~5.1.2"
+ is-binary-path "~2.1.0"
+ is-glob "~4.0.1"
+ normalize-path "~3.0.0"
+ readdirp "~3.6.0"
+ optionalDependencies:
+ fsevents "~2.3.2"
+
+chrome-trace-event@^1.0.2:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/chrome-trace-event/-/chrome-trace-event-1.0.3.tgz#1015eced4741e15d06664a957dbbf50d041e26ac"
+ integrity sha512-p3KULyQg4S7NIHixdwbGX+nFHkoBiA4YQmyWtjb8XngSKV124nJmRysgAeujbUVb15vh+RvFUfCPqU7rXk+hZg==
+
+ci-info@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/ci-info/-/ci-info-2.0.0.tgz#67a9e964be31a51e15e5010d58e6f12834002f46"
+ integrity sha512-5tK7EtrZ0N+OLFMthtqOj4fI2Jeb88C4CAZPu25LDVUgXJ0A3Js4PMGqrn0JU1W0Mh1/Z8wZzYPxqUrXeBboCQ==
+
+clean-css@^5.2.2, clean-css@^5.3.0:
+ version "5.3.1"
+ resolved "https://registry.yarnpkg.com/clean-css/-/clean-css-5.3.1.tgz#d0610b0b90d125196a2894d35366f734e5d7aa32"
+ integrity sha512-lCr8OHhiWCTw4v8POJovCoh4T7I9U11yVsPjMWWnnMmp9ZowCxyad1Pathle/9HjaDp+fdQKjO9fQydE6RHTZg==
+ dependencies:
+ source-map "~0.6.0"
+
+clean-stack@^2.0.0:
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/clean-stack/-/clean-stack-2.2.0.tgz#ee8472dbb129e727b31e8a10a427dee9dfe4008b"
+ integrity sha512-4diC9HaTE+KRAMWhDhrGOECgWZxoevMc5TlkObMqNSsVU62PYzXZ/SMTjzyGAFF1YusgxGcSWTEXBhp0CPwQ1A==
+
+cli-boxes@^2.2.1:
+ version "2.2.1"
+ resolved "https://registry.yarnpkg.com/cli-boxes/-/cli-boxes-2.2.1.tgz#ddd5035d25094fce220e9cab40a45840a440318f"
+ integrity sha512-y4coMcylgSCdVinjiDBuR8PCC2bLjyGTwEmPb9NHR/QaNU6EUOXcTY/s6VjGMD6ENSEaeQYHCY0GNGS5jfMwPw==
+
+cli-boxes@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/cli-boxes/-/cli-boxes-3.0.0.tgz#71a10c716feeba005e4504f36329ef0b17cf3145"
+ integrity sha512-/lzGpEWL/8PfI0BmBOPRwp0c/wFNX1RdUML3jK/RcSBA9T8mZDdQpqYBKtCFTOfQbwPqWEOpjqW+Fnayc0969g==
+
+cli-cursor@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/cli-cursor/-/cli-cursor-3.1.0.tgz#264305a7ae490d1d03bf0c9ba7c925d1753af307"
+ integrity sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw==
+ dependencies:
+ restore-cursor "^3.1.0"
+
+cli-spinners@^2.5.0:
+ version "2.7.0"
+ resolved "https://registry.yarnpkg.com/cli-spinners/-/cli-spinners-2.7.0.tgz#f815fd30b5f9eaac02db604c7a231ed7cb2f797a"
+ integrity sha512-qu3pN8Y3qHNgE2AFweciB1IfMnmZ/fsNTEE+NOFjmGB2F/7rLhnhzppvpCnN4FovtP26k8lHyy9ptEbNwWFLzw==
+
+cli-table3@^0.6.2:
+ version "0.6.2"
+ resolved "https://registry.yarnpkg.com/cli-table3/-/cli-table3-0.6.2.tgz#aaf5df9d8b5bf12634dc8b3040806a0c07120d2a"
+ integrity sha512-QyavHCaIC80cMivimWu4aWHilIpiDpfm3hGmqAmXVL1UsnbLuBSMd21hTX6VY4ZSDSM73ESLeF8TOYId3rBTbw==
+ dependencies:
+ string-width "^4.2.0"
+ optionalDependencies:
+ "@colors/colors" "1.5.0"
+
+cli-width@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/cli-width/-/cli-width-3.0.0.tgz#a2f48437a2caa9a22436e794bf071ec9e61cedf6"
+ integrity sha512-FxqpkPPwu1HjuN93Omfm4h8uIanXofW0RxVEW3k5RKx+mJJYSthzNhp32Kzxxy3YAEZ/Dc/EWN1vZRY0+kOhbw==
+
+cliui@^7.0.2:
+ version "7.0.4"
+ resolved "https://registry.yarnpkg.com/cliui/-/cliui-7.0.4.tgz#a0265ee655476fc807aea9df3df8df7783808b4f"
+ integrity sha512-OcRE68cOsVMXp1Yvonl/fzkQOyjLSu/8bhPDfQt0e0/Eb283TKP20Fs2MqoPsr9SwA595rRCA+QMzYc9nBP+JQ==
+ dependencies:
+ string-width "^4.2.0"
+ strip-ansi "^6.0.0"
+ wrap-ansi "^7.0.0"
+
+clone-deep@^4.0.1:
+ version "4.0.1"
+ resolved "https://registry.yarnpkg.com/clone-deep/-/clone-deep-4.0.1.tgz#c19fd9bdbbf85942b4fd979c84dcf7d5f07c2387"
+ integrity sha512-neHB9xuzh/wk0dIHweyAXv2aPGZIVk3pLMe+/RNzINf17fe0OG96QroktYAUm7SM1PBnzTabaLboqqxDyMU+SQ==
+ dependencies:
+ is-plain-object "^2.0.4"
+ kind-of "^6.0.2"
+ shallow-clone "^3.0.0"
+
+clone-response@^1.0.2:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/clone-response/-/clone-response-1.0.3.tgz#af2032aa47816399cf5f0a1d0db902f517abb8c3"
+ integrity sha512-ROoL94jJH2dUVML2Y/5PEDNaSHgeOdSDicUyS7izcF63G6sTc/FTjLub4b8Il9S8S0beOfYt0TaA5qvFK+w0wA==
+ dependencies:
+ mimic-response "^1.0.0"
+
+clone@^1.0.2:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/clone/-/clone-1.0.4.tgz#da309cc263df15994c688ca902179ca3c7cd7c7e"
+ integrity sha512-JQHZ2QMW6l3aH/j6xCqQThY/9OH4D/9ls34cgkUBiEeocRTU04tHfKPBsUK1PqZCUQM7GiA0IIXJSuXHI64Kbg==
+
+clsx@^1.2.1:
+ version "1.2.1"
+ resolved "https://registry.yarnpkg.com/clsx/-/clsx-1.2.1.tgz#0ddc4a20a549b59c93a4116bb26f5294ca17dc12"
+ integrity sha512-EcR6r5a8bj6pu3ycsa/E/cKVGuTgZJZdsyUYHOksG/UHIiKfjxzRxYJpyVBwYaQeOvghal9fcc4PidlgzugAQg==
+
+collapse-white-space@^1.0.2:
+ version "1.0.6"
+ resolved "https://registry.yarnpkg.com/collapse-white-space/-/collapse-white-space-1.0.6.tgz#e63629c0016665792060dbbeb79c42239d2c5287"
+ integrity sha512-jEovNnrhMuqyCcjfEJA56v0Xq8SkIoPKDyaHahwo3POf4qcSXqMYuwNcOTzp74vTsR9Tn08z4MxWqAhcekogkQ==
+
+color-convert@^1.9.0:
+ version "1.9.3"
+ resolved "https://registry.yarnpkg.com/color-convert/-/color-convert-1.9.3.tgz#bb71850690e1f136567de629d2d5471deda4c1e8"
+ integrity sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==
+ dependencies:
+ color-name "1.1.3"
+
+color-convert@^2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/color-convert/-/color-convert-2.0.1.tgz#72d3a68d598c9bdb3af2ad1e84f21d896abd4de3"
+ integrity sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==
+ dependencies:
+ color-name "~1.1.4"
+
+color-name@1.1.3:
+ version "1.1.3"
+ resolved "https://registry.yarnpkg.com/color-name/-/color-name-1.1.3.tgz#a7d0558bd89c42f795dd42328f740831ca53bc25"
+ integrity sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==
+
+color-name@~1.1.4:
+ version "1.1.4"
+ resolved "https://registry.yarnpkg.com/color-name/-/color-name-1.1.4.tgz#c2a09a87acbde69543de6f63fa3995c826c536a2"
+ integrity sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==
+
+colord@^2.9.1:
+ version "2.9.3"
+ resolved "https://registry.yarnpkg.com/colord/-/colord-2.9.3.tgz#4f8ce919de456f1d5c1c368c307fe20f3e59fb43"
+ integrity sha512-jeC1axXpnb0/2nn/Y1LPuLdgXBLH7aDcHu4KEKfqw3CUhX7ZpfBSlPKyqXE6btIgEzfWtrX3/tyBCaCvXvMkOw==
+
+colorette@^2.0.10:
+ version "2.0.19"
+ resolved "https://registry.yarnpkg.com/colorette/-/colorette-2.0.19.tgz#cdf044f47ad41a0f4b56b3a0d5b4e6e1a2d5a798"
+ integrity sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ==
+
+combine-promises@^1.1.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/combine-promises/-/combine-promises-1.1.0.tgz#72db90743c0ca7aab7d0d8d2052fd7b0f674de71"
+ integrity sha512-ZI9jvcLDxqwaXEixOhArm3r7ReIivsXkpbyEWyeOhzz1QS0iSgBPnWvEqvIQtYyamGCYA88gFhmUrs9hrrQ0pg==
+
+comma-separated-tokens@^1.0.0:
+ version "1.0.8"
+ resolved "https://registry.yarnpkg.com/comma-separated-tokens/-/comma-separated-tokens-1.0.8.tgz#632b80b6117867a158f1080ad498b2fbe7e3f5ea"
+ integrity sha512-GHuDRO12Sypu2cV70d1dkA2EUmXHgntrzbpvOB+Qy+49ypNfGgFQIC2fhhXbnyrJRynDCAARsT7Ou0M6hirpfw==
+
+commander@^2.20.0:
+ version "2.20.3"
+ resolved "https://registry.yarnpkg.com/commander/-/commander-2.20.3.tgz#fd485e84c03eb4881c20722ba48035e8531aeb33"
+ integrity sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==
+
+commander@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/commander/-/commander-5.1.0.tgz#46abbd1652f8e059bddaef99bbdcb2ad9cf179ae"
+ integrity sha512-P0CysNDQ7rtVw4QIQtm+MRxV66vKFSvlsQvGYXZWR3qFU0jlMKHZZZgw8e+8DSah4UDKMqnknRDQz+xuQXQ/Zg==
+
+commander@^7.2.0:
+ version "7.2.0"
+ resolved "https://registry.yarnpkg.com/commander/-/commander-7.2.0.tgz#a36cb57d0b501ce108e4d20559a150a391d97ab7"
+ integrity sha512-QrWXB+ZQSVPmIWIhtEO9H+gwHaMGYiF5ChvoJ+K9ZGHG/sVsa6yiesAD1GC/x46sET00Xlwo1u49RVVVzvcSkw==
+
+commander@^8.3.0:
+ version "8.3.0"
+ resolved "https://registry.yarnpkg.com/commander/-/commander-8.3.0.tgz#4837ea1b2da67b9c616a67afbb0fafee567bca66"
+ integrity sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww==
+
+commitizen@^4.0.3:
+ version "4.2.5"
+ resolved "https://registry.yarnpkg.com/commitizen/-/commitizen-4.2.5.tgz#48e5a5c28334c6e8ed845cc24fc9f072efd3961e"
+ integrity sha512-9sXju8Qrz1B4Tw7kC5KhnvwYQN88qs2zbiB8oyMsnXZyJ24PPGiNM3nHr73d32dnE3i8VJEXddBFIbOgYSEXtQ==
+ dependencies:
+ cachedir "2.3.0"
+ cz-conventional-changelog "3.3.0"
+ dedent "0.7.0"
+ detect-indent "6.1.0"
+ find-node-modules "^2.1.2"
+ find-root "1.1.0"
+ fs-extra "9.1.0"
+ glob "7.2.3"
+ inquirer "8.2.4"
+ is-utf8 "^0.2.1"
+ lodash "4.17.21"
+ minimist "1.2.6"
+ strip-bom "4.0.0"
+ strip-json-comments "3.1.1"
+
+commondir@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/commondir/-/commondir-1.0.1.tgz#ddd800da0c66127393cca5950ea968a3aaf1253b"
+ integrity sha512-W9pAhw0ja1Edb5GVdIF1mjZw/ASI0AlShXM83UUGe2DVr5TdAPEA1OA8m/g8zWp9x6On7gqufY+FatDbC3MDQg==
+
+compare-func@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/compare-func/-/compare-func-2.0.0.tgz#fb65e75edbddfd2e568554e8b5b05fff7a51fcb3"
+ integrity sha512-zHig5N+tPWARooBnb0Zx1MFcdfpyJrfTJ3Y5L+IFvUm8rM74hHz66z0gw0x4tijh5CorKkKUCnW82R2vmpeCRA==
+ dependencies:
+ array-ify "^1.0.0"
+ dot-prop "^5.1.0"
+
+compressible@~2.0.16:
+ version "2.0.18"
+ resolved "https://registry.yarnpkg.com/compressible/-/compressible-2.0.18.tgz#af53cca6b070d4c3c0750fbd77286a6d7cc46fba"
+ integrity sha512-AF3r7P5dWxL8MxyITRMlORQNaOA2IkAFaTr4k7BUumjPtRpGDTZpl0Pb1XCO6JeDCBdp126Cgs9sMxqSjgYyRg==
+ dependencies:
+ mime-db ">= 1.43.0 < 2"
+
+compression@^1.7.4:
+ version "1.7.4"
+ resolved "https://registry.yarnpkg.com/compression/-/compression-1.7.4.tgz#95523eff170ca57c29a0ca41e6fe131f41e5bb8f"
+ integrity sha512-jaSIDzP9pZVS4ZfQ+TzvtiWhdpFhE2RDHz8QJkpX9SIpLq88VueF5jJw6t+6CUQcAoA6t+x89MLrWAqpfDE8iQ==
+ dependencies:
+ accepts "~1.3.5"
+ bytes "3.0.0"
+ compressible "~2.0.16"
+ debug "2.6.9"
+ on-headers "~1.0.2"
+ safe-buffer "5.1.2"
+ vary "~1.1.2"
+
+concat-map@0.0.1:
+ version "0.0.1"
+ resolved "https://registry.yarnpkg.com/concat-map/-/concat-map-0.0.1.tgz#d8a96bd77fd68df7793a73036a3ba0d5405d477b"
+ integrity sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==
+
+concat-stream@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/concat-stream/-/concat-stream-2.0.0.tgz#414cf5af790a48c60ab9be4527d56d5e41133cb1"
+ integrity sha512-MWufYdFw53ccGjCA+Ol7XJYpAlW6/prSMzuPOTRnJGcGzuhLn4Scrz7qf6o8bROZ514ltazcIFJZevcfbo0x7A==
+ dependencies:
+ buffer-from "^1.0.0"
+ inherits "^2.0.3"
+ readable-stream "^3.0.2"
+ typedarray "^0.0.6"
+
+configstore@^5.0.1:
+ version "5.0.1"
+ resolved "https://registry.yarnpkg.com/configstore/-/configstore-5.0.1.tgz#d365021b5df4b98cdd187d6a3b0e3f6a7cc5ed96"
+ integrity sha512-aMKprgk5YhBNyH25hj8wGt2+D52Sw1DRRIzqBwLp2Ya9mFmY8KPvvtvmna8SxVR9JMZ4kzMD68N22vlaRpkeFA==
+ dependencies:
+ dot-prop "^5.2.0"
+ graceful-fs "^4.1.2"
+ make-dir "^3.0.0"
+ unique-string "^2.0.0"
+ write-file-atomic "^3.0.0"
+ xdg-basedir "^4.0.0"
+
+connect-history-api-fallback@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/connect-history-api-fallback/-/connect-history-api-fallback-2.0.0.tgz#647264845251a0daf25b97ce87834cace0f5f1c8"
+ integrity sha512-U73+6lQFmfiNPrYbXqr6kZ1i1wiRqXnp2nhMsINseWXO8lDau0LGEffJ8kQi4EjLZympVgRdvqjAgiZ1tgzDDA==
+
+consola@^2.15.3:
+ version "2.15.3"
+ resolved "https://registry.yarnpkg.com/consola/-/consola-2.15.3.tgz#2e11f98d6a4be71ff72e0bdf07bd23e12cb61550"
+ integrity sha512-9vAdYbHj6x2fLKC4+oPH0kFzY/orMZyG2Aj+kNylHxKGJ/Ed4dpNyAQYwJOdqO4zdM7XpVHmyejQDcQHrnuXbw==
+
+content-disposition@0.5.2:
+ version "0.5.2"
+ resolved "https://registry.yarnpkg.com/content-disposition/-/content-disposition-0.5.2.tgz#0cf68bb9ddf5f2be7961c3a85178cb85dba78cb4"
+ integrity sha512-kRGRZw3bLlFISDBgwTSA1TMBFN6J6GWDeubmDE3AF+3+yXL8hTWv8r5rkLbqYXY4RjPk/EzHnClI3zQf1cFmHA==
+
+content-disposition@0.5.4:
+ version "0.5.4"
+ resolved "https://registry.yarnpkg.com/content-disposition/-/content-disposition-0.5.4.tgz#8b82b4efac82512a02bb0b1dcec9d2c5e8eb5bfe"
+ integrity sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==
+ dependencies:
+ safe-buffer "5.2.1"
+
+content-type@~1.0.4:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/content-type/-/content-type-1.0.4.tgz#e138cc75e040c727b1966fe5e5f8c9aee256fe3b"
+ integrity sha512-hIP3EEPs8tB9AT1L+NUqtwOAps4mk2Zob89MWXMHjHWg9milF/j4osnnQLXBCBFBk/tvIG/tUc9mOUJiPBhPXA==
+
+conventional-changelog-angular@^5.0.12:
+ version "5.0.13"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-angular/-/conventional-changelog-angular-5.0.13.tgz#896885d63b914a70d4934b59d2fe7bde1832b28c"
+ integrity sha512-i/gipMxs7s8L/QeuavPF2hLnJgH6pEZAttySB6aiQLWcX3puWDL3ACVmvBhJGxnAy52Qc15ua26BufY6KpmrVA==
+ dependencies:
+ compare-func "^2.0.0"
+ q "^1.5.1"
+
+conventional-changelog-atom@^2.0.8:
+ version "2.0.8"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-atom/-/conventional-changelog-atom-2.0.8.tgz#a759ec61c22d1c1196925fca88fe3ae89fd7d8de"
+ integrity sha512-xo6v46icsFTK3bb7dY/8m2qvc8sZemRgdqLb/bjpBsH2UyOS8rKNTgcb5025Hri6IpANPApbXMg15QLb1LJpBw==
+ dependencies:
+ q "^1.5.1"
+
+conventional-changelog-codemirror@^2.0.8:
+ version "2.0.8"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-codemirror/-/conventional-changelog-codemirror-2.0.8.tgz#398e9530f08ce34ec4640af98eeaf3022eb1f7dc"
+ integrity sha512-z5DAsn3uj1Vfp7po3gpt2Boc+Bdwmw2++ZHa5Ak9k0UKsYAO5mH1UBTN0qSCuJZREIhX6WU4E1p3IW2oRCNzQw==
+ dependencies:
+ q "^1.5.1"
+
+conventional-changelog-config-spec@2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-config-spec/-/conventional-changelog-config-spec-2.1.0.tgz#874a635287ef8b581fd8558532bf655d4fb59f2d"
+ integrity sha512-IpVePh16EbbB02V+UA+HQnnPIohgXvJRxHcS5+Uwk4AT5LjzCZJm5sp/yqs5C6KZJ1jMsV4paEV13BN1pvDuxQ==
+
+conventional-changelog-conventionalcommits@4.6.3, conventional-changelog-conventionalcommits@^4.5.0:
+ version "4.6.3"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-conventionalcommits/-/conventional-changelog-conventionalcommits-4.6.3.tgz#0765490f56424b46f6cb4db9135902d6e5a36dc2"
+ integrity sha512-LTTQV4fwOM4oLPad317V/QNQ1FY4Hju5qeBIM1uTHbrnCE+Eg4CdRZ3gO2pUeR+tzWdp80M2j3qFFEDWVqOV4g==
+ dependencies:
+ compare-func "^2.0.0"
+ lodash "^4.17.15"
+ q "^1.5.1"
+
+conventional-changelog-conventionalcommits@^5.0.0:
+ version "5.0.0"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-conventionalcommits/-/conventional-changelog-conventionalcommits-5.0.0.tgz#41bdce54eb65a848a4a3ffdca93e92fa22b64a86"
+ integrity sha512-lCDbA+ZqVFQGUj7h9QBKoIpLhl8iihkO0nCTyRNzuXtcd7ubODpYB04IFy31JloiJgG0Uovu8ot8oxRzn7Nwtw==
+ dependencies:
+ compare-func "^2.0.0"
+ lodash "^4.17.15"
+ q "^1.5.1"
+
+conventional-changelog-core@^4.2.1:
+ version "4.2.4"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-core/-/conventional-changelog-core-4.2.4.tgz#e50d047e8ebacf63fac3dc67bf918177001e1e9f"
+ integrity sha512-gDVS+zVJHE2v4SLc6B0sLsPiloR0ygU7HaDW14aNJE1v4SlqJPILPl/aJC7YdtRE4CybBf8gDwObBvKha8Xlyg==
+ dependencies:
+ add-stream "^1.0.0"
+ conventional-changelog-writer "^5.0.0"
+ conventional-commits-parser "^3.2.0"
+ dateformat "^3.0.0"
+ get-pkg-repo "^4.0.0"
+ git-raw-commits "^2.0.8"
+ git-remote-origin-url "^2.0.0"
+ git-semver-tags "^4.1.1"
+ lodash "^4.17.15"
+ normalize-package-data "^3.0.0"
+ q "^1.5.1"
+ read-pkg "^3.0.0"
+ read-pkg-up "^3.0.0"
+ through2 "^4.0.0"
+
+conventional-changelog-ember@^2.0.9:
+ version "2.0.9"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-ember/-/conventional-changelog-ember-2.0.9.tgz#619b37ec708be9e74a220f4dcf79212ae1c92962"
+ integrity sha512-ulzIReoZEvZCBDhcNYfDIsLTHzYHc7awh+eI44ZtV5cx6LVxLlVtEmcO+2/kGIHGtw+qVabJYjdI5cJOQgXh1A==
+ dependencies:
+ q "^1.5.1"
+
+conventional-changelog-eslint@^3.0.9:
+ version "3.0.9"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-eslint/-/conventional-changelog-eslint-3.0.9.tgz#689bd0a470e02f7baafe21a495880deea18b7cdb"
+ integrity sha512-6NpUCMgU8qmWmyAMSZO5NrRd7rTgErjrm4VASam2u5jrZS0n38V7Y9CzTtLT2qwz5xEChDR4BduoWIr8TfwvXA==
+ dependencies:
+ q "^1.5.1"
+
+conventional-changelog-express@^2.0.6:
+ version "2.0.6"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-express/-/conventional-changelog-express-2.0.6.tgz#420c9d92a347b72a91544750bffa9387665a6ee8"
+ integrity sha512-SDez2f3iVJw6V563O3pRtNwXtQaSmEfTCaTBPCqn0oG0mfkq0rX4hHBq5P7De2MncoRixrALj3u3oQsNK+Q0pQ==
+ dependencies:
+ q "^1.5.1"
+
+conventional-changelog-jquery@^3.0.11:
+ version "3.0.11"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-jquery/-/conventional-changelog-jquery-3.0.11.tgz#d142207400f51c9e5bb588596598e24bba8994bf"
+ integrity sha512-x8AWz5/Td55F7+o/9LQ6cQIPwrCjfJQ5Zmfqi8thwUEKHstEn4kTIofXub7plf1xvFA2TqhZlq7fy5OmV6BOMw==
+ dependencies:
+ q "^1.5.1"
+
+conventional-changelog-jshint@^2.0.9:
+ version "2.0.9"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-jshint/-/conventional-changelog-jshint-2.0.9.tgz#f2d7f23e6acd4927a238555d92c09b50fe3852ff"
+ integrity sha512-wMLdaIzq6TNnMHMy31hql02OEQ8nCQfExw1SE0hYL5KvU+JCTuPaDO+7JiogGT2gJAxiUGATdtYYfh+nT+6riA==
+ dependencies:
+ compare-func "^2.0.0"
+ q "^1.5.1"
+
+conventional-changelog-preset-loader@^2.3.4:
+ version "2.3.4"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-preset-loader/-/conventional-changelog-preset-loader-2.3.4.tgz#14a855abbffd59027fd602581f1f34d9862ea44c"
+ integrity sha512-GEKRWkrSAZeTq5+YjUZOYxdHq+ci4dNwHvpaBC3+ENalzFWuCWa9EZXSuZBpkr72sMdKB+1fyDV4takK1Lf58g==
+
+conventional-changelog-writer@^5.0.0:
+ version "5.0.1"
+ resolved "https://registry.yarnpkg.com/conventional-changelog-writer/-/conventional-changelog-writer-5.0.1.tgz#e0757072f045fe03d91da6343c843029e702f359"
+ integrity sha512-5WsuKUfxW7suLblAbFnxAcrvf6r+0b7GvNaWUwUIk0bXMnENP/PEieGKVUQrjPqwPT4o3EPAASBXiY6iHooLOQ==
+ dependencies:
+ conventional-commits-filter "^2.0.7"
+ dateformat "^3.0.0"
+ handlebars "^4.7.7"
+ json-stringify-safe "^5.0.1"
+ lodash "^4.17.15"
+ meow "^8.0.0"
+ semver "^6.0.0"
+ split "^1.0.0"
+ through2 "^4.0.0"
+
+conventional-changelog@3.1.25:
+ version "3.1.25"
+ resolved "https://registry.yarnpkg.com/conventional-changelog/-/conventional-changelog-3.1.25.tgz#3e227a37d15684f5aa1fb52222a6e9e2536ccaff"
+ integrity sha512-ryhi3fd1mKf3fSjbLXOfK2D06YwKNic1nC9mWqybBHdObPd8KJ2vjaXZfYj1U23t+V8T8n0d7gwnc9XbIdFbyQ==
+ dependencies:
+ conventional-changelog-angular "^5.0.12"
+ conventional-changelog-atom "^2.0.8"
+ conventional-changelog-codemirror "^2.0.8"
+ conventional-changelog-conventionalcommits "^4.5.0"
+ conventional-changelog-core "^4.2.1"
+ conventional-changelog-ember "^2.0.9"
+ conventional-changelog-eslint "^3.0.9"
+ conventional-changelog-express "^2.0.6"
+ conventional-changelog-jquery "^3.0.11"
+ conventional-changelog-jshint "^2.0.9"
+ conventional-changelog-preset-loader "^2.3.4"
+
+conventional-commit-types@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/conventional-commit-types/-/conventional-commit-types-3.0.0.tgz#7c9214e58eae93e85dd66dbfbafe7e4fffa2365b"
+ integrity sha512-SmmCYnOniSsAa9GqWOeLqc179lfr5TRu5b4QFDkbsrJ5TZjPJx85wtOr3zn+1dbeNiXDKGPbZ72IKbPhLXh/Lg==
+
+conventional-commits-filter@^2.0.7:
+ version "2.0.7"
+ resolved "https://registry.yarnpkg.com/conventional-commits-filter/-/conventional-commits-filter-2.0.7.tgz#f8d9b4f182fce00c9af7139da49365b136c8a0b3"
+ integrity sha512-ASS9SamOP4TbCClsRHxIHXRfcGCnIoQqkvAzCSbZzTFLfcTqJVugB0agRgsEELsqaeWgsXv513eS116wnlSSPA==
+ dependencies:
+ lodash.ismatch "^4.4.0"
+ modify-values "^1.0.0"
+
+conventional-commits-parser@^3.2.0:
+ version "3.2.4"
+ resolved "https://registry.yarnpkg.com/conventional-commits-parser/-/conventional-commits-parser-3.2.4.tgz#a7d3b77758a202a9b2293d2112a8d8052c740972"
+ integrity sha512-nK7sAtfi+QXbxHCYfhpZsfRtaitZLIA6889kFIouLvz6repszQDgxBu7wf2WbU+Dco7sAnNCJYERCwt54WPC2Q==
+ dependencies:
+ JSONStream "^1.0.4"
+ is-text-path "^1.0.1"
+ lodash "^4.17.15"
+ meow "^8.0.0"
+ split2 "^3.0.0"
+ through2 "^4.0.0"
+
+conventional-recommended-bump@6.1.0:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/conventional-recommended-bump/-/conventional-recommended-bump-6.1.0.tgz#cfa623285d1de554012f2ffde70d9c8a22231f55"
+ integrity sha512-uiApbSiNGM/kkdL9GTOLAqC4hbptObFo4wW2QRyHsKciGAfQuLU1ShZ1BIVI/+K2BE/W1AWYQMCXAsv4dyKPaw==
+ dependencies:
+ concat-stream "^2.0.0"
+ conventional-changelog-preset-loader "^2.3.4"
+ conventional-commits-filter "^2.0.7"
+ conventional-commits-parser "^3.2.0"
+ git-raw-commits "^2.0.8"
+ git-semver-tags "^4.1.1"
+ meow "^8.0.0"
+ q "^1.5.1"
+
+convert-source-map@^1.7.0:
+ version "1.8.0"
+ resolved "https://registry.yarnpkg.com/convert-source-map/-/convert-source-map-1.8.0.tgz#f3373c32d21b4d780dd8004514684fb791ca4369"
+ integrity sha512-+OQdjP49zViI/6i7nIJpA8rAl4sV/JdPfU9nZs3VqOwGIgizICvuN2ru6fMd+4llL0tar18UYJXfZ/TWtmhUjA==
+ dependencies:
+ safe-buffer "~5.1.1"
+
+cookie-signature@1.0.6:
+ version "1.0.6"
+ resolved "https://registry.yarnpkg.com/cookie-signature/-/cookie-signature-1.0.6.tgz#e303a882b342cc3ee8ca513a79999734dab3ae2c"
+ integrity sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==
+
+cookie@0.5.0:
+ version "0.5.0"
+ resolved "https://registry.yarnpkg.com/cookie/-/cookie-0.5.0.tgz#d1f5d71adec6558c58f389987c366aa47e994f8b"
+ integrity sha512-YZ3GUyn/o8gfKJlnlX7g7xq4gyO6OSuhGPKaaGssGB2qgDUS0gPgtTvoyZLTt9Ab6dC4hfc9dV5arkvc/OCmrw==
+
+copy-anything@^2.0.1:
+ version "2.0.6"
+ resolved "https://mirrors.cloud.tencent.com/npm/copy-anything/-/copy-anything-2.0.6.tgz#092454ea9584a7b7ad5573062b2a87f5900fc480"
+ integrity sha512-1j20GZTsvKNkc4BY3NpMOM8tt///wY3FpIzozTOFO2ffuZcV61nojHXVKIy3WM+7ADCy5FVhdZYHYDdgTU0yJw==
+ dependencies:
+ is-what "^3.14.1"
+
+copy-text-to-clipboard@^3.0.1:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/copy-text-to-clipboard/-/copy-text-to-clipboard-3.0.1.tgz#8cbf8f90e0a47f12e4a24743736265d157bce69c"
+ integrity sha512-rvVsHrpFcL4F2P8ihsoLdFHmd404+CMg71S756oRSeQgqk51U3kicGdnvfkrxva0xXH92SjGS62B0XIJsbh+9Q==
+
+copy-webpack-plugin@^11.0.0:
+ version "11.0.0"
+ resolved "https://registry.yarnpkg.com/copy-webpack-plugin/-/copy-webpack-plugin-11.0.0.tgz#96d4dbdb5f73d02dd72d0528d1958721ab72e04a"
+ integrity sha512-fX2MWpamkW0hZxMEg0+mYnA40LTosOSa5TqZ9GYIBzyJa9C3QUaMPSE2xAi/buNr8u89SfD9wHSQVBzrRa/SOQ==
+ dependencies:
+ fast-glob "^3.2.11"
+ glob-parent "^6.0.1"
+ globby "^13.1.1"
+ normalize-path "^3.0.0"
+ schema-utils "^4.0.0"
+ serialize-javascript "^6.0.0"
+
+core-js-compat@^3.21.0, core-js-compat@^3.22.1:
+ version "3.25.1"
+ resolved "https://registry.yarnpkg.com/core-js-compat/-/core-js-compat-3.25.1.tgz#6f13a90de52f89bbe6267e5620a412c7f7ff7e42"
+ integrity sha512-pOHS7O0i8Qt4zlPW/eIFjwp+NrTPx+wTL0ctgI2fHn31sZOq89rDsmtc/A2vAX7r6shl+bmVI+678He46jgBlw==
+ dependencies:
+ browserslist "^4.21.3"
+
+core-js-pure@^3.20.2:
+ version "3.25.1"
+ resolved "https://registry.yarnpkg.com/core-js-pure/-/core-js-pure-3.25.1.tgz#79546518ae87cc362c991d9c2d211f45107991ee"
+ integrity sha512-7Fr74bliUDdeJCBMxkkIuQ4xfxn/SwrVg+HkJUAoNEXVqYLv55l6Af0dJ5Lq2YBUW9yKqSkLXaS5SYPK6MGa/A==
+
+core-js@^3.23.3:
+ version "3.25.1"
+ resolved "https://registry.yarnpkg.com/core-js/-/core-js-3.25.1.tgz#5818e09de0db8956e16bf10e2a7141e931b7c69c"
+ integrity sha512-sr0FY4lnO1hkQ4gLDr24K0DGnweGO1QwSj5BpfQjpSJPdqWalja4cTps29Y/PJVG/P7FYlPDkH3hO+Tr0CvDgQ==
+
+core-util-is@~1.0.0:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/core-util-is/-/core-util-is-1.0.3.tgz#a6042d3634c2b27e9328f837b965fac83808db85"
+ integrity sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==
+
+cosmiconfig-typescript-loader@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/cosmiconfig-typescript-loader/-/cosmiconfig-typescript-loader-4.0.0.tgz#4a6d856c1281135197346a6f64dfa73a9cd9fefa"
+ integrity sha512-cVpucSc2Tf+VPwCCR7SZzmQTQkPbkk4O01yXsYqXBIbjE1bhwqSyAgYQkRK1un4i0OPziTleqFhdkmOc4RQ/9g==
+
+cosmiconfig@^6.0.0:
+ version "6.0.0"
+ resolved "https://registry.yarnpkg.com/cosmiconfig/-/cosmiconfig-6.0.0.tgz#da4fee853c52f6b1e6935f41c1a2fc50bd4a9982"
+ integrity sha512-xb3ZL6+L8b9JLLCx3ZdoZy4+2ECphCMo2PwqgP1tlfVq6M6YReyzBJtvWWtbDSpNr9hn96pkCiZqUcFEc+54Qg==
+ dependencies:
+ "@types/parse-json" "^4.0.0"
+ import-fresh "^3.1.0"
+ parse-json "^5.0.0"
+ path-type "^4.0.0"
+ yaml "^1.7.2"
+
+cosmiconfig@^7.0.0, cosmiconfig@^7.0.1:
+ version "7.0.1"
+ resolved "https://registry.yarnpkg.com/cosmiconfig/-/cosmiconfig-7.0.1.tgz#714d756522cace867867ccb4474c5d01bbae5d6d"
+ integrity sha512-a1YWNUV2HwGimB7dU2s1wUMurNKjpx60HxBB6xUM8Re+2s1g1IIfJvFR0/iCF+XHdE0GMTKTuLR32UQff4TEyQ==
+ dependencies:
+ "@types/parse-json" "^4.0.0"
+ import-fresh "^3.2.1"
+ parse-json "^5.0.0"
+ path-type "^4.0.0"
+ yaml "^1.10.0"
+
+create-require@^1.1.0:
+ version "1.1.1"
+ resolved "https://registry.yarnpkg.com/create-require/-/create-require-1.1.1.tgz#c1d7e8f1e5f6cfc9ff65f9cd352d37348756c333"
+ integrity sha512-dcKFX3jn0MpIaXjisoRvexIJVEKzaq7z2rZKxf+MSr9TkdmHmsU4m2lcLojrj/FHl8mk5VxMmYA+ftRkP/3oKQ==
+
+cross-fetch@^3.1.5:
+ version "3.1.5"
+ resolved "https://registry.yarnpkg.com/cross-fetch/-/cross-fetch-3.1.5.tgz#e1389f44d9e7ba767907f7af8454787952ab534f"
+ integrity sha512-lvb1SBsI0Z7GDwmuid+mU3kWVBwTVUbe7S0H52yaaAdQOXq2YktTCZdlAcNKFzE6QtRz0snpw9bNiPeOIkkQvw==
+ dependencies:
+ node-fetch "2.6.7"
+
+cross-spawn@^7.0.3:
+ version "7.0.3"
+ resolved "https://registry.yarnpkg.com/cross-spawn/-/cross-spawn-7.0.3.tgz#f73a85b9d5d41d045551c177e2882d4ac85728a6"
+ integrity sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==
+ dependencies:
+ path-key "^3.1.0"
+ shebang-command "^2.0.0"
+ which "^2.0.1"
+
+crypto-random-string@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/crypto-random-string/-/crypto-random-string-2.0.0.tgz#ef2a7a966ec11083388369baa02ebead229b30d5"
+ integrity sha512-v1plID3y9r/lPhviJ1wrXpLeyUIGAZ2SHNYTEapm7/8A9nLPoyvVp3RK/EPFqn5kEznyWgYZNsRtYYIWbuG8KA==
+
+css-declaration-sorter@^6.3.0:
+ version "6.3.1"
+ resolved "https://registry.yarnpkg.com/css-declaration-sorter/-/css-declaration-sorter-6.3.1.tgz#be5e1d71b7a992433fb1c542c7a1b835e45682ec"
+ integrity sha512-fBffmak0bPAnyqc/HO8C3n2sHrp9wcqQz6ES9koRF2/mLOVAx9zIQ3Y7R29sYCteTPqMCwns4WYQoCX91Xl3+w==
+
+css-loader@^6.7.1:
+ version "6.7.1"
+ resolved "https://registry.yarnpkg.com/css-loader/-/css-loader-6.7.1.tgz#e98106f154f6e1baf3fc3bc455cb9981c1d5fd2e"
+ integrity sha512-yB5CNFa14MbPJcomwNh3wLThtkZgcNyI2bNMRt8iE5Z8Vwl7f8vQXFAzn2HDOJvtDq2NTZBUGMSUNNyrv3/+cw==
+ dependencies:
+ icss-utils "^5.1.0"
+ postcss "^8.4.7"
+ postcss-modules-extract-imports "^3.0.0"
+ postcss-modules-local-by-default "^4.0.0"
+ postcss-modules-scope "^3.0.0"
+ postcss-modules-values "^4.0.0"
+ postcss-value-parser "^4.2.0"
+ semver "^7.3.5"
+
+css-minimizer-webpack-plugin@^4.0.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/css-minimizer-webpack-plugin/-/css-minimizer-webpack-plugin-4.1.0.tgz#2ab9f7d8148c48f5d498604025e6e62cf9528855"
+ integrity sha512-Zd+yz4nta4GXi3pMqF6skO8kjzuCUbr62z8SLMGZZtxWxTGTLopOiabPGNDEyjHCRhnhdA1EfHmqLa2Oekjtng==
+ dependencies:
+ cssnano "^5.1.8"
+ jest-worker "^27.5.1"
+ postcss "^8.4.13"
+ schema-utils "^4.0.0"
+ serialize-javascript "^6.0.0"
+ source-map "^0.6.1"
+
+css-select@^4.1.3:
+ version "4.3.0"
+ resolved "https://registry.yarnpkg.com/css-select/-/css-select-4.3.0.tgz#db7129b2846662fd8628cfc496abb2b59e41529b"
+ integrity sha512-wPpOYtnsVontu2mODhA19JrqWxNsfdatRKd64kmpRbQgh1KtItko5sTnEpPdpSaJszTOhEMlF/RPz28qj4HqhQ==
+ dependencies:
+ boolbase "^1.0.0"
+ css-what "^6.0.1"
+ domhandler "^4.3.1"
+ domutils "^2.8.0"
+ nth-check "^2.0.1"
+
+css-select@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/css-select/-/css-select-5.1.0.tgz#b8ebd6554c3637ccc76688804ad3f6a6fdaea8a6"
+ integrity sha512-nwoRF1rvRRnnCqqY7updORDsuqKzqYJ28+oSMaJMMgOauh3fvwHqMS7EZpIPqK8GL+g9mKxF1vP/ZjSeNjEVHg==
+ dependencies:
+ boolbase "^1.0.0"
+ css-what "^6.1.0"
+ domhandler "^5.0.2"
+ domutils "^3.0.1"
+ nth-check "^2.0.1"
+
+css-tree@^1.1.2, css-tree@^1.1.3:
+ version "1.1.3"
+ resolved "https://registry.yarnpkg.com/css-tree/-/css-tree-1.1.3.tgz#eb4870fb6fd7707327ec95c2ff2ab09b5e8db91d"
+ integrity sha512-tRpdppF7TRazZrjJ6v3stzv93qxRcSsFmW6cX0Zm2NVKpxE1WV1HblnghVv9TreireHkqI/VDEsfolRF1p6y7Q==
+ dependencies:
+ mdn-data "2.0.14"
+ source-map "^0.6.1"
+
+css-what@^6.0.1, css-what@^6.1.0:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/css-what/-/css-what-6.1.0.tgz#fb5effcf76f1ddea2c81bdfaa4de44e79bac70f4"
+ integrity sha512-HTUrgRJ7r4dsZKU6GjmpfRK1O76h97Z8MfS1G0FozR+oF2kG6Vfe8JE6zwrkbxigziPHinCJ+gCPjA9EaBDtRw==
+
+cssesc@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/cssesc/-/cssesc-3.0.0.tgz#37741919903b868565e1c09ea747445cd18983ee"
+ integrity sha512-/Tb/JcjK111nNScGob5MNtsntNM1aCNUDipB/TkwZFhyDrrE47SOx/18wF2bbjgc3ZzCSKW1T5nt5EbFoAz/Vg==
+
+cssnano-preset-advanced@^5.3.8:
+ version "5.3.8"
+ resolved "https://registry.yarnpkg.com/cssnano-preset-advanced/-/cssnano-preset-advanced-5.3.8.tgz#027b1d05ef896d908178c483f0ec4190cb50ef9a"
+ integrity sha512-xUlLLnEB1LjpEik+zgRNlk8Y/koBPPtONZjp7JKbXigeAmCrFvq9H0pXW5jJV45bQWAlmJ0sKy+IMr0XxLYQZg==
+ dependencies:
+ autoprefixer "^10.3.7"
+ cssnano-preset-default "^5.2.12"
+ postcss-discard-unused "^5.1.0"
+ postcss-merge-idents "^5.1.1"
+ postcss-reduce-idents "^5.2.0"
+ postcss-zindex "^5.1.0"
+
+cssnano-preset-default@^5.2.12:
+ version "5.2.12"
+ resolved "https://registry.yarnpkg.com/cssnano-preset-default/-/cssnano-preset-default-5.2.12.tgz#ebe6596ec7030e62c3eb2b3c09f533c0644a9a97"
+ integrity sha512-OyCBTZi+PXgylz9HAA5kHyoYhfGcYdwFmyaJzWnzxuGRtnMw/kR6ilW9XzlzlRAtB6PLT/r+prYgkef7hngFew==
+ dependencies:
+ css-declaration-sorter "^6.3.0"
+ cssnano-utils "^3.1.0"
+ postcss-calc "^8.2.3"
+ postcss-colormin "^5.3.0"
+ postcss-convert-values "^5.1.2"
+ postcss-discard-comments "^5.1.2"
+ postcss-discard-duplicates "^5.1.0"
+ postcss-discard-empty "^5.1.1"
+ postcss-discard-overridden "^5.1.0"
+ postcss-merge-longhand "^5.1.6"
+ postcss-merge-rules "^5.1.2"
+ postcss-minify-font-values "^5.1.0"
+ postcss-minify-gradients "^5.1.1"
+ postcss-minify-params "^5.1.3"
+ postcss-minify-selectors "^5.2.1"
+ postcss-normalize-charset "^5.1.0"
+ postcss-normalize-display-values "^5.1.0"
+ postcss-normalize-positions "^5.1.1"
+ postcss-normalize-repeat-style "^5.1.1"
+ postcss-normalize-string "^5.1.0"
+ postcss-normalize-timing-functions "^5.1.0"
+ postcss-normalize-unicode "^5.1.0"
+ postcss-normalize-url "^5.1.0"
+ postcss-normalize-whitespace "^5.1.1"
+ postcss-ordered-values "^5.1.3"
+ postcss-reduce-initial "^5.1.0"
+ postcss-reduce-transforms "^5.1.0"
+ postcss-svgo "^5.1.0"
+ postcss-unique-selectors "^5.1.1"
+
+cssnano-utils@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/cssnano-utils/-/cssnano-utils-3.1.0.tgz#95684d08c91511edfc70d2636338ca37ef3a6861"
+ integrity sha512-JQNR19/YZhz4psLX/rQ9M83e3z2Wf/HdJbryzte4a3NSuafyp9w/I4U+hx5C2S9g41qlstH7DEWnZaaj83OuEA==
+
+cssnano@^5.1.12, cssnano@^5.1.8:
+ version "5.1.13"
+ resolved "https://registry.yarnpkg.com/cssnano/-/cssnano-5.1.13.tgz#83d0926e72955332dc4802a7070296e6258efc0a"
+ integrity sha512-S2SL2ekdEz6w6a2epXn4CmMKU4K3KpcyXLKfAYc9UQQqJRkD/2eLUG0vJ3Db/9OvO5GuAdgXw3pFbR6abqghDQ==
+ dependencies:
+ cssnano-preset-default "^5.2.12"
+ lilconfig "^2.0.3"
+ yaml "^1.10.2"
+
+csso@^4.2.0:
+ version "4.2.0"
+ resolved "https://registry.yarnpkg.com/csso/-/csso-4.2.0.tgz#ea3a561346e8dc9f546d6febedd50187cf389529"
+ integrity sha512-wvlcdIbf6pwKEk7vHj8/Bkc0B4ylXZruLvOgs9doS5eOsOpuodOV2zJChSpkp+pRpYQLQMeF04nr3Z68Sta9jA==
+ dependencies:
+ css-tree "^1.1.2"
+
+csstype@^3.0.2:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/csstype/-/csstype-3.1.1.tgz#841b532c45c758ee546a11d5bd7b7b473c8c30b9"
+ integrity sha512-DJR/VvkAvSZW9bTouZue2sSxDwdTN92uHjqeKVm+0dAqdfNykRzQ95tay8aXMBAAPpUiq4Qcug2L7neoRh2Egw==
+
+cz-conventional-changelog@3.3.0, cz-conventional-changelog@^3.3.0:
+ version "3.3.0"
+ resolved "https://registry.yarnpkg.com/cz-conventional-changelog/-/cz-conventional-changelog-3.3.0.tgz#9246947c90404149b3fe2cf7ee91acad3b7d22d2"
+ integrity sha512-U466fIzU5U22eES5lTNiNbZ+d8dfcHcssH4o7QsdWaCcRs/feIPCxKYSWkYBNs5mny7MvEfwpTLWjvbm94hecw==
+ dependencies:
+ chalk "^2.4.1"
+ commitizen "^4.0.3"
+ conventional-commit-types "^3.0.0"
+ lodash.map "^4.5.1"
+ longest "^2.0.1"
+ word-wrap "^1.0.3"
+ optionalDependencies:
+ "@commitlint/load" ">6.1.1"
+
+dargs@^7.0.0:
+ version "7.0.0"
+ resolved "https://registry.yarnpkg.com/dargs/-/dargs-7.0.0.tgz#04015c41de0bcb69ec84050f3d9be0caf8d6d5cc"
+ integrity sha512-2iy1EkLdlBzQGvbweYRFxmFath8+K7+AKB0TlhHWkNuH+TmovaMH/Wp7V7R4u7f4SnX3OgLsU9t1NI9ioDnUpg==
+
+dateformat@^3.0.0:
+ version "3.0.3"
+ resolved "https://registry.yarnpkg.com/dateformat/-/dateformat-3.0.3.tgz#a6e37499a4d9a9cf85ef5872044d62901c9889ae"
+ integrity sha512-jyCETtSl3VMZMWeRo7iY1FL19ges1t55hMo5yaam4Jrsm5EPL89UQkoQRyiI+Yf4k8r2ZpdngkV8hr1lIdjb3Q==
+
+debug@2.6.9, debug@^2.6.0:
+ version "2.6.9"
+ resolved "https://registry.yarnpkg.com/debug/-/debug-2.6.9.tgz#5d128515df134ff327e90a4c93f4e077a536341f"
+ integrity sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==
+ dependencies:
+ ms "2.0.0"
+
+debug@^3.2.6:
+ version "3.2.7"
+ resolved "https://mirrors.cloud.tencent.com/npm/debug/-/debug-3.2.7.tgz#72580b7e9145fb39b6676f9c5e5fb100b934179a"
+ integrity sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==
+ dependencies:
+ ms "^2.1.1"
+
+debug@^4.1.0, debug@^4.1.1:
+ version "4.3.4"
+ resolved "https://registry.yarnpkg.com/debug/-/debug-4.3.4.tgz#1319f6579357f2338d3337d2cdd4914bb5dcc865"
+ integrity sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==
+ dependencies:
+ ms "2.1.2"
+
+decamelize-keys@^1.1.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/decamelize-keys/-/decamelize-keys-1.1.0.tgz#d171a87933252807eb3cb61dc1c1445d078df2d9"
+ integrity sha512-ocLWuYzRPoS9bfiSdDd3cxvrzovVMZnRDVEzAs+hWIVXGDbHxWMECij2OBuyB/An0FFW/nLuq6Kv1i/YC5Qfzg==
+ dependencies:
+ decamelize "^1.1.0"
+ map-obj "^1.0.0"
+
+decamelize@^1.1.0:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/decamelize/-/decamelize-1.2.0.tgz#f6534d15148269b20352e7bee26f501f9a191290"
+ integrity sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA==
+
+decompress-response@^3.3.0:
+ version "3.3.0"
+ resolved "https://registry.yarnpkg.com/decompress-response/-/decompress-response-3.3.0.tgz#80a4dd323748384bfa248083622aedec982adff3"
+ integrity sha512-BzRPQuY1ip+qDonAOz42gRm/pg9F768C+npV/4JOsxRC2sq+Rlk+Q4ZCAsOhnIaMrgarILY+RMUIvMmmX1qAEA==
+ dependencies:
+ mimic-response "^1.0.0"
+
+dedent@0.7.0:
+ version "0.7.0"
+ resolved "https://registry.yarnpkg.com/dedent/-/dedent-0.7.0.tgz#2495ddbaf6eb874abb0e1be9df22d2e5a544326c"
+ integrity sha512-Q6fKUPqnAHAyhiUgFU7BUzLiv0kd8saH9al7tnu5Q/okj6dnupxyTgFIBjVzJATdfIAm9NAsvXNzjaKa+bxVyA==
+
+deep-extend@^0.6.0:
+ version "0.6.0"
+ resolved "https://registry.yarnpkg.com/deep-extend/-/deep-extend-0.6.0.tgz#c4fa7c95404a17a9c3e8ca7e1537312b736330ac"
+ integrity sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA==
+
+deepmerge@^4.2.2:
+ version "4.2.2"
+ resolved "https://registry.yarnpkg.com/deepmerge/-/deepmerge-4.2.2.tgz#44d2ea3679b8f4d4ffba33f03d865fc1e7bf4955"
+ integrity sha512-FJ3UgI4gIl+PHZm53knsuSFpE+nESMr7M4v9QcgB7S63Kj/6WqMiFQJpBBYz1Pt+66bZpP3Q7Lye0Oo9MPKEdg==
+
+default-gateway@^6.0.3:
+ version "6.0.3"
+ resolved "https://registry.yarnpkg.com/default-gateway/-/default-gateway-6.0.3.tgz#819494c888053bdb743edbf343d6cdf7f2943a71"
+ integrity sha512-fwSOJsbbNzZ/CUFpqFBqYfYNLj1NbMPm8MMCIzHjC83iSJRBEGmDUxU+WP661BaBQImeC2yHwXtz+P/O9o+XEg==
+ dependencies:
+ execa "^5.0.0"
+
+defaults@^1.0.3:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/defaults/-/defaults-1.0.3.tgz#c656051e9817d9ff08ed881477f3fe4019f3ef7d"
+ integrity sha512-s82itHOnYrN0Ib8r+z7laQz3sdE+4FP3d9Q7VLO7U+KRT+CR0GsWuyHxzdAY82I7cXv0G/twrqomTJLOssO5HA==
+ dependencies:
+ clone "^1.0.2"
+
+defer-to-connect@^1.0.1:
+ version "1.1.3"
+ resolved "https://registry.yarnpkg.com/defer-to-connect/-/defer-to-connect-1.1.3.tgz#331ae050c08dcf789f8c83a7b81f0ed94f4ac591"
+ integrity sha512-0ISdNousHvZT2EiFlZeZAHBUvSxmKswVCEf8hW7KWgG4a8MVEu/3Vb6uWYozkjylyCxe0JBIiRB1jV45S70WVQ==
+
+define-lazy-prop@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/define-lazy-prop/-/define-lazy-prop-2.0.0.tgz#3f7ae421129bcaaac9bc74905c98a0009ec9ee7f"
+ integrity sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og==
+
+define-properties@^1.1.4:
+ version "1.1.4"
+ resolved "https://registry.yarnpkg.com/define-properties/-/define-properties-1.1.4.tgz#0b14d7bd7fbeb2f3572c3a7eda80ea5d57fb05b1"
+ integrity sha512-uckOqKcfaVvtBdsVkdPv3XjveQJsNQqmhXgRi8uhvWWuPYZCNlzT8qAyblUgNoXdHdjMTzAqeGjAoli8f+bzPA==
+ dependencies:
+ has-property-descriptors "^1.0.0"
+ object-keys "^1.1.1"
+
+del@^6.1.1:
+ version "6.1.1"
+ resolved "https://registry.yarnpkg.com/del/-/del-6.1.1.tgz#3b70314f1ec0aa325c6b14eb36b95786671edb7a"
+ integrity sha512-ua8BhapfP0JUJKC/zV9yHHDW/rDoDxP4Zhn3AkA6/xT6gY7jYXJiaeyBZznYVujhZZET+UgcbZiQ7sN3WqcImg==
+ dependencies:
+ globby "^11.0.1"
+ graceful-fs "^4.2.4"
+ is-glob "^4.0.1"
+ is-path-cwd "^2.2.0"
+ is-path-inside "^3.0.2"
+ p-map "^4.0.0"
+ rimraf "^3.0.2"
+ slash "^3.0.0"
+
+depd@2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/depd/-/depd-2.0.0.tgz#b696163cc757560d09cf22cc8fad1571b79e76df"
+ integrity sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==
+
+depd@~1.1.2:
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/depd/-/depd-1.1.2.tgz#9bcd52e14c097763e749b274c4346ed2e560b5a9"
+ integrity sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ==
+
+destroy@1.2.0:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/destroy/-/destroy-1.2.0.tgz#4803735509ad8be552934c67df614f94e66fa015"
+ integrity sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==
+
+detab@2.0.4:
+ version "2.0.4"
+ resolved "https://registry.yarnpkg.com/detab/-/detab-2.0.4.tgz#b927892069aff405fbb9a186fe97a44a92a94b43"
+ integrity sha512-8zdsQA5bIkoRECvCrNKPla84lyoR7DSAyf7p0YgXzBO9PDJx8KntPUay7NS6yp+KdxdVtiE5SpHKtbp2ZQyA9g==
+ dependencies:
+ repeat-string "^1.5.4"
+
+detect-file@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/detect-file/-/detect-file-1.0.0.tgz#f0d66d03672a825cb1b73bdb3fe62310c8e552b7"
+ integrity sha512-DtCOLG98P007x7wiiOmfI0fi3eIKyWiLTGJ2MDnVi/E04lWGbf+JzrRHMm0rgIIZJGtHpKpbVgLWHrv8xXpc3Q==
+
+detect-indent@6.1.0, detect-indent@^6.0.0:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/detect-indent/-/detect-indent-6.1.0.tgz#592485ebbbf6b3b1ab2be175c8393d04ca0d57e6"
+ integrity sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA==
+
+detect-newline@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/detect-newline/-/detect-newline-3.1.0.tgz#576f5dfc63ae1a192ff192d8ad3af6308991b651"
+ integrity sha512-TLz+x/vEXm/Y7P7wn1EJFNLxYpUD4TgMosxY6fAVJUnJMbupHBOncxyWUG9OpTaH9EBD7uFI5LfEgmMOc54DsA==
+
+detect-node@^2.0.4:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/detect-node/-/detect-node-2.1.0.tgz#c9c70775a49c3d03bc2c06d9a73be550f978f8b1"
+ integrity sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==
+
+detect-port-alt@^1.1.6:
+ version "1.1.6"
+ resolved "https://registry.yarnpkg.com/detect-port-alt/-/detect-port-alt-1.1.6.tgz#24707deabe932d4a3cf621302027c2b266568275"
+ integrity sha512-5tQykt+LqfJFBEYaDITx7S7cR7mJ/zQmLXZ2qt5w04ainYZw6tBf9dBunMjVeVOdYVRUzUOE4HkY5J7+uttb5Q==
+ dependencies:
+ address "^1.0.1"
+ debug "^2.6.0"
+
+detect-port@^1.3.0:
+ version "1.3.0"
+ resolved "https://registry.yarnpkg.com/detect-port/-/detect-port-1.3.0.tgz#d9c40e9accadd4df5cac6a782aefd014d573d1f1"
+ integrity sha512-E+B1gzkl2gqxt1IhUzwjrxBKRqx1UzC3WLONHinn8S3T6lwV/agVCyitiFOsGJ/eYuEUBvD71MZHy3Pv1G9doQ==
+ dependencies:
+ address "^1.0.1"
+ debug "^2.6.0"
+
+diff@^4.0.1:
+ version "4.0.2"
+ resolved "https://registry.yarnpkg.com/diff/-/diff-4.0.2.tgz#60f3aecb89d5fae520c11aa19efc2bb982aade7d"
+ integrity sha512-58lmxKSA4BNyLz+HHMUzlOEpg09FV+ev6ZMe3vJihgdxzgcwZ8VoEEPmALCZG9LmqfVoNMMKpttIYTVG6uDY7A==
+
+dir-glob@^3.0.1:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/dir-glob/-/dir-glob-3.0.1.tgz#56dbf73d992a4a93ba1584f4534063fd2e41717f"
+ integrity sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==
+ dependencies:
+ path-type "^4.0.0"
+
+dns-equal@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/dns-equal/-/dns-equal-1.0.0.tgz#b39e7f1da6eb0a75ba9c17324b34753c47e0654d"
+ integrity sha512-z+paD6YUQsk+AbGCEM4PrOXSss5gd66QfcVBFTKR/HpFL9jCqikS94HYwKww6fQyO7IxrIIyUu+g0Ka9tUS2Cg==
+
+dns-packet@^5.2.2:
+ version "5.4.0"
+ resolved "https://registry.yarnpkg.com/dns-packet/-/dns-packet-5.4.0.tgz#1f88477cf9f27e78a213fb6d118ae38e759a879b"
+ integrity sha512-EgqGeaBB8hLiHLZtp/IbaDQTL8pZ0+IvwzSHA6d7VyMDM+B9hgddEMa9xjK5oYnw0ci0JQ6g2XCD7/f6cafU6g==
+ dependencies:
+ "@leichtgewicht/ip-codec" "^2.0.1"
+
+docusaurus-plugin-less@^2.0.2:
+ version "2.0.2"
+ resolved "https://mirrors.cloud.tencent.com/npm/docusaurus-plugin-less/-/docusaurus-plugin-less-2.0.2.tgz#63bf04a5539a3b8ddc38bf527b51eb135b60f528"
+ integrity sha512-ez6WSSvGS8HoJslYHeG5SflyShWvHFXeTTHXPBd3H1T3zgq9wp6wD7scXm+rXyyfhFhP5VNiIqhYB78z4OLjwg==
+
+dom-converter@^0.2.0:
+ version "0.2.0"
+ resolved "https://registry.yarnpkg.com/dom-converter/-/dom-converter-0.2.0.tgz#6721a9daee2e293682955b6afe416771627bb768"
+ integrity sha512-gd3ypIPfOMr9h5jIKq8E3sHOTCjeirnl0WK5ZdS1AW0Odt0b1PaWaHdJ4Qk4klv+YB9aJBS7mESXjFoDQPu6DA==
+ dependencies:
+ utila "~0.4"
+
+dom-serializer@^1.0.1:
+ version "1.4.1"
+ resolved "https://registry.yarnpkg.com/dom-serializer/-/dom-serializer-1.4.1.tgz#de5d41b1aea290215dc45a6dae8adcf1d32e2d30"
+ integrity sha512-VHwB3KfrcOOkelEG2ZOfxqLZdfkil8PtJi4P8N2MMXucZq2yLp75ClViUlOVwyoHEDjYU433Aq+5zWP61+RGag==
+ dependencies:
+ domelementtype "^2.0.1"
+ domhandler "^4.2.0"
+ entities "^2.0.0"
+
+dom-serializer@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/dom-serializer/-/dom-serializer-2.0.0.tgz#e41b802e1eedf9f6cae183ce5e622d789d7d8e53"
+ integrity sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg==
+ dependencies:
+ domelementtype "^2.3.0"
+ domhandler "^5.0.2"
+ entities "^4.2.0"
+
+domelementtype@^2.0.1, domelementtype@^2.2.0, domelementtype@^2.3.0:
+ version "2.3.0"
+ resolved "https://registry.yarnpkg.com/domelementtype/-/domelementtype-2.3.0.tgz#5c45e8e869952626331d7aab326d01daf65d589d"
+ integrity sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw==
+
+domhandler@^4.0.0, domhandler@^4.2.0, domhandler@^4.3.1:
+ version "4.3.1"
+ resolved "https://registry.yarnpkg.com/domhandler/-/domhandler-4.3.1.tgz#8d792033416f59d68bc03a5aa7b018c1ca89279c"
+ integrity sha512-GrwoxYN+uWlzO8uhUXRl0P+kHE4GtVPfYzVLcUxPL7KNdHKj66vvlhiweIHqYYXWlw+T8iLMp42Lm67ghw4WMQ==
+ dependencies:
+ domelementtype "^2.2.0"
+
+domhandler@^5.0.1, domhandler@^5.0.2, domhandler@^5.0.3:
+ version "5.0.3"
+ resolved "https://registry.yarnpkg.com/domhandler/-/domhandler-5.0.3.tgz#cc385f7f751f1d1fc650c21374804254538c7d31"
+ integrity sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJiDsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w==
+ dependencies:
+ domelementtype "^2.3.0"
+
+domutils@^2.5.2, domutils@^2.8.0:
+ version "2.8.0"
+ resolved "https://registry.yarnpkg.com/domutils/-/domutils-2.8.0.tgz#4437def5db6e2d1f5d6ee859bd95ca7d02048135"
+ integrity sha512-w96Cjofp72M5IIhpjgobBimYEfoPjx1Vx0BSX9P30WBdZW2WIKU0T1Bd0kz2eNZ9ikjKgHbEyKx8BB6H1L3h3A==
+ dependencies:
+ dom-serializer "^1.0.1"
+ domelementtype "^2.2.0"
+ domhandler "^4.2.0"
+
+domutils@^3.0.1:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/domutils/-/domutils-3.0.1.tgz#696b3875238338cb186b6c0612bd4901c89a4f1c"
+ integrity sha512-z08c1l761iKhDFtfXO04C7kTdPBLi41zwOZl00WS8b5eiaebNpY00HKbztwBq+e3vyqWNwWF3mP9YLUeqIrF+Q==
+ dependencies:
+ dom-serializer "^2.0.0"
+ domelementtype "^2.3.0"
+ domhandler "^5.0.1"
+
+dot-case@^3.0.4:
+ version "3.0.4"
+ resolved "https://registry.yarnpkg.com/dot-case/-/dot-case-3.0.4.tgz#9b2b670d00a431667a8a75ba29cd1b98809ce751"
+ integrity sha512-Kv5nKlh6yRrdrGvxeJ2e5y2eRUpkUosIW4A2AS38zwSz27zu7ufDwQPi5Jhs3XAlGNetl3bmnGhQsMtkKJnj3w==
+ dependencies:
+ no-case "^3.0.4"
+ tslib "^2.0.3"
+
+dot-prop@^5.1.0, dot-prop@^5.2.0:
+ version "5.3.0"
+ resolved "https://registry.yarnpkg.com/dot-prop/-/dot-prop-5.3.0.tgz#90ccce708cd9cd82cc4dc8c3ddd9abdd55b20e88"
+ integrity sha512-QM8q3zDe58hqUqjraQOmzZ1LIH9SWQJTlEKCH4kJ2oQvLZk7RbQXvtDM2XEq3fwkV9CCvvH4LA0AV+ogFsBM2Q==
+ dependencies:
+ is-obj "^2.0.0"
+
+dotgitignore@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/dotgitignore/-/dotgitignore-2.1.0.tgz#a4b15a4e4ef3cf383598aaf1dfa4a04bcc089b7b"
+ integrity sha512-sCm11ak2oY6DglEPpCB8TixLjWAxd3kJTs6UIcSasNYxXdFPV+YKlye92c8H4kKFqV5qYMIh7d+cYecEg0dIkA==
+ dependencies:
+ find-up "^3.0.0"
+ minimatch "^3.0.4"
+
+duplexer3@^0.1.4:
+ version "0.1.5"
+ resolved "https://registry.yarnpkg.com/duplexer3/-/duplexer3-0.1.5.tgz#0b5e4d7bad5de8901ea4440624c8e1d20099217e"
+ integrity sha512-1A8za6ws41LQgv9HrE/66jyC5yuSjQ3L/KOpFtoBilsAK2iA2wuS5rTt1OCzIvtS2V7nVmedsUU+DGRcjBmOYA==
+
+duplexer@^0.1.2:
+ version "0.1.2"
+ resolved "https://registry.yarnpkg.com/duplexer/-/duplexer-0.1.2.tgz#3abe43aef3835f8ae077d136ddce0f276b0400e6"
+ integrity sha512-jtD6YG370ZCIi/9GTaJKQxWTZD045+4R4hTk/x1UyoqadyJ9x9CgSi1RlVDQF8U2sxLLSnFkCaMihqljHIWgMg==
+
+eastasianwidth@^0.2.0:
+ version "0.2.0"
+ resolved "https://registry.yarnpkg.com/eastasianwidth/-/eastasianwidth-0.2.0.tgz#696ce2ec0aa0e6ea93a397ffcf24aa7840c827cb"
+ integrity sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==
+
+ee-first@1.1.1:
+ version "1.1.1"
+ resolved "https://registry.yarnpkg.com/ee-first/-/ee-first-1.1.1.tgz#590c61156b0ae2f4f0255732a158b266bc56b21d"
+ integrity sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==
+
+electron-to-chromium@^1.4.202:
+ version "1.4.249"
+ resolved "https://registry.yarnpkg.com/electron-to-chromium/-/electron-to-chromium-1.4.249.tgz#49c34336c742ee65453dbddf4c84355e59b96e2c"
+ integrity sha512-GMCxR3p2HQvIw47A599crTKYZprqihoBL4lDSAUmr7IYekXFK5t/WgEBrGJDCa2HWIZFQEkGuMqPCi05ceYqPQ==
+
+emoji-regex@^8.0.0:
+ version "8.0.0"
+ resolved "https://registry.yarnpkg.com/emoji-regex/-/emoji-regex-8.0.0.tgz#e818fd69ce5ccfcb404594f842963bf53164cc37"
+ integrity sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==
+
+emoji-regex@^9.2.2:
+ version "9.2.2"
+ resolved "https://registry.yarnpkg.com/emoji-regex/-/emoji-regex-9.2.2.tgz#840c8803b0d8047f4ff0cf963176b32d4ef3ed72"
+ integrity sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==
+
+emojis-list@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/emojis-list/-/emojis-list-3.0.0.tgz#5570662046ad29e2e916e71aae260abdff4f6a78"
+ integrity sha512-/kyM18EfinwXZbno9FyUGeFh87KC8HRQBQGildHZbEuRyWFOmv1U10o9BBp8XVZDVNNuQKyIGIu5ZYAAXJ0V2Q==
+
+emoticon@^3.2.0:
+ version "3.2.0"
+ resolved "https://registry.yarnpkg.com/emoticon/-/emoticon-3.2.0.tgz#c008ca7d7620fac742fe1bf4af8ff8fed154ae7f"
+ integrity sha512-SNujglcLTTg+lDAcApPNgEdudaqQFiAbJCqzjNxJkvN9vAwCGi0uu8IUVvx+f16h+V44KCY6Y2yboroc9pilHg==
+
+encodeurl@~1.0.2:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/encodeurl/-/encodeurl-1.0.2.tgz#ad3ff4c86ec2d029322f5a02c3a9a606c95b3f59"
+ integrity sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==
+
+end-of-stream@^1.1.0:
+ version "1.4.4"
+ resolved "https://registry.yarnpkg.com/end-of-stream/-/end-of-stream-1.4.4.tgz#5ae64a5f45057baf3626ec14da0ca5e4b2431eb0"
+ integrity sha512-+uw1inIHVPQoaVuHzRyXd21icM+cnt4CzD5rW+NC1wjOUSTOs+Te7FOv7AhN7vS9x/oIyhLP5PR1H+phQAHu5Q==
+ dependencies:
+ once "^1.4.0"
+
+enhanced-resolve@^5.10.0:
+ version "5.10.0"
+ resolved "https://registry.yarnpkg.com/enhanced-resolve/-/enhanced-resolve-5.10.0.tgz#0dc579c3bb2a1032e357ac45b8f3a6f3ad4fb1e6"
+ integrity sha512-T0yTFjdpldGY8PmuXXR0PyQ1ufZpEGiHVrp7zHKB7jdR4qlmZHhONVM5AQOAWXuF/w3dnHbEQVrNptJgt7F+cQ==
+ dependencies:
+ graceful-fs "^4.2.4"
+ tapable "^2.2.0"
+
+entities@^2.0.0:
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/entities/-/entities-2.2.0.tgz#098dc90ebb83d8dffa089d55256b351d34c4da55"
+ integrity sha512-p92if5Nz619I0w+akJrLZH0MX0Pb5DX39XOwQTtXSdQQOaYH03S1uIQp4mhOZtAXrxq4ViO67YTiLBo2638o9A==
+
+entities@^4.2.0, entities@^4.3.0, entities@^4.4.0:
+ version "4.4.0"
+ resolved "https://registry.yarnpkg.com/entities/-/entities-4.4.0.tgz#97bdaba170339446495e653cfd2db78962900174"
+ integrity sha512-oYp7156SP8LkeGD0GF85ad1X9Ai79WtRsZ2gxJqtBuzH+98YUV6jkHEKlZkMbcrjJjIVJNIDP/3WL9wQkoPbWA==
+
+errno@^0.1.1:
+ version "0.1.8"
+ resolved "https://mirrors.cloud.tencent.com/npm/errno/-/errno-0.1.8.tgz#8bb3e9c7d463be4976ff888f76b4809ebc2e811f"
+ integrity sha512-dJ6oBr5SQ1VSd9qkk7ByRgb/1SH4JZjCHSW/mr63/QcXO9zLVxvJ6Oy13nio03rxpSnVDDjFor75SjVeZWPW/A==
+ dependencies:
+ prr "~1.0.1"
+
+error-ex@^1.3.1:
+ version "1.3.2"
+ resolved "https://registry.yarnpkg.com/error-ex/-/error-ex-1.3.2.tgz#b4ac40648107fdcdcfae242f428bea8a14d4f1bf"
+ integrity sha512-7dFHNmqeFSEt2ZBsCriorKnn3Z2pj+fd9kmI6QoWw4//DL+icEBfc0U7qJCisqrTsKTjw4fNFy2pW9OqStD84g==
+ dependencies:
+ is-arrayish "^0.2.1"
+
+es-module-lexer@^0.9.0:
+ version "0.9.3"
+ resolved "https://registry.yarnpkg.com/es-module-lexer/-/es-module-lexer-0.9.3.tgz#6f13db00cc38417137daf74366f535c8eb438f19"
+ integrity sha512-1HQ2M2sPtxwnvOvT1ZClHyQDiggdNjURWpY2we6aMKCQiUVxTmVs2UYPLIrD84sS+kMdUwfBSylbJPwNnBrnHQ==
+
+escalade@^3.1.1:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/escalade/-/escalade-3.1.1.tgz#d8cfdc7000965c5a0174b4a82eaa5c0552742e40"
+ integrity sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw==
+
+escape-goat@^2.0.0:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/escape-goat/-/escape-goat-2.1.1.tgz#1b2dc77003676c457ec760b2dc68edb648188675"
+ integrity sha512-8/uIhbG12Csjy2JEW7D9pHbreaVaS/OpN3ycnyvElTdwM5n6GY6W6e2IPemfvGZeUMqZ9A/3GqIZMgKnBhAw/Q==
+
+escape-html@^1.0.3, escape-html@~1.0.3:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/escape-html/-/escape-html-1.0.3.tgz#0258eae4d3d0c0974de1c169188ef0051d1d1988"
+ integrity sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==
+
+escape-string-regexp@^1.0.5:
+ version "1.0.5"
+ resolved "https://registry.yarnpkg.com/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz#1b61c0562190a8dff6ae3bb2cf0200ca130b86d4"
+ integrity sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==
+
+escape-string-regexp@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz#14ba83a5d373e3d311e5afca29cf5bfad965bf34"
+ integrity sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==
+
+eslint-scope@5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/eslint-scope/-/eslint-scope-5.1.1.tgz#e786e59a66cb92b3f6c1fb0d508aab174848f48c"
+ integrity sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==
+ dependencies:
+ esrecurse "^4.3.0"
+ estraverse "^4.1.1"
+
+esprima@^4.0.0:
+ version "4.0.1"
+ resolved "https://registry.yarnpkg.com/esprima/-/esprima-4.0.1.tgz#13b04cdb3e6c5d19df91ab6987a8695619b0aa71"
+ integrity sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==
+
+esrecurse@^4.3.0:
+ version "4.3.0"
+ resolved "https://registry.yarnpkg.com/esrecurse/-/esrecurse-4.3.0.tgz#7ad7964d679abb28bee72cec63758b1c5d2c9921"
+ integrity sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==
+ dependencies:
+ estraverse "^5.2.0"
+
+estraverse@^4.1.1:
+ version "4.3.0"
+ resolved "https://registry.yarnpkg.com/estraverse/-/estraverse-4.3.0.tgz#398ad3f3c5a24948be7725e83d11a7de28cdbd1d"
+ integrity sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==
+
+estraverse@^5.2.0:
+ version "5.3.0"
+ resolved "https://registry.yarnpkg.com/estraverse/-/estraverse-5.3.0.tgz#2eea5290702f26ab8fe5370370ff86c965d21123"
+ integrity sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==
+
+esutils@^2.0.2:
+ version "2.0.3"
+ resolved "https://registry.yarnpkg.com/esutils/-/esutils-2.0.3.tgz#74d2eb4de0b8da1293711910d50775b9b710ef64"
+ integrity sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==
+
+eta@^1.12.3:
+ version "1.12.3"
+ resolved "https://registry.yarnpkg.com/eta/-/eta-1.12.3.tgz#2982d08adfbef39f9fa50e2fbd42d7337e7338b1"
+ integrity sha512-qHixwbDLtekO/d51Yr4glcaUJCIjGVJyTzuqV4GPlgZo1YpgOKG+avQynErZIYrfM6JIJdtiG2Kox8tbb+DoGg==
+
+etag@~1.8.1:
+ version "1.8.1"
+ resolved "https://registry.yarnpkg.com/etag/-/etag-1.8.1.tgz#41ae2eeb65efa62268aebfea83ac7d79299b0887"
+ integrity sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==
+
+eval@^0.1.8:
+ version "0.1.8"
+ resolved "https://registry.yarnpkg.com/eval/-/eval-0.1.8.tgz#2b903473b8cc1d1989b83a1e7923f883eb357f85"
+ integrity sha512-EzV94NYKoO09GLXGjXj9JIlXijVck4ONSr5wiCWDvhsvj5jxSrzTmRU/9C1DyB6uToszLs8aifA6NQ7lEQdvFw==
+ dependencies:
+ "@types/node" "*"
+ require-like ">= 0.1.1"
+
+eventemitter3@^4.0.0:
+ version "4.0.7"
+ resolved "https://registry.yarnpkg.com/eventemitter3/-/eventemitter3-4.0.7.tgz#2de9b68f6528d5644ef5c59526a1b4a07306169f"
+ integrity sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==
+
+events@^3.2.0:
+ version "3.3.0"
+ resolved "https://registry.yarnpkg.com/events/-/events-3.3.0.tgz#31a95ad0a924e2d2c419a813aeb2c4e878ea7400"
+ integrity sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==
+
+execa@^5.0.0:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/execa/-/execa-5.1.1.tgz#f80ad9cbf4298f7bd1d4c9555c21e93741c411dd"
+ integrity sha512-8uSpZZocAZRBAPIEINJj3Lo9HyGitllczc27Eh5YYojjMFMn8yHMDMaUHE2Jqfq05D/wucwI4JGURyXt1vchyg==
+ dependencies:
+ cross-spawn "^7.0.3"
+ get-stream "^6.0.0"
+ human-signals "^2.1.0"
+ is-stream "^2.0.0"
+ merge-stream "^2.0.0"
+ npm-run-path "^4.0.1"
+ onetime "^5.1.2"
+ signal-exit "^3.0.3"
+ strip-final-newline "^2.0.0"
+
+expand-tilde@^2.0.0, expand-tilde@^2.0.2:
+ version "2.0.2"
+ resolved "https://registry.yarnpkg.com/expand-tilde/-/expand-tilde-2.0.2.tgz#97e801aa052df02454de46b02bf621642cdc8502"
+ integrity sha512-A5EmesHW6rfnZ9ysHQjPdJRni0SRar0tjtG5MNtm9n5TUvsYU8oozprtRD4AqHxcZWWlVuAmQo2nWKfN9oyjTw==
+ dependencies:
+ homedir-polyfill "^1.0.1"
+
+express@^4.17.3:
+ version "4.18.1"
+ resolved "https://registry.yarnpkg.com/express/-/express-4.18.1.tgz#7797de8b9c72c857b9cd0e14a5eea80666267caf"
+ integrity sha512-zZBcOX9TfehHQhtupq57OF8lFZ3UZi08Y97dwFCkD8p9d/d2Y3M+ykKcwaMDEL+4qyUolgBDX6AblpR3fL212Q==
+ dependencies:
+ accepts "~1.3.8"
+ array-flatten "1.1.1"
+ body-parser "1.20.0"
+ content-disposition "0.5.4"
+ content-type "~1.0.4"
+ cookie "0.5.0"
+ cookie-signature "1.0.6"
+ debug "2.6.9"
+ depd "2.0.0"
+ encodeurl "~1.0.2"
+ escape-html "~1.0.3"
+ etag "~1.8.1"
+ finalhandler "1.2.0"
+ fresh "0.5.2"
+ http-errors "2.0.0"
+ merge-descriptors "1.0.1"
+ methods "~1.1.2"
+ on-finished "2.4.1"
+ parseurl "~1.3.3"
+ path-to-regexp "0.1.7"
+ proxy-addr "~2.0.7"
+ qs "6.10.3"
+ range-parser "~1.2.1"
+ safe-buffer "5.2.1"
+ send "0.18.0"
+ serve-static "1.15.0"
+ setprototypeof "1.2.0"
+ statuses "2.0.1"
+ type-is "~1.6.18"
+ utils-merge "1.0.1"
+ vary "~1.1.2"
+
+extend-shallow@^2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/extend-shallow/-/extend-shallow-2.0.1.tgz#51af7d614ad9a9f610ea1bafbb989d6b1c56890f"
+ integrity sha512-zCnTtlxNoAiDc3gqY2aYAWFx7XWWiasuF2K8Me5WbN8otHKTUKBwjPtNpRs/rbUZm7KxWAaNj7P1a/p52GbVug==
+ dependencies:
+ is-extendable "^0.1.0"
+
+extend@^3.0.0:
+ version "3.0.2"
+ resolved "https://registry.yarnpkg.com/extend/-/extend-3.0.2.tgz#f8b1136b4071fbd8eb140aff858b1019ec2915fa"
+ integrity sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==
+
+external-editor@^3.0.3:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/external-editor/-/external-editor-3.1.0.tgz#cb03f740befae03ea4d283caed2741a83f335495"
+ integrity sha512-hMQ4CX1p1izmuLYyZqLMO/qGNw10wSv9QDCPfzXfyFrOaCSSoRfqE1Kf1s5an66J5JZC62NewG+mK49jOCtQew==
+ dependencies:
+ chardet "^0.7.0"
+ iconv-lite "^0.4.24"
+ tmp "^0.0.33"
+
+fast-deep-equal@^3.1.1, fast-deep-equal@^3.1.3:
+ version "3.1.3"
+ resolved "https://registry.yarnpkg.com/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz#3a7d56b559d6cbc3eb512325244e619a65c6c525"
+ integrity sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==
+
+fast-glob@^3.2.11, fast-glob@^3.2.9:
+ version "3.2.12"
+ resolved "https://registry.yarnpkg.com/fast-glob/-/fast-glob-3.2.12.tgz#7f39ec99c2e6ab030337142da9e0c18f37afae80"
+ integrity sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==
+ dependencies:
+ "@nodelib/fs.stat" "^2.0.2"
+ "@nodelib/fs.walk" "^1.2.3"
+ glob-parent "^5.1.2"
+ merge2 "^1.3.0"
+ micromatch "^4.0.4"
+
+fast-json-stable-stringify@^2.0.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz#874bf69c6f404c2b5d99c481341399fd55892633"
+ integrity sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==
+
+fast-url-parser@1.1.3:
+ version "1.1.3"
+ resolved "https://registry.yarnpkg.com/fast-url-parser/-/fast-url-parser-1.1.3.tgz#f4af3ea9f34d8a271cf58ad2b3759f431f0b318d"
+ integrity sha512-5jOCVXADYNuRkKFzNJ0dCCewsZiYo0dz8QNYljkOpFC6r2U4OBmKtvm/Tsuh4w1YYdDqDb31a8TVhBJ2OJKdqQ==
+ dependencies:
+ punycode "^1.3.2"
+
+fastq@^1.6.0:
+ version "1.13.0"
+ resolved "https://registry.yarnpkg.com/fastq/-/fastq-1.13.0.tgz#616760f88a7526bdfc596b7cab8c18938c36b98c"
+ integrity sha512-YpkpUnK8od0o1hmeSc7UUs/eB/vIPWJYjKck2QKIzAf71Vm1AAQ3EbuZB3g2JIy+pg+ERD0vqI79KyZiB2e2Nw==
+ dependencies:
+ reusify "^1.0.4"
+
+faye-websocket@^0.11.3:
+ version "0.11.4"
+ resolved "https://registry.yarnpkg.com/faye-websocket/-/faye-websocket-0.11.4.tgz#7f0d9275cfdd86a1c963dc8b65fcc451edcbb1da"
+ integrity sha512-CzbClwlXAuiRQAlUyfqPgvPoNKTckTPGfwZV4ZdAhVcP2lh9KUxJg2b5GkE7XbjKQ3YJnQ9z6D9ntLAlB+tP8g==
+ dependencies:
+ websocket-driver ">=0.5.1"
+
+fbemitter@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/fbemitter/-/fbemitter-3.0.0.tgz#00b2a1af5411254aab416cd75f9e6289bee4bff3"
+ integrity sha512-KWKaceCwKQU0+HPoop6gn4eOHk50bBv/VxjJtGMfwmJt3D29JpN4H4eisCtIPA+a8GVBam+ldMMpMjJUvpDyHw==
+ dependencies:
+ fbjs "^3.0.0"
+
+fbjs-css-vars@^1.0.0:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/fbjs-css-vars/-/fbjs-css-vars-1.0.2.tgz#216551136ae02fe255932c3ec8775f18e2c078b8"
+ integrity sha512-b2XGFAFdWZWg0phtAWLHCk836A1Xann+I+Dgd3Gk64MHKZO44FfoD1KxyvbSh0qZsIoXQGGlVztIY+oitJPpRQ==
+
+fbjs@^3.0.0, fbjs@^3.0.1:
+ version "3.0.4"
+ resolved "https://registry.yarnpkg.com/fbjs/-/fbjs-3.0.4.tgz#e1871c6bd3083bac71ff2da868ad5067d37716c6"
+ integrity sha512-ucV0tDODnGV3JCnnkmoszb5lf4bNpzjv80K41wd4k798Etq+UYD0y0TIfalLjZoKgjive6/adkRnszwapiDgBQ==
+ dependencies:
+ cross-fetch "^3.1.5"
+ fbjs-css-vars "^1.0.0"
+ loose-envify "^1.0.0"
+ object-assign "^4.1.0"
+ promise "^7.1.1"
+ setimmediate "^1.0.5"
+ ua-parser-js "^0.7.30"
+
+feed@^4.2.2:
+ version "4.2.2"
+ resolved "https://registry.yarnpkg.com/feed/-/feed-4.2.2.tgz#865783ef6ed12579e2c44bbef3c9113bc4956a7e"
+ integrity sha512-u5/sxGfiMfZNtJ3OvQpXcvotFpYkL0n9u9mM2vkui2nGo8b4wvDkJ8gAkYqbA8QpGyFCv3RK0Z+Iv+9veCS9bQ==
+ dependencies:
+ xml-js "^1.6.11"
+
+figures@^3.0.0, figures@^3.1.0:
+ version "3.2.0"
+ resolved "https://registry.yarnpkg.com/figures/-/figures-3.2.0.tgz#625c18bd293c604dc4a8ddb2febf0c88341746af"
+ integrity sha512-yaduQFRKLXYOGgEn6AZau90j3ggSOyiqXU0F9JZfeXYhNa+Jk4X+s45A2zg5jns87GAFa34BBm2kXw4XpNcbdg==
+ dependencies:
+ escape-string-regexp "^1.0.5"
+
+file-loader@^6.2.0:
+ version "6.2.0"
+ resolved "https://registry.yarnpkg.com/file-loader/-/file-loader-6.2.0.tgz#baef7cf8e1840df325e4390b4484879480eebe4d"
+ integrity sha512-qo3glqyTa61Ytg4u73GultjHGjdRyig3tG6lPtyX/jOEJvHif9uB0/OCI2Kif6ctF3caQTW2G5gym21oAsI4pw==
+ dependencies:
+ loader-utils "^2.0.0"
+ schema-utils "^3.0.0"
+
+filesize@^8.0.6:
+ version "8.0.7"
+ resolved "https://registry.yarnpkg.com/filesize/-/filesize-8.0.7.tgz#695e70d80f4e47012c132d57a059e80c6b580bd8"
+ integrity sha512-pjmC+bkIF8XI7fWaH8KxHcZL3DPybs1roSKP4rKDvy20tAWwIObE4+JIseG2byfGKhud5ZnM4YSGKBz7Sh0ndQ==
+
+fill-range@^7.0.1:
+ version "7.0.1"
+ resolved "https://registry.yarnpkg.com/fill-range/-/fill-range-7.0.1.tgz#1919a6a7c75fe38b2c7c77e5198535da9acdda40"
+ integrity sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==
+ dependencies:
+ to-regex-range "^5.0.1"
+
+finalhandler@1.2.0:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/finalhandler/-/finalhandler-1.2.0.tgz#7d23fe5731b207b4640e4fcd00aec1f9207a7b32"
+ integrity sha512-5uXcUVftlQMFnWC9qu/svkWv3GTd2PfUhK/3PLkYNAe7FbqJMt3515HaxE6eRL74GdsriiwujiawdaB1BpEISg==
+ dependencies:
+ debug "2.6.9"
+ encodeurl "~1.0.2"
+ escape-html "~1.0.3"
+ on-finished "2.4.1"
+ parseurl "~1.3.3"
+ statuses "2.0.1"
+ unpipe "~1.0.0"
+
+find-cache-dir@^3.3.1:
+ version "3.3.2"
+ resolved "https://registry.yarnpkg.com/find-cache-dir/-/find-cache-dir-3.3.2.tgz#b30c5b6eff0730731aea9bbd9dbecbd80256d64b"
+ integrity sha512-wXZV5emFEjrridIgED11OoUKLxiYjAcqot/NJdAkOhlJ+vGzwhOAfcG5OX1jP+S0PcjEn8bdMJv+g2jwQ3Onig==
+ dependencies:
+ commondir "^1.0.1"
+ make-dir "^3.0.2"
+ pkg-dir "^4.1.0"
+
+find-node-modules@^2.1.2:
+ version "2.1.3"
+ resolved "https://registry.yarnpkg.com/find-node-modules/-/find-node-modules-2.1.3.tgz#3c976cff2ca29ee94b4f9eafc613987fc4c0ee44"
+ integrity sha512-UC2I2+nx1ZuOBclWVNdcnbDR5dlrOdVb7xNjmT/lHE+LsgztWks3dG7boJ37yTS/venXw84B/mAW9uHVoC5QRg==
+ dependencies:
+ findup-sync "^4.0.0"
+ merge "^2.1.1"
+
+find-root@1.1.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/find-root/-/find-root-1.1.0.tgz#abcfc8ba76f708c42a97b3d685b7e9450bfb9ce4"
+ integrity sha512-NKfW6bec6GfKc0SGx1e07QZY9PE99u0Bft/0rzSD5k3sO/vwkVUpDUKVm5Gpp5Ue3YfShPFTX2070tDs5kB9Ng==
+
+find-up@^2.0.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/find-up/-/find-up-2.1.0.tgz#45d1b7e506c717ddd482775a2b77920a3c0c57a7"
+ integrity sha512-NWzkk0jSJtTt08+FBFMvXoeZnOJD+jTtsRmBYbAIzJdX6l7dLgR7CTubCM5/eDdPUBvLCeVasP1brfVR/9/EZQ==
+ dependencies:
+ locate-path "^2.0.0"
+
+find-up@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/find-up/-/find-up-3.0.0.tgz#49169f1d7993430646da61ecc5ae355c21c97b73"
+ integrity sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==
+ dependencies:
+ locate-path "^3.0.0"
+
+find-up@^4.0.0, find-up@^4.1.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/find-up/-/find-up-4.1.0.tgz#97afe7d6cdc0bc5928584b7c8d7b16e8a9aa5d19"
+ integrity sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==
+ dependencies:
+ locate-path "^5.0.0"
+ path-exists "^4.0.0"
+
+find-up@^5.0.0:
+ version "5.0.0"
+ resolved "https://registry.yarnpkg.com/find-up/-/find-up-5.0.0.tgz#4c92819ecb7083561e4f4a240a86be5198f536fc"
+ integrity sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==
+ dependencies:
+ locate-path "^6.0.0"
+ path-exists "^4.0.0"
+
+findup-sync@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/findup-sync/-/findup-sync-4.0.0.tgz#956c9cdde804052b881b428512905c4a5f2cdef0"
+ integrity sha512-6jvvn/12IC4quLBL1KNokxC7wWTvYncaVUYSoxWw7YykPLuRrnv4qdHcSOywOI5RpkOVGeQRtWM8/q+G6W6qfQ==
+ dependencies:
+ detect-file "^1.0.0"
+ is-glob "^4.0.0"
+ micromatch "^4.0.2"
+ resolve-dir "^1.0.1"
+
+flux@^4.0.1:
+ version "4.0.3"
+ resolved "https://registry.yarnpkg.com/flux/-/flux-4.0.3.tgz#573b504a24982c4768fdfb59d8d2ea5637d72ee7"
+ integrity sha512-yKAbrp7JhZhj6uiT1FTuVMlIAT1J4jqEyBpFApi1kxpGZCvacMVc/t1pMQyotqHhAgvoE3bNvAykhCo2CLjnYw==
+ dependencies:
+ fbemitter "^3.0.0"
+ fbjs "^3.0.1"
+
+follow-redirects@^1.0.0, follow-redirects@^1.14.7:
+ version "1.15.2"
+ resolved "https://registry.yarnpkg.com/follow-redirects/-/follow-redirects-1.15.2.tgz#b460864144ba63f2681096f274c4e57026da2c13"
+ integrity sha512-VQLG33o04KaQ8uYi2tVNbdrWp1QWxNNea+nmIB4EVM28v0hmP17z7aG1+wAkNzVq4KeXTq3221ye5qTJP91JwA==
+
+fork-ts-checker-webpack-plugin@^6.5.0:
+ version "6.5.2"
+ resolved "https://registry.yarnpkg.com/fork-ts-checker-webpack-plugin/-/fork-ts-checker-webpack-plugin-6.5.2.tgz#4f67183f2f9eb8ba7df7177ce3cf3e75cdafb340"
+ integrity sha512-m5cUmF30xkZ7h4tWUgTAcEaKmUW7tfyUyTqNNOz7OxWJ0v1VWKTcOvH8FWHUwSjlW/356Ijc9vi3XfcPstpQKA==
+ dependencies:
+ "@babel/code-frame" "^7.8.3"
+ "@types/json-schema" "^7.0.5"
+ chalk "^4.1.0"
+ chokidar "^3.4.2"
+ cosmiconfig "^6.0.0"
+ deepmerge "^4.2.2"
+ fs-extra "^9.0.0"
+ glob "^7.1.6"
+ memfs "^3.1.2"
+ minimatch "^3.0.4"
+ schema-utils "2.7.0"
+ semver "^7.3.2"
+ tapable "^1.0.0"
+
+forwarded@0.2.0:
+ version "0.2.0"
+ resolved "https://registry.yarnpkg.com/forwarded/-/forwarded-0.2.0.tgz#2269936428aad4c15c7ebe9779a84bf0b2a81811"
+ integrity sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==
+
+fraction.js@^4.2.0:
+ version "4.2.0"
+ resolved "https://registry.yarnpkg.com/fraction.js/-/fraction.js-4.2.0.tgz#448e5109a313a3527f5a3ab2119ec4cf0e0e2950"
+ integrity sha512-MhLuK+2gUcnZe8ZHlaaINnQLl0xRIGRfcGk2yl8xoQAfHrSsL3rYu6FCmBdkdbhc9EPlwyGHewaRsvwRMJtAlA==
+
+fresh@0.5.2:
+ version "0.5.2"
+ resolved "https://registry.yarnpkg.com/fresh/-/fresh-0.5.2.tgz#3d8cadd90d976569fa835ab1f8e4b23a105605a7"
+ integrity sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==
+
+fs-extra@9.1.0, fs-extra@^9.0.0:
+ version "9.1.0"
+ resolved "https://registry.yarnpkg.com/fs-extra/-/fs-extra-9.1.0.tgz#5954460c764a8da2094ba3554bf839e6b9a7c86d"
+ integrity sha512-hcg3ZmepS30/7BSFqRvoo3DOMQu7IjqxO5nCDt+zM9XWjb33Wg7ziNT+Qvqbuc3+gWpzO02JubVyk2G4Zvo1OQ==
+ dependencies:
+ at-least-node "^1.0.0"
+ graceful-fs "^4.2.0"
+ jsonfile "^6.0.1"
+ universalify "^2.0.0"
+
+fs-extra@^10.1.0:
+ version "10.1.0"
+ resolved "https://registry.yarnpkg.com/fs-extra/-/fs-extra-10.1.0.tgz#02873cfbc4084dde127eaa5f9905eef2325d1abf"
+ integrity sha512-oRXApq54ETRj4eMiFzGnHWGy+zo5raudjuxN0b8H7s/RU2oW0Wvsx9O0ACRN/kRq9E8Vu/ReskGB5o3ji+FzHQ==
+ dependencies:
+ graceful-fs "^4.2.0"
+ jsonfile "^6.0.1"
+ universalify "^2.0.0"
+
+fs-monkey@^1.0.3:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/fs-monkey/-/fs-monkey-1.0.3.tgz#ae3ac92d53bb328efe0e9a1d9541f6ad8d48e2d3"
+ integrity sha512-cybjIfiiE+pTWicSCLFHSrXZ6EilF30oh91FDP9S2B051prEa7QWfrVTQm10/dDpswBDXZugPa1Ogu8Yh+HV0Q==
+
+fs.realpath@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/fs.realpath/-/fs.realpath-1.0.0.tgz#1504ad2523158caa40db4a2787cb01411994ea4f"
+ integrity sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==
+
+fsevents@~2.3.2:
+ version "2.3.2"
+ resolved "https://registry.yarnpkg.com/fsevents/-/fsevents-2.3.2.tgz#8a526f78b8fdf4623b709e0b975c52c24c02fd1a"
+ integrity sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==
+
+function-bind@^1.1.1:
+ version "1.1.1"
+ resolved "https://registry.yarnpkg.com/function-bind/-/function-bind-1.1.1.tgz#a56899d3ea3c9bab874bb9773b7c5ede92f4895d"
+ integrity sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==
+
+gensync@^1.0.0-beta.1, gensync@^1.0.0-beta.2:
+ version "1.0.0-beta.2"
+ resolved "https://registry.yarnpkg.com/gensync/-/gensync-1.0.0-beta.2.tgz#32a6ee76c3d7f52d46b2b1ae5d93fea8580a25e0"
+ integrity sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==
+
+get-caller-file@^2.0.5:
+ version "2.0.5"
+ resolved "https://registry.yarnpkg.com/get-caller-file/-/get-caller-file-2.0.5.tgz#4f94412a82db32f36e3b0b9741f8a97feb031f7e"
+ integrity sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==
+
+get-intrinsic@^1.0.2, get-intrinsic@^1.1.1:
+ version "1.1.3"
+ resolved "https://registry.yarnpkg.com/get-intrinsic/-/get-intrinsic-1.1.3.tgz#063c84329ad93e83893c7f4f243ef63ffa351385"
+ integrity sha512-QJVz1Tj7MS099PevUG5jvnt9tSkXN8K14dxQlikJuPt4uD9hHAHjLyLBiLR5zELelBdD9QNRAXZzsJx0WaDL9A==
+ dependencies:
+ function-bind "^1.1.1"
+ has "^1.0.3"
+ has-symbols "^1.0.3"
+
+get-own-enumerable-property-symbols@^3.0.0:
+ version "3.0.2"
+ resolved "https://registry.yarnpkg.com/get-own-enumerable-property-symbols/-/get-own-enumerable-property-symbols-3.0.2.tgz#b5fde77f22cbe35f390b4e089922c50bce6ef664"
+ integrity sha512-I0UBV/XOz1XkIJHEUDMZAbzCThU/H8DxmSfmdGcKPnVhu2VfFqr34jr9777IyaTYvxjedWhqVIilEDsCdP5G6g==
+
+get-pkg-repo@^4.0.0:
+ version "4.2.1"
+ resolved "https://registry.yarnpkg.com/get-pkg-repo/-/get-pkg-repo-4.2.1.tgz#75973e1c8050c73f48190c52047c4cee3acbf385"
+ integrity sha512-2+QbHjFRfGB74v/pYWjd5OhU3TDIC2Gv/YKUTk/tCvAz0pkn/Mz6P3uByuBimLOcPvN2jYdScl3xGFSrx0jEcA==
+ dependencies:
+ "@hutson/parse-repository-url" "^3.0.0"
+ hosted-git-info "^4.0.0"
+ through2 "^2.0.0"
+ yargs "^16.2.0"
+
+get-stream@^4.1.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/get-stream/-/get-stream-4.1.0.tgz#c1b255575f3dc21d59bfc79cd3d2b46b1c3a54b5"
+ integrity sha512-GMat4EJ5161kIy2HevLlr4luNjBgvmj413KaQA7jt4V8B4RDsfpHk7WQ9GVqfYyyx8OS/L66Kox+rJRNklLK7w==
+ dependencies:
+ pump "^3.0.0"
+
+get-stream@^5.1.0:
+ version "5.2.0"
+ resolved "https://registry.yarnpkg.com/get-stream/-/get-stream-5.2.0.tgz#4966a1795ee5ace65e706c4b7beb71257d6e22d3"
+ integrity sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA==
+ dependencies:
+ pump "^3.0.0"
+
+get-stream@^6.0.0:
+ version "6.0.1"
+ resolved "https://registry.yarnpkg.com/get-stream/-/get-stream-6.0.1.tgz#a262d8eef67aced57c2852ad6167526a43cbf7b7"
+ integrity sha512-ts6Wi+2j3jQjqi70w5AlN8DFnkSwC+MqmxEzdEALB2qXZYV3X/b1CTfgPLGJNMeAWxdPfU8FO1ms3NUfaHCPYg==
+
+git-raw-commits@^2.0.8:
+ version "2.0.11"
+ resolved "https://registry.yarnpkg.com/git-raw-commits/-/git-raw-commits-2.0.11.tgz#bc3576638071d18655e1cc60d7f524920008d723"
+ integrity sha512-VnctFhw+xfj8Va1xtfEqCUD2XDrbAPSJx+hSrE5K7fGdjZruW7XV+QOrN7LF/RJyvspRiD2I0asWsxFp0ya26A==
+ dependencies:
+ dargs "^7.0.0"
+ lodash "^4.17.15"
+ meow "^8.0.0"
+ split2 "^3.0.0"
+ through2 "^4.0.0"
+
+git-remote-origin-url@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/git-remote-origin-url/-/git-remote-origin-url-2.0.0.tgz#5282659dae2107145a11126112ad3216ec5fa65f"
+ integrity sha512-eU+GGrZgccNJcsDH5LkXR3PB9M958hxc7sbA8DFJjrv9j4L2P/eZfKhM+QD6wyzpiv+b1BpK0XrYCxkovtjSLw==
+ dependencies:
+ gitconfiglocal "^1.0.0"
+ pify "^2.3.0"
+
+git-semver-tags@^4.0.0, git-semver-tags@^4.1.1:
+ version "4.1.1"
+ resolved "https://registry.yarnpkg.com/git-semver-tags/-/git-semver-tags-4.1.1.tgz#63191bcd809b0ec3e151ba4751c16c444e5b5780"
+ integrity sha512-OWyMt5zBe7xFs8vglMmhM9lRQzCWL3WjHtxNNfJTMngGym7pC1kh8sP6jevfydJ6LP3ZvGxfb6ABYgPUM0mtsA==
+ dependencies:
+ meow "^8.0.0"
+ semver "^6.0.0"
+
+gitconfiglocal@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/gitconfiglocal/-/gitconfiglocal-1.0.0.tgz#41d045f3851a5ea88f03f24ca1c6178114464b9b"
+ integrity sha512-spLUXeTAVHxDtKsJc8FkFVgFtMdEN9qPGpL23VfSHx4fP4+Ds097IXLvymbnDH8FnmxX5Nr9bPw3A+AQ6mWEaQ==
+ dependencies:
+ ini "^1.3.2"
+
+github-slugger@^1.4.0:
+ version "1.4.0"
+ resolved "https://registry.yarnpkg.com/github-slugger/-/github-slugger-1.4.0.tgz#206eb96cdb22ee56fdc53a28d5a302338463444e"
+ integrity sha512-w0dzqw/nt51xMVmlaV1+JRzN+oCa1KfcgGEWhxUG16wbdA+Xnt/yoFO8Z8x/V82ZcZ0wy6ln9QDup5avbhiDhQ==
+
+glob-parent@^5.1.2, glob-parent@~5.1.2:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/glob-parent/-/glob-parent-5.1.2.tgz#869832c58034fe68a4093c17dc15e8340d8401c4"
+ integrity sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==
+ dependencies:
+ is-glob "^4.0.1"
+
+glob-parent@^6.0.1:
+ version "6.0.2"
+ resolved "https://registry.yarnpkg.com/glob-parent/-/glob-parent-6.0.2.tgz#6d237d99083950c79290f24c7642a3de9a28f9e3"
+ integrity sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==
+ dependencies:
+ is-glob "^4.0.3"
+
+glob-to-regexp@^0.4.1:
+ version "0.4.1"
+ resolved "https://registry.yarnpkg.com/glob-to-regexp/-/glob-to-regexp-0.4.1.tgz#c75297087c851b9a578bd217dd59a92f59fe546e"
+ integrity sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==
+
+glob@7.2.3, glob@^7.0.0, glob@^7.1.3, glob@^7.1.6:
+ version "7.2.3"
+ resolved "https://registry.yarnpkg.com/glob/-/glob-7.2.3.tgz#b8df0fb802bbfa8e89bd1d938b4e16578ed44f2b"
+ integrity sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==
+ dependencies:
+ fs.realpath "^1.0.0"
+ inflight "^1.0.4"
+ inherits "2"
+ minimatch "^3.1.1"
+ once "^1.3.0"
+ path-is-absolute "^1.0.0"
+
+global-dirs@^0.1.1:
+ version "0.1.1"
+ resolved "https://registry.yarnpkg.com/global-dirs/-/global-dirs-0.1.1.tgz#b319c0dd4607f353f3be9cca4c72fc148c49f445"
+ integrity sha512-NknMLn7F2J7aflwFOlGdNIuCDpN3VGoSoB+aap3KABFWbHVn1TCgFC+np23J8W2BiZbjfEw3BFBycSMv1AFblg==
+ dependencies:
+ ini "^1.3.4"
+
+global-dirs@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/global-dirs/-/global-dirs-3.0.0.tgz#70a76fe84ea315ab37b1f5576cbde7d48ef72686"
+ integrity sha512-v8ho2DS5RiCjftj1nD9NmnfaOzTdud7RRnVd9kFNOjqZbISlx5DQ+OrTkywgd0dIt7oFCvKetZSHoHcP3sDdiA==
+ dependencies:
+ ini "2.0.0"
+
+global-modules@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/global-modules/-/global-modules-1.0.0.tgz#6d770f0eb523ac78164d72b5e71a8877265cc3ea"
+ integrity sha512-sKzpEkf11GpOFuw0Zzjzmt4B4UZwjOcG757PPvrfhxcLFbq0wpsgpOqxpxtxFiCG4DtG93M6XRVbF2oGdev7bg==
+ dependencies:
+ global-prefix "^1.0.1"
+ is-windows "^1.0.1"
+ resolve-dir "^1.0.0"
+
+global-modules@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/global-modules/-/global-modules-2.0.0.tgz#997605ad2345f27f51539bea26574421215c7780"
+ integrity sha512-NGbfmJBp9x8IxyJSd1P+otYK8vonoJactOogrVfFRIAEY1ukil8RSKDz2Yo7wh1oihl51l/r6W4epkeKJHqL8A==
+ dependencies:
+ global-prefix "^3.0.0"
+
+global-prefix@^1.0.1:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/global-prefix/-/global-prefix-1.0.2.tgz#dbf743c6c14992593c655568cb66ed32c0122ebe"
+ integrity sha512-5lsx1NUDHtSjfg0eHlmYvZKv8/nVqX4ckFbM+FrGcQ+04KWcWFo9P5MxPZYSzUvyzmdTbI7Eix8Q4IbELDqzKg==
+ dependencies:
+ expand-tilde "^2.0.2"
+ homedir-polyfill "^1.0.1"
+ ini "^1.3.4"
+ is-windows "^1.0.1"
+ which "^1.2.14"
+
+global-prefix@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/global-prefix/-/global-prefix-3.0.0.tgz#fc85f73064df69f50421f47f883fe5b913ba9b97"
+ integrity sha512-awConJSVCHVGND6x3tmMaKcQvwXLhjdkmomy2W+Goaui8YPgYgXJZewhg3fWC+DlfqqQuWg8AwqjGTD2nAPVWg==
+ dependencies:
+ ini "^1.3.5"
+ kind-of "^6.0.2"
+ which "^1.3.1"
+
+globals@^11.1.0:
+ version "11.12.0"
+ resolved "https://registry.yarnpkg.com/globals/-/globals-11.12.0.tgz#ab8795338868a0babd8525758018c2a7eb95c42e"
+ integrity sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==
+
+globby@^11.0.1, globby@^11.0.4, globby@^11.1.0:
+ version "11.1.0"
+ resolved "https://registry.yarnpkg.com/globby/-/globby-11.1.0.tgz#bd4be98bb042f83d796f7e3811991fbe82a0d34b"
+ integrity sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g==
+ dependencies:
+ array-union "^2.1.0"
+ dir-glob "^3.0.1"
+ fast-glob "^3.2.9"
+ ignore "^5.2.0"
+ merge2 "^1.4.1"
+ slash "^3.0.0"
+
+globby@^13.1.1:
+ version "13.1.2"
+ resolved "https://registry.yarnpkg.com/globby/-/globby-13.1.2.tgz#29047105582427ab6eca4f905200667b056da515"
+ integrity sha512-LKSDZXToac40u8Q1PQtZihbNdTYSNMuWe+K5l+oa6KgDzSvVrHXlJy40hUP522RjAIoNLJYBJi7ow+rbFpIhHQ==
+ dependencies:
+ dir-glob "^3.0.1"
+ fast-glob "^3.2.11"
+ ignore "^5.2.0"
+ merge2 "^1.4.1"
+ slash "^4.0.0"
+
+got@^9.6.0:
+ version "9.6.0"
+ resolved "https://registry.yarnpkg.com/got/-/got-9.6.0.tgz#edf45e7d67f99545705de1f7bbeeeb121765ed85"
+ integrity sha512-R7eWptXuGYxwijs0eV+v3o6+XH1IqVK8dJOEecQfTmkncw9AV4dcw/Dhxi8MdlqPthxxpZyizMzyg8RTmEsG+Q==
+ dependencies:
+ "@sindresorhus/is" "^0.14.0"
+ "@szmarczak/http-timer" "^1.1.2"
+ cacheable-request "^6.0.0"
+ decompress-response "^3.3.0"
+ duplexer3 "^0.1.4"
+ get-stream "^4.1.0"
+ lowercase-keys "^1.0.1"
+ mimic-response "^1.0.1"
+ p-cancelable "^1.0.0"
+ to-readable-stream "^1.0.0"
+ url-parse-lax "^3.0.0"
+
+graceful-fs@^4.1.2, graceful-fs@^4.1.6, graceful-fs@^4.2.0, graceful-fs@^4.2.4, graceful-fs@^4.2.6, graceful-fs@^4.2.9:
+ version "4.2.10"
+ resolved "https://registry.yarnpkg.com/graceful-fs/-/graceful-fs-4.2.10.tgz#147d3a006da4ca3ce14728c7aefc287c367d7a6c"
+ integrity sha512-9ByhssR2fPVsNZj478qUUbKfmL0+t5BDVyjShtyZZLiK7ZDAArFFfopyOTj0M05wE2tJPisA4iTnnXl2YoPvOA==
+
+gray-matter@^4.0.3:
+ version "4.0.3"
+ resolved "https://registry.yarnpkg.com/gray-matter/-/gray-matter-4.0.3.tgz#e893c064825de73ea1f5f7d88c7a9f7274288798"
+ integrity sha512-5v6yZd4JK3eMI3FqqCouswVqwugaA9r4dNZB1wwcmrD02QkV5H0y7XBQW8QwQqEaZY1pM9aqORSORhJRdNK44Q==
+ dependencies:
+ js-yaml "^3.13.1"
+ kind-of "^6.0.2"
+ section-matter "^1.0.0"
+ strip-bom-string "^1.0.0"
+
+gzip-size@^6.0.0:
+ version "6.0.0"
+ resolved "https://registry.yarnpkg.com/gzip-size/-/gzip-size-6.0.0.tgz#065367fd50c239c0671cbcbad5be3e2eeb10e462"
+ integrity sha512-ax7ZYomf6jqPTQ4+XCpUGyXKHk5WweS+e05MBO4/y3WJ5RkmPXNKvX+bx1behVILVwr6JSQvZAku021CHPXG3Q==
+ dependencies:
+ duplexer "^0.1.2"
+
+handle-thing@^2.0.0:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/handle-thing/-/handle-thing-2.0.1.tgz#857f79ce359580c340d43081cc648970d0bb234e"
+ integrity sha512-9Qn4yBxelxoh2Ow62nP+Ka/kMnOXRi8BXnRaUwezLNhqelnN49xKz4F/dPP8OYLxLxq6JDtZb2i9XznUQbNPTg==
+
+handlebars@^4.7.7:
+ version "4.7.7"
+ resolved "https://registry.yarnpkg.com/handlebars/-/handlebars-4.7.7.tgz#9ce33416aad02dbd6c8fafa8240d5d98004945a1"
+ integrity sha512-aAcXm5OAfE/8IXkcZvCepKU3VzW1/39Fb5ZuqMtgI/hT8X2YgoMvBY5dLhq/cpOvw7Lk1nK/UF71aLG/ZnVYRA==
+ dependencies:
+ minimist "^1.2.5"
+ neo-async "^2.6.0"
+ source-map "^0.6.1"
+ wordwrap "^1.0.0"
+ optionalDependencies:
+ uglify-js "^3.1.4"
+
+hard-rejection@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/hard-rejection/-/hard-rejection-2.1.0.tgz#1c6eda5c1685c63942766d79bb40ae773cecd883"
+ integrity sha512-VIZB+ibDhx7ObhAe7OVtoEbuP4h/MuOTHJ+J8h/eBXotJYl0fBgR72xDFCKgIh22OJZIOVNxBMWuhAr10r8HdA==
+
+has-flag@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/has-flag/-/has-flag-3.0.0.tgz#b5d454dc2199ae225699f3467e5a07f3b955bafd"
+ integrity sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==
+
+has-flag@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/has-flag/-/has-flag-4.0.0.tgz#944771fd9c81c81265c4d6941860da06bb59479b"
+ integrity sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==
+
+has-property-descriptors@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/has-property-descriptors/-/has-property-descriptors-1.0.0.tgz#610708600606d36961ed04c196193b6a607fa861"
+ integrity sha512-62DVLZGoiEBDHQyqG4w9xCuZ7eJEwNmJRWw2VY84Oedb7WFcA27fiEVe8oUQx9hAUJ4ekurquucTGwsyO1XGdQ==
+ dependencies:
+ get-intrinsic "^1.1.1"
+
+has-symbols@^1.0.3:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/has-symbols/-/has-symbols-1.0.3.tgz#bb7b2c4349251dce87b125f7bdf874aa7c8b39f8"
+ integrity sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A==
+
+has-yarn@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/has-yarn/-/has-yarn-2.1.0.tgz#137e11354a7b5bf11aa5cb649cf0c6f3ff2b2e77"
+ integrity sha512-UqBRqi4ju7T+TqGNdqAO0PaSVGsDGJUBQvk9eUWNGRY1CFGDzYhLWoM7JQEemnlvVcv/YEmc2wNW8BC24EnUsw==
+
+has@^1.0.3:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/has/-/has-1.0.3.tgz#722d7cbfc1f6aa8241f16dd814e011e1f41e8796"
+ integrity sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==
+ dependencies:
+ function-bind "^1.1.1"
+
+hast-to-hyperscript@^9.0.0:
+ version "9.0.1"
+ resolved "https://registry.yarnpkg.com/hast-to-hyperscript/-/hast-to-hyperscript-9.0.1.tgz#9b67fd188e4c81e8ad66f803855334173920218d"
+ integrity sha512-zQgLKqF+O2F72S1aa4y2ivxzSlko3MAvxkwG8ehGmNiqd98BIN3JM1rAJPmplEyLmGLO2QZYJtIneOSZ2YbJuA==
+ dependencies:
+ "@types/unist" "^2.0.3"
+ comma-separated-tokens "^1.0.0"
+ property-information "^5.3.0"
+ space-separated-tokens "^1.0.0"
+ style-to-object "^0.3.0"
+ unist-util-is "^4.0.0"
+ web-namespaces "^1.0.0"
+
+hast-util-from-parse5@^6.0.0:
+ version "6.0.1"
+ resolved "https://registry.yarnpkg.com/hast-util-from-parse5/-/hast-util-from-parse5-6.0.1.tgz#554e34abdeea25ac76f5bd950a1f0180e0b3bc2a"
+ integrity sha512-jeJUWiN5pSxW12Rh01smtVkZgZr33wBokLzKLwinYOUfSzm1Nl/c3GUGebDyOKjdsRgMvoVbV0VpAcpjF4NrJA==
+ dependencies:
+ "@types/parse5" "^5.0.0"
+ hastscript "^6.0.0"
+ property-information "^5.0.0"
+ vfile "^4.0.0"
+ vfile-location "^3.2.0"
+ web-namespaces "^1.0.0"
+
+hast-util-parse-selector@^2.0.0:
+ version "2.2.5"
+ resolved "https://registry.yarnpkg.com/hast-util-parse-selector/-/hast-util-parse-selector-2.2.5.tgz#d57c23f4da16ae3c63b3b6ca4616683313499c3a"
+ integrity sha512-7j6mrk/qqkSehsM92wQjdIgWM2/BW61u/53G6xmC8i1OmEdKLHbk419QKQUjz6LglWsfqoiHmyMRkP1BGjecNQ==
+
+hast-util-raw@6.0.1:
+ version "6.0.1"
+ resolved "https://registry.yarnpkg.com/hast-util-raw/-/hast-util-raw-6.0.1.tgz#973b15930b7529a7b66984c98148b46526885977"
+ integrity sha512-ZMuiYA+UF7BXBtsTBNcLBF5HzXzkyE6MLzJnL605LKE8GJylNjGc4jjxazAHUtcwT5/CEt6afRKViYB4X66dig==
+ dependencies:
+ "@types/hast" "^2.0.0"
+ hast-util-from-parse5 "^6.0.0"
+ hast-util-to-parse5 "^6.0.0"
+ html-void-elements "^1.0.0"
+ parse5 "^6.0.0"
+ unist-util-position "^3.0.0"
+ vfile "^4.0.0"
+ web-namespaces "^1.0.0"
+ xtend "^4.0.0"
+ zwitch "^1.0.0"
+
+hast-util-to-parse5@^6.0.0:
+ version "6.0.0"
+ resolved "https://registry.yarnpkg.com/hast-util-to-parse5/-/hast-util-to-parse5-6.0.0.tgz#1ec44650b631d72952066cea9b1445df699f8479"
+ integrity sha512-Lu5m6Lgm/fWuz8eWnrKezHtVY83JeRGaNQ2kn9aJgqaxvVkFCZQBEhgodZUDUvoodgyROHDb3r5IxAEdl6suJQ==
+ dependencies:
+ hast-to-hyperscript "^9.0.0"
+ property-information "^5.0.0"
+ web-namespaces "^1.0.0"
+ xtend "^4.0.0"
+ zwitch "^1.0.0"
+
+hastscript@^6.0.0:
+ version "6.0.0"
+ resolved "https://registry.yarnpkg.com/hastscript/-/hastscript-6.0.0.tgz#e8768d7eac56c3fdeac8a92830d58e811e5bf640"
+ integrity sha512-nDM6bvd7lIqDUiYEiu5Sl/+6ReP0BMk/2f4U/Rooccxkj0P5nm+acM5PrGJ/t5I8qPGiqZSE6hVAwZEdZIvP4w==
+ dependencies:
+ "@types/hast" "^2.0.0"
+ comma-separated-tokens "^1.0.0"
+ hast-util-parse-selector "^2.0.0"
+ property-information "^5.0.0"
+ space-separated-tokens "^1.0.0"
+
+he@^1.2.0:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/he/-/he-1.2.0.tgz#84ae65fa7eafb165fddb61566ae14baf05664f0f"
+ integrity sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==
+
+history@^4.9.0:
+ version "4.10.1"
+ resolved "https://registry.yarnpkg.com/history/-/history-4.10.1.tgz#33371a65e3a83b267434e2b3f3b1b4c58aad4cf3"
+ integrity sha512-36nwAD620w12kuzPAsyINPWJqlNbij+hpK1k9XRloDtym8mxzGYl2c17LnV6IAGB2Dmg4tEa7G7DlawS0+qjew==
+ dependencies:
+ "@babel/runtime" "^7.1.2"
+ loose-envify "^1.2.0"
+ resolve-pathname "^3.0.0"
+ tiny-invariant "^1.0.2"
+ tiny-warning "^1.0.0"
+ value-equal "^1.0.1"
+
+hoist-non-react-statics@^3.1.0:
+ version "3.3.2"
+ resolved "https://registry.yarnpkg.com/hoist-non-react-statics/-/hoist-non-react-statics-3.3.2.tgz#ece0acaf71d62c2969c2ec59feff42a4b1a85b45"
+ integrity sha512-/gGivxi8JPKWNm/W0jSmzcMPpfpPLc3dY/6GxhX2hQ9iGj3aDfklV4ET7NjKpSinLpJ5vafa9iiGIEZg10SfBw==
+ dependencies:
+ react-is "^16.7.0"
+
+homedir-polyfill@^1.0.1:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/homedir-polyfill/-/homedir-polyfill-1.0.3.tgz#743298cef4e5af3e194161fbadcc2151d3a058e8"
+ integrity sha512-eSmmWE5bZTK2Nou4g0AI3zZ9rswp7GRKoKXS1BLUkvPviOqs4YTN1djQIqrXy9k5gEtdLPy86JjRwsNM9tnDcA==
+ dependencies:
+ parse-passwd "^1.0.0"
+
+hosted-git-info@^2.1.4:
+ version "2.8.9"
+ resolved "https://registry.yarnpkg.com/hosted-git-info/-/hosted-git-info-2.8.9.tgz#dffc0bf9a21c02209090f2aa69429e1414daf3f9"
+ integrity sha512-mxIDAb9Lsm6DoOJ7xH+5+X4y1LU/4Hi50L9C5sIswK3JzULS4bwk1FvjdBgvYR4bzT4tuUQiC15FE2f5HbLvYw==
+
+hosted-git-info@^4.0.0, hosted-git-info@^4.0.1:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/hosted-git-info/-/hosted-git-info-4.1.0.tgz#827b82867e9ff1c8d0c4d9d53880397d2c86d224"
+ integrity sha512-kyCuEOWjJqZuDbRHzL8V93NzQhwIB71oFWSyzVo+KPZI+pnQPPxucdkrOZvkLRnrf5URsQM+IJ09Dw29cRALIA==
+ dependencies:
+ lru-cache "^6.0.0"
+
+hpack.js@^2.1.6:
+ version "2.1.6"
+ resolved "https://registry.yarnpkg.com/hpack.js/-/hpack.js-2.1.6.tgz#87774c0949e513f42e84575b3c45681fade2a0b2"
+ integrity sha512-zJxVehUdMGIKsRaNt7apO2Gqp0BdqW5yaiGHXXmbpvxgBYVZnAql+BJb4RO5ad2MgpbZKn5G6nMnegrH1FcNYQ==
+ dependencies:
+ inherits "^2.0.1"
+ obuf "^1.0.0"
+ readable-stream "^2.0.1"
+ wbuf "^1.1.0"
+
+html-entities@^2.3.2:
+ version "2.3.3"
+ resolved "https://registry.yarnpkg.com/html-entities/-/html-entities-2.3.3.tgz#117d7626bece327fc8baace8868fa6f5ef856e46"
+ integrity sha512-DV5Ln36z34NNTDgnz0EWGBLZENelNAtkiFA4kyNOG2tDI6Mz1uSWiq1wAKdyjnJwyDiDO7Fa2SO1CTxPXL8VxA==
+
+html-minifier-terser@^6.0.2, html-minifier-terser@^6.1.0:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/html-minifier-terser/-/html-minifier-terser-6.1.0.tgz#bfc818934cc07918f6b3669f5774ecdfd48f32ab"
+ integrity sha512-YXxSlJBZTP7RS3tWnQw74ooKa6L9b9i9QYXY21eUEvhZ3u9XLfv6OnFsQq6RxkhHygsaUMvYsZRV5rU/OVNZxw==
+ dependencies:
+ camel-case "^4.1.2"
+ clean-css "^5.2.2"
+ commander "^8.3.0"
+ he "^1.2.0"
+ param-case "^3.0.4"
+ relateurl "^0.2.7"
+ terser "^5.10.0"
+
+html-tags@^3.2.0:
+ version "3.2.0"
+ resolved "https://registry.yarnpkg.com/html-tags/-/html-tags-3.2.0.tgz#dbb3518d20b726524e4dd43de397eb0a95726961"
+ integrity sha512-vy7ClnArOZwCnqZgvv+ddgHgJiAFXe3Ge9ML5/mBctVJoUoYPCdxVucOywjDARn6CVoh3dRSFdPHy2sX80L0Wg==
+
+html-void-elements@^1.0.0:
+ version "1.0.5"
+ resolved "https://registry.yarnpkg.com/html-void-elements/-/html-void-elements-1.0.5.tgz#ce9159494e86d95e45795b166c2021c2cfca4483"
+ integrity sha512-uE/TxKuyNIcx44cIWnjr/rfIATDH7ZaOMmstu0CwhFG1Dunhlp4OC6/NMbhiwoq5BpW0ubi303qnEk/PZj614w==
+
+html-webpack-plugin@^5.5.0:
+ version "5.5.0"
+ resolved "https://registry.yarnpkg.com/html-webpack-plugin/-/html-webpack-plugin-5.5.0.tgz#c3911936f57681c1f9f4d8b68c158cd9dfe52f50"
+ integrity sha512-sy88PC2cRTVxvETRgUHFrL4No3UxvcH8G1NepGhqaTT+GXN2kTamqasot0inS5hXeg1cMbFDt27zzo9p35lZVw==
+ dependencies:
+ "@types/html-minifier-terser" "^6.0.0"
+ html-minifier-terser "^6.0.2"
+ lodash "^4.17.21"
+ pretty-error "^4.0.0"
+ tapable "^2.0.0"
+
+htmlparser2@^6.1.0:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/htmlparser2/-/htmlparser2-6.1.0.tgz#c4d762b6c3371a05dbe65e94ae43a9f845fb8fb7"
+ integrity sha512-gyyPk6rgonLFEDGoeRgQNaEUvdJ4ktTmmUh/h2t7s+M8oPpIPxgNACWa+6ESR57kXstwqPiCut0V8NRpcwgU7A==
+ dependencies:
+ domelementtype "^2.0.1"
+ domhandler "^4.0.0"
+ domutils "^2.5.2"
+ entities "^2.0.0"
+
+htmlparser2@^8.0.1:
+ version "8.0.1"
+ resolved "https://registry.yarnpkg.com/htmlparser2/-/htmlparser2-8.0.1.tgz#abaa985474fcefe269bc761a779b544d7196d010"
+ integrity sha512-4lVbmc1diZC7GUJQtRQ5yBAeUCL1exyMwmForWkRLnwyzWBFxN633SALPMGYaWZvKe9j1pRZJpauvmxENSp/EA==
+ dependencies:
+ domelementtype "^2.3.0"
+ domhandler "^5.0.2"
+ domutils "^3.0.1"
+ entities "^4.3.0"
+
+http-cache-semantics@^4.0.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz#49e91c5cbf36c9b94bcfcd71c23d5249ec74e390"
+ integrity sha512-carPklcUh7ROWRK7Cv27RPtdhYhUsela/ue5/jKzjegVvXDqM2ILE9Q2BGn9JZJh1g87cp56su/FgQSzcWS8cQ==
+
+http-deceiver@^1.2.7:
+ version "1.2.7"
+ resolved "https://registry.yarnpkg.com/http-deceiver/-/http-deceiver-1.2.7.tgz#fa7168944ab9a519d337cb0bec7284dc3e723d87"
+ integrity sha512-LmpOGxTfbpgtGVxJrj5k7asXHCgNZp5nLfp+hWc8QQRqtb7fUy6kRY3BO1h9ddF6yIPYUARgxGOwB42DnxIaNw==
+
+http-errors@2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/http-errors/-/http-errors-2.0.0.tgz#b7774a1486ef73cf7667ac9ae0858c012c57b9d3"
+ integrity sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==
+ dependencies:
+ depd "2.0.0"
+ inherits "2.0.4"
+ setprototypeof "1.2.0"
+ statuses "2.0.1"
+ toidentifier "1.0.1"
+
+http-errors@~1.6.2:
+ version "1.6.3"
+ resolved "https://registry.yarnpkg.com/http-errors/-/http-errors-1.6.3.tgz#8b55680bb4be283a0b5bf4ea2e38580be1d9320d"
+ integrity sha512-lks+lVC8dgGyh97jxvxeYTWQFvh4uw4yC12gVl63Cg30sjPX4wuGcdkICVXDAESr6OJGjqGA8Iz5mkeN6zlD7A==
+ dependencies:
+ depd "~1.1.2"
+ inherits "2.0.3"
+ setprototypeof "1.1.0"
+ statuses ">= 1.4.0 < 2"
+
+http-parser-js@>=0.5.1:
+ version "0.5.8"
+ resolved "https://registry.yarnpkg.com/http-parser-js/-/http-parser-js-0.5.8.tgz#af23090d9ac4e24573de6f6aecc9d84a48bf20e3"
+ integrity sha512-SGeBX54F94Wgu5RH3X5jsDtf4eHyRogWX1XGT3b4HuW3tQPM4AaBzoUji/4AAJNXCEOWZ5O0DgZmJw1947gD5Q==
+
+http-proxy-middleware@^2.0.3:
+ version "2.0.6"
+ resolved "https://registry.yarnpkg.com/http-proxy-middleware/-/http-proxy-middleware-2.0.6.tgz#e1a4dd6979572c7ab5a4e4b55095d1f32a74963f"
+ integrity sha512-ya/UeJ6HVBYxrgYotAZo1KvPWlgB48kUJLDePFeneHsVujFaW5WNj2NgWCAE//B1Dl02BIfYlpNgBy8Kf8Rjmw==
+ dependencies:
+ "@types/http-proxy" "^1.17.8"
+ http-proxy "^1.18.1"
+ is-glob "^4.0.1"
+ is-plain-obj "^3.0.0"
+ micromatch "^4.0.2"
+
+http-proxy@^1.18.1:
+ version "1.18.1"
+ resolved "https://registry.yarnpkg.com/http-proxy/-/http-proxy-1.18.1.tgz#401541f0534884bbf95260334e72f88ee3976549"
+ integrity sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==
+ dependencies:
+ eventemitter3 "^4.0.0"
+ follow-redirects "^1.0.0"
+ requires-port "^1.0.0"
+
+human-signals@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/human-signals/-/human-signals-2.1.0.tgz#dc91fcba42e4d06e4abaed33b3e7a3c02f514ea0"
+ integrity sha512-B4FFZ6q/T2jhhksgkbEW3HBvWIfDW85snkQgawt07S7J5QXTk6BkNV+0yAeZrM5QpMAdYlocGoljn0sJ/WQkFw==
+
+iconv-lite@0.4.24, iconv-lite@^0.4.24:
+ version "0.4.24"
+ resolved "https://registry.yarnpkg.com/iconv-lite/-/iconv-lite-0.4.24.tgz#2022b4b25fbddc21d2f524974a474aafe733908b"
+ integrity sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==
+ dependencies:
+ safer-buffer ">= 2.1.2 < 3"
+
+iconv-lite@^0.6.3:
+ version "0.6.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/iconv-lite/-/iconv-lite-0.6.3.tgz#a52f80bf38da1952eb5c681790719871a1a72501"
+ integrity sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==
+ dependencies:
+ safer-buffer ">= 2.1.2 < 3.0.0"
+
+icss-utils@^5.0.0, icss-utils@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/icss-utils/-/icss-utils-5.1.0.tgz#c6be6858abd013d768e98366ae47e25d5887b1ae"
+ integrity sha512-soFhflCVWLfRNOPU3iv5Z9VUdT44xFRbzjLsEzSr5AQmgqPMTHdU3PMT1Cf1ssx8fLNJDA1juftYl+PUcv3MqA==
+
+ieee754@^1.1.13:
+ version "1.2.1"
+ resolved "https://registry.yarnpkg.com/ieee754/-/ieee754-1.2.1.tgz#8eb7a10a63fff25d15a57b001586d177d1b0d352"
+ integrity sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==
+
+ignore@^5.2.0:
+ version "5.2.0"
+ resolved "https://registry.yarnpkg.com/ignore/-/ignore-5.2.0.tgz#6d3bac8fa7fe0d45d9f9be7bac2fc279577e345a"
+ integrity sha512-CmxgYGiEPCLhfLnpPp1MoRmifwEIOgjcHXxOBjv7mY96c+eWScsOP9c112ZyLdWHi0FxHjI+4uVhKYp/gcdRmQ==
+
+image-size@^1.0.1:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/image-size/-/image-size-1.0.2.tgz#d778b6d0ab75b2737c1556dd631652eb963bc486"
+ integrity sha512-xfOoWjceHntRb3qFCrh5ZFORYH8XCdYpASltMhZ/Q0KZiOwjdE/Yl2QCiWdwD+lygV5bMCvauzgu5PxBX/Yerg==
+ dependencies:
+ queue "6.0.2"
+
+image-size@~0.5.0:
+ version "0.5.5"
+ resolved "https://mirrors.cloud.tencent.com/npm/image-size/-/image-size-0.5.5.tgz#09dfd4ab9d20e29eb1c3e80b8990378df9e3cb9c"
+ integrity sha1-Cd/Uq50g4p6xw+gLiZA3jfnjy5w=
+
+immer@^9.0.7:
+ version "9.0.15"
+ resolved "https://registry.yarnpkg.com/immer/-/immer-9.0.15.tgz#0b9169e5b1d22137aba7d43f8a81a495dd1b62dc"
+ integrity sha512-2eB/sswms9AEUSkOm4SbV5Y7Vmt/bKRwByd52jfLkW4OLYeaTP3EEiJ9agqU0O/tq6Dk62Zfj+TJSqfm1rLVGQ==
+
+import-fresh@^3.0.0, import-fresh@^3.1.0, import-fresh@^3.2.1, import-fresh@^3.3.0:
+ version "3.3.0"
+ resolved "https://registry.yarnpkg.com/import-fresh/-/import-fresh-3.3.0.tgz#37162c25fcb9ebaa2e6e53d5b4d88ce17d9e0c2b"
+ integrity sha512-veYYhQa+D1QBKznvhUHxb8faxlrwUnxseDAbAp457E0wLNio2bOSKnjYDhMj+YiAq61xrMGhQk9iXVk5FzgQMw==
+ dependencies:
+ parent-module "^1.0.0"
+ resolve-from "^4.0.0"
+
+import-lazy@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/import-lazy/-/import-lazy-2.1.0.tgz#05698e3d45c88e8d7e9d92cb0584e77f096f3e43"
+ integrity sha512-m7ZEHgtw69qOGw+jwxXkHlrlIPdTGkyh66zXZ1ajZbxkDBNjSY/LGbmjc7h0s2ELsUDTAhFr55TrPSSqJGPG0A==
+
+imurmurhash@^0.1.4:
+ version "0.1.4"
+ resolved "https://registry.yarnpkg.com/imurmurhash/-/imurmurhash-0.1.4.tgz#9218b9b2b928a238b13dc4fb6b6d576f231453ea"
+ integrity sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==
+
+indent-string@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/indent-string/-/indent-string-4.0.0.tgz#624f8f4497d619b2d9768531d58f4122854d7251"
+ integrity sha512-EdDDZu4A2OyIK7Lr/2zG+w5jmbuk1DVBnEwREQvBzspBJkCEbRa8GxU1lghYcaGJCnRWibjDXlq779X1/y5xwg==
+
+infima@0.2.0-alpha.42:
+ version "0.2.0-alpha.42"
+ resolved "https://registry.yarnpkg.com/infima/-/infima-0.2.0-alpha.42.tgz#f6e86a655ad40877c6b4d11b2ede681eb5470aa5"
+ integrity sha512-ift8OXNbQQwtbIt6z16KnSWP7uJ/SysSMFI4F87MNRTicypfl4Pv3E2OGVv6N3nSZFJvA8imYulCBS64iyHYww==
+
+inflight@^1.0.4:
+ version "1.0.6"
+ resolved "https://registry.yarnpkg.com/inflight/-/inflight-1.0.6.tgz#49bd6331d7d02d0c09bc910a1075ba8165b56df9"
+ integrity sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==
+ dependencies:
+ once "^1.3.0"
+ wrappy "1"
+
+inherits@2, inherits@2.0.4, inherits@^2.0.0, inherits@^2.0.1, inherits@^2.0.3, inherits@^2.0.4, inherits@~2.0.3:
+ version "2.0.4"
+ resolved "https://registry.yarnpkg.com/inherits/-/inherits-2.0.4.tgz#0fa2c64f932917c3433a0ded55363aae37416b7c"
+ integrity sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==
+
+inherits@2.0.3:
+ version "2.0.3"
+ resolved "https://registry.yarnpkg.com/inherits/-/inherits-2.0.3.tgz#633c2c83e3da42a502f52466022480f4208261de"
+ integrity sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==
+
+ini@2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/ini/-/ini-2.0.0.tgz#e5fd556ecdd5726be978fa1001862eacb0a94bc5"
+ integrity sha512-7PnF4oN3CvZF23ADhA5wRaYEQpJ8qygSkbtTXWBeXWXmEVRXK+1ITciHWwHhsjv1TmW0MgacIv6hEi5pX5NQdA==
+
+ini@^1.3.2, ini@^1.3.4, ini@^1.3.5, ini@~1.3.0:
+ version "1.3.8"
+ resolved "https://registry.yarnpkg.com/ini/-/ini-1.3.8.tgz#a29da425b48806f34767a4efce397269af28432c"
+ integrity sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew==
+
+inline-style-parser@0.1.1:
+ version "0.1.1"
+ resolved "https://registry.yarnpkg.com/inline-style-parser/-/inline-style-parser-0.1.1.tgz#ec8a3b429274e9c0a1f1c4ffa9453a7fef72cea1"
+ integrity sha512-7NXolsK4CAS5+xvdj5OMMbI962hU/wvwoxk+LWR9Ek9bVtyuuYScDN6eS0rUm6TxApFpw7CX1o4uJzcd4AyD3Q==
+
+inquirer@8.2.4:
+ version "8.2.4"
+ resolved "https://registry.yarnpkg.com/inquirer/-/inquirer-8.2.4.tgz#ddbfe86ca2f67649a67daa6f1051c128f684f0b4"
+ integrity sha512-nn4F01dxU8VeKfq192IjLsxu0/OmMZ4Lg3xKAns148rCaXP6ntAoEkVYZThWjwON8AlzdZZi6oqnhNbxUG9hVg==
+ dependencies:
+ ansi-escapes "^4.2.1"
+ chalk "^4.1.1"
+ cli-cursor "^3.1.0"
+ cli-width "^3.0.0"
+ external-editor "^3.0.3"
+ figures "^3.0.0"
+ lodash "^4.17.21"
+ mute-stream "0.0.8"
+ ora "^5.4.1"
+ run-async "^2.4.0"
+ rxjs "^7.5.5"
+ string-width "^4.1.0"
+ strip-ansi "^6.0.0"
+ through "^2.3.6"
+ wrap-ansi "^7.0.0"
+
+interpret@^1.0.0:
+ version "1.4.0"
+ resolved "https://registry.yarnpkg.com/interpret/-/interpret-1.4.0.tgz#665ab8bc4da27a774a40584e812e3e0fa45b1a1e"
+ integrity sha512-agE4QfB2Lkp9uICn7BAqoscw4SZP9kTE2hxiFI3jBPmXJfdqiahTbUuKGsMoN2GtqL9AxhYioAcVvgsb1HvRbA==
+
+invariant@^2.2.4:
+ version "2.2.4"
+ resolved "https://registry.yarnpkg.com/invariant/-/invariant-2.2.4.tgz#610f3c92c9359ce1db616e538008d23ff35158e6"
+ integrity sha512-phJfQVBuaJM5raOpJjSfkiD6BpbCE4Ns//LaXl6wGYtUBY83nWS6Rf9tXm2e8VaK60JEjYldbPif/A2B1C2gNA==
+ dependencies:
+ loose-envify "^1.0.0"
+
+ipaddr.js@1.9.1:
+ version "1.9.1"
+ resolved "https://registry.yarnpkg.com/ipaddr.js/-/ipaddr.js-1.9.1.tgz#bff38543eeb8984825079ff3a2a8e6cbd46781b3"
+ integrity sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==
+
+ipaddr.js@^2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/ipaddr.js/-/ipaddr.js-2.0.1.tgz#eca256a7a877e917aeb368b0a7497ddf42ef81c0"
+ integrity sha512-1qTgH9NG+IIJ4yfKs2e6Pp1bZg8wbDbKHT21HrLIeYBTRLgMYKnMTPAuI3Lcs61nfx5h1xlXnbJtH1kX5/d/ng==
+
+is-alphabetical@1.0.4, is-alphabetical@^1.0.0:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/is-alphabetical/-/is-alphabetical-1.0.4.tgz#9e7d6b94916be22153745d184c298cbf986a686d"
+ integrity sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg==
+
+is-alphanumerical@^1.0.0:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/is-alphanumerical/-/is-alphanumerical-1.0.4.tgz#7eb9a2431f855f6b1ef1a78e326df515696c4dbf"
+ integrity sha512-UzoZUr+XfVz3t3v4KyGEniVL9BDRoQtY7tOyrRybkVNjDFWyo1yhXNGrrBTQxp3ib9BLAWs7k2YKBQsFRkZG9A==
+ dependencies:
+ is-alphabetical "^1.0.0"
+ is-decimal "^1.0.0"
+
+is-arrayish@^0.2.1:
+ version "0.2.1"
+ resolved "https://registry.yarnpkg.com/is-arrayish/-/is-arrayish-0.2.1.tgz#77c99840527aa8ecb1a8ba697b80645a7a926a9d"
+ integrity sha512-zz06S8t0ozoDXMG+ube26zeCTNXcKIPJZJi8hBrF4idCLms4CG9QtK7qBl1boi5ODzFpjswb5JPmHCbMpjaYzg==
+
+is-binary-path@~2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/is-binary-path/-/is-binary-path-2.1.0.tgz#ea1f7f3b80f064236e83470f86c09c254fb45b09"
+ integrity sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==
+ dependencies:
+ binary-extensions "^2.0.0"
+
+is-buffer@^2.0.0:
+ version "2.0.5"
+ resolved "https://registry.yarnpkg.com/is-buffer/-/is-buffer-2.0.5.tgz#ebc252e400d22ff8d77fa09888821a24a658c191"
+ integrity sha512-i2R6zNFDwgEHJyQUtJEk0XFi1i0dPFn/oqjK3/vPCcDeJvW5NQ83V8QbicfF1SupOaB0h8ntgBC2YiE7dfyctQ==
+
+is-ci@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/is-ci/-/is-ci-2.0.0.tgz#6bc6334181810e04b5c22b3d589fdca55026404c"
+ integrity sha512-YfJT7rkpQB0updsdHLGWrvhBJfcfzNNawYDNIyQXJz0IViGf75O8EBPKSdvw2rF+LGCsX4FZ8tcr3b19LcZq4w==
+ dependencies:
+ ci-info "^2.0.0"
+
+is-core-module@^2.5.0, is-core-module@^2.9.0:
+ version "2.10.0"
+ resolved "https://registry.yarnpkg.com/is-core-module/-/is-core-module-2.10.0.tgz#9012ede0a91c69587e647514e1d5277019e728ed"
+ integrity sha512-Erxj2n/LDAZ7H8WNJXd9tw38GYM3dv8rk8Zcs+jJuxYTW7sozH+SS8NtrSjVL1/vpLvWi1hxy96IzjJ3EHTJJg==
+ dependencies:
+ has "^1.0.3"
+
+is-decimal@^1.0.0:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/is-decimal/-/is-decimal-1.0.4.tgz#65a3a5958a1c5b63a706e1b333d7cd9f630d3fa5"
+ integrity sha512-RGdriMmQQvZ2aqaQq3awNA6dCGtKpiDFcOzrTWrDAT2MiWrKQVPmxLGHl7Y2nNu6led0kEyoX0enY0qXYsv9zw==
+
+is-docker@^2.0.0, is-docker@^2.1.1:
+ version "2.2.1"
+ resolved "https://registry.yarnpkg.com/is-docker/-/is-docker-2.2.1.tgz#33eeabe23cfe86f14bde4408a02c0cfb853acdaa"
+ integrity sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ==
+
+is-extendable@^0.1.0:
+ version "0.1.1"
+ resolved "https://registry.yarnpkg.com/is-extendable/-/is-extendable-0.1.1.tgz#62b110e289a471418e3ec36a617d472e301dfc89"
+ integrity sha512-5BMULNob1vgFX6EjQw5izWDxrecWK9AM72rugNr0TFldMOi0fj6Jk+zeKIt0xGj4cEfQIJth4w3OKWOJ4f+AFw==
+
+is-extglob@^2.1.1:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/is-extglob/-/is-extglob-2.1.1.tgz#a88c02535791f02ed37c76a1b9ea9773c833f8c2"
+ integrity sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==
+
+is-fullwidth-code-point@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz#f116f8064fe90b3f7844a38997c0b75051269f1d"
+ integrity sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==
+
+is-glob@^4.0.0, is-glob@^4.0.1, is-glob@^4.0.3, is-glob@~4.0.1:
+ version "4.0.3"
+ resolved "https://registry.yarnpkg.com/is-glob/-/is-glob-4.0.3.tgz#64f61e42cbbb2eec2071a9dac0b28ba1e65d5084"
+ integrity sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==
+ dependencies:
+ is-extglob "^2.1.1"
+
+is-hexadecimal@^1.0.0:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/is-hexadecimal/-/is-hexadecimal-1.0.4.tgz#cc35c97588da4bd49a8eedd6bc4082d44dcb23a7"
+ integrity sha512-gyPJuv83bHMpocVYoqof5VDiZveEoGoFL8m3BXNb2VW8Xs+rz9kqO8LOQ5DH6EsuvilT1ApazU0pyl+ytbPtlw==
+
+is-installed-globally@^0.4.0:
+ version "0.4.0"
+ resolved "https://registry.yarnpkg.com/is-installed-globally/-/is-installed-globally-0.4.0.tgz#9a0fd407949c30f86eb6959ef1b7994ed0b7b520"
+ integrity sha512-iwGqO3J21aaSkC7jWnHP/difazwS7SFeIqxv6wEtLU8Y5KlzFTjyqcSIT0d8s4+dDhKytsk9PJZ2BkS5eZwQRQ==
+ dependencies:
+ global-dirs "^3.0.0"
+ is-path-inside "^3.0.2"
+
+is-interactive@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/is-interactive/-/is-interactive-1.0.0.tgz#cea6e6ae5c870a7b0a0004070b7b587e0252912e"
+ integrity sha512-2HvIEKRoqS62guEC+qBjpvRubdX910WCMuJTZ+I9yvqKU2/12eSL549HMwtabb4oupdj2sMP50k+XJfB/8JE6w==
+
+is-npm@^5.0.0:
+ version "5.0.0"
+ resolved "https://registry.yarnpkg.com/is-npm/-/is-npm-5.0.0.tgz#43e8d65cc56e1b67f8d47262cf667099193f45a8"
+ integrity sha512-WW/rQLOazUq+ST/bCAVBp/2oMERWLsR7OrKyt052dNDk4DHcDE0/7QSXITlmi+VBcV13DfIbysG3tZJm5RfdBA==
+
+is-number@^7.0.0:
+ version "7.0.0"
+ resolved "https://registry.yarnpkg.com/is-number/-/is-number-7.0.0.tgz#7535345b896734d5f80c4d06c50955527a14f12b"
+ integrity sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==
+
+is-obj@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/is-obj/-/is-obj-1.0.1.tgz#3e4729ac1f5fde025cd7d83a896dab9f4f67db0f"
+ integrity sha512-l4RyHgRqGN4Y3+9JHVrNqO+tN0rV5My76uW5/nuO4K1b6vw5G8d/cmFjP9tRfEsdhZNt0IFdZuK/c2Vr4Nb+Qg==
+
+is-obj@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/is-obj/-/is-obj-2.0.0.tgz#473fb05d973705e3fd9620545018ca8e22ef4982"
+ integrity sha512-drqDG3cbczxxEJRoOXcOjtdp1J/lyp1mNn0xaznRs8+muBhgQcrnbspox5X5fOw0HnMnbfDzvnEMEtqDEJEo8w==
+
+is-path-cwd@^2.2.0:
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/is-path-cwd/-/is-path-cwd-2.2.0.tgz#67d43b82664a7b5191fd9119127eb300048a9fdb"
+ integrity sha512-w942bTcih8fdJPJmQHFzkS76NEP8Kzzvmw92cXsazb8intwLqPibPPdXf4ANdKV3rYMuuQYGIWtvz9JilB3NFQ==
+
+is-path-inside@^3.0.2:
+ version "3.0.3"
+ resolved "https://registry.yarnpkg.com/is-path-inside/-/is-path-inside-3.0.3.tgz#d231362e53a07ff2b0e0ea7fed049161ffd16283"
+ integrity sha512-Fd4gABb+ycGAmKou8eMftCupSir5lRxqf4aD/vd0cD2qc4HL07OjCeuHMr8Ro4CoMaeCKDB0/ECBOVWjTwUvPQ==
+
+is-plain-obj@^1.1.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/is-plain-obj/-/is-plain-obj-1.1.0.tgz#71a50c8429dfca773c92a390a4a03b39fcd51d3e"
+ integrity sha512-yvkRyxmFKEOQ4pNXCmJG5AEQNlXJS5LaONXo5/cLdTZdWvsZ1ioJEonLGAosKlMWE8lwUy/bJzMjcw8az73+Fg==
+
+is-plain-obj@^2.0.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/is-plain-obj/-/is-plain-obj-2.1.0.tgz#45e42e37fccf1f40da8e5f76ee21515840c09287"
+ integrity sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA==
+
+is-plain-obj@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/is-plain-obj/-/is-plain-obj-3.0.0.tgz#af6f2ea14ac5a646183a5bbdb5baabbc156ad9d7"
+ integrity sha512-gwsOE28k+23GP1B6vFl1oVh/WOzmawBrKwo5Ev6wMKzPkaXaCDIQKzLnvsA42DRlbVTWorkgTKIviAKCWkfUwA==
+
+is-plain-object@^2.0.4:
+ version "2.0.4"
+ resolved "https://registry.yarnpkg.com/is-plain-object/-/is-plain-object-2.0.4.tgz#2c163b3fafb1b606d9d17928f05c2a1c38e07677"
+ integrity sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og==
+ dependencies:
+ isobject "^3.0.1"
+
+is-regexp@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/is-regexp/-/is-regexp-1.0.0.tgz#fd2d883545c46bac5a633e7b9a09e87fa2cb5069"
+ integrity sha512-7zjFAPO4/gwyQAAgRRmqeEeyIICSdmCqa3tsVHMdBzaXXRiqopZL4Cyghg/XulGWrtABTpbnYYzzIRffLkP4oA==
+
+is-root@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/is-root/-/is-root-2.1.0.tgz#809e18129cf1129644302a4f8544035d51984a9c"
+ integrity sha512-AGOriNp96vNBd3HtU+RzFEc75FfR5ymiYv8E553I71SCeXBiMsVDUtdio1OEFvrPyLIQ9tVR5RxXIFe5PUFjMg==
+
+is-stream@^2.0.0:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/is-stream/-/is-stream-2.0.1.tgz#fac1e3d53b97ad5a9d0ae9cef2389f5810a5c077"
+ integrity sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg==
+
+is-text-path@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/is-text-path/-/is-text-path-1.0.1.tgz#4e1aa0fb51bfbcb3e92688001397202c1775b66e"
+ integrity sha512-xFuJpne9oFz5qDaodwmmG08e3CawH/2ZV8Qqza1Ko7Sk8POWbkRdwIoAWVhqvq0XeUzANEhKo2n0IXUGBm7A/w==
+ dependencies:
+ text-extensions "^1.0.0"
+
+is-typedarray@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/is-typedarray/-/is-typedarray-1.0.0.tgz#e479c80858df0c1b11ddda6940f96011fcda4a9a"
+ integrity sha512-cyA56iCMHAh5CdzjJIa4aohJyeO1YbwLi3Jc35MmRU6poroFjIGZzUzupGiRPOjgHg9TLu43xbpwXk523fMxKA==
+
+is-unicode-supported@^0.1.0:
+ version "0.1.0"
+ resolved "https://registry.yarnpkg.com/is-unicode-supported/-/is-unicode-supported-0.1.0.tgz#3f26c76a809593b52bfa2ecb5710ed2779b522a7"
+ integrity sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw==
+
+is-utf8@^0.2.1:
+ version "0.2.1"
+ resolved "https://registry.yarnpkg.com/is-utf8/-/is-utf8-0.2.1.tgz#4b0da1442104d1b336340e80797e865cf39f7d72"
+ integrity sha512-rMYPYvCzsXywIsldgLaSoPlw5PfoB/ssr7hY4pLfcodrA5M/eArza1a9VmTiNIBNMjOGr1Ow9mTyU2o69U6U9Q==
+
+is-what@^3.14.1:
+ version "3.14.1"
+ resolved "https://mirrors.cloud.tencent.com/npm/is-what/-/is-what-3.14.1.tgz#e1222f46ddda85dead0fd1c9df131760e77755c1"
+ integrity sha512-sNxgpk9793nzSs7bA6JQJGeIuRBQhAaNGG77kzYQgMkrID+lS6SlK07K5LaptscDlSaIgH+GPFzf+d75FVxozA==
+
+is-whitespace-character@^1.0.0:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/is-whitespace-character/-/is-whitespace-character-1.0.4.tgz#0858edd94a95594c7c9dd0b5c174ec6e45ee4aa7"
+ integrity sha512-SDweEzfIZM0SJV0EUga669UTKlmL0Pq8Lno0QDQsPnvECB3IM2aP0gdx5TrU0A01MAPfViaZiI2V1QMZLaKK5w==
+
+is-windows@^1.0.1:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/is-windows/-/is-windows-1.0.2.tgz#d1850eb9791ecd18e6182ce12a30f396634bb19d"
+ integrity sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA==
+
+is-word-character@^1.0.0:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/is-word-character/-/is-word-character-1.0.4.tgz#ce0e73216f98599060592f62ff31354ddbeb0230"
+ integrity sha512-5SMO8RVennx3nZrqtKwCGyyetPE9VDba5ugvKLaD4KopPG5kR4mQ7tNt/r7feL5yt5h3lpuBbIUmCOG2eSzXHA==
+
+is-wsl@^2.2.0:
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/is-wsl/-/is-wsl-2.2.0.tgz#74a4c76e77ca9fd3f932f290c17ea326cd157271"
+ integrity sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==
+ dependencies:
+ is-docker "^2.0.0"
+
+is-yarn-global@^0.3.0:
+ version "0.3.0"
+ resolved "https://registry.yarnpkg.com/is-yarn-global/-/is-yarn-global-0.3.0.tgz#d502d3382590ea3004893746754c89139973e232"
+ integrity sha512-VjSeb/lHmkoyd8ryPVIKvOCn4D1koMqY+vqyjjUfc3xyKtP4dYOxM44sZrnqQSzSds3xyOrUTLTC9LVCVgLngw==
+
+isarray@0.0.1:
+ version "0.0.1"
+ resolved "https://registry.yarnpkg.com/isarray/-/isarray-0.0.1.tgz#8a18acfca9a8f4177e09abfc6038939b05d1eedf"
+ integrity sha512-D2S+3GLxWH+uhrNEcoh/fnmYeP8E8/zHl644d/jdA0g2uyXvy3sb0qxotE+ne0LtccHknQzWwZEzhak7oJ0COQ==
+
+isarray@~1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/isarray/-/isarray-1.0.0.tgz#bb935d48582cba168c06834957a54a3e07124f11"
+ integrity sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==
+
+isexe@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/isexe/-/isexe-2.0.0.tgz#e8fbf374dc556ff8947a10dcb0572d633f2cfa10"
+ integrity sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==
+
+isobject@^3.0.1:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/isobject/-/isobject-3.0.1.tgz#4e431e92b11a9731636aa1f9c8d1ccbcfdab78df"
+ integrity sha512-WhB9zCku7EGTj/HQQRz5aUQEUeoQZH2bWcltRErOpymJ4boYE6wL9Tbr23krRPSZ+C5zqNSrSw+Cc7sZZ4b7vg==
+
+jest-worker@^27.4.5, jest-worker@^27.5.1:
+ version "27.5.1"
+ resolved "https://registry.yarnpkg.com/jest-worker/-/jest-worker-27.5.1.tgz#8d146f0900e8973b106b6f73cc1e9a8cb86f8db0"
+ integrity sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==
+ dependencies:
+ "@types/node" "*"
+ merge-stream "^2.0.0"
+ supports-color "^8.0.0"
+
+joi@^17.6.0:
+ version "17.6.0"
+ resolved "https://registry.yarnpkg.com/joi/-/joi-17.6.0.tgz#0bb54f2f006c09a96e75ce687957bd04290054b2"
+ integrity sha512-OX5dG6DTbcr/kbMFj0KGYxuew69HPcAE3K/sZpEV2nP6e/j/C0HV+HNiBPCASxdx5T7DMoa0s8UeHWMnb6n2zw==
+ dependencies:
+ "@hapi/hoek" "^9.0.0"
+ "@hapi/topo" "^5.0.0"
+ "@sideway/address" "^4.1.3"
+ "@sideway/formula" "^3.0.0"
+ "@sideway/pinpoint" "^2.0.0"
+
+"js-tokens@^3.0.0 || ^4.0.0", js-tokens@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/js-tokens/-/js-tokens-4.0.0.tgz#19203fb59991df98e3a287050d4647cdeaf32499"
+ integrity sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==
+
+js-yaml@^3.13.1:
+ version "3.14.1"
+ resolved "https://registry.yarnpkg.com/js-yaml/-/js-yaml-3.14.1.tgz#dae812fdb3825fa306609a8717383c50c36a0537"
+ integrity sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==
+ dependencies:
+ argparse "^1.0.7"
+ esprima "^4.0.0"
+
+js-yaml@^4.1.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/js-yaml/-/js-yaml-4.1.0.tgz#c1fb65f8f5017901cdd2c951864ba18458a10602"
+ integrity sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==
+ dependencies:
+ argparse "^2.0.1"
+
+jsesc@^2.5.1:
+ version "2.5.2"
+ resolved "https://registry.yarnpkg.com/jsesc/-/jsesc-2.5.2.tgz#80564d2e483dacf6e8ef209650a67df3f0c283a4"
+ integrity sha512-OYu7XEzjkCQ3C5Ps3QIZsQfNpqoJyZZA99wd9aWd05NCtC5pWOkShK2mkL6HXQR6/Cy2lbNdPlZBpuQHXE63gA==
+
+jsesc@~0.5.0:
+ version "0.5.0"
+ resolved "https://registry.yarnpkg.com/jsesc/-/jsesc-0.5.0.tgz#e7dee66e35d6fc16f710fe91d5cf69f70f08911d"
+ integrity sha512-uZz5UnB7u4T9LvwmFqXii7pZSouaRPorGs5who1Ip7VO0wxanFvBL7GkM6dTHlgX+jhBApRetaWpnDabOeTcnA==
+
+json-buffer@3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/json-buffer/-/json-buffer-3.0.0.tgz#5b1f397afc75d677bde8bcfc0e47e1f9a3d9a898"
+ integrity sha512-CuUqjv0FUZIdXkHPI8MezCnFCdaTAacej1TZYulLoAg1h/PhwkdXFN4V/gzY4g+fMBCOV2xF+rp7t2XD2ns/NQ==
+
+json-parse-better-errors@^1.0.1:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/json-parse-better-errors/-/json-parse-better-errors-1.0.2.tgz#bb867cfb3450e69107c131d1c514bab3dc8bcaa9"
+ integrity sha512-mrqyZKfX5EhL7hvqcV6WG1yYjnjeuYDzDhhcAAUrq8Po85NBQBJP+ZDUT75qZQ98IkUoBqdkExkukOU7Ts2wrw==
+
+json-parse-even-better-errors@^2.3.0, json-parse-even-better-errors@^2.3.1:
+ version "2.3.1"
+ resolved "https://registry.yarnpkg.com/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz#7c47805a94319928e05777405dc12e1f7a4ee02d"
+ integrity sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==
+
+json-schema-traverse@^0.4.1:
+ version "0.4.1"
+ resolved "https://registry.yarnpkg.com/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz#69f6a87d9513ab8bb8fe63bdb0979c448e684660"
+ integrity sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==
+
+json-schema-traverse@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz#ae7bcb3656ab77a73ba5c49bf654f38e6b6860e2"
+ integrity sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==
+
+json-stringify-safe@^5.0.1:
+ version "5.0.1"
+ resolved "https://registry.yarnpkg.com/json-stringify-safe/-/json-stringify-safe-5.0.1.tgz#1296a2d58fd45f19a0f6ce01d65701e2c735b6eb"
+ integrity sha512-ZClg6AaYvamvYEE82d3Iyd3vSSIjQ+odgjaTzRuO3s7toCdFKczob2i0zCh7JE8kWn17yvAWhUVxvqGwUalsRA==
+
+json5@^2.1.2, json5@^2.2.1:
+ version "2.2.1"
+ resolved "https://registry.yarnpkg.com/json5/-/json5-2.2.1.tgz#655d50ed1e6f95ad1a3caababd2b0efda10b395c"
+ integrity sha512-1hqLFMSrGHRHxav9q9gNjJ5EXznIxGVO09xQRrwplcS8qs28pZ8s8hupZAmqDwZUmVZ2Qb2jnyPOWcDH8m8dlA==
+
+jsonfile@^6.0.1:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/jsonfile/-/jsonfile-6.1.0.tgz#bc55b2634793c679ec6403094eb13698a6ec0aae"
+ integrity sha512-5dgndWOriYSm5cnYaJNhalLNDKOqFwyDB/rr1E9ZsGciGvKPs8R2xYGCacuf3z6K1YKDz182fd+fY3cn3pMqXQ==
+ dependencies:
+ universalify "^2.0.0"
+ optionalDependencies:
+ graceful-fs "^4.1.6"
+
+jsonparse@^1.2.0:
+ version "1.3.1"
+ resolved "https://registry.yarnpkg.com/jsonparse/-/jsonparse-1.3.1.tgz#3f4dae4a91fac315f71062f8521cc239f1366280"
+ integrity sha512-POQXvpdL69+CluYsillJ7SUhKvytYjW9vG/GKpnf+xP8UWgYEM/RaMzHHofbALDiKbbP1W8UEYmgGl39WkPZsg==
+
+keyv@^3.0.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/keyv/-/keyv-3.1.0.tgz#ecc228486f69991e49e9476485a5be1e8fc5c4d9"
+ integrity sha512-9ykJ/46SN/9KPM/sichzQ7OvXyGDYKGTaDlKMGCAlg2UK8KRy4jb0d8sFc+0Tt0YYnThq8X2RZgCg74RPxgcVA==
+ dependencies:
+ json-buffer "3.0.0"
+
+kind-of@^6.0.0, kind-of@^6.0.2, kind-of@^6.0.3:
+ version "6.0.3"
+ resolved "https://registry.yarnpkg.com/kind-of/-/kind-of-6.0.3.tgz#07c05034a6c349fa06e24fa35aa76db4580ce4dd"
+ integrity sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==
+
+kleur@^3.0.3:
+ version "3.0.3"
+ resolved "https://registry.yarnpkg.com/kleur/-/kleur-3.0.3.tgz#a79c9ecc86ee1ce3fa6206d1216c501f147fc07e"
+ integrity sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w==
+
+klona@^2.0.4, klona@^2.0.5:
+ version "2.0.5"
+ resolved "https://registry.yarnpkg.com/klona/-/klona-2.0.5.tgz#d166574d90076395d9963aa7a928fabb8d76afbc"
+ integrity sha512-pJiBpiXMbt7dkzXe8Ghj/u4FfXOOa98fPW+bihOJ4SjnoijweJrNThJfd3ifXpXhREjpoF2mZVH1GfS9LV3kHQ==
+
+latest-version@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/latest-version/-/latest-version-5.1.0.tgz#119dfe908fe38d15dfa43ecd13fa12ec8832face"
+ integrity sha512-weT+r0kTkRQdCdYCNtkMwWXQTMEswKrFBkm4ckQOMVhhqhIMI1UT2hMj+1iigIhgSZm5gTmrRXBNoGUgaTY1xA==
+ dependencies:
+ package-json "^6.3.0"
+
+less-loader@^11.1.0:
+ version "11.1.0"
+ resolved "https://mirrors.cloud.tencent.com/npm/less-loader/-/less-loader-11.1.0.tgz#a452384259bdf8e4f6d5fdcc39543609e6313f82"
+ integrity sha512-C+uDBV7kS7W5fJlUjq5mPBeBVhYpTIm5gB09APT9o3n/ILeaXVsiSFTbZpTJCJwQ/Crczfn3DmfQFwxYusWFug==
+ dependencies:
+ klona "^2.0.4"
+
+less@^4.1.3:
+ version "4.1.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/less/-/less-4.1.3.tgz#175be9ddcbf9b250173e0a00b4d6920a5b770246"
+ integrity sha512-w16Xk/Ta9Hhyei0Gpz9m7VS8F28nieJaL/VyShID7cYvP6IL5oHeL6p4TXSDJqZE/lNv0oJ2pGVjJsRkfwm5FA==
+ dependencies:
+ copy-anything "^2.0.1"
+ parse-node-version "^1.0.1"
+ tslib "^2.3.0"
+ optionalDependencies:
+ errno "^0.1.1"
+ graceful-fs "^4.1.2"
+ image-size "~0.5.0"
+ make-dir "^2.1.0"
+ mime "^1.4.1"
+ needle "^3.1.0"
+ source-map "~0.6.0"
+
+leven@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/leven/-/leven-3.1.0.tgz#77891de834064cccba82ae7842bb6b14a13ed7f2"
+ integrity sha512-qsda+H8jTaUaN/x5vzW2rzc+8Rw4TAQ/4KjB46IwK5VH+IlVeeeje/EoZRpiXvIqjFgK84QffqPztGI3VBLG1A==
+
+lilconfig@^2.0.3:
+ version "2.0.6"
+ resolved "https://registry.yarnpkg.com/lilconfig/-/lilconfig-2.0.6.tgz#32a384558bd58af3d4c6e077dd1ad1d397bc69d4"
+ integrity sha512-9JROoBW7pobfsx+Sq2JsASvCo6Pfo6WWoUW79HuB1BCoBXD4PLWJPqDF6fNj67pqBYTbAHkE57M1kS/+L1neOg==
+
+lines-and-columns@^1.1.6:
+ version "1.2.4"
+ resolved "https://registry.yarnpkg.com/lines-and-columns/-/lines-and-columns-1.2.4.tgz#eca284f75d2965079309dc0ad9255abb2ebc1632"
+ integrity sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==
+
+load-json-file@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/load-json-file/-/load-json-file-4.0.0.tgz#2f5f45ab91e33216234fd53adab668eb4ec0993b"
+ integrity sha512-Kx8hMakjX03tiGTLAIdJ+lL0htKnXjEZN6hk/tozf/WOuYGdZBJrZ+rCJRbVCugsjB3jMLn9746NsQIf5VjBMw==
+ dependencies:
+ graceful-fs "^4.1.2"
+ parse-json "^4.0.0"
+ pify "^3.0.0"
+ strip-bom "^3.0.0"
+
+loader-runner@^4.2.0:
+ version "4.3.0"
+ resolved "https://registry.yarnpkg.com/loader-runner/-/loader-runner-4.3.0.tgz#c1b4a163b99f614830353b16755e7149ac2314e1"
+ integrity sha512-3R/1M+yS3j5ou80Me59j7F9IMs4PXs3VqRrm0TU3AbKPxlmpoY1TNscJV/oGJXo8qCatFGTfDbY6W6ipGOYXfg==
+
+loader-utils@^2.0.0:
+ version "2.0.2"
+ resolved "https://registry.yarnpkg.com/loader-utils/-/loader-utils-2.0.2.tgz#d6e3b4fb81870721ae4e0868ab11dd638368c129"
+ integrity sha512-TM57VeHptv569d/GKh6TAYdzKblwDNiumOdkFnejjD0XwTH87K90w3O7AiJRqdQoXygvi1VQTJTLGhJl7WqA7A==
+ dependencies:
+ big.js "^5.2.2"
+ emojis-list "^3.0.0"
+ json5 "^2.1.2"
+
+loader-utils@^3.2.0:
+ version "3.2.0"
+ resolved "https://registry.yarnpkg.com/loader-utils/-/loader-utils-3.2.0.tgz#bcecc51a7898bee7473d4bc6b845b23af8304d4f"
+ integrity sha512-HVl9ZqccQihZ7JM85dco1MvO9G+ONvxoGa9rkhzFsneGLKSUg1gJf9bWzhRhcvm2qChhWpebQhP44qxjKIUCaQ==
+
+locate-path@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/locate-path/-/locate-path-2.0.0.tgz#2b568b265eec944c6d9c0de9c3dbbbca0354cd8e"
+ integrity sha512-NCI2kiDkyR7VeEKm27Kda/iQHyKJe1Bu0FlTbYp3CqJu+9IFe9bLyAjMxf5ZDDbEg+iMPzB5zYyUTSm8wVTKmA==
+ dependencies:
+ p-locate "^2.0.0"
+ path-exists "^3.0.0"
+
+locate-path@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/locate-path/-/locate-path-3.0.0.tgz#dbec3b3ab759758071b58fe59fc41871af21400e"
+ integrity sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==
+ dependencies:
+ p-locate "^3.0.0"
+ path-exists "^3.0.0"
+
+locate-path@^5.0.0:
+ version "5.0.0"
+ resolved "https://registry.yarnpkg.com/locate-path/-/locate-path-5.0.0.tgz#1afba396afd676a6d42504d0a67a3a7eb9f62aa0"
+ integrity sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==
+ dependencies:
+ p-locate "^4.1.0"
+
+locate-path@^6.0.0:
+ version "6.0.0"
+ resolved "https://registry.yarnpkg.com/locate-path/-/locate-path-6.0.0.tgz#55321eb309febbc59c4801d931a72452a681d286"
+ integrity sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==
+ dependencies:
+ p-locate "^5.0.0"
+
+lodash.curry@^4.0.1:
+ version "4.1.1"
+ resolved "https://registry.yarnpkg.com/lodash.curry/-/lodash.curry-4.1.1.tgz#248e36072ede906501d75966200a86dab8b23170"
+ integrity sha512-/u14pXGviLaweY5JI0IUzgzF2J6Ne8INyzAZjImcryjgkZ+ebruBxy2/JaOOkTqScddcYtakjhSaeemV8lR0tA==
+
+lodash.debounce@^4.0.8:
+ version "4.0.8"
+ resolved "https://registry.yarnpkg.com/lodash.debounce/-/lodash.debounce-4.0.8.tgz#82d79bff30a67c4005ffd5e2515300ad9ca4d7af"
+ integrity sha512-FT1yDzDYEoYWhnSGnpE/4Kj1fLZkDFyqRb7fNt6FdYOSxlUWAtp42Eh6Wb0rGIv/m9Bgo7x4GhQbm5Ys4SG5ow==
+
+lodash.flow@^3.3.0:
+ version "3.5.0"
+ resolved "https://registry.yarnpkg.com/lodash.flow/-/lodash.flow-3.5.0.tgz#87bf40292b8cf83e4e8ce1a3ae4209e20071675a"
+ integrity sha512-ff3BX/tSioo+XojX4MOsOMhJw0nZoUEF011LX8g8d3gvjVbxd89cCio4BCXronjxcTUIJUoqKEUA+n4CqvvRPw==
+
+lodash.ismatch@^4.4.0:
+ version "4.4.0"
+ resolved "https://registry.yarnpkg.com/lodash.ismatch/-/lodash.ismatch-4.4.0.tgz#756cb5150ca3ba6f11085a78849645f188f85f37"
+ integrity sha512-fPMfXjGQEV9Xsq/8MTSgUf255gawYRbjwMyDbcvDhXgV7enSZA0hynz6vMPnpAb5iONEzBHBPsT+0zes5Z301g==
+
+lodash.map@^4.5.1:
+ version "4.6.0"
+ resolved "https://registry.yarnpkg.com/lodash.map/-/lodash.map-4.6.0.tgz#771ec7839e3473d9c4cde28b19394c3562f4f6d3"
+ integrity sha512-worNHGKLDetmcEYDvh2stPCrrQRkP20E4l0iIS7F8EvzMqBBi7ltvFN5m1HvTf1P7Jk1txKhvFcmYsCr8O2F1Q==
+
+lodash.memoize@^4.1.2:
+ version "4.1.2"
+ resolved "https://registry.yarnpkg.com/lodash.memoize/-/lodash.memoize-4.1.2.tgz#bcc6c49a42a2840ed997f323eada5ecd182e0bfe"
+ integrity sha512-t7j+NzmgnQzTAYXcsHYLgimltOV1MXHtlOWf6GjL9Kj8GK5FInw5JotxvbOs+IvV1/Dzo04/fCGfLVs7aXb4Ag==
+
+lodash.uniq@4.5.0, lodash.uniq@^4.5.0:
+ version "4.5.0"
+ resolved "https://registry.yarnpkg.com/lodash.uniq/-/lodash.uniq-4.5.0.tgz#d0225373aeb652adc1bc82e4945339a842754773"
+ integrity sha512-xfBaXQd9ryd9dlSDvnvI0lvxfLJlYAZzXomUYzLKtUeOQvOP5piqAWuGtrhWeqaXK9hhoM/iyJc5AV+XfsX3HQ==
+
+lodash@4.17.21, lodash@^4.17.15, lodash@^4.17.19, lodash@^4.17.20, lodash@^4.17.21:
+ version "4.17.21"
+ resolved "https://registry.yarnpkg.com/lodash/-/lodash-4.17.21.tgz#679591c564c3bffaae8454cf0b3df370c3d6911c"
+ integrity sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==
+
+log-symbols@^4.1.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/log-symbols/-/log-symbols-4.1.0.tgz#3fbdbb95b4683ac9fc785111e792e558d4abd503"
+ integrity sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg==
+ dependencies:
+ chalk "^4.1.0"
+ is-unicode-supported "^0.1.0"
+
+longest@^2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/longest/-/longest-2.0.1.tgz#781e183296aa94f6d4d916dc335d0d17aefa23f8"
+ integrity sha512-Ajzxb8CM6WAnFjgiloPsI3bF+WCxcvhdIG3KNA2KN962+tdBsHcuQ4k4qX/EcS/2CRkcc0iAkR956Nib6aXU/Q==
+
+loose-envify@^1.0.0, loose-envify@^1.1.0, loose-envify@^1.2.0, loose-envify@^1.3.1, loose-envify@^1.4.0:
+ version "1.4.0"
+ resolved "https://registry.yarnpkg.com/loose-envify/-/loose-envify-1.4.0.tgz#71ee51fa7be4caec1a63839f7e682d8132d30caf"
+ integrity sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==
+ dependencies:
+ js-tokens "^3.0.0 || ^4.0.0"
+
+lower-case@^2.0.2:
+ version "2.0.2"
+ resolved "https://registry.yarnpkg.com/lower-case/-/lower-case-2.0.2.tgz#6fa237c63dbdc4a82ca0fd882e4722dc5e634e28"
+ integrity sha512-7fm3l3NAF9WfN6W3JOmf5drwpVqX78JtoGJ3A6W0a6ZnldM41w2fV5D490psKFTpMds8TJse/eHLFFsNHHjHgg==
+ dependencies:
+ tslib "^2.0.3"
+
+lowercase-keys@^1.0.0, lowercase-keys@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/lowercase-keys/-/lowercase-keys-1.0.1.tgz#6f9e30b47084d971a7c820ff15a6c5167b74c26f"
+ integrity sha512-G2Lj61tXDnVFFOi8VZds+SoQjtQC3dgokKdDG2mTm1tx4m50NUHBOZSBwQQHyy0V12A0JTG4icfZQH+xPyh8VA==
+
+lowercase-keys@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/lowercase-keys/-/lowercase-keys-2.0.0.tgz#2603e78b7b4b0006cbca2fbcc8a3202558ac9479"
+ integrity sha512-tqNXrS78oMOE73NMxK4EMLQsQowWf8jKooH9g7xPavRT706R6bkQJ6DY2Te7QukaZsulxa30wQ7bk0pm4XiHmA==
+
+lru-cache@^6.0.0:
+ version "6.0.0"
+ resolved "https://registry.yarnpkg.com/lru-cache/-/lru-cache-6.0.0.tgz#6d6fe6570ebd96aaf90fcad1dafa3b2566db3a94"
+ integrity sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==
+ dependencies:
+ yallist "^4.0.0"
+
+make-dir@^2.1.0:
+ version "2.1.0"
+ resolved "https://mirrors.cloud.tencent.com/npm/make-dir/-/make-dir-2.1.0.tgz#5f0310e18b8be898cc07009295a30ae41e91e6f5"
+ integrity sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==
+ dependencies:
+ pify "^4.0.1"
+ semver "^5.6.0"
+
+make-dir@^3.0.0, make-dir@^3.0.2, make-dir@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/make-dir/-/make-dir-3.1.0.tgz#415e967046b3a7f1d185277d84aa58203726a13f"
+ integrity sha512-g3FeP20LNwhALb/6Cz6Dd4F2ngze0jz7tbzrD2wAV+o9FeNHe4rL+yK2md0J/fiSf1sa1ADhXqi5+oVwOM/eGw==
+ dependencies:
+ semver "^6.0.0"
+
+make-error@^1.1.1:
+ version "1.3.6"
+ resolved "https://registry.yarnpkg.com/make-error/-/make-error-1.3.6.tgz#2eb2e37ea9b67c4891f684a1394799af484cf7a2"
+ integrity sha512-s8UhlNe7vPKomQhC1qFelMokr/Sc3AgNbso3n74mVPA5LTZwkB9NlXf4XPamLxJE8h0gh73rM94xvwRT2CVInw==
+
+map-obj@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/map-obj/-/map-obj-1.0.1.tgz#d933ceb9205d82bdcf4886f6742bdc2b4dea146d"
+ integrity sha512-7N/q3lyZ+LVCp7PzuxrJr4KMbBE2hW7BT7YNia330OFxIf4d3r5zVpicP2650l7CPN6RM9zOJRl3NGpqSiw3Eg==
+
+map-obj@^4.0.0:
+ version "4.3.0"
+ resolved "https://registry.yarnpkg.com/map-obj/-/map-obj-4.3.0.tgz#9304f906e93faae70880da102a9f1df0ea8bb05a"
+ integrity sha512-hdN1wVrZbb29eBGiGjJbeP8JbKjq1urkHJ/LIP/NY48MZ1QVXUsQBV1G1zvYFHn1XE06cwjBsOI2K3Ulnj1YXQ==
+
+markdown-escapes@^1.0.0:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/markdown-escapes/-/markdown-escapes-1.0.4.tgz#c95415ef451499d7602b91095f3c8e8975f78535"
+ integrity sha512-8z4efJYk43E0upd0NbVXwgSTQs6cT3T06etieCMEg7dRbzCbxUCK/GHlX8mhHRDcp+OLlHkPKsvqQTCvsRl2cg==
+
+mdast-squeeze-paragraphs@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/mdast-squeeze-paragraphs/-/mdast-squeeze-paragraphs-4.0.0.tgz#7c4c114679c3bee27ef10b58e2e015be79f1ef97"
+ integrity sha512-zxdPn69hkQ1rm4J+2Cs2j6wDEv7O17TfXTJ33tl/+JPIoEmtV9t2ZzBM5LPHE8QlHsmVD8t3vPKCyY3oH+H8MQ==
+ dependencies:
+ unist-util-remove "^2.0.0"
+
+mdast-util-definitions@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/mdast-util-definitions/-/mdast-util-definitions-4.0.0.tgz#c5c1a84db799173b4dcf7643cda999e440c24db2"
+ integrity sha512-k8AJ6aNnUkB7IE+5azR9h81O5EQ/cTDXtWdMq9Kk5KcEW/8ritU5CeLg/9HhOC++nALHBlaogJ5jz0Ybk3kPMQ==
+ dependencies:
+ unist-util-visit "^2.0.0"
+
+mdast-util-to-hast@10.0.1:
+ version "10.0.1"
+ resolved "https://registry.yarnpkg.com/mdast-util-to-hast/-/mdast-util-to-hast-10.0.1.tgz#0cfc82089494c52d46eb0e3edb7a4eb2aea021eb"
+ integrity sha512-BW3LM9SEMnjf4HXXVApZMt8gLQWVNXc3jryK0nJu/rOXPOnlkUjmdkDlmxMirpbU9ILncGFIwLH/ubnWBbcdgA==
+ dependencies:
+ "@types/mdast" "^3.0.0"
+ "@types/unist" "^2.0.0"
+ mdast-util-definitions "^4.0.0"
+ mdurl "^1.0.0"
+ unist-builder "^2.0.0"
+ unist-util-generated "^1.0.0"
+ unist-util-position "^3.0.0"
+ unist-util-visit "^2.0.0"
+
+mdast-util-to-string@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/mdast-util-to-string/-/mdast-util-to-string-2.0.0.tgz#b8cfe6a713e1091cb5b728fc48885a4767f8b97b"
+ integrity sha512-AW4DRS3QbBayY/jJmD8437V1Gombjf8RSOUCMFBuo5iHi58AGEgVCKQ+ezHkZZDpAQS75hcBMpLqjpJTjtUL7w==
+
+mdn-data@2.0.14:
+ version "2.0.14"
+ resolved "https://registry.yarnpkg.com/mdn-data/-/mdn-data-2.0.14.tgz#7113fc4281917d63ce29b43446f701e68c25ba50"
+ integrity sha512-dn6wd0uw5GsdswPFfsgMp5NSB0/aDe6fK94YJV/AJDYXL6HVLWBsxeq7js7Ad+mU2K9LAlwpk6kN2D5mwCPVow==
+
+mdurl@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/mdurl/-/mdurl-1.0.1.tgz#fe85b2ec75a59037f2adfec100fd6c601761152e"
+ integrity sha512-/sKlQJCBYVY9Ers9hqzKou4H6V5UWc/M59TH2dvkt+84itfnq7uFOMLpOiOS4ujvHP4etln18fmIxA5R5fll0g==
+
+media-typer@0.3.0:
+ version "0.3.0"
+ resolved "https://registry.yarnpkg.com/media-typer/-/media-typer-0.3.0.tgz#8710d7af0aa626f8fffa1ce00168545263255748"
+ integrity sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==
+
+memfs@^3.1.2, memfs@^3.4.3:
+ version "3.4.7"
+ resolved "https://registry.yarnpkg.com/memfs/-/memfs-3.4.7.tgz#e5252ad2242a724f938cb937e3c4f7ceb1f70e5a"
+ integrity sha512-ygaiUSNalBX85388uskeCyhSAoOSgzBbtVCr9jA2RROssFL9Q19/ZXFqS+2Th2sr1ewNIWgFdLzLC3Yl1Zv+lw==
+ dependencies:
+ fs-monkey "^1.0.3"
+
+meow@^8.0.0:
+ version "8.1.2"
+ resolved "https://registry.yarnpkg.com/meow/-/meow-8.1.2.tgz#bcbe45bda0ee1729d350c03cffc8395a36c4e897"
+ integrity sha512-r85E3NdZ+mpYk1C6RjPFEMSE+s1iZMuHtsHAqY0DT3jZczl0diWUZ8g6oU7h0M9cD2EL+PzaYghhCLzR0ZNn5Q==
+ dependencies:
+ "@types/minimist" "^1.2.0"
+ camelcase-keys "^6.2.2"
+ decamelize-keys "^1.1.0"
+ hard-rejection "^2.1.0"
+ minimist-options "4.1.0"
+ normalize-package-data "^3.0.0"
+ read-pkg-up "^7.0.1"
+ redent "^3.0.0"
+ trim-newlines "^3.0.0"
+ type-fest "^0.18.0"
+ yargs-parser "^20.2.3"
+
+merge-descriptors@1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/merge-descriptors/-/merge-descriptors-1.0.1.tgz#b00aaa556dd8b44568150ec9d1b953f3f90cbb61"
+ integrity sha512-cCi6g3/Zr1iqQi6ySbseM1Xvooa98N0w31jzUYrXPX2xqObmFGHJ0tQ5u74H3mVh7wLouTseZyYIq39g8cNp1w==
+
+merge-stream@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/merge-stream/-/merge-stream-2.0.0.tgz#52823629a14dd00c9770fb6ad47dc6310f2c1f60"
+ integrity sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==
+
+merge2@^1.3.0, merge2@^1.4.1:
+ version "1.4.1"
+ resolved "https://registry.yarnpkg.com/merge2/-/merge2-1.4.1.tgz#4368892f885e907455a6fd7dc55c0c9d404990ae"
+ integrity sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==
+
+merge@^2.1.1:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/merge/-/merge-2.1.1.tgz#59ef4bf7e0b3e879186436e8481c06a6c162ca98"
+ integrity sha512-jz+Cfrg9GWOZbQAnDQ4hlVnQky+341Yk5ru8bZSe6sIDTCIg8n9i/u7hSQGSVOF3C7lH6mGtqjkiT9G4wFLL0w==
+
+methods@~1.1.2:
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/methods/-/methods-1.1.2.tgz#5529a4d67654134edcc5266656835b0f851afcee"
+ integrity sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==
+
+micromatch@^4.0.2, micromatch@^4.0.4, micromatch@^4.0.5:
+ version "4.0.5"
+ resolved "https://registry.yarnpkg.com/micromatch/-/micromatch-4.0.5.tgz#bc8999a7cbbf77cdc89f132f6e467051b49090c6"
+ integrity sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==
+ dependencies:
+ braces "^3.0.2"
+ picomatch "^2.3.1"
+
+mime-db@1.52.0, "mime-db@>= 1.43.0 < 2":
+ version "1.52.0"
+ resolved "https://registry.yarnpkg.com/mime-db/-/mime-db-1.52.0.tgz#bbabcdc02859f4987301c856e3387ce5ec43bf70"
+ integrity sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==
+
+mime-db@~1.33.0:
+ version "1.33.0"
+ resolved "https://registry.yarnpkg.com/mime-db/-/mime-db-1.33.0.tgz#a3492050a5cb9b63450541e39d9788d2272783db"
+ integrity sha512-BHJ/EKruNIqJf/QahvxwQZXKygOQ256myeN/Ew+THcAa5q+PjyTTMMeNQC4DZw5AwfvelsUrA6B67NKMqXDbzQ==
+
+mime-types@2.1.18:
+ version "2.1.18"
+ resolved "https://registry.yarnpkg.com/mime-types/-/mime-types-2.1.18.tgz#6f323f60a83d11146f831ff11fd66e2fe5503bb8"
+ integrity sha512-lc/aahn+t4/SWV/qcmumYjymLsWfN3ELhpmVuUFjgsORruuZPVSwAQryq+HHGvO/SI2KVX26bx+En+zhM8g8hQ==
+ dependencies:
+ mime-db "~1.33.0"
+
+mime-types@^2.1.27, mime-types@^2.1.31, mime-types@~2.1.17, mime-types@~2.1.24, mime-types@~2.1.34:
+ version "2.1.35"
+ resolved "https://registry.yarnpkg.com/mime-types/-/mime-types-2.1.35.tgz#381a871b62a734450660ae3deee44813f70d959a"
+ integrity sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==
+ dependencies:
+ mime-db "1.52.0"
+
+mime@1.6.0, mime@^1.4.1:
+ version "1.6.0"
+ resolved "https://registry.yarnpkg.com/mime/-/mime-1.6.0.tgz#32cd9e5c64553bd58d19a568af452acff04981b1"
+ integrity sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==
+
+mimic-fn@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/mimic-fn/-/mimic-fn-2.1.0.tgz#7ed2c2ccccaf84d3ffcb7a69b57711fc2083401b"
+ integrity sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==
+
+mimic-response@^1.0.0, mimic-response@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/mimic-response/-/mimic-response-1.0.1.tgz#4923538878eef42063cb8a3e3b0798781487ab1b"
+ integrity sha512-j5EctnkH7amfV/q5Hgmoal1g2QHFJRraOtmx0JpIqkxhBhI/lJSl1nMpQ45hVarwNETOoWEimndZ4QK0RHxuxQ==
+
+min-indent@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/min-indent/-/min-indent-1.0.1.tgz#a63f681673b30571fbe8bc25686ae746eefa9869"
+ integrity sha512-I9jwMn07Sy/IwOj3zVkVik2JTvgpaykDZEigL6Rx6N9LbMywwUSMtxET+7lVoDLLd3O3IXwJwvuuns8UB/HeAg==
+
+mini-create-react-context@^0.4.0:
+ version "0.4.1"
+ resolved "https://registry.yarnpkg.com/mini-create-react-context/-/mini-create-react-context-0.4.1.tgz#072171561bfdc922da08a60c2197a497cc2d1d5e"
+ integrity sha512-YWCYEmd5CQeHGSAKrYvXgmzzkrvssZcuuQDDeqkT+PziKGMgE+0MCCtcKbROzocGBG1meBLl2FotlRwf4gAzbQ==
+ dependencies:
+ "@babel/runtime" "^7.12.1"
+ tiny-warning "^1.0.3"
+
+mini-css-extract-plugin@^2.6.1:
+ version "2.6.1"
+ resolved "https://registry.yarnpkg.com/mini-css-extract-plugin/-/mini-css-extract-plugin-2.6.1.tgz#9a1251d15f2035c342d99a468ab9da7a0451b71e"
+ integrity sha512-wd+SD57/K6DiV7jIR34P+s3uckTRuQvx0tKPcvjFlrEylk6P4mQ2KSWk1hblj1Kxaqok7LogKOieygXqBczNlg==
+ dependencies:
+ schema-utils "^4.0.0"
+
+minimalistic-assert@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz#2e194de044626d4a10e7f7fbc00ce73e83e4d5c7"
+ integrity sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A==
+
+minimatch@3.0.4:
+ version "3.0.4"
+ resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.0.4.tgz#5166e286457f03306064be5497e8dbb0c3d32083"
+ integrity sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==
+ dependencies:
+ brace-expansion "^1.1.7"
+
+minimatch@^3.0.4, minimatch@^3.1.1:
+ version "3.1.2"
+ resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.1.2.tgz#19cd194bfd3e428f049a70817c038d89ab4be35b"
+ integrity sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==
+ dependencies:
+ brace-expansion "^1.1.7"
+
+minimist-options@4.1.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/minimist-options/-/minimist-options-4.1.0.tgz#c0655713c53a8a2ebd77ffa247d342c40f010619"
+ integrity sha512-Q4r8ghd80yhO/0j1O3B2BjweX3fiHg9cdOwjJd2J76Q135c+NDxGCqdYKQ1SKBuFfgWbAUzBfvYjPUEeNgqN1A==
+ dependencies:
+ arrify "^1.0.1"
+ is-plain-obj "^1.1.0"
+ kind-of "^6.0.3"
+
+minimist@1.2.6, minimist@^1.2.0, minimist@^1.2.5:
+ version "1.2.6"
+ resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.6.tgz#8637a5b759ea0d6e98702cfb3a9283323c93af44"
+ integrity sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==
+
+modify-values@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/modify-values/-/modify-values-1.0.1.tgz#b3939fa605546474e3e3e3c63d64bd43b4ee6022"
+ integrity sha512-xV2bxeN6F7oYjZWTe/YPAy6MN2M+sL4u/Rlm2AHCIVGfo2p1yGmBHQ6vHehl4bRTZBdHu3TSkWdYgkwpYzAGSw==
+
+mrmime@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/mrmime/-/mrmime-1.0.1.tgz#5f90c825fad4bdd41dc914eff5d1a8cfdaf24f27"
+ integrity sha512-hzzEagAgDyoU1Q6yg5uI+AorQgdvMCur3FcKf7NhMKWsaYg+RnbTyHRa/9IlLF9rf455MOCtcqqrQQ83pPP7Uw==
+
+ms@2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/ms/-/ms-2.0.0.tgz#5608aeadfc00be6c2901df5f9861788de0d597c8"
+ integrity sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==
+
+ms@2.1.2:
+ version "2.1.2"
+ resolved "https://registry.yarnpkg.com/ms/-/ms-2.1.2.tgz#d09d1f357b443f493382a8eb3ccd183872ae6009"
+ integrity sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==
+
+ms@2.1.3, ms@^2.1.1:
+ version "2.1.3"
+ resolved "https://registry.yarnpkg.com/ms/-/ms-2.1.3.tgz#574c8138ce1d2b5861f0b44579dbadd60c6615b2"
+ integrity sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==
+
+multicast-dns@^7.2.5:
+ version "7.2.5"
+ resolved "https://registry.yarnpkg.com/multicast-dns/-/multicast-dns-7.2.5.tgz#77eb46057f4d7adbd16d9290fa7299f6fa64cced"
+ integrity sha512-2eznPJP8z2BFLX50tf0LuODrpINqP1RVIm/CObbTcBRITQgmC/TjcREF1NeTBzIcR5XO/ukWo+YHOjBbFwIupg==
+ dependencies:
+ dns-packet "^5.2.2"
+ thunky "^1.0.2"
+
+mute-stream@0.0.8:
+ version "0.0.8"
+ resolved "https://registry.yarnpkg.com/mute-stream/-/mute-stream-0.0.8.tgz#1630c42b2251ff81e2a283de96a5497ea92e5e0d"
+ integrity sha512-nnbWWOkoWyUsTjKrhgD0dcz22mdkSnpYqbEjIm2nhwhuxlSkpywJmBo8h0ZqJdkp73mb90SssHkN4rsRaBAfAA==
+
+nanoid@^3.3.4:
+ version "3.3.4"
+ resolved "https://registry.yarnpkg.com/nanoid/-/nanoid-3.3.4.tgz#730b67e3cd09e2deacf03c027c81c9d9dbc5e8ab"
+ integrity sha512-MqBkQh/OHTS2egovRtLk45wEyNXwF+cokD+1YPf9u5VfJiRdAiRwB2froX5Co9Rh20xs4siNPm8naNotSD6RBw==
+
+needle@^3.1.0:
+ version "3.2.0"
+ resolved "https://mirrors.cloud.tencent.com/npm/needle/-/needle-3.2.0.tgz#07d240ebcabfd65c76c03afae7f6defe6469df44"
+ integrity sha512-oUvzXnyLiVyVGoianLijF9O/RecZUf7TkBfimjGrLM4eQhXyeJwM6GeAWccwfQ9aa4gMCZKqhAOuLaMIcQxajQ==
+ dependencies:
+ debug "^3.2.6"
+ iconv-lite "^0.6.3"
+ sax "^1.2.4"
+
+negotiator@0.6.3:
+ version "0.6.3"
+ resolved "https://registry.yarnpkg.com/negotiator/-/negotiator-0.6.3.tgz#58e323a72fedc0d6f9cd4d31fe49f51479590ccd"
+ integrity sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==
+
+neo-async@^2.6.0, neo-async@^2.6.2:
+ version "2.6.2"
+ resolved "https://registry.yarnpkg.com/neo-async/-/neo-async-2.6.2.tgz#b4aafb93e3aeb2d8174ca53cf163ab7d7308305f"
+ integrity sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==
+
+no-case@^3.0.4:
+ version "3.0.4"
+ resolved "https://registry.yarnpkg.com/no-case/-/no-case-3.0.4.tgz#d361fd5c9800f558551a8369fc0dcd4662b6124d"
+ integrity sha512-fgAN3jGAh+RoxUGZHTSOLJIqUc2wmoBwGR4tbpNAKmmovFoWq0OdRkb0VkldReO2a2iBT/OEulG9XSUc10r3zg==
+ dependencies:
+ lower-case "^2.0.2"
+ tslib "^2.0.3"
+
+node-emoji@^1.10.0:
+ version "1.11.0"
+ resolved "https://registry.yarnpkg.com/node-emoji/-/node-emoji-1.11.0.tgz#69a0150e6946e2f115e9d7ea4df7971e2628301c"
+ integrity sha512-wo2DpQkQp7Sjm2A0cq+sN7EHKO6Sl0ctXeBdFZrL9T9+UywORbufTcTZxom8YqpLQt/FqNMUkOpkZrJVYSKD3A==
+ dependencies:
+ lodash "^4.17.21"
+
+node-fetch@2.6.7:
+ version "2.6.7"
+ resolved "https://registry.yarnpkg.com/node-fetch/-/node-fetch-2.6.7.tgz#24de9fba827e3b4ae44dc8b20256a379160052ad"
+ integrity sha512-ZjMPFEfVx5j+y2yF35Kzx5sF7kDzxuDj6ziH4FFbOp87zKDZNx8yExJIb05OGF4Nlt9IHFIMBkRl41VdvcNdbQ==
+ dependencies:
+ whatwg-url "^5.0.0"
+
+node-forge@^1:
+ version "1.3.1"
+ resolved "https://registry.yarnpkg.com/node-forge/-/node-forge-1.3.1.tgz#be8da2af243b2417d5f646a770663a92b7e9ded3"
+ integrity sha512-dPEtOeMvF9VMcYV/1Wb8CPoVAXtp6MKMlcbAt4ddqmGqUJ6fQZFXkNZNkNlfevtNkGtaSoXf/vNNNSvgrdXwtA==
+
+node-releases@^2.0.6:
+ version "2.0.6"
+ resolved "https://registry.yarnpkg.com/node-releases/-/node-releases-2.0.6.tgz#8a7088c63a55e493845683ebf3c828d8c51c5503"
+ integrity sha512-PiVXnNuFm5+iYkLBNeq5211hvO38y63T0i2KKh2KnUs3RpzJ+JtODFjkD8yjLwnDkTYF1eKXheUwdssR+NRZdg==
+
+normalize-package-data@^2.3.2, normalize-package-data@^2.5.0:
+ version "2.5.0"
+ resolved "https://registry.yarnpkg.com/normalize-package-data/-/normalize-package-data-2.5.0.tgz#e66db1838b200c1dfc233225d12cb36520e234a8"
+ integrity sha512-/5CMN3T0R4XTj4DcGaexo+roZSdSFW/0AOOTROrjxzCG1wrWXEsGbRKevjlIL+ZDE4sZlJr5ED4YW0yqmkK+eA==
+ dependencies:
+ hosted-git-info "^2.1.4"
+ resolve "^1.10.0"
+ semver "2 || 3 || 4 || 5"
+ validate-npm-package-license "^3.0.1"
+
+normalize-package-data@^3.0.0:
+ version "3.0.3"
+ resolved "https://registry.yarnpkg.com/normalize-package-data/-/normalize-package-data-3.0.3.tgz#dbcc3e2da59509a0983422884cd172eefdfa525e"
+ integrity sha512-p2W1sgqij3zMMyRC067Dg16bfzVH+w7hyegmpIvZ4JNjqtGOVAIvLmjBx3yP7YTe9vKJgkoNOPjwQGogDoMXFA==
+ dependencies:
+ hosted-git-info "^4.0.1"
+ is-core-module "^2.5.0"
+ semver "^7.3.4"
+ validate-npm-package-license "^3.0.1"
+
+normalize-path@^3.0.0, normalize-path@~3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/normalize-path/-/normalize-path-3.0.0.tgz#0dcd69ff23a1c9b11fd0978316644a0388216a65"
+ integrity sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==
+
+normalize-range@^0.1.2:
+ version "0.1.2"
+ resolved "https://registry.yarnpkg.com/normalize-range/-/normalize-range-0.1.2.tgz#2d10c06bdfd312ea9777695a4d28439456b75942"
+ integrity sha512-bdok/XvKII3nUpklnV6P2hxtMNrCboOjAcyBuQnWEhO665FwrSNRxU+AqpsyvO6LgGYPspN+lu5CLtw4jPRKNA==
+
+normalize-url@^4.1.0:
+ version "4.5.1"
+ resolved "https://registry.yarnpkg.com/normalize-url/-/normalize-url-4.5.1.tgz#0dd90cf1288ee1d1313b87081c9a5932ee48518a"
+ integrity sha512-9UZCFRHQdNrfTpGg8+1INIg93B6zE0aXMVFkw1WFwvO4SlZywU6aLg5Of0Ap/PgcbSw4LNxvMWXMeugwMCX0AA==
+
+normalize-url@^6.0.1:
+ version "6.1.0"
+ resolved "https://registry.yarnpkg.com/normalize-url/-/normalize-url-6.1.0.tgz#40d0885b535deffe3f3147bec877d05fe4c5668a"
+ integrity sha512-DlL+XwOy3NxAQ8xuC0okPgK46iuVNAK01YN7RueYBqqFeGsBjV9XmCAzAdgt+667bCl5kPh9EqKKDwnaPG1I7A==
+
+npm-run-path@^4.0.1:
+ version "4.0.1"
+ resolved "https://registry.yarnpkg.com/npm-run-path/-/npm-run-path-4.0.1.tgz#b7ecd1e5ed53da8e37a55e1c2269e0b97ed748ea"
+ integrity sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==
+ dependencies:
+ path-key "^3.0.0"
+
+nprogress@^0.2.0:
+ version "0.2.0"
+ resolved "https://registry.yarnpkg.com/nprogress/-/nprogress-0.2.0.tgz#cb8f34c53213d895723fcbab907e9422adbcafb1"
+ integrity sha512-I19aIingLgR1fmhftnbWWO3dXc0hSxqHQHQb3H8m+K3TnEn/iSeTZZOyvKXWqQESMwuUVnatlCnZdLBZZt2VSA==
+
+nth-check@^2.0.1:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/nth-check/-/nth-check-2.1.1.tgz#c9eab428effce36cd6b92c924bdb000ef1f1ed1d"
+ integrity sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==
+ dependencies:
+ boolbase "^1.0.0"
+
+object-assign@^4.1.0, object-assign@^4.1.1:
+ version "4.1.1"
+ resolved "https://registry.yarnpkg.com/object-assign/-/object-assign-4.1.1.tgz#2109adc7965887cfc05cbbd442cac8bfbb360863"
+ integrity sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==
+
+object-inspect@^1.9.0:
+ version "1.12.2"
+ resolved "https://registry.yarnpkg.com/object-inspect/-/object-inspect-1.12.2.tgz#c0641f26394532f28ab8d796ab954e43c009a8ea"
+ integrity sha512-z+cPxW0QGUp0mcqcsgQyLVRDoXFQbXOwBaqyF7VIgI4TWNQsDHrBpUQslRmIfAoYWdYzs6UlKJtB2XJpTaNSpQ==
+
+object-keys@^1.1.1:
+ version "1.1.1"
+ resolved "https://registry.yarnpkg.com/object-keys/-/object-keys-1.1.1.tgz#1c47f272df277f3b1daf061677d9c82e2322c60e"
+ integrity sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA==
+
+object.assign@^4.1.0:
+ version "4.1.4"
+ resolved "https://registry.yarnpkg.com/object.assign/-/object.assign-4.1.4.tgz#9673c7c7c351ab8c4d0b516f4343ebf4dfb7799f"
+ integrity sha512-1mxKf0e58bvyjSCtKYY4sRe9itRk3PJpquJOjeIkz885CczcI4IvJJDLPS72oowuSh+pBxUFROpX+TU++hxhZQ==
+ dependencies:
+ call-bind "^1.0.2"
+ define-properties "^1.1.4"
+ has-symbols "^1.0.3"
+ object-keys "^1.1.1"
+
+obuf@^1.0.0, obuf@^1.1.2:
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/obuf/-/obuf-1.1.2.tgz#09bea3343d41859ebd446292d11c9d4db619084e"
+ integrity sha512-PX1wu0AmAdPqOL1mWhqmlOd8kOIZQwGZw6rh7uby9fTc5lhaOWFLX3I6R1hrF9k3zUY40e6igsLGkDXK92LJNg==
+
+on-finished@2.4.1:
+ version "2.4.1"
+ resolved "https://registry.yarnpkg.com/on-finished/-/on-finished-2.4.1.tgz#58c8c44116e54845ad57f14ab10b03533184ac3f"
+ integrity sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==
+ dependencies:
+ ee-first "1.1.1"
+
+on-headers@~1.0.2:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/on-headers/-/on-headers-1.0.2.tgz#772b0ae6aaa525c399e489adfad90c403eb3c28f"
+ integrity sha512-pZAE+FJLoyITytdqK0U5s+FIpjN0JP3OzFi/u8Rx+EV5/W+JTWGXG8xFzevE7AjBfDqHv/8vL8qQsIhHnqRkrA==
+
+once@^1.3.0, once@^1.3.1, once@^1.4.0:
+ version "1.4.0"
+ resolved "https://registry.yarnpkg.com/once/-/once-1.4.0.tgz#583b1aa775961d4b113ac17d9c50baef9dd76bd1"
+ integrity sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==
+ dependencies:
+ wrappy "1"
+
+onetime@^5.1.0, onetime@^5.1.2:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/onetime/-/onetime-5.1.2.tgz#d0e96ebb56b07476df1dd9c4806e5237985ca45e"
+ integrity sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==
+ dependencies:
+ mimic-fn "^2.1.0"
+
+open@^8.0.9, open@^8.4.0:
+ version "8.4.0"
+ resolved "https://registry.yarnpkg.com/open/-/open-8.4.0.tgz#345321ae18f8138f82565a910fdc6b39e8c244f8"
+ integrity sha512-XgFPPM+B28FtCCgSb9I+s9szOC1vZRSwgWsRUA5ylIxRTgKozqjOCrVOqGsYABPYK5qnfqClxZTFBa8PKt2v6Q==
+ dependencies:
+ define-lazy-prop "^2.0.0"
+ is-docker "^2.1.1"
+ is-wsl "^2.2.0"
+
+opener@^1.5.2:
+ version "1.5.2"
+ resolved "https://registry.yarnpkg.com/opener/-/opener-1.5.2.tgz#5d37e1f35077b9dcac4301372271afdeb2a13598"
+ integrity sha512-ur5UIdyw5Y7yEj9wLzhqXiy6GZ3Mwx0yGI+5sMn2r0N0v3cKJvUmFH5yPP+WXh9e0xfyzyJX95D8l088DNFj7A==
+
+ora@^5.4.1:
+ version "5.4.1"
+ resolved "https://registry.yarnpkg.com/ora/-/ora-5.4.1.tgz#1b2678426af4ac4a509008e5e4ac9e9959db9e18"
+ integrity sha512-5b6Y85tPxZZ7QytO+BQzysW31HJku27cRIlkbAXaNx+BdcVi+LlRFmVXzeF6a7JCwJpyw5c4b+YSVImQIrBpuQ==
+ dependencies:
+ bl "^4.1.0"
+ chalk "^4.1.0"
+ cli-cursor "^3.1.0"
+ cli-spinners "^2.5.0"
+ is-interactive "^1.0.0"
+ is-unicode-supported "^0.1.0"
+ log-symbols "^4.1.0"
+ strip-ansi "^6.0.0"
+ wcwidth "^1.0.1"
+
+os-tmpdir@~1.0.2:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/os-tmpdir/-/os-tmpdir-1.0.2.tgz#bbe67406c79aa85c5cfec766fe5734555dfa1274"
+ integrity sha512-D2FR03Vir7FIu45XBY20mTb+/ZSWB00sjU9jdQXt83gDrI4Ztz5Fs7/yy74g2N5SVQY4xY1qDr4rNddwYRVX0g==
+
+p-cancelable@^1.0.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/p-cancelable/-/p-cancelable-1.1.0.tgz#d078d15a3af409220c886f1d9a0ca2e441ab26cc"
+ integrity sha512-s73XxOZ4zpt1edZYZzvhqFa6uvQc1vwUa0K0BdtIZgQMAJj9IbebH+JkgKZc9h+B05PKHLOTl4ajG1BmNrVZlw==
+
+p-limit@^1.1.0:
+ version "1.3.0"
+ resolved "https://registry.yarnpkg.com/p-limit/-/p-limit-1.3.0.tgz#b86bd5f0c25690911c7590fcbfc2010d54b3ccb8"
+ integrity sha512-vvcXsLAJ9Dr5rQOPk7toZQZJApBl2K4J6dANSsEuh6QI41JYcsS/qhTGa9ErIUUgK3WNQoJYvylxvjqmiqEA9Q==
+ dependencies:
+ p-try "^1.0.0"
+
+p-limit@^2.0.0, p-limit@^2.2.0:
+ version "2.3.0"
+ resolved "https://registry.yarnpkg.com/p-limit/-/p-limit-2.3.0.tgz#3dd33c647a214fdfffd835933eb086da0dc21db1"
+ integrity sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==
+ dependencies:
+ p-try "^2.0.0"
+
+p-limit@^3.0.2:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/p-limit/-/p-limit-3.1.0.tgz#e1daccbe78d0d1388ca18c64fea38e3e57e3706b"
+ integrity sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==
+ dependencies:
+ yocto-queue "^0.1.0"
+
+p-locate@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/p-locate/-/p-locate-2.0.0.tgz#20a0103b222a70c8fd39cc2e580680f3dde5ec43"
+ integrity sha512-nQja7m7gSKuewoVRen45CtVfODR3crN3goVQ0DDZ9N3yHxgpkuBhZqsaiotSQRrADUrne346peY7kT3TSACykg==
+ dependencies:
+ p-limit "^1.1.0"
+
+p-locate@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/p-locate/-/p-locate-3.0.0.tgz#322d69a05c0264b25997d9f40cd8a891ab0064a4"
+ integrity sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==
+ dependencies:
+ p-limit "^2.0.0"
+
+p-locate@^4.1.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/p-locate/-/p-locate-4.1.0.tgz#a3428bb7088b3a60292f66919278b7c297ad4f07"
+ integrity sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==
+ dependencies:
+ p-limit "^2.2.0"
+
+p-locate@^5.0.0:
+ version "5.0.0"
+ resolved "https://registry.yarnpkg.com/p-locate/-/p-locate-5.0.0.tgz#83c8315c6785005e3bd021839411c9e110e6d834"
+ integrity sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==
+ dependencies:
+ p-limit "^3.0.2"
+
+p-map@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/p-map/-/p-map-4.0.0.tgz#bb2f95a5eda2ec168ec9274e06a747c3e2904d2b"
+ integrity sha512-/bjOqmgETBYB5BoEeGVea8dmvHb2m9GLy1E9W43yeyfP6QQCZGFNa+XRceJEuDB6zqr+gKpIAmlLebMpykw/MQ==
+ dependencies:
+ aggregate-error "^3.0.0"
+
+p-retry@^4.5.0:
+ version "4.6.2"
+ resolved "https://registry.yarnpkg.com/p-retry/-/p-retry-4.6.2.tgz#9baae7184057edd4e17231cee04264106e092a16"
+ integrity sha512-312Id396EbJdvRONlngUx0NydfrIQ5lsYu0znKVUzVvArzEIt08V1qhtyESbGVd1FGX7UKtiFp5uwKZdM8wIuQ==
+ dependencies:
+ "@types/retry" "0.12.0"
+ retry "^0.13.1"
+
+p-try@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/p-try/-/p-try-1.0.0.tgz#cbc79cdbaf8fd4228e13f621f2b1a237c1b207b3"
+ integrity sha512-U1etNYuMJoIz3ZXSrrySFjsXQTWOx2/jdi86L+2pRvph/qMKL6sbcCYdH23fqsbm8TH2Gn0OybpT4eSFlCVHww==
+
+p-try@^2.0.0:
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/p-try/-/p-try-2.2.0.tgz#cb2868540e313d61de58fafbe35ce9004d5540e6"
+ integrity sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==
+
+package-json@^6.3.0:
+ version "6.5.0"
+ resolved "https://registry.yarnpkg.com/package-json/-/package-json-6.5.0.tgz#6feedaca35e75725876d0b0e64974697fed145b0"
+ integrity sha512-k3bdm2n25tkyxcjSKzB5x8kfVxlMdgsbPr0GkZcwHsLpba6cBjqCt1KlcChKEvxHIcTB1FVMuwoijZ26xex5MQ==
+ dependencies:
+ got "^9.6.0"
+ registry-auth-token "^4.0.0"
+ registry-url "^5.0.0"
+ semver "^6.2.0"
+
+param-case@^3.0.4:
+ version "3.0.4"
+ resolved "https://registry.yarnpkg.com/param-case/-/param-case-3.0.4.tgz#7d17fe4aa12bde34d4a77d91acfb6219caad01c5"
+ integrity sha512-RXlj7zCYokReqWpOPH9oYivUzLYZ5vAPIfEmCTNViosC78F8F0H9y7T7gG2M39ymgutxF5gcFEsyZQSph9Bp3A==
+ dependencies:
+ dot-case "^3.0.4"
+ tslib "^2.0.3"
+
+parent-module@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/parent-module/-/parent-module-1.0.1.tgz#691d2709e78c79fae3a156622452d00762caaaa2"
+ integrity sha512-GQ2EWRpQV8/o+Aw8YqtfZZPfNRWZYkbidE9k5rpl/hC3vtHHBfGm2Ifi6qWV+coDGkrUKZAxE3Lot5kcsRlh+g==
+ dependencies:
+ callsites "^3.0.0"
+
+parse-entities@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/parse-entities/-/parse-entities-2.0.0.tgz#53c6eb5b9314a1f4ec99fa0fdf7ce01ecda0cbe8"
+ integrity sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==
+ dependencies:
+ character-entities "^1.0.0"
+ character-entities-legacy "^1.0.0"
+ character-reference-invalid "^1.0.0"
+ is-alphanumerical "^1.0.0"
+ is-decimal "^1.0.0"
+ is-hexadecimal "^1.0.0"
+
+parse-json@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/parse-json/-/parse-json-4.0.0.tgz#be35f5425be1f7f6c747184f98a788cb99477ee0"
+ integrity sha512-aOIos8bujGN93/8Ox/jPLh7RwVnPEysynVFE+fQZyg6jKELEHwzgKdLRFHUgXJL6kylijVSBC4BvN9OmsB48Rw==
+ dependencies:
+ error-ex "^1.3.1"
+ json-parse-better-errors "^1.0.1"
+
+parse-json@^5.0.0:
+ version "5.2.0"
+ resolved "https://registry.yarnpkg.com/parse-json/-/parse-json-5.2.0.tgz#c76fc66dee54231c962b22bcc8a72cf2f99753cd"
+ integrity sha512-ayCKvm/phCGxOkYRSCM82iDwct8/EonSEgCSxWxD7ve6jHggsFl4fZVQBPRNgQoKiuV/odhFrGzQXZwbifC8Rg==
+ dependencies:
+ "@babel/code-frame" "^7.0.0"
+ error-ex "^1.3.1"
+ json-parse-even-better-errors "^2.3.0"
+ lines-and-columns "^1.1.6"
+
+parse-node-version@^1.0.1:
+ version "1.0.1"
+ resolved "https://mirrors.cloud.tencent.com/npm/parse-node-version/-/parse-node-version-1.0.1.tgz#e2b5dbede00e7fa9bc363607f53327e8b073189b"
+ integrity sha512-3YHlOa/JgH6Mnpr05jP9eDG254US9ek25LyIxZlDItp2iJtwyaXQb57lBYLdT3MowkUFYEV2XXNAYIPlESvJlA==
+
+parse-numeric-range@^1.3.0:
+ version "1.3.0"
+ resolved "https://registry.yarnpkg.com/parse-numeric-range/-/parse-numeric-range-1.3.0.tgz#7c63b61190d61e4d53a1197f0c83c47bb670ffa3"
+ integrity sha512-twN+njEipszzlMJd4ONUYgSfZPDxgHhT9Ahed5uTigpQn90FggW4SA/AIPq/6a149fTbE9qBEcSwE3FAEp6wQQ==
+
+parse-passwd@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/parse-passwd/-/parse-passwd-1.0.0.tgz#6d5b934a456993b23d37f40a382d6f1666a8e5c6"
+ integrity sha512-1Y1A//QUXEZK7YKz+rD9WydcE1+EuPr6ZBgKecAB8tmoW6UFv0NREVJe1p+jRxtThkcbbKkfwIbWJe/IeE6m2Q==
+
+parse5-htmlparser2-tree-adapter@^7.0.0:
+ version "7.0.0"
+ resolved "https://registry.yarnpkg.com/parse5-htmlparser2-tree-adapter/-/parse5-htmlparser2-tree-adapter-7.0.0.tgz#23c2cc233bcf09bb7beba8b8a69d46b08c62c2f1"
+ integrity sha512-B77tOZrqqfUfnVcOrUvfdLbz4pu4RopLD/4vmu3HUPswwTA8OH0EMW9BlWR2B0RCoiZRAHEUu7IxeP1Pd1UU+g==
+ dependencies:
+ domhandler "^5.0.2"
+ parse5 "^7.0.0"
+
+parse5@^6.0.0:
+ version "6.0.1"
+ resolved "https://registry.yarnpkg.com/parse5/-/parse5-6.0.1.tgz#e1a1c085c569b3dc08321184f19a39cc27f7c30b"
+ integrity sha512-Ofn/CTFzRGTTxwpNEs9PP93gXShHcTq255nzRYSKe8AkVpZY7e1fpmTfOyoIvjP5HG7Z2ZM7VS9PPhQGW2pOpw==
+
+parse5@^7.0.0:
+ version "7.1.1"
+ resolved "https://registry.yarnpkg.com/parse5/-/parse5-7.1.1.tgz#4649f940ccfb95d8754f37f73078ea20afe0c746"
+ integrity sha512-kwpuwzB+px5WUg9pyK0IcK/shltJN5/OVhQagxhCQNtT9Y9QRZqNY2e1cmbu/paRh5LMnz/oVTVLBpjFmMZhSg==
+ dependencies:
+ entities "^4.4.0"
+
+parseurl@~1.3.2, parseurl@~1.3.3:
+ version "1.3.3"
+ resolved "https://registry.yarnpkg.com/parseurl/-/parseurl-1.3.3.tgz#9da19e7bee8d12dff0513ed5b76957793bc2e8d4"
+ integrity sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==
+
+pascal-case@^3.1.2:
+ version "3.1.2"
+ resolved "https://registry.yarnpkg.com/pascal-case/-/pascal-case-3.1.2.tgz#b48e0ef2b98e205e7c1dae747d0b1508237660eb"
+ integrity sha512-uWlGT3YSnK9x3BQJaOdcZwrnV6hPpd8jFH1/ucpiLRPh/2zCVJKS19E4GvYHvaCcACn3foXZ0cLB9Wrx1KGe5g==
+ dependencies:
+ no-case "^3.0.4"
+ tslib "^2.0.3"
+
+path-exists@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/path-exists/-/path-exists-3.0.0.tgz#ce0ebeaa5f78cb18925ea7d810d7b59b010fd515"
+ integrity sha512-bpC7GYwiDYQ4wYLe+FA8lhRjhQCMcQGuSgGGqDkg/QerRWw9CmGRT0iSOVRSZJ29NMLZgIzqaljJ63oaL4NIJQ==
+
+path-exists@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/path-exists/-/path-exists-4.0.0.tgz#513bdbe2d3b95d7762e8c1137efa195c6c61b5b3"
+ integrity sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==
+
+path-is-absolute@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/path-is-absolute/-/path-is-absolute-1.0.1.tgz#174b9268735534ffbc7ace6bf53a5a9e1b5c5f5f"
+ integrity sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==
+
+path-is-inside@1.0.2:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/path-is-inside/-/path-is-inside-1.0.2.tgz#365417dede44430d1c11af61027facf074bdfc53"
+ integrity sha512-DUWJr3+ULp4zXmol/SZkFf3JGsS9/SIv+Y3Rt93/UjPpDpklB5f1er4O3POIbUuUJ3FXgqte2Q7SrU6zAqwk8w==
+
+path-key@^3.0.0, path-key@^3.1.0:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/path-key/-/path-key-3.1.1.tgz#581f6ade658cbba65a0d3380de7753295054f375"
+ integrity sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==
+
+path-parse@^1.0.7:
+ version "1.0.7"
+ resolved "https://registry.yarnpkg.com/path-parse/-/path-parse-1.0.7.tgz#fbc114b60ca42b30d9daf5858e4bd68bbedb6735"
+ integrity sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==
+
+path-to-regexp@0.1.7:
+ version "0.1.7"
+ resolved "https://registry.yarnpkg.com/path-to-regexp/-/path-to-regexp-0.1.7.tgz#df604178005f522f15eb4490e7247a1bfaa67f8c"
+ integrity sha512-5DFkuoqlv1uYQKxy8omFBeJPQcdoE07Kv2sferDCrAq1ohOU+MSDswDIbnx3YAM60qIOnYa53wBhXW0EbMonrQ==
+
+path-to-regexp@2.2.1:
+ version "2.2.1"
+ resolved "https://registry.yarnpkg.com/path-to-regexp/-/path-to-regexp-2.2.1.tgz#90b617025a16381a879bc82a38d4e8bdeb2bcf45"
+ integrity sha512-gu9bD6Ta5bwGrrU8muHzVOBFFREpp2iRkVfhBJahwJ6p6Xw20SjT0MxLnwkjOibQmGSYhiUnf2FLe7k+jcFmGQ==
+
+path-to-regexp@^1.7.0:
+ version "1.8.0"
+ resolved "https://registry.yarnpkg.com/path-to-regexp/-/path-to-regexp-1.8.0.tgz#887b3ba9d84393e87a0a0b9f4cb756198b53548a"
+ integrity sha512-n43JRhlUKUAlibEJhPeir1ncUID16QnEjNpwzNdO3Lm4ywrBpBZ5oLD0I6br9evr1Y9JTqwRtAh7JLoOzAQdVA==
+ dependencies:
+ isarray "0.0.1"
+
+path-type@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/path-type/-/path-type-3.0.0.tgz#cef31dc8e0a1a3bb0d105c0cd97cf3bf47f4e36f"
+ integrity sha512-T2ZUsdZFHgA3u4e5PfPbjd7HDDpxPnQb5jN0SrDsjNSuVXHJqtwTnWqG0B1jZrgmJ/7lj1EmVIByWt1gxGkWvg==
+ dependencies:
+ pify "^3.0.0"
+
+path-type@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/path-type/-/path-type-4.0.0.tgz#84ed01c0a7ba380afe09d90a8c180dcd9d03043b"
+ integrity sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==
+
+picocolors@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/picocolors/-/picocolors-1.0.0.tgz#cb5bdc74ff3f51892236eaf79d68bc44564ab81c"
+ integrity sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ==
+
+picomatch@^2.0.4, picomatch@^2.2.1, picomatch@^2.3.1:
+ version "2.3.1"
+ resolved "https://registry.yarnpkg.com/picomatch/-/picomatch-2.3.1.tgz#3ba3833733646d9d3e4995946c1365a67fb07a42"
+ integrity sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==
+
+pify@^2.3.0:
+ version "2.3.0"
+ resolved "https://registry.yarnpkg.com/pify/-/pify-2.3.0.tgz#ed141a6ac043a849ea588498e7dca8b15330e90c"
+ integrity sha512-udgsAY+fTnvv7kI7aaxbqwWNb0AHiB0qBO89PZKPkoTmGOgdbrHDKD+0B2X4uTfJ/FT1R09r9gTsjUjNJotuog==
+
+pify@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/pify/-/pify-3.0.0.tgz#e5a4acd2c101fdf3d9a4d07f0dbc4db49dd28176"
+ integrity sha512-C3FsVNH1udSEX48gGX1xfvwTWfsYWj5U+8/uK15BGzIGrKoUpghX8hWZwa/OFnakBiiVNmBvemTJR5mcy7iPcg==
+
+pify@^4.0.1:
+ version "4.0.1"
+ resolved "https://mirrors.cloud.tencent.com/npm/pify/-/pify-4.0.1.tgz#4b2cd25c50d598735c50292224fd8c6df41e3231"
+ integrity sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==
+
+pkg-dir@^4.1.0:
+ version "4.2.0"
+ resolved "https://registry.yarnpkg.com/pkg-dir/-/pkg-dir-4.2.0.tgz#f099133df7ede422e81d1d8448270eeb3e4261f3"
+ integrity sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==
+ dependencies:
+ find-up "^4.0.0"
+
+pkg-up@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/pkg-up/-/pkg-up-3.1.0.tgz#100ec235cc150e4fd42519412596a28512a0def5"
+ integrity sha512-nDywThFk1i4BQK4twPQ6TA4RT8bDY96yeuCVBWL3ePARCiEKDRSrNGbFIgUJpLp+XeIR65v8ra7WuJOFUBtkMA==
+ dependencies:
+ find-up "^3.0.0"
+
+postcss-calc@^8.2.3:
+ version "8.2.4"
+ resolved "https://registry.yarnpkg.com/postcss-calc/-/postcss-calc-8.2.4.tgz#77b9c29bfcbe8a07ff6693dc87050828889739a5"
+ integrity sha512-SmWMSJmB8MRnnULldx0lQIyhSNvuDl9HfrZkaqqE/WHAhToYsAvDq+yAsA/kIyINDszOp3Rh0GFoNuH5Ypsm3Q==
+ dependencies:
+ postcss-selector-parser "^6.0.9"
+ postcss-value-parser "^4.2.0"
+
+postcss-colormin@^5.3.0:
+ version "5.3.0"
+ resolved "https://registry.yarnpkg.com/postcss-colormin/-/postcss-colormin-5.3.0.tgz#3cee9e5ca62b2c27e84fce63affc0cfb5901956a"
+ integrity sha512-WdDO4gOFG2Z8n4P8TWBpshnL3JpmNmJwdnfP2gbk2qBA8PWwOYcmjmI/t3CmMeL72a7Hkd+x/Mg9O2/0rD54Pg==
+ dependencies:
+ browserslist "^4.16.6"
+ caniuse-api "^3.0.0"
+ colord "^2.9.1"
+ postcss-value-parser "^4.2.0"
+
+postcss-convert-values@^5.1.2:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/postcss-convert-values/-/postcss-convert-values-5.1.2.tgz#31586df4e184c2e8890e8b34a0b9355313f503ab"
+ integrity sha512-c6Hzc4GAv95B7suy4udszX9Zy4ETyMCgFPUDtWjdFTKH1SE9eFY/jEpHSwTH1QPuwxHpWslhckUQWbNRM4ho5g==
+ dependencies:
+ browserslist "^4.20.3"
+ postcss-value-parser "^4.2.0"
+
+postcss-discard-comments@^5.1.2:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/postcss-discard-comments/-/postcss-discard-comments-5.1.2.tgz#8df5e81d2925af2780075840c1526f0660e53696"
+ integrity sha512-+L8208OVbHVF2UQf1iDmRcbdjJkuBF6IS29yBDSiWUIzpYaAhtNl6JYnYm12FnkeCwQqF5LeklOu6rAqgfBZqQ==
+
+postcss-discard-duplicates@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-discard-duplicates/-/postcss-discard-duplicates-5.1.0.tgz#9eb4fe8456706a4eebd6d3b7b777d07bad03e848"
+ integrity sha512-zmX3IoSI2aoenxHV6C7plngHWWhUOV3sP1T8y2ifzxzbtnuhk1EdPwm0S1bIUNaJ2eNbWeGLEwzw8huPD67aQw==
+
+postcss-discard-empty@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/postcss-discard-empty/-/postcss-discard-empty-5.1.1.tgz#e57762343ff7f503fe53fca553d18d7f0c369c6c"
+ integrity sha512-zPz4WljiSuLWsI0ir4Mcnr4qQQ5e1Ukc3i7UfE2XcrwKK2LIPIqE5jxMRxO6GbI3cv//ztXDsXwEWT3BHOGh3A==
+
+postcss-discard-overridden@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-discard-overridden/-/postcss-discard-overridden-5.1.0.tgz#7e8c5b53325747e9d90131bb88635282fb4a276e"
+ integrity sha512-21nOL7RqWR1kasIVdKs8HNqQJhFxLsyRfAnUDm4Fe4t4mCWL9OJiHvlHPjcd8zc5Myu89b/7wZDnOSjFgeWRtw==
+
+postcss-discard-unused@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-discard-unused/-/postcss-discard-unused-5.1.0.tgz#8974e9b143d887677304e558c1166d3762501142"
+ integrity sha512-KwLWymI9hbwXmJa0dkrzpRbSJEh0vVUd7r8t0yOGPcfKzyJJxFM8kLyC5Ev9avji6nY95pOp1W6HqIrfT+0VGw==
+ dependencies:
+ postcss-selector-parser "^6.0.5"
+
+postcss-loader@^7.0.0:
+ version "7.0.1"
+ resolved "https://registry.yarnpkg.com/postcss-loader/-/postcss-loader-7.0.1.tgz#4c883cc0a1b2bfe2074377b7a74c1cd805684395"
+ integrity sha512-VRviFEyYlLjctSM93gAZtcJJ/iSkPZ79zWbN/1fSH+NisBByEiVLqpdVDrPLVSi8DX0oJo12kL/GppTBdKVXiQ==
+ dependencies:
+ cosmiconfig "^7.0.0"
+ klona "^2.0.5"
+ semver "^7.3.7"
+
+postcss-merge-idents@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/postcss-merge-idents/-/postcss-merge-idents-5.1.1.tgz#7753817c2e0b75d0853b56f78a89771e15ca04a1"
+ integrity sha512-pCijL1TREiCoog5nQp7wUe+TUonA2tC2sQ54UGeMmryK3UFGIYKqDyjnqd6RcuI4znFn9hWSLNN8xKE/vWcUQw==
+ dependencies:
+ cssnano-utils "^3.1.0"
+ postcss-value-parser "^4.2.0"
+
+postcss-merge-longhand@^5.1.6:
+ version "5.1.6"
+ resolved "https://registry.yarnpkg.com/postcss-merge-longhand/-/postcss-merge-longhand-5.1.6.tgz#f378a8a7e55766b7b644f48e5d8c789ed7ed51ce"
+ integrity sha512-6C/UGF/3T5OE2CEbOuX7iNO63dnvqhGZeUnKkDeifebY0XqkkvrctYSZurpNE902LDf2yKwwPFgotnfSoPhQiw==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+ stylehacks "^5.1.0"
+
+postcss-merge-rules@^5.1.2:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/postcss-merge-rules/-/postcss-merge-rules-5.1.2.tgz#7049a14d4211045412116d79b751def4484473a5"
+ integrity sha512-zKMUlnw+zYCWoPN6yhPjtcEdlJaMUZ0WyVcxTAmw3lkkN/NDMRkOkiuctQEoWAOvH7twaxUUdvBWl0d4+hifRQ==
+ dependencies:
+ browserslist "^4.16.6"
+ caniuse-api "^3.0.0"
+ cssnano-utils "^3.1.0"
+ postcss-selector-parser "^6.0.5"
+
+postcss-minify-font-values@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-minify-font-values/-/postcss-minify-font-values-5.1.0.tgz#f1df0014a726083d260d3bd85d7385fb89d1f01b"
+ integrity sha512-el3mYTgx13ZAPPirSVsHqFzl+BBBDrXvbySvPGFnQcTI4iNslrPaFq4muTkLZmKlGk4gyFAYUBMH30+HurREyA==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-minify-gradients@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/postcss-minify-gradients/-/postcss-minify-gradients-5.1.1.tgz#f1fe1b4f498134a5068240c2f25d46fcd236ba2c"
+ integrity sha512-VGvXMTpCEo4qHTNSa9A0a3D+dxGFZCYwR6Jokk+/3oB6flu2/PnPXAh2x7x52EkY5xlIHLm+Le8tJxe/7TNhzw==
+ dependencies:
+ colord "^2.9.1"
+ cssnano-utils "^3.1.0"
+ postcss-value-parser "^4.2.0"
+
+postcss-minify-params@^5.1.3:
+ version "5.1.3"
+ resolved "https://registry.yarnpkg.com/postcss-minify-params/-/postcss-minify-params-5.1.3.tgz#ac41a6465be2db735099bbd1798d85079a6dc1f9"
+ integrity sha512-bkzpWcjykkqIujNL+EVEPOlLYi/eZ050oImVtHU7b4lFS82jPnsCb44gvC6pxaNt38Els3jWYDHTjHKf0koTgg==
+ dependencies:
+ browserslist "^4.16.6"
+ cssnano-utils "^3.1.0"
+ postcss-value-parser "^4.2.0"
+
+postcss-minify-selectors@^5.2.1:
+ version "5.2.1"
+ resolved "https://registry.yarnpkg.com/postcss-minify-selectors/-/postcss-minify-selectors-5.2.1.tgz#d4e7e6b46147b8117ea9325a915a801d5fe656c6"
+ integrity sha512-nPJu7OjZJTsVUmPdm2TcaiohIwxP+v8ha9NehQ2ye9szv4orirRU3SDdtUmKH+10nzn0bAyOXZ0UEr7OpvLehg==
+ dependencies:
+ postcss-selector-parser "^6.0.5"
+
+postcss-modules-extract-imports@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/postcss-modules-extract-imports/-/postcss-modules-extract-imports-3.0.0.tgz#cda1f047c0ae80c97dbe28c3e76a43b88025741d"
+ integrity sha512-bdHleFnP3kZ4NYDhuGlVK+CMrQ/pqUm8bx/oGL93K6gVwiclvX5x0n76fYMKuIGKzlABOy13zsvqjb0f92TEXw==
+
+postcss-modules-local-by-default@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/postcss-modules-local-by-default/-/postcss-modules-local-by-default-4.0.0.tgz#ebbb54fae1598eecfdf691a02b3ff3b390a5a51c"
+ integrity sha512-sT7ihtmGSF9yhm6ggikHdV0hlziDTX7oFoXtuVWeDd3hHObNkcHRo9V3yg7vCAY7cONyxJC/XXCmmiHHcvX7bQ==
+ dependencies:
+ icss-utils "^5.0.0"
+ postcss-selector-parser "^6.0.2"
+ postcss-value-parser "^4.1.0"
+
+postcss-modules-scope@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/postcss-modules-scope/-/postcss-modules-scope-3.0.0.tgz#9ef3151456d3bbfa120ca44898dfca6f2fa01f06"
+ integrity sha512-hncihwFA2yPath8oZ15PZqvWGkWf+XUfQgUGamS4LqoP1anQLOsOJw0vr7J7IwLpoY9fatA2qiGUGmuZL0Iqlg==
+ dependencies:
+ postcss-selector-parser "^6.0.4"
+
+postcss-modules-values@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/postcss-modules-values/-/postcss-modules-values-4.0.0.tgz#d7c5e7e68c3bb3c9b27cbf48ca0bb3ffb4602c9c"
+ integrity sha512-RDxHkAiEGI78gS2ofyvCsu7iycRv7oqw5xMWn9iMoR0N/7mf9D50ecQqUo5BZ9Zh2vH4bCUR/ktCqbB9m8vJjQ==
+ dependencies:
+ icss-utils "^5.0.0"
+
+postcss-normalize-charset@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-charset/-/postcss-normalize-charset-5.1.0.tgz#9302de0b29094b52c259e9b2cf8dc0879879f0ed"
+ integrity sha512-mSgUJ+pd/ldRGVx26p2wz9dNZ7ji6Pn8VWBajMXFf8jk7vUoSrZ2lt/wZR7DtlZYKesmZI680qjr2CeFF2fbUg==
+
+postcss-normalize-display-values@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-display-values/-/postcss-normalize-display-values-5.1.0.tgz#72abbae58081960e9edd7200fcf21ab8325c3da8"
+ integrity sha512-WP4KIM4o2dazQXWmFaqMmcvsKmhdINFblgSeRgn8BJ6vxaMyaJkwAzpPpuvSIoG/rmX3M+IrRZEz2H0glrQNEA==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-normalize-positions@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-positions/-/postcss-normalize-positions-5.1.1.tgz#ef97279d894087b59325b45c47f1e863daefbb92"
+ integrity sha512-6UpCb0G4eofTCQLFVuI3EVNZzBNPiIKcA1AKVka+31fTVySphr3VUgAIULBhxZkKgwLImhzMR2Bw1ORK+37INg==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-normalize-repeat-style@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-repeat-style/-/postcss-normalize-repeat-style-5.1.1.tgz#e9eb96805204f4766df66fd09ed2e13545420fb2"
+ integrity sha512-mFpLspGWkQtBcWIRFLmewo8aC3ImN2i/J3v8YCFUwDnPu3Xz4rLohDO26lGjwNsQxB3YF0KKRwspGzE2JEuS0g==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-normalize-string@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-string/-/postcss-normalize-string-5.1.0.tgz#411961169e07308c82c1f8c55f3e8a337757e228"
+ integrity sha512-oYiIJOf4T9T1N4i+abeIc7Vgm/xPCGih4bZz5Nm0/ARVJ7K6xrDlLwvwqOydvyL3RHNf8qZk6vo3aatiw/go3w==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-normalize-timing-functions@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-timing-functions/-/postcss-normalize-timing-functions-5.1.0.tgz#d5614410f8f0b2388e9f240aa6011ba6f52dafbb"
+ integrity sha512-DOEkzJ4SAXv5xkHl0Wa9cZLF3WCBhF3o1SKVxKQAa+0pYKlueTpCgvkFAHfk+Y64ezX9+nITGrDZeVGgITJXjg==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-normalize-unicode@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-unicode/-/postcss-normalize-unicode-5.1.0.tgz#3d23aede35e160089a285e27bf715de11dc9db75"
+ integrity sha512-J6M3MizAAZ2dOdSjy2caayJLQT8E8K9XjLce8AUQMwOrCvjCHv24aLC/Lps1R1ylOfol5VIDMaM/Lo9NGlk1SQ==
+ dependencies:
+ browserslist "^4.16.6"
+ postcss-value-parser "^4.2.0"
+
+postcss-normalize-url@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-url/-/postcss-normalize-url-5.1.0.tgz#ed9d88ca82e21abef99f743457d3729a042adcdc"
+ integrity sha512-5upGeDO+PVthOxSmds43ZeMeZfKH+/DKgGRD7TElkkyS46JXAUhMzIKiCa7BabPeIy3AQcTkXwVVN7DbqsiCew==
+ dependencies:
+ normalize-url "^6.0.1"
+ postcss-value-parser "^4.2.0"
+
+postcss-normalize-whitespace@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/postcss-normalize-whitespace/-/postcss-normalize-whitespace-5.1.1.tgz#08a1a0d1ffa17a7cc6efe1e6c9da969cc4493cfa"
+ integrity sha512-83ZJ4t3NUDETIHTa3uEg6asWjSBYL5EdkVB0sDncx9ERzOKBVJIUeDO9RyA9Zwtig8El1d79HBp0JEi8wvGQnA==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-ordered-values@^5.1.3:
+ version "5.1.3"
+ resolved "https://registry.yarnpkg.com/postcss-ordered-values/-/postcss-ordered-values-5.1.3.tgz#b6fd2bd10f937b23d86bc829c69e7732ce76ea38"
+ integrity sha512-9UO79VUhPwEkzbb3RNpqqghc6lcYej1aveQteWY+4POIwlqkYE21HKWaLDF6lWNuqCobEAyTovVhtI32Rbv2RQ==
+ dependencies:
+ cssnano-utils "^3.1.0"
+ postcss-value-parser "^4.2.0"
+
+postcss-reduce-idents@^5.2.0:
+ version "5.2.0"
+ resolved "https://registry.yarnpkg.com/postcss-reduce-idents/-/postcss-reduce-idents-5.2.0.tgz#c89c11336c432ac4b28792f24778859a67dfba95"
+ integrity sha512-BTrLjICoSB6gxbc58D5mdBK8OhXRDqud/zodYfdSi52qvDHdMwk+9kB9xsM8yJThH/sZU5A6QVSmMmaN001gIg==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-reduce-initial@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-reduce-initial/-/postcss-reduce-initial-5.1.0.tgz#fc31659ea6e85c492fb2a7b545370c215822c5d6"
+ integrity sha512-5OgTUviz0aeH6MtBjHfbr57tml13PuedK/Ecg8szzd4XRMbYxH4572JFG067z+FqBIf6Zp/d+0581glkvvWMFw==
+ dependencies:
+ browserslist "^4.16.6"
+ caniuse-api "^3.0.0"
+
+postcss-reduce-transforms@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-reduce-transforms/-/postcss-reduce-transforms-5.1.0.tgz#333b70e7758b802f3dd0ddfe98bb1ccfef96b6e9"
+ integrity sha512-2fbdbmgir5AvpW9RLtdONx1QoYG2/EtqpNQbFASDlixBbAYuTcJ0dECwlqNqH7VbaUnEnh8SrxOe2sRIn24XyQ==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+
+postcss-selector-parser@^6.0.2, postcss-selector-parser@^6.0.4, postcss-selector-parser@^6.0.5, postcss-selector-parser@^6.0.9:
+ version "6.0.10"
+ resolved "https://registry.yarnpkg.com/postcss-selector-parser/-/postcss-selector-parser-6.0.10.tgz#79b61e2c0d1bfc2602d549e11d0876256f8df88d"
+ integrity sha512-IQ7TZdoaqbT+LCpShg46jnZVlhWD2w6iQYAcYXfHARZ7X1t/UGhhceQDs5X0cGqKvYlHNOuv7Oa1xmb0oQuA3w==
+ dependencies:
+ cssesc "^3.0.0"
+ util-deprecate "^1.0.2"
+
+postcss-sort-media-queries@^4.2.1:
+ version "4.3.0"
+ resolved "https://registry.yarnpkg.com/postcss-sort-media-queries/-/postcss-sort-media-queries-4.3.0.tgz#f48a77d6ce379e86676fc3f140cf1b10a06f6051"
+ integrity sha512-jAl8gJM2DvuIJiI9sL1CuiHtKM4s5aEIomkU8G3LFvbP+p8i7Sz8VV63uieTgoewGqKbi+hxBTiOKJlB35upCg==
+ dependencies:
+ sort-css-media-queries "2.1.0"
+
+postcss-svgo@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-svgo/-/postcss-svgo-5.1.0.tgz#0a317400ced789f233a28826e77523f15857d80d"
+ integrity sha512-D75KsH1zm5ZrHyxPakAxJWtkyXew5qwS70v56exwvw542d9CRtTo78K0WeFxZB4G7JXKKMbEZtZayTGdIky/eA==
+ dependencies:
+ postcss-value-parser "^4.2.0"
+ svgo "^2.7.0"
+
+postcss-unique-selectors@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/postcss-unique-selectors/-/postcss-unique-selectors-5.1.1.tgz#a9f273d1eacd09e9aa6088f4b0507b18b1b541b6"
+ integrity sha512-5JiODlELrz8L2HwxfPnhOWZYWDxVHWL83ufOv84NrcgipI7TaeRsatAhK4Tr2/ZiYldpK/wBvw5BD3qfaK96GA==
+ dependencies:
+ postcss-selector-parser "^6.0.5"
+
+postcss-value-parser@^4.1.0, postcss-value-parser@^4.2.0:
+ version "4.2.0"
+ resolved "https://registry.yarnpkg.com/postcss-value-parser/-/postcss-value-parser-4.2.0.tgz#723c09920836ba6d3e5af019f92bc0971c02e514"
+ integrity sha512-1NNCs6uurfkVbeXG4S8JFT9t19m45ICnif8zWLd5oPSZ50QnwMfK+H3jv408d4jw/7Bttv5axS5IiHoLaVNHeQ==
+
+postcss-zindex@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/postcss-zindex/-/postcss-zindex-5.1.0.tgz#4a5c7e5ff1050bd4c01d95b1847dfdcc58a496ff"
+ integrity sha512-fgFMf0OtVSBR1va1JNHYgMxYk73yhn/qb4uQDq1DLGYolz8gHCyr/sesEuGUaYs58E3ZJRcpoGuPVoB7Meiq9A==
+
+postcss@^8.3.11, postcss@^8.4.13, postcss@^8.4.14, postcss@^8.4.7:
+ version "8.4.16"
+ resolved "https://registry.yarnpkg.com/postcss/-/postcss-8.4.16.tgz#33a1d675fac39941f5f445db0de4db2b6e01d43c"
+ integrity sha512-ipHE1XBvKzm5xI7hiHCZJCSugxvsdq2mPnsq5+UF+VHCjiBvtDrlxJfMBToWaP9D5XlgNmcFGqoHmUn0EYEaRQ==
+ dependencies:
+ nanoid "^3.3.4"
+ picocolors "^1.0.0"
+ source-map-js "^1.0.2"
+
+prepend-http@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/prepend-http/-/prepend-http-2.0.0.tgz#e92434bfa5ea8c19f41cdfd401d741a3c819d897"
+ integrity sha512-ravE6m9Atw9Z/jjttRUZ+clIXogdghyZAuWJ3qEzjT+jI/dL1ifAqhZeC5VHzQp1MSt1+jxKkFNemj/iO7tVUA==
+
+pretty-error@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/pretty-error/-/pretty-error-4.0.0.tgz#90a703f46dd7234adb46d0f84823e9d1cb8f10d6"
+ integrity sha512-AoJ5YMAcXKYxKhuJGdcvse+Voc6v1RgnsR3nWcYU7q4t6z0Q6T86sv5Zq8VIRbOWWFpvdGE83LtdSMNd+6Y0xw==
+ dependencies:
+ lodash "^4.17.20"
+ renderkid "^3.0.0"
+
+pretty-time@^1.1.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/pretty-time/-/pretty-time-1.1.0.tgz#ffb7429afabb8535c346a34e41873adf3d74dd0e"
+ integrity sha512-28iF6xPQrP8Oa6uxE6a1biz+lWeTOAPKggvjB8HAs6nVMKZwf5bG++632Dx614hIWgUPkgivRfG+a8uAXGTIbA==
+
+prism-react-renderer@^1.3.5:
+ version "1.3.5"
+ resolved "https://registry.yarnpkg.com/prism-react-renderer/-/prism-react-renderer-1.3.5.tgz#786bb69aa6f73c32ba1ee813fbe17a0115435085"
+ integrity sha512-IJ+MSwBWKG+SM3b2SUfdrhC+gu01QkV2KmRQgREThBfSQRoufqRfxfHUxpG1WcaFjP+kojcFyO9Qqtpgt3qLCg==
+
+prismjs@^1.28.0:
+ version "1.29.0"
+ resolved "https://registry.yarnpkg.com/prismjs/-/prismjs-1.29.0.tgz#f113555a8fa9b57c35e637bba27509dcf802dd12"
+ integrity sha512-Kx/1w86q/epKcmte75LNrEoT+lX8pBpavuAbvJWRXar7Hz8jrtF+e3vY751p0R8H9HdArwaCTNDDzHg/ScJK1Q==
+
+process-nextick-args@~2.0.0:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/process-nextick-args/-/process-nextick-args-2.0.1.tgz#7820d9b16120cc55ca9ae7792680ae7dba6d7fe2"
+ integrity sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==
+
+promise@^7.1.1:
+ version "7.3.1"
+ resolved "https://registry.yarnpkg.com/promise/-/promise-7.3.1.tgz#064b72602b18f90f29192b8b1bc418ffd1ebd3bf"
+ integrity sha512-nolQXZ/4L+bP/UGlkfaIujX9BKxGwmQ9OT4mOt5yvy8iK1h3wqTEJCijzGANTCCl9nWjY41juyAn2K3Q1hLLTg==
+ dependencies:
+ asap "~2.0.3"
+
+prompts@^2.4.2:
+ version "2.4.2"
+ resolved "https://registry.yarnpkg.com/prompts/-/prompts-2.4.2.tgz#7b57e73b3a48029ad10ebd44f74b01722a4cb069"
+ integrity sha512-NxNv/kLguCA7p3jE8oL2aEBsrJWgAakBpgmgK6lpPWV+WuOmY6r2/zbAVnP+T8bQlA0nzHXSJSJW0Hq7ylaD2Q==
+ dependencies:
+ kleur "^3.0.3"
+ sisteransi "^1.0.5"
+
+prop-types@^15.6.2, prop-types@^15.7.2:
+ version "15.8.1"
+ resolved "https://registry.yarnpkg.com/prop-types/-/prop-types-15.8.1.tgz#67d87bf1a694f48435cf332c24af10214a3140b5"
+ integrity sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg==
+ dependencies:
+ loose-envify "^1.4.0"
+ object-assign "^4.1.1"
+ react-is "^16.13.1"
+
+property-information@^5.0.0, property-information@^5.3.0:
+ version "5.6.0"
+ resolved "https://registry.yarnpkg.com/property-information/-/property-information-5.6.0.tgz#61675545fb23002f245c6540ec46077d4da3ed69"
+ integrity sha512-YUHSPk+A30YPv+0Qf8i9Mbfe/C0hdPXk1s1jPVToV8pk8BQtpw10ct89Eo7OWkutrwqvT0eicAxlOg3dOAu8JA==
+ dependencies:
+ xtend "^4.0.0"
+
+proxy-addr@~2.0.7:
+ version "2.0.7"
+ resolved "https://registry.yarnpkg.com/proxy-addr/-/proxy-addr-2.0.7.tgz#f19fe69ceab311eeb94b42e70e8c2070f9ba1025"
+ integrity sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==
+ dependencies:
+ forwarded "0.2.0"
+ ipaddr.js "1.9.1"
+
+prr@~1.0.1:
+ version "1.0.1"
+ resolved "https://mirrors.cloud.tencent.com/npm/prr/-/prr-1.0.1.tgz#d3fc114ba06995a45ec6893f484ceb1d78f5f476"
+ integrity sha1-0/wRS6BplaRexok/SEzrHXj19HY=
+
+pump@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/pump/-/pump-3.0.0.tgz#b4a2116815bde2f4e1ea602354e8c75565107a64"
+ integrity sha512-LwZy+p3SFs1Pytd/jYct4wpv49HiYCqd9Rlc5ZVdk0V+8Yzv6jR5Blk3TRmPL1ft69TxP0IMZGJ+WPFU2BFhww==
+ dependencies:
+ end-of-stream "^1.1.0"
+ once "^1.3.1"
+
+punycode@^1.3.2:
+ version "1.4.1"
+ resolved "https://registry.yarnpkg.com/punycode/-/punycode-1.4.1.tgz#c0d5a63b2718800ad8e1eb0fa5269c84dd41845e"
+ integrity sha512-jmYNElW7yvO7TV33CjSmvSiE2yco3bV2czu/OzDKdMNVZQWfxCblURLhf+47syQRBntjfLdd/H0egrzIG+oaFQ==
+
+punycode@^2.1.0:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/punycode/-/punycode-2.1.1.tgz#b58b010ac40c22c5657616c8d2c2c02c7bf479ec"
+ integrity sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==
+
+pupa@^2.1.1:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/pupa/-/pupa-2.1.1.tgz#f5e8fd4afc2c5d97828faa523549ed8744a20d62"
+ integrity sha512-l1jNAspIBSFqbT+y+5FosojNpVpF94nlI+wDUpqP9enwOTfHx9f0gh5nB96vl+6yTpsJsypeNrwfzPrKuHB41A==
+ dependencies:
+ escape-goat "^2.0.0"
+
+pure-color@^1.2.0:
+ version "1.3.0"
+ resolved "https://registry.yarnpkg.com/pure-color/-/pure-color-1.3.0.tgz#1fe064fb0ac851f0de61320a8bf796836422f33e"
+ integrity sha512-QFADYnsVoBMw1srW7OVKEYjG+MbIa49s54w1MA1EDY6r2r/sTcKKYqRX1f4GYvnXP7eN/Pe9HFcX+hwzmrXRHA==
+
+q@^1.5.1:
+ version "1.5.1"
+ resolved "https://registry.yarnpkg.com/q/-/q-1.5.1.tgz#7e32f75b41381291d04611f1bf14109ac00651d7"
+ integrity sha512-kV/CThkXo6xyFEZUugw/+pIOywXcDbFYgSct5cT3gqlbkBE1SJdwy6UQoZvodiWF/ckQLZyDE/Bu1M6gVu5lVw==
+
+qs@6.10.3:
+ version "6.10.3"
+ resolved "https://registry.yarnpkg.com/qs/-/qs-6.10.3.tgz#d6cde1b2ffca87b5aa57889816c5f81535e22e8e"
+ integrity sha512-wr7M2E0OFRfIfJZjKGieI8lBKb7fRCH4Fv5KNPEs7gJ8jadvotdsS08PzOKR7opXhZ/Xkjtt3WF9g38drmyRqQ==
+ dependencies:
+ side-channel "^1.0.4"
+
+queue-microtask@^1.2.2:
+ version "1.2.3"
+ resolved "https://registry.yarnpkg.com/queue-microtask/-/queue-microtask-1.2.3.tgz#4929228bbc724dfac43e0efb058caf7b6cfb6243"
+ integrity sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==
+
+queue@6.0.2:
+ version "6.0.2"
+ resolved "https://registry.yarnpkg.com/queue/-/queue-6.0.2.tgz#b91525283e2315c7553d2efa18d83e76432fed65"
+ integrity sha512-iHZWu+q3IdFZFX36ro/lKBkSvfkztY5Y7HMiPlOUjhupPcG2JMfst2KKEpu5XndviX/3UhFbRngUPNKtgvtZiA==
+ dependencies:
+ inherits "~2.0.3"
+
+quick-lru@^4.0.1:
+ version "4.0.1"
+ resolved "https://registry.yarnpkg.com/quick-lru/-/quick-lru-4.0.1.tgz#5b8878f113a58217848c6482026c73e1ba57727f"
+ integrity sha512-ARhCpm70fzdcvNQfPoy49IaanKkTlRWF2JMzqhcJbhSFRZv7nPTvZJdcY7301IPmvW+/p0RgIWnQDLJxifsQ7g==
+
+randombytes@^2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/randombytes/-/randombytes-2.1.0.tgz#df6f84372f0270dc65cdf6291349ab7a473d4f2a"
+ integrity sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==
+ dependencies:
+ safe-buffer "^5.1.0"
+
+range-parser@1.2.0:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/range-parser/-/range-parser-1.2.0.tgz#f49be6b487894ddc40dcc94a322f611092e00d5e"
+ integrity sha512-kA5WQoNVo4t9lNx2kQNFCxKeBl5IbbSNBl1M/tLkw9WCn+hxNBAW5Qh8gdhs63CJnhjJ2zQWFoqPJP2sK1AV5A==
+
+range-parser@^1.2.1, range-parser@~1.2.1:
+ version "1.2.1"
+ resolved "https://registry.yarnpkg.com/range-parser/-/range-parser-1.2.1.tgz#3cf37023d199e1c24d1a55b84800c2f3e6468031"
+ integrity sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==
+
+raw-body@2.5.1:
+ version "2.5.1"
+ resolved "https://registry.yarnpkg.com/raw-body/-/raw-body-2.5.1.tgz#fe1b1628b181b700215e5fd42389f98b71392857"
+ integrity sha512-qqJBtEyVgS0ZmPGdCFPWJ3FreoqvG4MVQln/kCgF7Olq95IbOp0/BWyMwbdtn4VTvkM8Y7khCQ2Xgk/tcrCXig==
+ dependencies:
+ bytes "3.1.2"
+ http-errors "2.0.0"
+ iconv-lite "0.4.24"
+ unpipe "1.0.0"
+
+rc@1.2.8, rc@^1.2.8:
+ version "1.2.8"
+ resolved "https://registry.yarnpkg.com/rc/-/rc-1.2.8.tgz#cd924bf5200a075b83c188cd6b9e211b7fc0d3ed"
+ integrity sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==
+ dependencies:
+ deep-extend "^0.6.0"
+ ini "~1.3.0"
+ minimist "^1.2.0"
+ strip-json-comments "~2.0.1"
+
+react-base16-styling@^0.6.0:
+ version "0.6.0"
+ resolved "https://registry.yarnpkg.com/react-base16-styling/-/react-base16-styling-0.6.0.tgz#ef2156d66cf4139695c8a167886cb69ea660792c"
+ integrity sha512-yvh/7CArceR/jNATXOKDlvTnPKPmGZz7zsenQ3jUwLzHkNUR0CvY3yGYJbWJ/nnxsL8Sgmt5cO3/SILVuPO6TQ==
+ dependencies:
+ base16 "^1.0.0"
+ lodash.curry "^4.0.1"
+ lodash.flow "^3.3.0"
+ pure-color "^1.2.0"
+
+react-dev-utils@^12.0.1:
+ version "12.0.1"
+ resolved "https://registry.yarnpkg.com/react-dev-utils/-/react-dev-utils-12.0.1.tgz#ba92edb4a1f379bd46ccd6bcd4e7bc398df33e73"
+ integrity sha512-84Ivxmr17KjUupyqzFode6xKhjwuEJDROWKJy/BthkL7Wn6NJ8h4WE6k/exAv6ImS+0oZLRRW5j/aINMHyeGeQ==
+ dependencies:
+ "@babel/code-frame" "^7.16.0"
+ address "^1.1.2"
+ browserslist "^4.18.1"
+ chalk "^4.1.2"
+ cross-spawn "^7.0.3"
+ detect-port-alt "^1.1.6"
+ escape-string-regexp "^4.0.0"
+ filesize "^8.0.6"
+ find-up "^5.0.0"
+ fork-ts-checker-webpack-plugin "^6.5.0"
+ global-modules "^2.0.0"
+ globby "^11.0.4"
+ gzip-size "^6.0.0"
+ immer "^9.0.7"
+ is-root "^2.1.0"
+ loader-utils "^3.2.0"
+ open "^8.4.0"
+ pkg-up "^3.1.0"
+ prompts "^2.4.2"
+ react-error-overlay "^6.0.11"
+ recursive-readdir "^2.2.2"
+ shell-quote "^1.7.3"
+ strip-ansi "^6.0.1"
+ text-table "^0.2.0"
+
+react-dom@^17.0.2:
+ version "17.0.2"
+ resolved "https://registry.yarnpkg.com/react-dom/-/react-dom-17.0.2.tgz#ecffb6845e3ad8dbfcdc498f0d0a939736502c23"
+ integrity sha512-s4h96KtLDUQlsENhMn1ar8t2bEa+q/YAtj8pPPdIjPDGBDIVNsrD9aXNWqspUe6AzKCIG0C1HZZLqLV7qpOBGA==
+ dependencies:
+ loose-envify "^1.1.0"
+ object-assign "^4.1.1"
+ scheduler "^0.20.2"
+
+react-error-overlay@^6.0.11:
+ version "6.0.11"
+ resolved "https://registry.yarnpkg.com/react-error-overlay/-/react-error-overlay-6.0.11.tgz#92835de5841c5cf08ba00ddd2d677b6d17ff9adb"
+ integrity sha512-/6UZ2qgEyH2aqzYZgQPxEnz33NJ2gNsnHA2o5+o4wW9bLM/JYQitNP9xPhsXwC08hMMovfGe/8retsdDsczPRg==
+
+react-fast-compare@^3.2.0:
+ version "3.2.0"
+ resolved "https://registry.yarnpkg.com/react-fast-compare/-/react-fast-compare-3.2.0.tgz#641a9da81b6a6320f270e89724fb45a0b39e43bb"
+ integrity sha512-rtGImPZ0YyLrscKI9xTpV8psd6I8VAtjKCzQDlzyDvqJA8XOW78TXYQwNRNd8g8JZnDu8q9Fu/1v4HPAVwVdHA==
+
+react-helmet-async@*, react-helmet-async@^1.3.0:
+ version "1.3.0"
+ resolved "https://registry.yarnpkg.com/react-helmet-async/-/react-helmet-async-1.3.0.tgz#7bd5bf8c5c69ea9f02f6083f14ce33ef545c222e"
+ integrity sha512-9jZ57/dAn9t3q6hneQS0wukqC2ENOBgMNVEhb/ZG9ZSxUetzVIw4iAmEU38IaVg3QGYauQPhSeUTuIUtFglWpg==
+ dependencies:
+ "@babel/runtime" "^7.12.5"
+ invariant "^2.2.4"
+ prop-types "^15.7.2"
+ react-fast-compare "^3.2.0"
+ shallowequal "^1.1.0"
+
+react-is@^16.13.1, react-is@^16.6.0, react-is@^16.7.0:
+ version "16.13.1"
+ resolved "https://registry.yarnpkg.com/react-is/-/react-is-16.13.1.tgz#789729a4dc36de2999dc156dd6c1d9c18cea56a4"
+ integrity sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==
+
+react-json-view@^1.21.3:
+ version "1.21.3"
+ resolved "https://registry.yarnpkg.com/react-json-view/-/react-json-view-1.21.3.tgz#f184209ee8f1bf374fb0c41b0813cff54549c475"
+ integrity sha512-13p8IREj9/x/Ye4WI/JpjhoIwuzEgUAtgJZNBJckfzJt1qyh24BdTm6UQNGnyTq9dapQdrqvquZTo3dz1X6Cjw==
+ dependencies:
+ flux "^4.0.1"
+ react-base16-styling "^0.6.0"
+ react-lifecycles-compat "^3.0.4"
+ react-textarea-autosize "^8.3.2"
+
+react-lifecycles-compat@^3.0.4:
+ version "3.0.4"
+ resolved "https://registry.yarnpkg.com/react-lifecycles-compat/-/react-lifecycles-compat-3.0.4.tgz#4f1a273afdfc8f3488a8c516bfda78f872352362"
+ integrity sha512-fBASbA6LnOU9dOU2eW7aQ8xmYBSXUIWr+UmF9b1efZBazGNO+rcXT/icdKnYm2pTwcRylVUYwW7H1PHfLekVzA==
+
+react-loadable-ssr-addon-v5-slorber@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/react-loadable-ssr-addon-v5-slorber/-/react-loadable-ssr-addon-v5-slorber-1.0.1.tgz#2cdc91e8a744ffdf9e3556caabeb6e4278689883"
+ integrity sha512-lq3Lyw1lGku8zUEJPDxsNm1AfYHBrO9Y1+olAYwpUJ2IGFBskM0DMKok97A6LWUpHm+o7IvQBOWu9MLenp9Z+A==
+ dependencies:
+ "@babel/runtime" "^7.10.3"
+
+react-particles@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/react-particles/-/react-particles-2.5.3.tgz#eb6d50e98054c74bc523aacb7cfa37d1c106c237"
+ integrity sha512-TREHHCvJiYYFuuJ43zY3HBafIt2CrD4JK9Nm4BZ+IoWgumEptUR7xZDeYGqjjsR4Dih4zHG087iMusHKO4lNxw==
+ dependencies:
+ fast-deep-equal "^3.1.3"
+ tsparticles-engine "^2.5.2"
+
+react-router-config@^5.1.1:
+ version "5.1.1"
+ resolved "https://registry.yarnpkg.com/react-router-config/-/react-router-config-5.1.1.tgz#0f4263d1a80c6b2dc7b9c1902c9526478194a988"
+ integrity sha512-DuanZjaD8mQp1ppHjgnnUnyOlqYXZVjnov/JzFhjLEwd3Z4dYjMSnqrEzzGThH47vpCOqPPwJM2FtthLeJ8Pbg==
+ dependencies:
+ "@babel/runtime" "^7.1.2"
+
+react-router-dom@^5.3.3:
+ version "5.3.3"
+ resolved "https://registry.yarnpkg.com/react-router-dom/-/react-router-dom-5.3.3.tgz#8779fc28e6691d07afcaf98406d3812fe6f11199"
+ integrity sha512-Ov0tGPMBgqmbu5CDmN++tv2HQ9HlWDuWIIqn4b88gjlAN5IHI+4ZUZRcpz9Hl0azFIwihbLDYw1OiHGRo7ZIng==
+ dependencies:
+ "@babel/runtime" "^7.12.13"
+ history "^4.9.0"
+ loose-envify "^1.3.1"
+ prop-types "^15.6.2"
+ react-router "5.3.3"
+ tiny-invariant "^1.0.2"
+ tiny-warning "^1.0.0"
+
+react-router@5.3.3, react-router@^5.3.3:
+ version "5.3.3"
+ resolved "https://registry.yarnpkg.com/react-router/-/react-router-5.3.3.tgz#8e3841f4089e728cf82a429d92cdcaa5e4a3a288"
+ integrity sha512-mzQGUvS3bM84TnbtMYR8ZjKnuPJ71IjSzR+DE6UkUqvN4czWIqEs17yLL8xkAycv4ev0AiN+IGrWu88vJs/p2w==
+ dependencies:
+ "@babel/runtime" "^7.12.13"
+ history "^4.9.0"
+ hoist-non-react-statics "^3.1.0"
+ loose-envify "^1.3.1"
+ mini-create-react-context "^0.4.0"
+ path-to-regexp "^1.7.0"
+ prop-types "^15.6.2"
+ react-is "^16.6.0"
+ tiny-invariant "^1.0.2"
+ tiny-warning "^1.0.0"
+
+react-textarea-autosize@^8.3.2:
+ version "8.3.4"
+ resolved "https://registry.yarnpkg.com/react-textarea-autosize/-/react-textarea-autosize-8.3.4.tgz#270a343de7ad350534141b02c9cb78903e553524"
+ integrity sha512-CdtmP8Dc19xL8/R6sWvtknD/eCXkQr30dtvC4VmGInhRsfF8X/ihXCq6+9l9qbxmKRiq407/7z5fxE7cVWQNgQ==
+ dependencies:
+ "@babel/runtime" "^7.10.2"
+ use-composed-ref "^1.3.0"
+ use-latest "^1.2.1"
+
+react@^17.0.2:
+ version "17.0.2"
+ resolved "https://registry.yarnpkg.com/react/-/react-17.0.2.tgz#d0b5cc516d29eb3eee383f75b62864cfb6800037"
+ integrity sha512-gnhPt75i/dq/z3/6q/0asP78D0u592D5L1pd7M8P+dck6Fu/jJeL6iVVK23fptSUZj8Vjf++7wXA8UNclGQcbA==
+ dependencies:
+ loose-envify "^1.1.0"
+ object-assign "^4.1.1"
+
+read-pkg-up@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/read-pkg-up/-/read-pkg-up-3.0.0.tgz#3ed496685dba0f8fe118d0691dc51f4a1ff96f07"
+ integrity sha512-YFzFrVvpC6frF1sz8psoHDBGF7fLPc+llq/8NB43oagqWkx8ar5zYtsTORtOjw9W2RHLpWP+zTWwBvf1bCmcSw==
+ dependencies:
+ find-up "^2.0.0"
+ read-pkg "^3.0.0"
+
+read-pkg-up@^7.0.1:
+ version "7.0.1"
+ resolved "https://registry.yarnpkg.com/read-pkg-up/-/read-pkg-up-7.0.1.tgz#f3a6135758459733ae2b95638056e1854e7ef507"
+ integrity sha512-zK0TB7Xd6JpCLmlLmufqykGE+/TlOePD6qKClNW7hHDKFh/J7/7gCWGR7joEQEW1bKq3a3yUZSObOoWLFQ4ohg==
+ dependencies:
+ find-up "^4.1.0"
+ read-pkg "^5.2.0"
+ type-fest "^0.8.1"
+
+read-pkg@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/read-pkg/-/read-pkg-3.0.0.tgz#9cbc686978fee65d16c00e2b19c237fcf6e38389"
+ integrity sha512-BLq/cCO9two+lBgiTYNqD6GdtK8s4NpaWrl6/rCO9w0TUS8oJl7cmToOZfRYllKTISY6nt1U7jQ53brmKqY6BA==
+ dependencies:
+ load-json-file "^4.0.0"
+ normalize-package-data "^2.3.2"
+ path-type "^3.0.0"
+
+read-pkg@^5.2.0:
+ version "5.2.0"
+ resolved "https://registry.yarnpkg.com/read-pkg/-/read-pkg-5.2.0.tgz#7bf295438ca5a33e56cd30e053b34ee7250c93cc"
+ integrity sha512-Ug69mNOpfvKDAc2Q8DRpMjjzdtrnv9HcSMX+4VsZxD1aZ6ZzrIE7rlzXBtWTyhULSMKg076AW6WR5iZpD0JiOg==
+ dependencies:
+ "@types/normalize-package-data" "^2.4.0"
+ normalize-package-data "^2.5.0"
+ parse-json "^5.0.0"
+ type-fest "^0.6.0"
+
+readable-stream@3, readable-stream@^3.0.0, readable-stream@^3.0.2, readable-stream@^3.0.6, readable-stream@^3.4.0:
+ version "3.6.0"
+ resolved "https://registry.yarnpkg.com/readable-stream/-/readable-stream-3.6.0.tgz#337bbda3adc0706bd3e024426a286d4b4b2c9198"
+ integrity sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==
+ dependencies:
+ inherits "^2.0.3"
+ string_decoder "^1.1.1"
+ util-deprecate "^1.0.1"
+
+readable-stream@^2.0.1, readable-stream@~2.3.6:
+ version "2.3.7"
+ resolved "https://registry.yarnpkg.com/readable-stream/-/readable-stream-2.3.7.tgz#1eca1cf711aef814c04f62252a36a62f6cb23b57"
+ integrity sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==
+ dependencies:
+ core-util-is "~1.0.0"
+ inherits "~2.0.3"
+ isarray "~1.0.0"
+ process-nextick-args "~2.0.0"
+ safe-buffer "~5.1.1"
+ string_decoder "~1.1.1"
+ util-deprecate "~1.0.1"
+
+readdirp@~3.6.0:
+ version "3.6.0"
+ resolved "https://registry.yarnpkg.com/readdirp/-/readdirp-3.6.0.tgz#74a370bd857116e245b29cc97340cd431a02a6c7"
+ integrity sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==
+ dependencies:
+ picomatch "^2.2.1"
+
+reading-time@^1.5.0:
+ version "1.5.0"
+ resolved "https://registry.yarnpkg.com/reading-time/-/reading-time-1.5.0.tgz#d2a7f1b6057cb2e169beaf87113cc3411b5bc5bb"
+ integrity sha512-onYyVhBNr4CmAxFsKS7bz+uTLRakypIe4R+5A824vBSkQy/hB3fZepoVEf8OVAxzLvK+H/jm9TzpI3ETSm64Kg==
+
+rechoir@^0.6.2:
+ version "0.6.2"
+ resolved "https://registry.yarnpkg.com/rechoir/-/rechoir-0.6.2.tgz#85204b54dba82d5742e28c96756ef43af50e3384"
+ integrity sha512-HFM8rkZ+i3zrV+4LQjwQ0W+ez98pApMGM3HUrN04j3CqzPOzl9nmP15Y8YXNm8QHGv/eacOVEjqhmWpkRV0NAw==
+ dependencies:
+ resolve "^1.1.6"
+
+recursive-readdir@^2.2.2:
+ version "2.2.2"
+ resolved "https://registry.yarnpkg.com/recursive-readdir/-/recursive-readdir-2.2.2.tgz#9946fb3274e1628de6e36b2f6714953b4845094f"
+ integrity sha512-nRCcW9Sj7NuZwa2XvH9co8NPeXUBhZP7CRKJtU+cS6PW9FpCIFoI5ib0NT1ZrbNuPoRy0ylyCaUL8Gih4LSyFg==
+ dependencies:
+ minimatch "3.0.4"
+
+redent@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/redent/-/redent-3.0.0.tgz#e557b7998316bb53c9f1f56fa626352c6963059f"
+ integrity sha512-6tDA8g98We0zd0GvVeMT9arEOnTw9qM03L9cJXaCjrip1OO764RDBLBfrB4cwzNGDj5OA5ioymC9GkizgWJDUg==
+ dependencies:
+ indent-string "^4.0.0"
+ strip-indent "^3.0.0"
+
+regenerate-unicode-properties@^10.0.1:
+ version "10.0.1"
+ resolved "https://registry.yarnpkg.com/regenerate-unicode-properties/-/regenerate-unicode-properties-10.0.1.tgz#7f442732aa7934a3740c779bb9b3340dccc1fb56"
+ integrity sha512-vn5DU6yg6h8hP/2OkQo3K7uVILvY4iu0oI4t3HFa81UPkhGJwkRwM10JEc3upjdhHjs/k8GJY1sRBhk5sr69Bw==
+ dependencies:
+ regenerate "^1.4.2"
+
+regenerate@^1.4.2:
+ version "1.4.2"
+ resolved "https://registry.yarnpkg.com/regenerate/-/regenerate-1.4.2.tgz#b9346d8827e8f5a32f7ba29637d398b69014848a"
+ integrity sha512-zrceR/XhGYU/d/opr2EKO7aRHUeiBI8qjtfHqADTwZd6Szfy16la6kqD0MIUs5z5hx6AaKa+PixpPrR289+I0A==
+
+regenerator-runtime@^0.13.4:
+ version "0.13.9"
+ resolved "https://registry.yarnpkg.com/regenerator-runtime/-/regenerator-runtime-0.13.9.tgz#8925742a98ffd90814988d7566ad30ca3b263b52"
+ integrity sha512-p3VT+cOEgxFsRRA9X4lkI1E+k2/CtnKtU4gcxyaCUreilL/vqI6CdZ3wxVUx3UOUg+gnUOQQcRI7BmSI656MYA==
+
+regenerator-transform@^0.15.0:
+ version "0.15.0"
+ resolved "https://registry.yarnpkg.com/regenerator-transform/-/regenerator-transform-0.15.0.tgz#cbd9ead5d77fae1a48d957cf889ad0586adb6537"
+ integrity sha512-LsrGtPmbYg19bcPHwdtmXwbW+TqNvtY4riE3P83foeHRroMbH6/2ddFBfab3t7kbzc7v7p4wbkIecHImqt0QNg==
+ dependencies:
+ "@babel/runtime" "^7.8.4"
+
+regexpu-core@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/regexpu-core/-/regexpu-core-5.1.0.tgz#2f8504c3fd0ebe11215783a41541e21c79942c6d"
+ integrity sha512-bb6hk+xWd2PEOkj5It46A16zFMs2mv86Iwpdu94la4S3sJ7C973h2dHpYKwIBGaWSO7cIRJ+UX0IeMaWcO4qwA==
+ dependencies:
+ regenerate "^1.4.2"
+ regenerate-unicode-properties "^10.0.1"
+ regjsgen "^0.6.0"
+ regjsparser "^0.8.2"
+ unicode-match-property-ecmascript "^2.0.0"
+ unicode-match-property-value-ecmascript "^2.0.0"
+
+registry-auth-token@^4.0.0:
+ version "4.2.2"
+ resolved "https://registry.yarnpkg.com/registry-auth-token/-/registry-auth-token-4.2.2.tgz#f02d49c3668884612ca031419491a13539e21fac"
+ integrity sha512-PC5ZysNb42zpFME6D/XlIgtNGdTl8bBOCw90xQLVMpzuuubJKYDWFAEuUNc+Cn8Z8724tg2SDhDRrkVEsqfDMg==
+ dependencies:
+ rc "1.2.8"
+
+registry-url@^5.0.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/registry-url/-/registry-url-5.1.0.tgz#e98334b50d5434b81136b44ec638d9c2009c5009"
+ integrity sha512-8acYXXTI0AkQv6RAOjE3vOaIXZkT9wo4LOFbBKYQEEnnMNBpKqdUrI6S4NT0KPIo/WVvJ5tE/X5LF/TQUf0ekw==
+ dependencies:
+ rc "^1.2.8"
+
+regjsgen@^0.6.0:
+ version "0.6.0"
+ resolved "https://registry.yarnpkg.com/regjsgen/-/regjsgen-0.6.0.tgz#83414c5354afd7d6627b16af5f10f41c4e71808d"
+ integrity sha512-ozE883Uigtqj3bx7OhL1KNbCzGyW2NQZPl6Hs09WTvCuZD5sTI4JY58bkbQWa/Y9hxIsvJ3M8Nbf7j54IqeZbA==
+
+regjsparser@^0.8.2:
+ version "0.8.4"
+ resolved "https://registry.yarnpkg.com/regjsparser/-/regjsparser-0.8.4.tgz#8a14285ffcc5de78c5b95d62bbf413b6bc132d5f"
+ integrity sha512-J3LABycON/VNEu3abOviqGHuB/LOtOQj8SKmfP9anY5GfAVw/SPjwzSjxGjbZXIxbGfqTHtJw58C2Li/WkStmA==
+ dependencies:
+ jsesc "~0.5.0"
+
+relateurl@^0.2.7:
+ version "0.2.7"
+ resolved "https://registry.yarnpkg.com/relateurl/-/relateurl-0.2.7.tgz#54dbf377e51440aca90a4cd274600d3ff2d888a9"
+ integrity sha512-G08Dxvm4iDN3MLM0EsP62EDV9IuhXPR6blNz6Utcp7zyV3tr4HVNINt6MpaRWbxoOHT3Q7YN2P+jaHX8vUbgog==
+
+remark-emoji@^2.2.0:
+ version "2.2.0"
+ resolved "https://registry.yarnpkg.com/remark-emoji/-/remark-emoji-2.2.0.tgz#1c702090a1525da5b80e15a8f963ef2c8236cac7"
+ integrity sha512-P3cj9s5ggsUvWw5fS2uzCHJMGuXYRb0NnZqYlNecewXt8QBU9n5vW3DUUKOhepS8F9CwdMx9B8a3i7pqFWAI5w==
+ dependencies:
+ emoticon "^3.2.0"
+ node-emoji "^1.10.0"
+ unist-util-visit "^2.0.3"
+
+remark-footnotes@2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/remark-footnotes/-/remark-footnotes-2.0.0.tgz#9001c4c2ffebba55695d2dd80ffb8b82f7e6303f"
+ integrity sha512-3Clt8ZMH75Ayjp9q4CorNeyjwIxHFcTkaektplKGl2A1jNGEUey8cKL0ZC5vJwfcD5GFGsNLImLG/NGzWIzoMQ==
+
+remark-mdx@1.6.22:
+ version "1.6.22"
+ resolved "https://registry.yarnpkg.com/remark-mdx/-/remark-mdx-1.6.22.tgz#06a8dab07dcfdd57f3373af7f86bd0e992108bbd"
+ integrity sha512-phMHBJgeV76uyFkH4rvzCftLfKCr2RZuF+/gmVcaKrpsihyzmhXjA0BEMDaPTXG5y8qZOKPVo83NAOX01LPnOQ==
+ dependencies:
+ "@babel/core" "7.12.9"
+ "@babel/helper-plugin-utils" "7.10.4"
+ "@babel/plugin-proposal-object-rest-spread" "7.12.1"
+ "@babel/plugin-syntax-jsx" "7.12.1"
+ "@mdx-js/util" "1.6.22"
+ is-alphabetical "1.0.4"
+ remark-parse "8.0.3"
+ unified "9.2.0"
+
+remark-parse@8.0.3:
+ version "8.0.3"
+ resolved "https://registry.yarnpkg.com/remark-parse/-/remark-parse-8.0.3.tgz#9c62aa3b35b79a486454c690472906075f40c7e1"
+ integrity sha512-E1K9+QLGgggHxCQtLt++uXltxEprmWzNfg+MxpfHsZlrddKzZ/hZyWHDbK3/Ap8HJQqYJRXP+jHczdL6q6i85Q==
+ dependencies:
+ ccount "^1.0.0"
+ collapse-white-space "^1.0.2"
+ is-alphabetical "^1.0.0"
+ is-decimal "^1.0.0"
+ is-whitespace-character "^1.0.0"
+ is-word-character "^1.0.0"
+ markdown-escapes "^1.0.0"
+ parse-entities "^2.0.0"
+ repeat-string "^1.5.4"
+ state-toggle "^1.0.0"
+ trim "0.0.1"
+ trim-trailing-lines "^1.0.0"
+ unherit "^1.0.4"
+ unist-util-remove-position "^2.0.0"
+ vfile-location "^3.0.0"
+ xtend "^4.0.1"
+
+remark-squeeze-paragraphs@4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/remark-squeeze-paragraphs/-/remark-squeeze-paragraphs-4.0.0.tgz#76eb0e085295131c84748c8e43810159c5653ead"
+ integrity sha512-8qRqmL9F4nuLPIgl92XUuxI3pFxize+F1H0e/W3llTk0UsjJaj01+RrirkMw7P21RKe4X6goQhYRSvNWX+70Rw==
+ dependencies:
+ mdast-squeeze-paragraphs "^4.0.0"
+
+renderkid@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/renderkid/-/renderkid-3.0.0.tgz#5fd823e4d6951d37358ecc9a58b1f06836b6268a"
+ integrity sha512-q/7VIQA8lmM1hF+jn+sFSPWGlMkSAeNYcPLmDQx2zzuiDfaLrOmumR8iaUKlenFgh0XRPIUeSPlH3A+AW3Z5pg==
+ dependencies:
+ css-select "^4.1.3"
+ dom-converter "^0.2.0"
+ htmlparser2 "^6.1.0"
+ lodash "^4.17.21"
+ strip-ansi "^6.0.1"
+
+repeat-string@^1.5.4:
+ version "1.6.1"
+ resolved "https://registry.yarnpkg.com/repeat-string/-/repeat-string-1.6.1.tgz#8dcae470e1c88abc2d600fff4a776286da75e637"
+ integrity sha512-PV0dzCYDNfRi1jCDbJzpW7jNNDRuCOG/jI5ctQcGKt/clZD+YcPS3yIlWuTJMmESC8aevCFmWJy5wjAFgNqN6w==
+
+require-directory@^2.1.1:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/require-directory/-/require-directory-2.1.1.tgz#8c64ad5fd30dab1c976e2344ffe7f792a6a6df42"
+ integrity sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==
+
+require-from-string@^2.0.2:
+ version "2.0.2"
+ resolved "https://registry.yarnpkg.com/require-from-string/-/require-from-string-2.0.2.tgz#89a7fdd938261267318eafe14f9c32e598c36909"
+ integrity sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==
+
+"require-like@>= 0.1.1":
+ version "0.1.2"
+ resolved "https://registry.yarnpkg.com/require-like/-/require-like-0.1.2.tgz#ad6f30c13becd797010c468afa775c0c0a6b47fa"
+ integrity sha512-oyrU88skkMtDdauHDuKVrgR+zuItqr6/c//FXzvmxRGMexSDc6hNvJInGW3LL46n+8b50RykrvwSUIIQH2LQ5A==
+
+requires-port@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/requires-port/-/requires-port-1.0.0.tgz#925d2601d39ac485e091cf0da5c6e694dc3dcaff"
+ integrity sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==
+
+resolve-dir@^1.0.0, resolve-dir@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/resolve-dir/-/resolve-dir-1.0.1.tgz#79a40644c362be82f26effe739c9bb5382046f43"
+ integrity sha512-R7uiTjECzvOsWSfdM0QKFNBVFcK27aHOUwdvK53BcW8zqnGdYp0Fbj82cy54+2A4P2tFM22J5kRfe1R+lM/1yg==
+ dependencies:
+ expand-tilde "^2.0.0"
+ global-modules "^1.0.0"
+
+resolve-from@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/resolve-from/-/resolve-from-4.0.0.tgz#4abcd852ad32dd7baabfe9b40e00a36db5f392e6"
+ integrity sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==
+
+resolve-from@^5.0.0:
+ version "5.0.0"
+ resolved "https://registry.yarnpkg.com/resolve-from/-/resolve-from-5.0.0.tgz#c35225843df8f776df21c57557bc087e9dfdfc69"
+ integrity sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==
+
+resolve-global@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/resolve-global/-/resolve-global-1.0.0.tgz#a2a79df4af2ca3f49bf77ef9ddacd322dad19255"
+ integrity sha512-zFa12V4OLtT5XUX/Q4VLvTfBf+Ok0SPc1FNGM/z9ctUdiU618qwKpWnd0CHs3+RqROfyEg/DhuHbMWYqcgljEw==
+ dependencies:
+ global-dirs "^0.1.1"
+
+resolve-pathname@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/resolve-pathname/-/resolve-pathname-3.0.0.tgz#99d02224d3cf263689becbb393bc560313025dcd"
+ integrity sha512-C7rARubxI8bXFNB/hqcp/4iUeIXJhJZvFPFPiSPRnhU5UPxzMFIl+2E6yY6c4k9giDJAhtV+enfA+G89N6Csng==
+
+resolve@^1.1.6, resolve@^1.10.0, resolve@^1.14.2, resolve@^1.3.2:
+ version "1.22.1"
+ resolved "https://registry.yarnpkg.com/resolve/-/resolve-1.22.1.tgz#27cb2ebb53f91abb49470a928bba7558066ac177"
+ integrity sha512-nBpuuYuY5jFsli/JIs1oldw6fOQCBioohqWZg/2hiaOybXOft4lonv85uDOKXdf8rhyK159cxU5cDcK/NKk8zw==
+ dependencies:
+ is-core-module "^2.9.0"
+ path-parse "^1.0.7"
+ supports-preserve-symlinks-flag "^1.0.0"
+
+responselike@^1.0.2:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/responselike/-/responselike-1.0.2.tgz#918720ef3b631c5642be068f15ade5a46f4ba1e7"
+ integrity sha512-/Fpe5guzJk1gPqdJLJR5u7eG/gNY4nImjbRDaVWVMRhne55TCmj2i9Q+54PBRfatRC8v/rIiv9BN0pMd9OV5EQ==
+ dependencies:
+ lowercase-keys "^1.0.0"
+
+restore-cursor@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/restore-cursor/-/restore-cursor-3.1.0.tgz#39f67c54b3a7a58cea5236d95cf0034239631f7e"
+ integrity sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA==
+ dependencies:
+ onetime "^5.1.0"
+ signal-exit "^3.0.2"
+
+retry@^0.13.1:
+ version "0.13.1"
+ resolved "https://registry.yarnpkg.com/retry/-/retry-0.13.1.tgz#185b1587acf67919d63b357349e03537b2484658"
+ integrity sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==
+
+reusify@^1.0.4:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/reusify/-/reusify-1.0.4.tgz#90da382b1e126efc02146e90845a88db12925d76"
+ integrity sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==
+
+rimraf@^3.0.2:
+ version "3.0.2"
+ resolved "https://registry.yarnpkg.com/rimraf/-/rimraf-3.0.2.tgz#f1a5402ba6220ad52cc1282bac1ae3aa49fd061a"
+ integrity sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==
+ dependencies:
+ glob "^7.1.3"
+
+rtl-detect@^1.0.4:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/rtl-detect/-/rtl-detect-1.0.4.tgz#40ae0ea7302a150b96bc75af7d749607392ecac6"
+ integrity sha512-EBR4I2VDSSYr7PkBmFy04uhycIpDKp+21p/jARYXlCSjQksTBQcJ0HFUPOO79EPPH5JS6VAhiIQbycf0O3JAxQ==
+
+rtlcss@^3.5.0:
+ version "3.5.0"
+ resolved "https://registry.yarnpkg.com/rtlcss/-/rtlcss-3.5.0.tgz#c9eb91269827a102bac7ae3115dd5d049de636c3"
+ integrity sha512-wzgMaMFHQTnyi9YOwsx9LjOxYXJPzS8sYnFaKm6R5ysvTkwzHiB0vxnbHwchHQT65PTdBjDG21/kQBWI7q9O7A==
+ dependencies:
+ find-up "^5.0.0"
+ picocolors "^1.0.0"
+ postcss "^8.3.11"
+ strip-json-comments "^3.1.1"
+
+run-async@^2.4.0:
+ version "2.4.1"
+ resolved "https://registry.yarnpkg.com/run-async/-/run-async-2.4.1.tgz#8440eccf99ea3e70bd409d49aab88e10c189a455"
+ integrity sha512-tvVnVv01b8c1RrA6Ep7JkStj85Guv/YrMcwqYQnwjsAS2cTmmPGBBjAjpCW7RrSodNSoE2/qg9O4bceNvUuDgQ==
+
+run-parallel@^1.1.9:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/run-parallel/-/run-parallel-1.2.0.tgz#66d1368da7bdf921eb9d95bd1a9229e7f21a43ee"
+ integrity sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==
+ dependencies:
+ queue-microtask "^1.2.2"
+
+rxjs@^7.5.4, rxjs@^7.5.5:
+ version "7.5.6"
+ resolved "https://registry.yarnpkg.com/rxjs/-/rxjs-7.5.6.tgz#0446577557862afd6903517ce7cae79ecb9662bc"
+ integrity sha512-dnyv2/YsXhnm461G+R/Pe5bWP41Nm6LBXEYWI6eiFP4fiwx6WRI/CD0zbdVAudd9xwLEF2IDcKXLHit0FYjUzw==
+ dependencies:
+ tslib "^2.1.0"
+
+safe-buffer@5.1.2, safe-buffer@~5.1.0, safe-buffer@~5.1.1:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/safe-buffer/-/safe-buffer-5.1.2.tgz#991ec69d296e0313747d59bdfd2b745c35f8828d"
+ integrity sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==
+
+safe-buffer@5.2.1, safe-buffer@>=5.1.0, safe-buffer@^5.1.0, safe-buffer@~5.2.0:
+ version "5.2.1"
+ resolved "https://registry.yarnpkg.com/safe-buffer/-/safe-buffer-5.2.1.tgz#1eaf9fa9bdb1fdd4ec75f58f9cdb4e6b7827eec6"
+ integrity sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==
+
+"safer-buffer@>= 2.1.2 < 3", "safer-buffer@>= 2.1.2 < 3.0.0":
+ version "2.1.2"
+ resolved "https://registry.yarnpkg.com/safer-buffer/-/safer-buffer-2.1.2.tgz#44fa161b0187b9549dd84bb91802f9bd8385cd6a"
+ integrity sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==
+
+sax@^1.2.4:
+ version "1.2.4"
+ resolved "https://registry.yarnpkg.com/sax/-/sax-1.2.4.tgz#2816234e2378bddc4e5354fab5caa895df7100d9"
+ integrity sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw==
+
+scheduler@^0.20.2:
+ version "0.20.2"
+ resolved "https://registry.yarnpkg.com/scheduler/-/scheduler-0.20.2.tgz#4baee39436e34aa93b4874bddcbf0fe8b8b50e91"
+ integrity sha512-2eWfGgAqqWFGqtdMmcL5zCMK1U8KlXv8SQFGglL3CEtd0aDVDWgeF/YoCmvln55m5zSk3J/20hTaSBeSObsQDQ==
+ dependencies:
+ loose-envify "^1.1.0"
+ object-assign "^4.1.1"
+
+schema-utils@2.7.0:
+ version "2.7.0"
+ resolved "https://registry.yarnpkg.com/schema-utils/-/schema-utils-2.7.0.tgz#17151f76d8eae67fbbf77960c33c676ad9f4efc7"
+ integrity sha512-0ilKFI6QQF5nxDZLFn2dMjvc4hjg/Wkg7rHd3jK6/A4a1Hl9VFdQWvgB1UMGoU94pad1P/8N7fMcEnLnSiju8A==
+ dependencies:
+ "@types/json-schema" "^7.0.4"
+ ajv "^6.12.2"
+ ajv-keywords "^3.4.1"
+
+schema-utils@^2.6.5:
+ version "2.7.1"
+ resolved "https://registry.yarnpkg.com/schema-utils/-/schema-utils-2.7.1.tgz#1ca4f32d1b24c590c203b8e7a50bf0ea4cd394d7"
+ integrity sha512-SHiNtMOUGWBQJwzISiVYKu82GiV4QYGePp3odlY1tuKO7gPtphAT5R/py0fA6xtbgLL/RvtJZnU9b8s0F1q0Xg==
+ dependencies:
+ "@types/json-schema" "^7.0.5"
+ ajv "^6.12.4"
+ ajv-keywords "^3.5.2"
+
+schema-utils@^3.0.0, schema-utils@^3.1.0, schema-utils@^3.1.1:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/schema-utils/-/schema-utils-3.1.1.tgz#bc74c4b6b6995c1d88f76a8b77bea7219e0c8281"
+ integrity sha512-Y5PQxS4ITlC+EahLuXaY86TXfR7Dc5lw294alXOq86JAHCihAIZfqv8nNCWvaEJvaC51uN9hbLGeV0cFBdH+Fw==
+ dependencies:
+ "@types/json-schema" "^7.0.8"
+ ajv "^6.12.5"
+ ajv-keywords "^3.5.2"
+
+schema-utils@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/schema-utils/-/schema-utils-4.0.0.tgz#60331e9e3ae78ec5d16353c467c34b3a0a1d3df7"
+ integrity sha512-1edyXKgh6XnJsJSQ8mKWXnN/BVaIbFMLpouRUrXgVq7WYne5kw3MW7UPhO44uRXQSIpTSXoJbmrR2X0w9kUTyg==
+ dependencies:
+ "@types/json-schema" "^7.0.9"
+ ajv "^8.8.0"
+ ajv-formats "^2.1.1"
+ ajv-keywords "^5.0.0"
+
+section-matter@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/section-matter/-/section-matter-1.0.0.tgz#e9041953506780ec01d59f292a19c7b850b84167"
+ integrity sha512-vfD3pmTzGpufjScBh50YHKzEu2lxBWhVEHsNGoEXmCmn2hKGfeNLYMzCJpe8cD7gqX7TJluOVpBkAequ6dgMmA==
+ dependencies:
+ extend-shallow "^2.0.1"
+ kind-of "^6.0.0"
+
+select-hose@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/select-hose/-/select-hose-2.0.0.tgz#625d8658f865af43ec962bfc376a37359a4994ca"
+ integrity sha512-mEugaLK+YfkijB4fx0e6kImuJdCIt2LxCRcbEYPqRGCs4F2ogyfZU5IAZRdjCP8JPq2AtdNoC/Dux63d9Kiryg==
+
+selfsigned@^2.0.1:
+ version "2.1.1"
+ resolved "https://registry.yarnpkg.com/selfsigned/-/selfsigned-2.1.1.tgz#18a7613d714c0cd3385c48af0075abf3f266af61"
+ integrity sha512-GSL3aowiF7wa/WtSFwnUrludWFoNhftq8bUkH9pkzjpN2XSPOAYEgg6e0sS9s0rZwgJzJiQRPU18A6clnoW5wQ==
+ dependencies:
+ node-forge "^1"
+
+semver-diff@^3.1.1:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/semver-diff/-/semver-diff-3.1.1.tgz#05f77ce59f325e00e2706afd67bb506ddb1ca32b"
+ integrity sha512-GX0Ix/CJcHyB8c4ykpHGIAvLyOwOobtM/8d+TQkAd81/bEjgPHrfba41Vpesr7jX/t8Uh+R3EX9eAS5be+jQYg==
+ dependencies:
+ semver "^6.3.0"
+
+"semver@2 || 3 || 4 || 5", semver@^5.4.1, semver@^5.6.0:
+ version "5.7.1"
+ resolved "https://registry.yarnpkg.com/semver/-/semver-5.7.1.tgz#a954f931aeba508d307bbf069eff0c01c96116f7"
+ integrity sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==
+
+semver@^6.0.0, semver@^6.1.1, semver@^6.1.2, semver@^6.2.0, semver@^6.3.0:
+ version "6.3.0"
+ resolved "https://registry.yarnpkg.com/semver/-/semver-6.3.0.tgz#ee0a64c8af5e8ceea67687b133761e1becbd1d3d"
+ integrity sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw==
+
+semver@^7.1.1, semver@^7.3.2, semver@^7.3.4, semver@^7.3.5, semver@^7.3.7:
+ version "7.3.7"
+ resolved "https://registry.yarnpkg.com/semver/-/semver-7.3.7.tgz#12c5b649afdbf9049707796e22a4028814ce523f"
+ integrity sha512-QlYTucUYOews+WeEujDoEGziz4K6c47V/Bd+LjSSYcA94p+DmINdf7ncaUinThfvZyu13lN9OY1XDxt8C0Tw0g==
+ dependencies:
+ lru-cache "^6.0.0"
+
+send@0.18.0:
+ version "0.18.0"
+ resolved "https://registry.yarnpkg.com/send/-/send-0.18.0.tgz#670167cc654b05f5aa4a767f9113bb371bc706be"
+ integrity sha512-qqWzuOjSFOuqPjFe4NOsMLafToQQwBSOEpS+FwEt3A2V3vKubTquT3vmLTQpFgMXp8AlFWFuP1qKaJZOtPpVXg==
+ dependencies:
+ debug "2.6.9"
+ depd "2.0.0"
+ destroy "1.2.0"
+ encodeurl "~1.0.2"
+ escape-html "~1.0.3"
+ etag "~1.8.1"
+ fresh "0.5.2"
+ http-errors "2.0.0"
+ mime "1.6.0"
+ ms "2.1.3"
+ on-finished "2.4.1"
+ range-parser "~1.2.1"
+ statuses "2.0.1"
+
+serialize-javascript@^6.0.0:
+ version "6.0.0"
+ resolved "https://registry.yarnpkg.com/serialize-javascript/-/serialize-javascript-6.0.0.tgz#efae5d88f45d7924141da8b5c3a7a7e663fefeb8"
+ integrity sha512-Qr3TosvguFt8ePWqsvRfrKyQXIiW+nGbYpy8XK24NQHE83caxWt+mIymTT19DGFbNWNLfEwsrkSmN64lVWB9ag==
+ dependencies:
+ randombytes "^2.1.0"
+
+serve-handler@^6.1.3:
+ version "6.1.3"
+ resolved "https://registry.yarnpkg.com/serve-handler/-/serve-handler-6.1.3.tgz#1bf8c5ae138712af55c758477533b9117f6435e8"
+ integrity sha512-FosMqFBNrLyeiIDvP1zgO6YoTzFYHxLDEIavhlmQ+knB2Z7l1t+kGLHkZIDN7UVWqQAmKI3D20A6F6jo3nDd4w==
+ dependencies:
+ bytes "3.0.0"
+ content-disposition "0.5.2"
+ fast-url-parser "1.1.3"
+ mime-types "2.1.18"
+ minimatch "3.0.4"
+ path-is-inside "1.0.2"
+ path-to-regexp "2.2.1"
+ range-parser "1.2.0"
+
+serve-index@^1.9.1:
+ version "1.9.1"
+ resolved "https://registry.yarnpkg.com/serve-index/-/serve-index-1.9.1.tgz#d3768d69b1e7d82e5ce050fff5b453bea12a9239"
+ integrity sha512-pXHfKNP4qujrtteMrSBb0rc8HJ9Ms/GrXwcUtUtD5s4ewDJI8bT3Cz2zTVRMKtri49pLx2e0Ya8ziP5Ya2pZZw==
+ dependencies:
+ accepts "~1.3.4"
+ batch "0.6.1"
+ debug "2.6.9"
+ escape-html "~1.0.3"
+ http-errors "~1.6.2"
+ mime-types "~2.1.17"
+ parseurl "~1.3.2"
+
+serve-static@1.15.0:
+ version "1.15.0"
+ resolved "https://registry.yarnpkg.com/serve-static/-/serve-static-1.15.0.tgz#faaef08cffe0a1a62f60cad0c4e513cff0ac9540"
+ integrity sha512-XGuRDNjXUijsUL0vl6nSD7cwURuzEgglbOaFuZM9g3kwDXOWVTck0jLzjPzGD+TazWbboZYu52/9/XPdUgne9g==
+ dependencies:
+ encodeurl "~1.0.2"
+ escape-html "~1.0.3"
+ parseurl "~1.3.3"
+ send "0.18.0"
+
+setimmediate@^1.0.5:
+ version "1.0.5"
+ resolved "https://registry.yarnpkg.com/setimmediate/-/setimmediate-1.0.5.tgz#290cbb232e306942d7d7ea9b83732ab7856f8285"
+ integrity sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA==
+
+setprototypeof@1.1.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/setprototypeof/-/setprototypeof-1.1.0.tgz#d0bd85536887b6fe7c0d818cb962d9d91c54e656"
+ integrity sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ==
+
+setprototypeof@1.2.0:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/setprototypeof/-/setprototypeof-1.2.0.tgz#66c9a24a73f9fc28cbe66b09fed3d33dcaf1b424"
+ integrity sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==
+
+shallow-clone@^3.0.0:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/shallow-clone/-/shallow-clone-3.0.1.tgz#8f2981ad92531f55035b01fb230769a40e02efa3"
+ integrity sha512-/6KqX+GVUdqPuPPd2LxDDxzX6CAbjJehAAOKlNpqqUpAqPM6HeL8f+o3a+JsyGjn2lv0WY8UsTgUJjU9Ok55NA==
+ dependencies:
+ kind-of "^6.0.2"
+
+shallowequal@^1.1.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/shallowequal/-/shallowequal-1.1.0.tgz#188d521de95b9087404fd4dcb68b13df0ae4e7f8"
+ integrity sha512-y0m1JoUZSlPAjXVtPPW70aZWfIL/dSP7AFkRnniLCrK/8MDKog3TySTBmckD+RObVxH0v4Tox67+F14PdED2oQ==
+
+shebang-command@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/shebang-command/-/shebang-command-2.0.0.tgz#ccd0af4f8835fbdc265b82461aaf0c36663f34ea"
+ integrity sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==
+ dependencies:
+ shebang-regex "^3.0.0"
+
+shebang-regex@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/shebang-regex/-/shebang-regex-3.0.0.tgz#ae16f1644d873ecad843b0307b143362d4c42172"
+ integrity sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==
+
+shell-quote@^1.7.3:
+ version "1.7.3"
+ resolved "https://registry.yarnpkg.com/shell-quote/-/shell-quote-1.7.3.tgz#aa40edac170445b9a431e17bb62c0b881b9c4123"
+ integrity sha512-Vpfqwm4EnqGdlsBFNmHhxhElJYrdfcxPThu+ryKS5J8L/fhAwLazFZtq+S+TWZ9ANj2piSQLGj6NQg+lKPmxrw==
+
+shelljs@^0.8.5:
+ version "0.8.5"
+ resolved "https://registry.yarnpkg.com/shelljs/-/shelljs-0.8.5.tgz#de055408d8361bed66c669d2f000538ced8ee20c"
+ integrity sha512-TiwcRcrkhHvbrZbnRcFYMLl30Dfov3HKqzp5tO5b4pt6G/SezKcYhmDg15zXVBswHmctSAQKznqNW2LO5tTDow==
+ dependencies:
+ glob "^7.0.0"
+ interpret "^1.0.0"
+ rechoir "^0.6.2"
+
+side-channel@^1.0.4:
+ version "1.0.4"
+ resolved "https://registry.yarnpkg.com/side-channel/-/side-channel-1.0.4.tgz#efce5c8fdc104ee751b25c58d4290011fa5ea2cf"
+ integrity sha512-q5XPytqFEIKHkGdiMIrY10mvLRvnQh42/+GoBlFW3b2LXLE2xxJpZFdm94we0BaoV3RwJyGqg5wS7epxTv0Zvw==
+ dependencies:
+ call-bind "^1.0.0"
+ get-intrinsic "^1.0.2"
+ object-inspect "^1.9.0"
+
+signal-exit@^3.0.2, signal-exit@^3.0.3:
+ version "3.0.7"
+ resolved "https://registry.yarnpkg.com/signal-exit/-/signal-exit-3.0.7.tgz#a9a1767f8af84155114eaabd73f99273c8f59ad9"
+ integrity sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==
+
+sirv@^1.0.7:
+ version "1.0.19"
+ resolved "https://registry.yarnpkg.com/sirv/-/sirv-1.0.19.tgz#1d73979b38c7fe91fcba49c85280daa9c2363b49"
+ integrity sha512-JuLThK3TnZG1TAKDwNIqNq6QA2afLOCcm+iE8D1Kj3GA40pSPsxQjjJl0J8X3tsR7T+CP1GavpzLwYkgVLWrZQ==
+ dependencies:
+ "@polka/url" "^1.0.0-next.20"
+ mrmime "^1.0.0"
+ totalist "^1.0.0"
+
+sisteransi@^1.0.5:
+ version "1.0.5"
+ resolved "https://registry.yarnpkg.com/sisteransi/-/sisteransi-1.0.5.tgz#134d681297756437cc05ca01370d3a7a571075ed"
+ integrity sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==
+
+sitemap@^7.1.1:
+ version "7.1.1"
+ resolved "https://registry.yarnpkg.com/sitemap/-/sitemap-7.1.1.tgz#eeed9ad6d95499161a3eadc60f8c6dce4bea2bef"
+ integrity sha512-mK3aFtjz4VdJN0igpIJrinf3EO8U8mxOPsTBzSsy06UtjZQJ3YY3o3Xa7zSc5nMqcMrRwlChHZ18Kxg0caiPBg==
+ dependencies:
+ "@types/node" "^17.0.5"
+ "@types/sax" "^1.2.1"
+ arg "^5.0.0"
+ sax "^1.2.4"
+
+slash@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/slash/-/slash-3.0.0.tgz#6539be870c165adbd5240220dbe361f1bc4d4634"
+ integrity sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==
+
+slash@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/slash/-/slash-4.0.0.tgz#2422372176c4c6c5addb5e2ada885af984b396a7"
+ integrity sha512-3dOsAHXXUkQTpOYcoAxLIorMTp4gIQr5IW3iVb7A7lFIp0VHhnynm9izx6TssdrIcVIESAlVjtnO2K8bg+Coew==
+
+sockjs@^0.3.24:
+ version "0.3.24"
+ resolved "https://registry.yarnpkg.com/sockjs/-/sockjs-0.3.24.tgz#c9bc8995f33a111bea0395ec30aa3206bdb5ccce"
+ integrity sha512-GJgLTZ7vYb/JtPSSZ10hsOYIvEYsjbNU+zPdIHcUaWVNUEPivzxku31865sSSud0Da0W4lEeOPlmw93zLQchuQ==
+ dependencies:
+ faye-websocket "^0.11.3"
+ uuid "^8.3.2"
+ websocket-driver "^0.7.4"
+
+sort-css-media-queries@2.1.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/sort-css-media-queries/-/sort-css-media-queries-2.1.0.tgz#7c85e06f79826baabb232f5560e9745d7a78c4ce"
+ integrity sha512-IeWvo8NkNiY2vVYdPa27MCQiR0MN0M80johAYFVxWWXQ44KU84WNxjslwBHmc/7ZL2ccwkM7/e6S5aiKZXm7jA==
+
+source-map-js@^1.0.2:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/source-map-js/-/source-map-js-1.0.2.tgz#adbc361d9c62df380125e7f161f71c826f1e490c"
+ integrity sha512-R0XvVJ9WusLiqTCEiGCmICCMplcCkIwwR11mOSD9CR5u+IXYdiseeEuXCVAjS54zqwkLcPNnmU4OeJ6tUrWhDw==
+
+source-map-support@~0.5.20:
+ version "0.5.21"
+ resolved "https://registry.yarnpkg.com/source-map-support/-/source-map-support-0.5.21.tgz#04fe7c7f9e1ed2d662233c28cb2b35b9f63f6e4f"
+ integrity sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==
+ dependencies:
+ buffer-from "^1.0.0"
+ source-map "^0.6.0"
+
+source-map@^0.5.0:
+ version "0.5.7"
+ resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.5.7.tgz#8a039d2d1021d22d1ea14c80d8ea468ba2ef3fcc"
+ integrity sha512-LbrmJOMUSdEVxIKvdcJzQC+nQhe8FUZQTXQy6+I75skNgn3OoQ0DZA8YnFa7gp8tqtL3KPf1kmo0R5DoApeSGQ==
+
+source-map@^0.6.0, source-map@^0.6.1, source-map@~0.6.0:
+ version "0.6.1"
+ resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.6.1.tgz#74722af32e9614e9c287a8d0bbde48b5e2f1a263"
+ integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==
+
+space-separated-tokens@^1.0.0:
+ version "1.1.5"
+ resolved "https://registry.yarnpkg.com/space-separated-tokens/-/space-separated-tokens-1.1.5.tgz#85f32c3d10d9682007e917414ddc5c26d1aa6899"
+ integrity sha512-q/JSVd1Lptzhf5bkYm4ob4iWPjx0KiRe3sRFBNrVqbJkFaBm5vbbowy1mymoPNLRa52+oadOhJ+K49wsSeSjTA==
+
+spdx-correct@^3.0.0:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/spdx-correct/-/spdx-correct-3.1.1.tgz#dece81ac9c1e6713e5f7d1b6f17d468fa53d89a9"
+ integrity sha512-cOYcUWwhCuHCXi49RhFRCyJEK3iPj1Ziz9DpViV3tbZOwXD49QzIN3MpOLJNxh2qwq2lJJZaKMVw9qNi4jTC0w==
+ dependencies:
+ spdx-expression-parse "^3.0.0"
+ spdx-license-ids "^3.0.0"
+
+spdx-exceptions@^2.1.0:
+ version "2.3.0"
+ resolved "https://registry.yarnpkg.com/spdx-exceptions/-/spdx-exceptions-2.3.0.tgz#3f28ce1a77a00372683eade4a433183527a2163d"
+ integrity sha512-/tTrYOC7PPI1nUAgx34hUpqXuyJG+DTHJTnIULG4rDygi4xu/tfgmq1e1cIRwRzwZgo4NLySi+ricLkZkw4i5A==
+
+spdx-expression-parse@^3.0.0:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/spdx-expression-parse/-/spdx-expression-parse-3.0.1.tgz#cf70f50482eefdc98e3ce0a6833e4a53ceeba679"
+ integrity sha512-cbqHunsQWnJNE6KhVSMsMeH5H/L9EpymbzqTQ3uLwNCLZ1Q481oWaofqH7nO6V07xlXwY6PhQdQ2IedWx/ZK4Q==
+ dependencies:
+ spdx-exceptions "^2.1.0"
+ spdx-license-ids "^3.0.0"
+
+spdx-license-ids@^3.0.0:
+ version "3.0.12"
+ resolved "https://registry.yarnpkg.com/spdx-license-ids/-/spdx-license-ids-3.0.12.tgz#69077835abe2710b65f03969898b6637b505a779"
+ integrity sha512-rr+VVSXtRhO4OHbXUiAF7xW3Bo9DuuF6C5jH+q/x15j2jniycgKbxU09Hr0WqlSLUs4i4ltHGXqTe7VHclYWyA==
+
+spdy-transport@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/spdy-transport/-/spdy-transport-3.0.0.tgz#00d4863a6400ad75df93361a1608605e5dcdcf31"
+ integrity sha512-hsLVFE5SjA6TCisWeJXFKniGGOpBgMLmerfO2aCyCU5s7nJ/rpAepqmFifv/GCbSbueEeAJJnmSQ2rKC/g8Fcw==
+ dependencies:
+ debug "^4.1.0"
+ detect-node "^2.0.4"
+ hpack.js "^2.1.6"
+ obuf "^1.1.2"
+ readable-stream "^3.0.6"
+ wbuf "^1.7.3"
+
+spdy@^4.0.2:
+ version "4.0.2"
+ resolved "https://registry.yarnpkg.com/spdy/-/spdy-4.0.2.tgz#b74f466203a3eda452c02492b91fb9e84a27677b"
+ integrity sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==
+ dependencies:
+ debug "^4.1.0"
+ handle-thing "^2.0.0"
+ http-deceiver "^1.2.7"
+ select-hose "^2.0.0"
+ spdy-transport "^3.0.0"
+
+split2@^3.0.0:
+ version "3.2.2"
+ resolved "https://registry.yarnpkg.com/split2/-/split2-3.2.2.tgz#bf2cf2a37d838312c249c89206fd7a17dd12365f"
+ integrity sha512-9NThjpgZnifTkJpzTZ7Eue85S49QwpNhZTq6GRJwObb6jnLFNGB7Qm73V5HewTROPyxD0C29xqmaI68bQtV+hg==
+ dependencies:
+ readable-stream "^3.0.0"
+
+split@^1.0.0:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/split/-/split-1.0.1.tgz#605bd9be303aa59fb35f9229fbea0ddec9ea07d9"
+ integrity sha512-mTyOoPbrivtXnwnIxZRFYRrPNtEFKlpB2fvjSnCQUiAA6qAZzqwna5envK4uk6OIeP17CsdF3rSBGYVBsU0Tkg==
+ dependencies:
+ through "2"
+
+sprintf-js@~1.0.2:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/sprintf-js/-/sprintf-js-1.0.3.tgz#04e6926f662895354f3dd015203633b857297e2c"
+ integrity sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g==
+
+stable@^0.1.8:
+ version "0.1.8"
+ resolved "https://registry.yarnpkg.com/stable/-/stable-0.1.8.tgz#836eb3c8382fe2936feaf544631017ce7d47a3cf"
+ integrity sha512-ji9qxRnOVfcuLDySj9qzhGSEFVobyt1kIOSkj1qZzYLzq7Tos/oUUWvotUPQLlrsidqsK6tBH89Bc9kL5zHA6w==
+
+standard-version@^9.5.0:
+ version "9.5.0"
+ resolved "https://registry.yarnpkg.com/standard-version/-/standard-version-9.5.0.tgz#851d6dcddf5320d5079601832aeb185dbf497949"
+ integrity sha512-3zWJ/mmZQsOaO+fOlsa0+QK90pwhNd042qEcw6hKFNoLFs7peGyvPffpEBbK/DSGPbyOvli0mUIFv5A4qTjh2Q==
+ dependencies:
+ chalk "^2.4.2"
+ conventional-changelog "3.1.25"
+ conventional-changelog-config-spec "2.1.0"
+ conventional-changelog-conventionalcommits "4.6.3"
+ conventional-recommended-bump "6.1.0"
+ detect-indent "^6.0.0"
+ detect-newline "^3.1.0"
+ dotgitignore "^2.1.0"
+ figures "^3.1.0"
+ find-up "^5.0.0"
+ git-semver-tags "^4.0.0"
+ semver "^7.1.1"
+ stringify-package "^1.0.1"
+ yargs "^16.0.0"
+
+state-toggle@^1.0.0:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/state-toggle/-/state-toggle-1.0.3.tgz#e123b16a88e143139b09c6852221bc9815917dfe"
+ integrity sha512-d/5Z4/2iiCnHw6Xzghyhb+GcmF89bxwgXG60wjIiZaxnymbyOmI8Hk4VqHXiVVp6u2ysaskFfXg3ekCj4WNftQ==
+
+statuses@2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/statuses/-/statuses-2.0.1.tgz#55cb000ccf1d48728bd23c685a063998cf1a1b63"
+ integrity sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==
+
+"statuses@>= 1.4.0 < 2":
+ version "1.5.0"
+ resolved "https://registry.yarnpkg.com/statuses/-/statuses-1.5.0.tgz#161c7dac177659fd9811f43771fa99381478628c"
+ integrity sha512-OpZ3zP+jT1PI7I8nemJX4AKmAX070ZkYPVWV/AaKTJl+tXCTGyVdC1a4SL8RUQYEwk/f34ZX8UTykN68FwrqAA==
+
+std-env@^3.0.1:
+ version "3.2.1"
+ resolved "https://registry.yarnpkg.com/std-env/-/std-env-3.2.1.tgz#00e260ec3901333537125f81282b9296b00d7304"
+ integrity sha512-D/uYFWkI/31OrnKmXZqGAGK5GbQRPp/BWA1nuITcc6ICblhhuQUPHS5E2GSCVS7Hwhf4ciq8qsATwBUxv+lI6w==
+
+string-width@^4.0.0, string-width@^4.1.0, string-width@^4.2.0, string-width@^4.2.2:
+ version "4.2.3"
+ resolved "https://registry.yarnpkg.com/string-width/-/string-width-4.2.3.tgz#269c7117d27b05ad2e536830a8ec895ef9c6d010"
+ integrity sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==
+ dependencies:
+ emoji-regex "^8.0.0"
+ is-fullwidth-code-point "^3.0.0"
+ strip-ansi "^6.0.1"
+
+string-width@^5.0.1:
+ version "5.1.2"
+ resolved "https://registry.yarnpkg.com/string-width/-/string-width-5.1.2.tgz#14f8daec6d81e7221d2a357e668cab73bdbca794"
+ integrity sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==
+ dependencies:
+ eastasianwidth "^0.2.0"
+ emoji-regex "^9.2.2"
+ strip-ansi "^7.0.1"
+
+string_decoder@^1.1.1:
+ version "1.3.0"
+ resolved "https://registry.yarnpkg.com/string_decoder/-/string_decoder-1.3.0.tgz#42f114594a46cf1a8e30b0a84f56c78c3edac21e"
+ integrity sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==
+ dependencies:
+ safe-buffer "~5.2.0"
+
+string_decoder@~1.1.1:
+ version "1.1.1"
+ resolved "https://registry.yarnpkg.com/string_decoder/-/string_decoder-1.1.1.tgz#9cf1611ba62685d7030ae9e4ba34149c3af03fc8"
+ integrity sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==
+ dependencies:
+ safe-buffer "~5.1.0"
+
+stringify-object@^3.3.0:
+ version "3.3.0"
+ resolved "https://registry.yarnpkg.com/stringify-object/-/stringify-object-3.3.0.tgz#703065aefca19300d3ce88af4f5b3956d7556629"
+ integrity sha512-rHqiFh1elqCQ9WPLIC8I0Q/g/wj5J1eMkyoiD6eoQApWHP0FtlK7rqnhmabL5VUY9JQCcqwwvlOaSuutekgyrw==
+ dependencies:
+ get-own-enumerable-property-symbols "^3.0.0"
+ is-obj "^1.0.1"
+ is-regexp "^1.0.0"
+
+stringify-package@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/stringify-package/-/stringify-package-1.0.1.tgz#e5aa3643e7f74d0f28628b72f3dad5cecfc3ba85"
+ integrity sha512-sa4DUQsYciMP1xhKWGuFM04fB0LG/9DlluZoSVywUMRNvzid6XucHK0/90xGxRoHrAaROrcHK1aPKaijCtSrhg==
+
+strip-ansi@^6.0.0, strip-ansi@^6.0.1:
+ version "6.0.1"
+ resolved "https://registry.yarnpkg.com/strip-ansi/-/strip-ansi-6.0.1.tgz#9e26c63d30f53443e9489495b2105d37b67a85d9"
+ integrity sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==
+ dependencies:
+ ansi-regex "^5.0.1"
+
+strip-ansi@^7.0.1:
+ version "7.0.1"
+ resolved "https://registry.yarnpkg.com/strip-ansi/-/strip-ansi-7.0.1.tgz#61740a08ce36b61e50e65653f07060d000975fb2"
+ integrity sha512-cXNxvT8dFNRVfhVME3JAe98mkXDYN2O1l7jmcwMnOslDeESg1rF/OZMtK0nRAhiari1unG5cD4jG3rapUAkLbw==
+ dependencies:
+ ansi-regex "^6.0.1"
+
+strip-bom-string@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/strip-bom-string/-/strip-bom-string-1.0.0.tgz#e5211e9224369fbb81d633a2f00044dc8cedad92"
+ integrity sha512-uCC2VHvQRYu+lMh4My/sFNmF2klFymLX1wHJeXnbEJERpV/ZsVuonzerjfrGpIGF7LBVa1O7i9kjiWvJiFck8g==
+
+strip-bom@4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/strip-bom/-/strip-bom-4.0.0.tgz#9c3505c1db45bcedca3d9cf7a16f5c5aa3901878"
+ integrity sha512-3xurFv5tEgii33Zi8Jtp55wEIILR9eh34FAW00PZf+JnSsTmV/ioewSgQl97JHvgjoRGwPShsWm+IdrxB35d0w==
+
+strip-bom@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/strip-bom/-/strip-bom-3.0.0.tgz#2334c18e9c759f7bdd56fdef7e9ae3d588e68ed3"
+ integrity sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA==
+
+strip-final-newline@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/strip-final-newline/-/strip-final-newline-2.0.0.tgz#89b852fb2fcbe936f6f4b3187afb0a12c1ab58ad"
+ integrity sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA==
+
+strip-indent@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/strip-indent/-/strip-indent-3.0.0.tgz#c32e1cee940b6b3432c771bc2c54bcce73cd3001"
+ integrity sha512-laJTa3Jb+VQpaC6DseHhF7dXVqHTfJPCRDaEbid/drOhgitgYku/letMUqOXFoWV0zIIUbjpdH2t+tYj4bQMRQ==
+ dependencies:
+ min-indent "^1.0.0"
+
+strip-json-comments@3.1.1, strip-json-comments@^3.1.1:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/strip-json-comments/-/strip-json-comments-3.1.1.tgz#31f1281b3832630434831c310c01cccda8cbe006"
+ integrity sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==
+
+strip-json-comments@~2.0.1:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/strip-json-comments/-/strip-json-comments-2.0.1.tgz#3c531942e908c2697c0ec344858c286c7ca0a60a"
+ integrity sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ==
+
+style-to-object@0.3.0, style-to-object@^0.3.0:
+ version "0.3.0"
+ resolved "https://registry.yarnpkg.com/style-to-object/-/style-to-object-0.3.0.tgz#b1b790d205991cc783801967214979ee19a76e46"
+ integrity sha512-CzFnRRXhzWIdItT3OmF8SQfWyahHhjq3HwcMNCNLn+N7klOOqPjMeG/4JSu77D7ypZdGvSzvkrbyeTMizz2VrA==
+ dependencies:
+ inline-style-parser "0.1.1"
+
+stylehacks@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/stylehacks/-/stylehacks-5.1.0.tgz#a40066490ca0caca04e96c6b02153ddc39913520"
+ integrity sha512-SzLmvHQTrIWfSgljkQCw2++C9+Ne91d/6Sp92I8c5uHTcy/PgeHamwITIbBW9wnFTY/3ZfSXR9HIL6Ikqmcu6Q==
+ dependencies:
+ browserslist "^4.16.6"
+ postcss-selector-parser "^6.0.4"
+
+supports-color@^5.3.0:
+ version "5.5.0"
+ resolved "https://registry.yarnpkg.com/supports-color/-/supports-color-5.5.0.tgz#e2e69a44ac8772f78a1ec0b35b689df6530efc8f"
+ integrity sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==
+ dependencies:
+ has-flag "^3.0.0"
+
+supports-color@^7.1.0:
+ version "7.2.0"
+ resolved "https://registry.yarnpkg.com/supports-color/-/supports-color-7.2.0.tgz#1b7dcdcb32b8138801b3e478ba6a51caa89648da"
+ integrity sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==
+ dependencies:
+ has-flag "^4.0.0"
+
+supports-color@^8.0.0:
+ version "8.1.1"
+ resolved "https://registry.yarnpkg.com/supports-color/-/supports-color-8.1.1.tgz#cd6fc17e28500cff56c1b86c0a7fd4a54a73005c"
+ integrity sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==
+ dependencies:
+ has-flag "^4.0.0"
+
+supports-preserve-symlinks-flag@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz#6eda4bd344a3c94aea376d4cc31bc77311039e09"
+ integrity sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==
+
+svg-parser@^2.0.4:
+ version "2.0.4"
+ resolved "https://registry.yarnpkg.com/svg-parser/-/svg-parser-2.0.4.tgz#fdc2e29e13951736140b76cb122c8ee6630eb6b5"
+ integrity sha512-e4hG1hRwoOdRb37cIMSgzNsxyzKfayW6VOflrwvR+/bzrkyxY/31WkbgnQpgtrNp1SdpJvpUAGTa/ZoiPNDuRQ==
+
+svgo@^2.7.0, svgo@^2.8.0:
+ version "2.8.0"
+ resolved "https://registry.yarnpkg.com/svgo/-/svgo-2.8.0.tgz#4ff80cce6710dc2795f0c7c74101e6764cfccd24"
+ integrity sha512-+N/Q9kV1+F+UeWYoSiULYo4xYSDQlTgb+ayMobAXPwMnLvop7oxKMo9OzIrX5x3eS4L4f2UHhc9axXwY8DpChg==
+ dependencies:
+ "@trysound/sax" "0.2.0"
+ commander "^7.2.0"
+ css-select "^4.1.3"
+ css-tree "^1.1.3"
+ csso "^4.2.0"
+ picocolors "^1.0.0"
+ stable "^0.1.8"
+
+tapable@^1.0.0:
+ version "1.1.3"
+ resolved "https://registry.yarnpkg.com/tapable/-/tapable-1.1.3.tgz#a1fccc06b58db61fd7a45da2da44f5f3a3e67ba2"
+ integrity sha512-4WK/bYZmj8xLr+HUCODHGF1ZFzsYffasLUgEiMBY4fgtltdO6B4WJtlSbPaDTLpYTcGVwM2qLnFTICEcNxs3kA==
+
+tapable@^2.0.0, tapable@^2.1.1, tapable@^2.2.0:
+ version "2.2.1"
+ resolved "https://registry.yarnpkg.com/tapable/-/tapable-2.2.1.tgz#1967a73ef4060a82f12ab96af86d52fdb76eeca0"
+ integrity sha512-GNzQvQTOIP6RyTfE2Qxb8ZVlNmw0n88vp1szwWRimP02mnTsx3Wtn5qRdqY9w2XduFNUgvOwhNnQsjwCp+kqaQ==
+
+terser-webpack-plugin@^5.1.3, terser-webpack-plugin@^5.3.3:
+ version "5.3.6"
+ resolved "https://registry.yarnpkg.com/terser-webpack-plugin/-/terser-webpack-plugin-5.3.6.tgz#5590aec31aa3c6f771ce1b1acca60639eab3195c"
+ integrity sha512-kfLFk+PoLUQIbLmB1+PZDMRSZS99Mp+/MHqDNmMA6tOItzRt+Npe3E+fsMs5mfcM0wCtrrdU387UnV+vnSffXQ==
+ dependencies:
+ "@jridgewell/trace-mapping" "^0.3.14"
+ jest-worker "^27.4.5"
+ schema-utils "^3.1.1"
+ serialize-javascript "^6.0.0"
+ terser "^5.14.1"
+
+terser@^5.10.0, terser@^5.14.1:
+ version "5.15.0"
+ resolved "https://registry.yarnpkg.com/terser/-/terser-5.15.0.tgz#e16967894eeba6e1091509ec83f0c60e179f2425"
+ integrity sha512-L1BJiXVmheAQQy+as0oF3Pwtlo4s3Wi1X2zNZ2NxOB4wx9bdS9Vk67XQENLFdLYGCK/Z2di53mTj/hBafR+dTA==
+ dependencies:
+ "@jridgewell/source-map" "^0.3.2"
+ acorn "^8.5.0"
+ commander "^2.20.0"
+ source-map-support "~0.5.20"
+
+text-extensions@^1.0.0:
+ version "1.9.0"
+ resolved "https://registry.yarnpkg.com/text-extensions/-/text-extensions-1.9.0.tgz#1853e45fee39c945ce6f6c36b2d659b5aabc2a26"
+ integrity sha512-wiBrwC1EhBelW12Zy26JeOUkQ5mRu+5o8rpsJk5+2t+Y5vE7e842qtZDQ2g1NpX/29HdyFeJ4nSIhI47ENSxlQ==
+
+text-table@^0.2.0:
+ version "0.2.0"
+ resolved "https://registry.yarnpkg.com/text-table/-/text-table-0.2.0.tgz#7f5ee823ae805207c00af2df4a84ec3fcfa570b4"
+ integrity sha512-N+8UisAXDGk8PFXP4HAzVR9nbfmVJ3zYLAWiTIoqC5v5isinhr+r5uaO8+7r3BMfuNIufIsA7RdpVgacC2cSpw==
+
+through2@^2.0.0:
+ version "2.0.5"
+ resolved "https://registry.yarnpkg.com/through2/-/through2-2.0.5.tgz#01c1e39eb31d07cb7d03a96a70823260b23132cd"
+ integrity sha512-/mrRod8xqpA+IHSLyGCQ2s8SPHiCDEeQJSep1jqLYeEUClOFG2Qsh+4FU6G9VeqpZnGW/Su8LQGc4YKni5rYSQ==
+ dependencies:
+ readable-stream "~2.3.6"
+ xtend "~4.0.1"
+
+through2@^4.0.0:
+ version "4.0.2"
+ resolved "https://registry.yarnpkg.com/through2/-/through2-4.0.2.tgz#a7ce3ac2a7a8b0b966c80e7c49f0484c3b239764"
+ integrity sha512-iOqSav00cVxEEICeD7TjLB1sueEL+81Wpzp2bY17uZjZN0pWZPuo4suZ/61VujxmqSGFfgOcNuTZ85QJwNZQpw==
+ dependencies:
+ readable-stream "3"
+
+through@2, "through@>=2.2.7 <3", through@^2.3.6:
+ version "2.3.8"
+ resolved "https://registry.yarnpkg.com/through/-/through-2.3.8.tgz#0dd4c9ffaabc357960b1b724115d7e0e86a2e1f5"
+ integrity sha512-w89qg7PI8wAdvX60bMDP+bFoD5Dvhm9oLheFp5O4a2QF0cSBGsBX4qZmadPMvVqlLJBBci+WqGGOAPvcDeNSVg==
+
+thunky@^1.0.2:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/thunky/-/thunky-1.1.0.tgz#5abaf714a9405db0504732bbccd2cedd9ef9537d"
+ integrity sha512-eHY7nBftgThBqOyHGVN+l8gF0BucP09fMo0oO/Lb0w1OF80dJv+lDVpXG60WMQvkcxAkNybKsrEIE3ZtKGmPrA==
+
+tiny-invariant@^1.0.2:
+ version "1.2.0"
+ resolved "https://registry.yarnpkg.com/tiny-invariant/-/tiny-invariant-1.2.0.tgz#a1141f86b672a9148c72e978a19a73b9b94a15a9"
+ integrity sha512-1Uhn/aqw5C6RI4KejVeTg6mIS7IqxnLJ8Mv2tV5rTc0qWobay7pDUz6Wi392Cnc8ak1H0F2cjoRzb2/AW4+Fvg==
+
+tiny-warning@^1.0.0, tiny-warning@^1.0.3:
+ version "1.0.3"
+ resolved "https://registry.yarnpkg.com/tiny-warning/-/tiny-warning-1.0.3.tgz#94a30db453df4c643d0fd566060d60a875d84754"
+ integrity sha512-lBN9zLN/oAf68o3zNXYrdCt1kP8WsiGW8Oo2ka41b2IM5JL/S1CTyX1rW0mb/zSuJun0ZUrDxx4sqvYS2FWzPA==
+
+tmp@^0.0.33:
+ version "0.0.33"
+ resolved "https://registry.yarnpkg.com/tmp/-/tmp-0.0.33.tgz#6d34335889768d21b2bcda0aa277ced3b1bfadf9"
+ integrity sha512-jRCJlojKnZ3addtTOjdIqoRuPEKBvNXcGYqzO6zWZX8KfKEpnGY5jfggJQ3EjKuu8D4bJRr0y+cYJFmYbImXGw==
+ dependencies:
+ os-tmpdir "~1.0.2"
+
+to-fast-properties@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/to-fast-properties/-/to-fast-properties-2.0.0.tgz#dc5e698cbd079265bc73e0377681a4e4e83f616e"
+ integrity sha512-/OaKK0xYrs3DmxRYqL/yDc+FxFUVYhDlXMhRmv3z915w2HF1tnN1omB354j8VUGO/hbRzyD6Y3sA7v7GS/ceog==
+
+to-readable-stream@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/to-readable-stream/-/to-readable-stream-1.0.0.tgz#ce0aa0c2f3df6adf852efb404a783e77c0475771"
+ integrity sha512-Iq25XBt6zD5npPhlLVXGFN3/gyR2/qODcKNNyTMd4vbm39HUaOiAM4PMq0eMVC/Tkxz+Zjdsc55g9yyz+Yq00Q==
+
+to-regex-range@^5.0.1:
+ version "5.0.1"
+ resolved "https://registry.yarnpkg.com/to-regex-range/-/to-regex-range-5.0.1.tgz#1648c44aae7c8d988a326018ed72f5b4dd0392e4"
+ integrity sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==
+ dependencies:
+ is-number "^7.0.0"
+
+toidentifier@1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/toidentifier/-/toidentifier-1.0.1.tgz#3be34321a88a820ed1bd80dfaa33e479fbb8dd35"
+ integrity sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==
+
+totalist@^1.0.0:
+ version "1.1.0"
+ resolved "https://registry.yarnpkg.com/totalist/-/totalist-1.1.0.tgz#a4d65a3e546517701e3e5c37a47a70ac97fe56df"
+ integrity sha512-gduQwd1rOdDMGxFG1gEvhV88Oirdo2p+KjoYFU7k2g+i7n6AFFbDQ5kMPUsW0pNbfQsB/cwXvT1i4Bue0s9g5g==
+
+tr46@~0.0.3:
+ version "0.0.3"
+ resolved "https://registry.yarnpkg.com/tr46/-/tr46-0.0.3.tgz#8184fd347dac9cdc185992f3a6622e14b9d9ab6a"
+ integrity sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==
+
+trim-newlines@^3.0.0:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/trim-newlines/-/trim-newlines-3.0.1.tgz#260a5d962d8b752425b32f3a7db0dcacd176c144"
+ integrity sha512-c1PTsA3tYrIsLGkJkzHF+w9F2EyxfXGo4UyJc4pFL++FMjnq0HJS69T3M7d//gKrFKwy429bouPescbjecU+Zw==
+
+trim-trailing-lines@^1.0.0:
+ version "1.1.4"
+ resolved "https://registry.yarnpkg.com/trim-trailing-lines/-/trim-trailing-lines-1.1.4.tgz#bd4abbec7cc880462f10b2c8b5ce1d8d1ec7c2c0"
+ integrity sha512-rjUWSqnfTNrjbB9NQWfPMH/xRK1deHeGsHoVfpxJ++XeYXE0d6B1En37AHfw3jtfTU7dzMzZL2jjpe8Qb5gLIQ==
+
+trim@0.0.1:
+ version "0.0.1"
+ resolved "https://registry.yarnpkg.com/trim/-/trim-0.0.1.tgz#5858547f6b290757ee95cccc666fb50084c460dd"
+ integrity sha512-YzQV+TZg4AxpKxaTHK3c3D+kRDCGVEE7LemdlQZoQXn0iennk10RsIoY6ikzAqJTc9Xjl9C1/waHom/J86ziAQ==
+
+trough@^1.0.0:
+ version "1.0.5"
+ resolved "https://registry.yarnpkg.com/trough/-/trough-1.0.5.tgz#b8b639cefad7d0bb2abd37d433ff8293efa5f406"
+ integrity sha512-rvuRbTarPXmMb79SmzEp8aqXNKcK+y0XaB298IXueQ8I2PsrATcPBCSPyK/dDNa2iWOhKlfNnOjdAOTBU/nkFA==
+
+ts-node@^10.8.1:
+ version "10.9.1"
+ resolved "https://registry.yarnpkg.com/ts-node/-/ts-node-10.9.1.tgz#e73de9102958af9e1f0b168a6ff320e25adcff4b"
+ integrity sha512-NtVysVPkxxrwFGUUxGYhfux8k78pQB3JqYBXlLRZgdGUqTO5wU/UyHop5p70iEbGhB7q5KmiZiU0Y3KlJrScEw==
+ dependencies:
+ "@cspotcode/source-map-support" "^0.8.0"
+ "@tsconfig/node10" "^1.0.7"
+ "@tsconfig/node12" "^1.0.7"
+ "@tsconfig/node14" "^1.0.0"
+ "@tsconfig/node16" "^1.0.2"
+ acorn "^8.4.1"
+ acorn-walk "^8.1.1"
+ arg "^4.1.0"
+ create-require "^1.1.0"
+ diff "^4.0.1"
+ make-error "^1.1.1"
+ v8-compile-cache-lib "^3.0.1"
+ yn "3.1.1"
+
+tslib@^2.0.3, tslib@^2.1.0, tslib@^2.4.0:
+ version "2.4.0"
+ resolved "https://registry.yarnpkg.com/tslib/-/tslib-2.4.0.tgz#7cecaa7f073ce680a05847aa77be941098f36dc3"
+ integrity sha512-d6xOpEDfsi2CZVlPQzGeux8XMwLT9hssAsaPYExaQMuYskwb+x1x7J371tWlbBdWHroy99KnVB6qIkUbs5X3UQ==
+
+tslib@^2.3.0:
+ version "2.4.1"
+ resolved "https://mirrors.cloud.tencent.com/npm/tslib/-/tslib-2.4.1.tgz#0d0bfbaac2880b91e22df0768e55be9753a5b17e"
+ integrity sha512-tGyy4dAjRIEwI7BzsB0lynWgOpfqjUdq91XXAlIWD2OwKBH7oCl/GZG/HT4BOHrTlPMOASlMQ7veyTqpmRcrNA==
+
+tsparticles-engine@^2.5.2:
+ version "2.5.2"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-engine/-/tsparticles-engine-2.5.2.tgz#1772e857c452af806602ab0f33c90338d109f8b8"
+ integrity sha512-P2m1E/EIlvEnH9l7OEIpeKXxSn1ThNhWSp6zeRYvH/DntJpI5Oqa/AMrjum15rUzBkMDgRo7XFO4LqRWs8iB/Q==
+
+tsparticles-interaction-external-attract@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-attract/-/tsparticles-interaction-external-attract-2.5.3.tgz#18554439696ee04bb0c40afe893afcaee791ec67"
+ integrity sha512-W9w8ztCLjFK8ADROTpCBepyotTfwZdRqU6FMe0W0KU3Lj4GpR53dN7tk/YUuLtZut1DRFNH0SUh6IIuW4Oq2oQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-bounce@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-bounce/-/tsparticles-interaction-external-bounce-2.5.3.tgz#5acc056d69711f05203056112d00b4bb568f0cd5"
+ integrity sha512-z/3e8KSTm5n6kIU9KRpSr+XLz8ARDV4XuFqtDcRZBp0CXWcb2GVuuUQvHXhir1KPM3VzlHjHK8wxzqaE6TRfaw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-bubble@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-bubble/-/tsparticles-interaction-external-bubble-2.5.3.tgz#16e081b18be1cfa2560970292f7ee01ec82cdddf"
+ integrity sha512-cAXvn5ScwEku1e01CzWFMMGogvDkY/O/FnMr3kFvxztY3BeeSuj3RIwd+INp5U0L0D88hue0kL0BEIfPQLbfCA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-connect@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-connect/-/tsparticles-interaction-external-connect-2.5.3.tgz#c2cea19c60abd7658937266e3b530a73227a7f04"
+ integrity sha512-KOcNNxb4MxmDeTfnNawWtn72YaILjTvpN5zSNXRDDiYLJ9dFAuoU8jfXxTyb5EGtF6PNkU9er0KX2aA3hQNM1w==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-grab@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-grab/-/tsparticles-interaction-external-grab-2.5.3.tgz#8f6c7628c7cd59837bfbd6599151dbbd8b62e121"
+ integrity sha512-bVTdB8lojJsd+9Pm2kGuQQa9VrIq+hITn2lBQ2YVzlF+vJ4a02Nvw07PHKlnmu+EJDmMwlGmu5z1oAFaWaRGMA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-pause@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-pause/-/tsparticles-interaction-external-pause-2.5.3.tgz#bde967da09fa0a6eafaf973ab09a3678b2b40174"
+ integrity sha512-LPNcYCmo9/PoHdyTb+8nmZxvbtknYdqnjs70fYIngL4pNvdq8KwO38Z1anh7rBIVvHOlpm2w7NnwO3jeOnYOVQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-push@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-push/-/tsparticles-interaction-external-push-2.5.3.tgz#eaababd0f71f20e56f39520a783f280a85b98d80"
+ integrity sha512-5vfDVR9ciAGORVszq8ciqE6HYmpciDS2zom2++apHLvxM1AAQJKh/zRy3W3hl2qQ5aEvb67t/iaRiglTtQD2qA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-remove@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-remove/-/tsparticles-interaction-external-remove-2.5.3.tgz#2fd951732aa6a66f2fa2de801fa65bb557baed21"
+ integrity sha512-+goJgwiTHR6OFmuHZjqI7wO+VFeH8GObJEZzvRLzAZ1jRDZquc/Lm7bMLnQjKOS+pA44tLrZB4ErFT1/+VH1dg==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-repulse@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-repulse/-/tsparticles-interaction-external-repulse-2.5.3.tgz#3db55dd2170c92c100407d1e8b101640454d4214"
+ integrity sha512-6o/Xk8oTB/oGAlp291f8HllRvb5P7Qs6Z35imJ+hmLBuxXpu++TRbBJl9AyTuzhkaiX46cy765q79K4Nf71tTQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-slow@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-slow/-/tsparticles-interaction-external-slow-2.5.3.tgz#47216b262686aa9cadedbf4d1fb8a3e8075199ca"
+ integrity sha512-nWDtU2q3gLccu9CafmONAVJFA9gMvAJ26Uf7+jiqWPAG0R3p/oL8INZDyIwKtxd+OFkE6QwBWe1BJovOUjJfZw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-external-trail@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-external-trail/-/tsparticles-interaction-external-trail-2.5.3.tgz#10410f65c929f1a3561baab46cf97bddb909662a"
+ integrity sha512-Z0PuHPjgNHTUOHGSyM31dls03Brq1//6qrSfMu4zmVcfyb9wVZN2UeTd0OjqNLiz7gsBVdFWinRVe20EcOPYIQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-particles-attract@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-particles-attract/-/tsparticles-interaction-particles-attract-2.5.3.tgz#d764c564a825cac61685bc78ea30cc872728cdc5"
+ integrity sha512-vACi7Zdx3LtqFNbapCxylNuj/T52lhc8xN2vTq2oGMEm0f8JjBqrsfXYzvN7cpVVg4hQDiBWwWgAftQEwAHkJQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-particles-collisions@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-particles-collisions/-/tsparticles-interaction-particles-collisions-2.5.3.tgz#641ec461256be2600172b2f6a5266537c94319b4"
+ integrity sha512-KhlWgrMr8GD0rkj75yhhEZkvr56DJxIlk57YZfBHrYvpLVUyYDXO7iv5mVAPG7qtfxfMwggEXZSpCxbH5bfhbA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-interaction-particles-links@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-interaction-particles-links/-/tsparticles-interaction-particles-links-2.5.3.tgz#03f80105ec1e4f49fa7f877f89f79c5d45d80c2f"
+ integrity sha512-aT43l+90HV81IKuNPgVKUU/1aMwC6aVYQeqRwyUi1WMoW2n0JUuM/k2ifqEUsfNuoXOfedU9HRXk4+iXnKd0SQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-move-base@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-move-base/-/tsparticles-move-base-2.5.3.tgz#e0a735972a078348a4903c2740f5a09a8be9b91d"
+ integrity sha512-YOpTua9TfBoU6dmK6VdvVgXIpb2iU8PuA3lL+p/Qe6VLqGx4XgSt27pD63Qa5p7lFZTCxa4cnE+DcHFVSRWG1Q==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-move-parallax@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-move-parallax/-/tsparticles-move-parallax-2.5.3.tgz#c37143119779824738c45d09866f638bb08d3fe1"
+ integrity sha512-l9vgLZrdR9WIaiJ8lH7Nq2h60zg4/w+X/pvp76CWce3S7ochW3s5r/J9dnNJOSyHUEvs42s76yOdZA1apkFHvw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-particles.js@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-particles.js/-/tsparticles-particles.js-2.5.3.tgz#df7c777721a7da2841a578f88cabc47c7467b3cd"
+ integrity sha512-nksab+qsfpz+gA/qXqpiQJYaMznPh52FY77kdcttVUKj2b5vQLjMq2avewREKnsWJYqDl5LohekbEMXDznS2wQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-plugin-absorbers@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-plugin-absorbers/-/tsparticles-plugin-absorbers-2.5.3.tgz#7d1164f37f09be4e5fffbb2b4448c096991f182b"
+ integrity sha512-sN0Utyd62vMXr46Pko/0ncRbhTLUfkFNpY4x8Z7eCnkX4WnWkVMrcwT3ib7wn56/rtxn8rzWYCIquaioU8oOEQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-plugin-easing-quad@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-plugin-easing-quad/-/tsparticles-plugin-easing-quad-2.5.3.tgz#1244cea271c3b28fb56c0d76c4a8ac0cdce63439"
+ integrity sha512-rGhPbL4mJrQkziYu5wpcKtdDqj3srfvcWqfm6D3NBjdveWm5Q6o/fA8hlkNXuBPCIIpQG4cfwQYx8nc5BEtwgA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-plugin-emitters@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-plugin-emitters/-/tsparticles-plugin-emitters-2.5.3.tgz#a05815f74aa56111764d4df5bc30b82d845732ea"
+ integrity sha512-yF1qiGJu99EIG40nTLVGwV6JEgpB9NGCynd6chvqby7A01AWoqpaUaQ7AdyblcjpPq2WYkyMhJWdeNEk6YblFQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-shape-circle@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-shape-circle/-/tsparticles-shape-circle-2.5.3.tgz#1df7c5f43b26566734a2d7de6e0065a3c93b2fa9"
+ integrity sha512-D8zbbC/eaqaOjd3MjhmbkmMQ+uH97RKNJL3YyZ743s1YowCGPK4c6N5NUdVNqljWHn15ALpq0NV/GzY5oGVTNw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-shape-image@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-shape-image/-/tsparticles-shape-image-2.5.3.tgz#faf004a839cfb9fd7f467f00e5cb856029b5b4c1"
+ integrity sha512-hdSxxkTXlEC2vn1Hi//GDZNqIGCVyuRfoTmE8ex3hj9NA0YhZoBcmr+4kmFDfJRBvtvumIQk4V1JVruHgoVnuw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-shape-line@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-shape-line/-/tsparticles-shape-line-2.5.3.tgz#fbf1fe2361ee605fd06450a7522f1d31ab97db0d"
+ integrity sha512-ee+mA40mTfvbpetU0ohmsiZ1ClY7OCFwcAEfNlfTJc7BGUVQuvEN47RHR9UtL4k3REu9JNKr6J2EIuUOgRwwLw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-shape-polygon@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-shape-polygon/-/tsparticles-shape-polygon-2.5.3.tgz#08b3d280946db84839a9005277cacf38f807ada3"
+ integrity sha512-1w+6nke5LvKd/jEwCgj8QKDIi5qFLjm54Dg1fH4S1jhEtNAM25o28Law0JGfZeernmBWqaxBj5pV9BJGeqqfcw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-shape-square@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-shape-square/-/tsparticles-shape-square-2.5.3.tgz#43888f4df268a959b0a2185b278d08a4ec4f9430"
+ integrity sha512-LXS6UJJx0gCB7kS8kJb4N01owMnIwYkUaN5eVcN5h0b8Hc2cX+E962yImR2ZN+Hcq4P9lb7qXGj7SsAAa054WQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-shape-star@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-shape-star/-/tsparticles-shape-star-2.5.3.tgz#d499ff380c37b5fce776347f3e1c5849f3430ef2"
+ integrity sha512-/xvEP8XQ1TWcKBGMA0vTyu7EW/dcOz3jaOzgtAN3MvKgu9AQDiT02HUNW92QIUmqLStlCPAttf37eUss+JGTVw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-shape-text@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-shape-text/-/tsparticles-shape-text-2.5.3.tgz#5dbc92a3ce7729bc07dea30c499e9d6e0d8b0d4a"
+ integrity sha512-d6/xZVhBTAVtnMOH8K9nCamgfiO10M2q67rAUDTUVTscYES996jQYsTeDfg43IIO7eruXBH80LZVLt1MBZJ3DA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-slim@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-slim/-/tsparticles-slim-2.5.3.tgz#82634757ae64826248af929108fcddc9c86fba13"
+ integrity sha512-t44w8lrajWIOp2P14G9Hfp32v+EzpjjB4402jsbVHRyaQdm3vlgrtqA1H/2qQGdJHkl24lUZXNlLqX4fSoBZbg==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+ tsparticles-interaction-external-attract "^2.5.3"
+ tsparticles-interaction-external-bounce "^2.5.3"
+ tsparticles-interaction-external-bubble "^2.5.3"
+ tsparticles-interaction-external-connect "^2.5.3"
+ tsparticles-interaction-external-grab "^2.5.3"
+ tsparticles-interaction-external-pause "^2.5.3"
+ tsparticles-interaction-external-push "^2.5.3"
+ tsparticles-interaction-external-remove "^2.5.3"
+ tsparticles-interaction-external-repulse "^2.5.3"
+ tsparticles-interaction-external-slow "^2.5.3"
+ tsparticles-interaction-particles-attract "^2.5.3"
+ tsparticles-interaction-particles-collisions "^2.5.3"
+ tsparticles-interaction-particles-links "^2.5.3"
+ tsparticles-move-base "^2.5.3"
+ tsparticles-move-parallax "^2.5.3"
+ tsparticles-particles.js "^2.5.3"
+ tsparticles-plugin-easing-quad "^2.5.3"
+ tsparticles-shape-circle "^2.5.3"
+ tsparticles-shape-image "^2.5.3"
+ tsparticles-shape-line "^2.5.3"
+ tsparticles-shape-polygon "^2.5.3"
+ tsparticles-shape-square "^2.5.3"
+ tsparticles-shape-star "^2.5.3"
+ tsparticles-shape-text "^2.5.3"
+ tsparticles-updater-angle "^2.5.3"
+ tsparticles-updater-color "^2.5.3"
+ tsparticles-updater-life "^2.5.3"
+ tsparticles-updater-opacity "^2.5.3"
+ tsparticles-updater-out-modes "^2.5.3"
+ tsparticles-updater-size "^2.5.3"
+ tsparticles-updater-stroke-color "^2.5.3"
+
+tsparticles-updater-angle@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-angle/-/tsparticles-updater-angle-2.5.3.tgz#91a6ce3fa715d2ad44034f4debf82e1df4715c9f"
+ integrity sha512-2oQYSV5PC0FWnwRswcUZjXAeuHa8QQw0dK3NnyxG52PuyJorxGz38t8/INrPMxaNDA8LvSHO+CzYFi9CzQvXbg==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-color@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-color/-/tsparticles-updater-color-2.5.3.tgz#93da0ab8d81643b6646c95ec1b0a721c55fe4582"
+ integrity sha512-eNmyfkzk7AfghCLwCewt+480LAfK5uX2D5jnQC/Vo5VNxco1P0clGTx61g9pc3vVzUV1e5NEj6Gw1tCSb8f+nw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-destroy@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-destroy/-/tsparticles-updater-destroy-2.5.3.tgz#f41e0209b8b1ddf71741623b6ce2ef1f135ff38a"
+ integrity sha512-2G6haBy5PqpNmnOLVeUzmm82zjfx8YmSwu9qZ5nUEPLNDQCDkAN4u6mffrkZRajY0u/KmwTTaajIVK0ExIR9aw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-life@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-life/-/tsparticles-updater-life-2.5.3.tgz#68ae7b0d56825466aaf173f863bce990c4da702c"
+ integrity sha512-smK9iuNRPmK9rr74X9xzx9LPpVaSbRY+uri7/lI9NLvoi5b1PtYImYSIQGiXrPfFZ2/KL+MApddVnwqI/8MHqw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-opacity@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-opacity/-/tsparticles-updater-opacity-2.5.3.tgz#4f54110b1c12520e7655c01a62834fa61bde1f0b"
+ integrity sha512-1tLJYDq1uQM7sNn8QPO8s2yPTRAJONCxyETWl9SAc6ghTX0qrebcla3pGRBWMT0g0cIJb4qIsO9Nyaq9RYf0iQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-out-modes@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-out-modes/-/tsparticles-updater-out-modes-2.5.3.tgz#3fb366bec4aebb08755d673d72d7b1d30f434cd3"
+ integrity sha512-KyhA2YNA3j7TFv2lOwZN9JXuR5z1/N7fUmaFTVJGooI6Bf7WvTFaDFp0gvufrBSd4CqDwsQITqDOwV2J1sGDEQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-roll@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-roll/-/tsparticles-updater-roll-2.5.3.tgz#ff21b29fbfc77d3fede95f7474256bf931109ed1"
+ integrity sha512-jkiGkFW7s7TJBqfsiC4ZU0/gR/YCAeOkHAIIuHcVy4soM2rf0P2NKKzZxYMxSIyGQrCG0Hxw0GEGlLV5UiqK2w==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-size@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-size/-/tsparticles-updater-size-2.5.3.tgz#9c7ba5beb52462f993b559b6884e8922b5c4726a"
+ integrity sha512-kU1qyP8NKQd5XhAc2jp/VLn2AxakM+HFKozlzHIp8DPy+INW5lpgswo2Cc5e1ZXQCdOMQlQFn6cNCbct24kKcQ==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-stroke-color@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-stroke-color/-/tsparticles-updater-stroke-color-2.5.3.tgz#888088fc9a56788309d3ab4bf74730066611fcd2"
+ integrity sha512-XONraSFtepvtEqJXQWQSV+KOEWill9HtwVju8CsFgS/NtJYlYKfpSEbv41r7pWRhZRaFZN/BbOnMfwHCdLuBHw==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-tilt@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-tilt/-/tsparticles-updater-tilt-2.5.3.tgz#cb52b4f8898aae2ddcb0cc333b5b9390f9ab0d73"
+ integrity sha512-Njt3aR+WtYiuvaLWnJOsL6NGDfVAfMrVzzOGrrf7X4TZKpk4HMMYcxOQFbvBQnoQZhenq3/TJh09Xx44AX7Rag==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-twinkle@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-twinkle/-/tsparticles-updater-twinkle-2.5.3.tgz#ae8e30b19d5cc0830b60ce5a8298a395a3fa067f"
+ integrity sha512-YvfRIf00stfTqttQL5IZ5yHaVufR7XnXxPk+GZ9bLzw5SGoUD+EyLlKVoP6L9gbY6O3I6c64ZKrKmBWMoYWxDg==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles-updater-wobble@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles-updater-wobble/-/tsparticles-updater-wobble-2.5.3.tgz#7d8801c238f8693f5ea67c09768e4649f2dadf3b"
+ integrity sha512-uB4VdZU1Ha7T8ZtOe0SAdnrVVkoCCDeb/BIUTGp7nF1/fVlv57v7iI54GmbYE9Aabljc2KOvEaaiYPO+iBJqgA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+
+tsparticles@^2.5.3:
+ version "2.5.3"
+ resolved "https://mirrors.cloud.tencent.com/npm/tsparticles/-/tsparticles-2.5.3.tgz#670ad90fc3f92c7c2c44dc5fc78ae62483688755"
+ integrity sha512-qp3WdOKLQsZUgR11biq5mK5njmD6bhDcK2XpsLenj0oNvz6A/lHn8fv9KgWmLka6sBBHwTpAPpnFGBII5GMaqA==
+ dependencies:
+ tsparticles-engine "^2.5.2"
+ tsparticles-interaction-external-trail "^2.5.3"
+ tsparticles-plugin-absorbers "^2.5.3"
+ tsparticles-plugin-emitters "^2.5.3"
+ tsparticles-slim "^2.5.3"
+ tsparticles-updater-destroy "^2.5.3"
+ tsparticles-updater-roll "^2.5.3"
+ tsparticles-updater-tilt "^2.5.3"
+ tsparticles-updater-twinkle "^2.5.3"
+ tsparticles-updater-wobble "^2.5.3"
+
+type-fest@^0.18.0:
+ version "0.18.1"
+ resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.18.1.tgz#db4bc151a4a2cf4eebf9add5db75508db6cc841f"
+ integrity sha512-OIAYXk8+ISY+qTOwkHtKqzAuxchoMiD9Udx+FSGQDuiRR+PJKJHc2NJAXlbhkGwTt/4/nKZxELY1w3ReWOL8mw==
+
+type-fest@^0.20.2:
+ version "0.20.2"
+ resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.20.2.tgz#1bf207f4b28f91583666cb5fbd327887301cd5f4"
+ integrity sha512-Ne+eE4r0/iWnpAxD852z3A+N0Bt5RN//NjJwRd2VFHEmrywxf5vsZlh4R6lixl6B+wz/8d+maTSAkN1FIkI3LQ==
+
+type-fest@^0.21.3:
+ version "0.21.3"
+ resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.21.3.tgz#d260a24b0198436e133fa26a524a6d65fa3b2e37"
+ integrity sha512-t0rzBq87m3fVcduHDUFhKmyyX+9eo6WQjZvf51Ea/M0Q7+T374Jp1aUiyUl0GKxp8M/OETVHSDvmkyPgvX+X2w==
+
+type-fest@^0.6.0:
+ version "0.6.0"
+ resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.6.0.tgz#8d2a2370d3df886eb5c90ada1c5bf6188acf838b"
+ integrity sha512-q+MB8nYR1KDLrgr4G5yemftpMC7/QLqVndBmEEdqzmNj5dcFOO4Oo8qlwZE3ULT3+Zim1F8Kq4cBnikNhlCMlg==
+
+type-fest@^0.8.1:
+ version "0.8.1"
+ resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.8.1.tgz#09e249ebde851d3b1e48d27c105444667f17b83d"
+ integrity sha512-4dbzIzqvjtgiM5rw1k5rEHtBANKmdudhGyBEajN01fEyhaAIhsoKNy6y7+IN93IfpFtwY9iqi7kD+xwKhQsNJA==
+
+type-fest@^2.5.0:
+ version "2.19.0"
+ resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-2.19.0.tgz#88068015bb33036a598b952e55e9311a60fd3a9b"
+ integrity sha512-RAH822pAdBgcNMAfWnCBU3CFZcfZ/i1eZjwFU/dsLKumyuuP3niueg2UAukXYF0E2AAoc82ZSSf9J0WQBinzHA==
+
+type-is@~1.6.18:
+ version "1.6.18"
+ resolved "https://registry.yarnpkg.com/type-is/-/type-is-1.6.18.tgz#4e552cd05df09467dcbc4ef739de89f2cf37c131"
+ integrity sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==
+ dependencies:
+ media-typer "0.3.0"
+ mime-types "~2.1.24"
+
+typedarray-to-buffer@^3.1.5:
+ version "3.1.5"
+ resolved "https://registry.yarnpkg.com/typedarray-to-buffer/-/typedarray-to-buffer-3.1.5.tgz#a97ee7a9ff42691b9f783ff1bc5112fe3fca9080"
+ integrity sha512-zdu8XMNEDepKKR+XYOXAVPtWui0ly0NtohUscw+UmaHiAWT8hrV1rr//H6V+0DvJ3OQ19S979M0laLfX8rm82Q==
+ dependencies:
+ is-typedarray "^1.0.0"
+
+typedarray@^0.0.6:
+ version "0.0.6"
+ resolved "https://registry.yarnpkg.com/typedarray/-/typedarray-0.0.6.tgz#867ac74e3864187b1d3d47d996a78ec5c8830777"
+ integrity sha512-/aCDEGatGvZ2BIk+HmLf4ifCJFwvKFNb9/JeZPMulfgFracn9QFcAf5GO8B/mweUjSoblS5In0cWhqpfs/5PQA==
+
+typescript@^4.6.4, typescript@^4.7.4:
+ version "4.8.3"
+ resolved "https://registry.yarnpkg.com/typescript/-/typescript-4.8.3.tgz#d59344522c4bc464a65a730ac695007fdb66dd88"
+ integrity sha512-goMHfm00nWPa8UvR/CPSvykqf6dVV8x/dp0c5mFTMTIu0u0FlGWRioyy7Nn0PGAdHxpJZnuO/ut+PpQ8UiHAig==
+
+ua-parser-js@^0.7.30:
+ version "0.7.31"
+ resolved "https://registry.yarnpkg.com/ua-parser-js/-/ua-parser-js-0.7.31.tgz#649a656b191dffab4f21d5e053e27ca17cbff5c6"
+ integrity sha512-qLK/Xe9E2uzmYI3qLeOmI0tEOt+TBBQyUIAh4aAgU05FVYzeZrKUdkAZfBNVGRaHVgV0TDkdEngJSw/SyQchkQ==
+
+uglify-js@^3.1.4:
+ version "3.17.0"
+ resolved "https://registry.yarnpkg.com/uglify-js/-/uglify-js-3.17.0.tgz#55bd6e9d19ce5eef0d5ad17cd1f587d85b180a85"
+ integrity sha512-aTeNPVmgIMPpm1cxXr2Q/nEbvkmV8yq66F3om7X3P/cvOXQ0TMQ64Wk63iyT1gPlmdmGzjGpyLh1f3y8MZWXGg==
+
+unherit@^1.0.4:
+ version "1.1.3"
+ resolved "https://registry.yarnpkg.com/unherit/-/unherit-1.1.3.tgz#6c9b503f2b41b262330c80e91c8614abdaa69c22"
+ integrity sha512-Ft16BJcnapDKp0+J/rqFC3Rrk6Y/Ng4nzsC028k2jdDII/rdZ7Wd3pPT/6+vIIxRagwRc9K0IUX0Ra4fKvw+WQ==
+ dependencies:
+ inherits "^2.0.0"
+ xtend "^4.0.0"
+
+unicode-canonical-property-names-ecmascript@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/unicode-canonical-property-names-ecmascript/-/unicode-canonical-property-names-ecmascript-2.0.0.tgz#301acdc525631670d39f6146e0e77ff6bbdebddc"
+ integrity sha512-yY5PpDlfVIU5+y/BSCxAJRBIS1Zc2dDG3Ujq+sR0U+JjUevW2JhocOF+soROYDSaAezOzOKuyyixhD6mBknSmQ==
+
+unicode-match-property-ecmascript@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/unicode-match-property-ecmascript/-/unicode-match-property-ecmascript-2.0.0.tgz#54fd16e0ecb167cf04cf1f756bdcc92eba7976c3"
+ integrity sha512-5kaZCrbp5mmbz5ulBkDkbY0SsPOjKqVS35VpL9ulMPfSl0J0Xsm+9Evphv9CoIZFwre7aJoa94AY6seMKGVN5Q==
+ dependencies:
+ unicode-canonical-property-names-ecmascript "^2.0.0"
+ unicode-property-aliases-ecmascript "^2.0.0"
+
+unicode-match-property-value-ecmascript@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/unicode-match-property-value-ecmascript/-/unicode-match-property-value-ecmascript-2.0.0.tgz#1a01aa57247c14c568b89775a54938788189a714"
+ integrity sha512-7Yhkc0Ye+t4PNYzOGKedDhXbYIBe1XEQYQxOPyhcXNMJ0WCABqqj6ckydd6pWRZTHV4GuCPKdBAUiMc60tsKVw==
+
+unicode-property-aliases-ecmascript@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/unicode-property-aliases-ecmascript/-/unicode-property-aliases-ecmascript-2.0.0.tgz#0a36cb9a585c4f6abd51ad1deddb285c165297c8"
+ integrity sha512-5Zfuy9q/DFr4tfO7ZPeVXb1aPoeQSdeFMLpYuFebehDAhbuevLs5yxSZmIFN1tP5F9Wl4IpJrYojg85/zgyZHQ==
+
+unified@9.2.0:
+ version "9.2.0"
+ resolved "https://registry.yarnpkg.com/unified/-/unified-9.2.0.tgz#67a62c627c40589edebbf60f53edfd4d822027f8"
+ integrity sha512-vx2Z0vY+a3YoTj8+pttM3tiJHCwY5UFbYdiWrwBEbHmK8pvsPj2rtAX2BFfgXen8T39CJWblWRDT4L5WGXtDdg==
+ dependencies:
+ bail "^1.0.0"
+ extend "^3.0.0"
+ is-buffer "^2.0.0"
+ is-plain-obj "^2.0.0"
+ trough "^1.0.0"
+ vfile "^4.0.0"
+
+unified@^9.2.2:
+ version "9.2.2"
+ resolved "https://registry.yarnpkg.com/unified/-/unified-9.2.2.tgz#67649a1abfc3ab85d2969502902775eb03146975"
+ integrity sha512-Sg7j110mtefBD+qunSLO1lqOEKdrwBFBrR6Qd8f4uwkhWNlbkaqwHse6e7QvD3AP/MNoJdEDLaf8OxYyoWgorQ==
+ dependencies:
+ bail "^1.0.0"
+ extend "^3.0.0"
+ is-buffer "^2.0.0"
+ is-plain-obj "^2.0.0"
+ trough "^1.0.0"
+ vfile "^4.0.0"
+
+unique-string@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/unique-string/-/unique-string-2.0.0.tgz#39c6451f81afb2749de2b233e3f7c5e8843bd89d"
+ integrity sha512-uNaeirEPvpZWSgzwsPGtU2zVSTrn/8L5q/IexZmH0eH6SA73CmAA5U4GwORTxQAZs95TAXLNqeLoPPNO5gZfWg==
+ dependencies:
+ crypto-random-string "^2.0.0"
+
+unist-builder@2.0.3, unist-builder@^2.0.0:
+ version "2.0.3"
+ resolved "https://registry.yarnpkg.com/unist-builder/-/unist-builder-2.0.3.tgz#77648711b5d86af0942f334397a33c5e91516436"
+ integrity sha512-f98yt5pnlMWlzP539tPc4grGMsFaQQlP/vM396b00jngsiINumNmsY8rkXjfoi1c6QaM8nQ3vaGDuoKWbe/1Uw==
+
+unist-util-generated@^1.0.0:
+ version "1.1.6"
+ resolved "https://registry.yarnpkg.com/unist-util-generated/-/unist-util-generated-1.1.6.tgz#5ab51f689e2992a472beb1b35f2ce7ff2f324d4b"
+ integrity sha512-cln2Mm1/CZzN5ttGK7vkoGw+RZ8VcUH6BtGbq98DDtRGquAAOXig1mrBQYelOwMXYS8rK+vZDyyojSjp7JX+Lg==
+
+unist-util-is@^4.0.0:
+ version "4.1.0"
+ resolved "https://registry.yarnpkg.com/unist-util-is/-/unist-util-is-4.1.0.tgz#976e5f462a7a5de73d94b706bac1b90671b57797"
+ integrity sha512-ZOQSsnce92GrxSqlnEEseX0gi7GH9zTJZ0p9dtu87WRb/37mMPO2Ilx1s/t9vBHrFhbgweUwb+t7cIn5dxPhZg==
+
+unist-util-position@^3.0.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/unist-util-position/-/unist-util-position-3.1.0.tgz#1c42ee6301f8d52f47d14f62bbdb796571fa2d47"
+ integrity sha512-w+PkwCbYSFw8vpgWD0v7zRCl1FpY3fjDSQ3/N/wNd9Ffa4gPi8+4keqt99N3XW6F99t/mUzp2xAhNmfKWp95QA==
+
+unist-util-remove-position@^2.0.0:
+ version "2.0.1"
+ resolved "https://registry.yarnpkg.com/unist-util-remove-position/-/unist-util-remove-position-2.0.1.tgz#5d19ca79fdba712301999b2b73553ca8f3b352cc"
+ integrity sha512-fDZsLYIe2uT+oGFnuZmy73K6ZxOPG/Qcm+w7jbEjaFcJgbQ6cqjs/eSPzXhsmGpAsWPkqZM9pYjww5QTn3LHMA==
+ dependencies:
+ unist-util-visit "^2.0.0"
+
+unist-util-remove@^2.0.0:
+ version "2.1.0"
+ resolved "https://registry.yarnpkg.com/unist-util-remove/-/unist-util-remove-2.1.0.tgz#b0b4738aa7ee445c402fda9328d604a02d010588"
+ integrity sha512-J8NYPyBm4baYLdCbjmf1bhPu45Cr1MWTm77qd9istEkzWpnN6O9tMsEbB2JhNnBCqGENRqEWomQ+He6au0B27Q==
+ dependencies:
+ unist-util-is "^4.0.0"
+
+unist-util-stringify-position@^2.0.0:
+ version "2.0.3"
+ resolved "https://registry.yarnpkg.com/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz#cce3bfa1cdf85ba7375d1d5b17bdc4cada9bd9da"
+ integrity sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==
+ dependencies:
+ "@types/unist" "^2.0.2"
+
+unist-util-visit-parents@^3.0.0:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/unist-util-visit-parents/-/unist-util-visit-parents-3.1.1.tgz#65a6ce698f78a6b0f56aa0e88f13801886cdaef6"
+ integrity sha512-1KROIZWo6bcMrZEwiH2UrXDyalAa0uqzWCxCJj6lPOvTve2WkfgCytoDTPaMnodXh1WrXOq0haVYHj99ynJlsg==
+ dependencies:
+ "@types/unist" "^2.0.0"
+ unist-util-is "^4.0.0"
+
+unist-util-visit@2.0.3, unist-util-visit@^2.0.0, unist-util-visit@^2.0.3:
+ version "2.0.3"
+ resolved "https://registry.yarnpkg.com/unist-util-visit/-/unist-util-visit-2.0.3.tgz#c3703893146df47203bb8a9795af47d7b971208c"
+ integrity sha512-iJ4/RczbJMkD0712mGktuGpm/U4By4FfDonL7N/9tATGIF4imikjOuagyMY53tnZq3NP6BcmlrHhEKAfGWjh7Q==
+ dependencies:
+ "@types/unist" "^2.0.0"
+ unist-util-is "^4.0.0"
+ unist-util-visit-parents "^3.0.0"
+
+universalify@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/universalify/-/universalify-2.0.0.tgz#75a4984efedc4b08975c5aeb73f530d02df25717"
+ integrity sha512-hAZsKq7Yy11Zu1DE0OzWjw7nnLZmJZYTDZZyEFHZdUhV8FkH5MCfoU1XMaxXovpyW5nq5scPqq0ZDP9Zyl04oQ==
+
+unpipe@1.0.0, unpipe@~1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/unpipe/-/unpipe-1.0.0.tgz#b2bf4ee8514aae6165b4817829d21b2ef49904ec"
+ integrity sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==
+
+update-browserslist-db@^1.0.5:
+ version "1.0.9"
+ resolved "https://registry.yarnpkg.com/update-browserslist-db/-/update-browserslist-db-1.0.9.tgz#2924d3927367a38d5c555413a7ce138fc95fcb18"
+ integrity sha512-/xsqn21EGVdXI3EXSum1Yckj3ZVZugqyOZQ/CxYPBD/R+ko9NSUScf8tFF4dOKY+2pvSSJA/S+5B8s4Zr4kyvg==
+ dependencies:
+ escalade "^3.1.1"
+ picocolors "^1.0.0"
+
+update-notifier@^5.1.0:
+ version "5.1.0"
+ resolved "https://registry.yarnpkg.com/update-notifier/-/update-notifier-5.1.0.tgz#4ab0d7c7f36a231dd7316cf7729313f0214d9ad9"
+ integrity sha512-ItnICHbeMh9GqUy31hFPrD1kcuZ3rpxDZbf4KUDavXwS0bW5m7SLbDQpGX3UYr072cbrF5hFUs3r5tUsPwjfHw==
+ dependencies:
+ boxen "^5.0.0"
+ chalk "^4.1.0"
+ configstore "^5.0.1"
+ has-yarn "^2.1.0"
+ import-lazy "^2.1.0"
+ is-ci "^2.0.0"
+ is-installed-globally "^0.4.0"
+ is-npm "^5.0.0"
+ is-yarn-global "^0.3.0"
+ latest-version "^5.1.0"
+ pupa "^2.1.1"
+ semver "^7.3.4"
+ semver-diff "^3.1.1"
+ xdg-basedir "^4.0.0"
+
+uri-js@^4.2.2:
+ version "4.4.1"
+ resolved "https://registry.yarnpkg.com/uri-js/-/uri-js-4.4.1.tgz#9b1a52595225859e55f669d928f88c6c57f2a77e"
+ integrity sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==
+ dependencies:
+ punycode "^2.1.0"
+
+url-loader@^4.1.1:
+ version "4.1.1"
+ resolved "https://registry.yarnpkg.com/url-loader/-/url-loader-4.1.1.tgz#28505e905cae158cf07c92ca622d7f237e70a4e2"
+ integrity sha512-3BTV812+AVHHOJQO8O5MkWgZ5aosP7GnROJwvzLS9hWDj00lZ6Z0wNak423Lp9PBZN05N+Jk/N5Si8jRAlGyWA==
+ dependencies:
+ loader-utils "^2.0.0"
+ mime-types "^2.1.27"
+ schema-utils "^3.0.0"
+
+url-parse-lax@^3.0.0:
+ version "3.0.0"
+ resolved "https://registry.yarnpkg.com/url-parse-lax/-/url-parse-lax-3.0.0.tgz#16b5cafc07dbe3676c1b1999177823d6503acb0c"
+ integrity sha512-NjFKA0DidqPa5ciFcSrXnAltTtzz84ogy+NebPvfEgAck0+TNg4UJ4IN+fB7zRZfbgUf0syOo9MDxFkDSMuFaQ==
+ dependencies:
+ prepend-http "^2.0.0"
+
+use-composed-ref@^1.3.0:
+ version "1.3.0"
+ resolved "https://registry.yarnpkg.com/use-composed-ref/-/use-composed-ref-1.3.0.tgz#3d8104db34b7b264030a9d916c5e94fbe280dbda"
+ integrity sha512-GLMG0Jc/jiKov/3Ulid1wbv3r54K9HlMW29IWcDFPEqFkSO2nS0MuefWgMJpeHQ9YJeXDL3ZUF+P3jdXlZX/cQ==
+
+use-isomorphic-layout-effect@^1.1.1:
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/use-isomorphic-layout-effect/-/use-isomorphic-layout-effect-1.1.2.tgz#497cefb13d863d687b08477d9e5a164ad8c1a6fb"
+ integrity sha512-49L8yCO3iGT/ZF9QttjwLF/ZD9Iwto5LnH5LmEdk/6cFmXddqi2ulF0edxTwjj+7mqvpVVGQWvbXZdn32wRSHA==
+
+use-latest@^1.2.1:
+ version "1.2.1"
+ resolved "https://registry.yarnpkg.com/use-latest/-/use-latest-1.2.1.tgz#d13dfb4b08c28e3e33991546a2cee53e14038cf2"
+ integrity sha512-xA+AVm/Wlg3e2P/JiItTziwS7FK92LWrDB0p+hgXloIMuVCeJJ8v6f0eeHyPZaJrM+usM1FkFfbNCrJGs8A/zw==
+ dependencies:
+ use-isomorphic-layout-effect "^1.1.1"
+
+util-deprecate@^1.0.1, util-deprecate@^1.0.2, util-deprecate@~1.0.1:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/util-deprecate/-/util-deprecate-1.0.2.tgz#450d4dc9fa70de732762fbd2d4a28981419a0ccf"
+ integrity sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==
+
+utila@~0.4:
+ version "0.4.0"
+ resolved "https://registry.yarnpkg.com/utila/-/utila-0.4.0.tgz#8a16a05d445657a3aea5eecc5b12a4fa5379772c"
+ integrity sha512-Z0DbgELS9/L/75wZbro8xAnT50pBVFQZ+hUEueGDU5FN51YSCYM+jdxsfCiHjwNP/4LCDD0i/graKpeBnOXKRA==
+
+utility-types@^3.10.0:
+ version "3.10.0"
+ resolved "https://registry.yarnpkg.com/utility-types/-/utility-types-3.10.0.tgz#ea4148f9a741015f05ed74fd615e1d20e6bed82b"
+ integrity sha512-O11mqxmi7wMKCo6HKFt5AhO4BwY3VV68YU07tgxfz8zJTIxr4BpsezN49Ffwy9j3ZpwwJp4fkRwjRzq3uWE6Rg==
+
+utils-merge@1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/utils-merge/-/utils-merge-1.0.1.tgz#9f95710f50a267947b2ccc124741c1028427e713"
+ integrity sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==
+
+uuid@^8.3.2:
+ version "8.3.2"
+ resolved "https://registry.yarnpkg.com/uuid/-/uuid-8.3.2.tgz#80d5b5ced271bb9af6c445f21a1a04c606cefbe2"
+ integrity sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==
+
+v8-compile-cache-lib@^3.0.1:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/v8-compile-cache-lib/-/v8-compile-cache-lib-3.0.1.tgz#6336e8d71965cb3d35a1bbb7868445a7c05264bf"
+ integrity sha512-wa7YjyUGfNZngI/vtK0UHAN+lgDCxBPCylVXGp0zu59Fz5aiGtNXaq3DhIov063MorB+VfufLh3JlF2KdTK3xg==
+
+validate-npm-package-license@^3.0.1:
+ version "3.0.4"
+ resolved "https://registry.yarnpkg.com/validate-npm-package-license/-/validate-npm-package-license-3.0.4.tgz#fc91f6b9c7ba15c857f4cb2c5defeec39d4f410a"
+ integrity sha512-DpKm2Ui/xN7/HQKCtpZxoRWBhZ9Z0kqtygG8XCgNQ8ZlDnxuQmWhj566j8fN4Cu3/JmbhsDo7fcAJq4s9h27Ew==
+ dependencies:
+ spdx-correct "^3.0.0"
+ spdx-expression-parse "^3.0.0"
+
+value-equal@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/value-equal/-/value-equal-1.0.1.tgz#1e0b794c734c5c0cade179c437d356d931a34d6c"
+ integrity sha512-NOJ6JZCAWr0zlxZt+xqCHNTEKOsrks2HQd4MqhP1qy4z1SkbEP467eNx6TgDKXMvUOb+OENfJCZwM+16n7fRfw==
+
+vary@~1.1.2:
+ version "1.1.2"
+ resolved "https://registry.yarnpkg.com/vary/-/vary-1.1.2.tgz#2299f02c6ded30d4a5961b0b9f74524a18f634fc"
+ integrity sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==
+
+vfile-location@^3.0.0, vfile-location@^3.2.0:
+ version "3.2.0"
+ resolved "https://registry.yarnpkg.com/vfile-location/-/vfile-location-3.2.0.tgz#d8e41fbcbd406063669ebf6c33d56ae8721d0f3c"
+ integrity sha512-aLEIZKv/oxuCDZ8lkJGhuhztf/BW4M+iHdCwglA/eWc+vtuRFJj8EtgceYFX4LRjOhCAAiNHsKGssC6onJ+jbA==
+
+vfile-message@^2.0.0:
+ version "2.0.4"
+ resolved "https://registry.yarnpkg.com/vfile-message/-/vfile-message-2.0.4.tgz#5b43b88171d409eae58477d13f23dd41d52c371a"
+ integrity sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==
+ dependencies:
+ "@types/unist" "^2.0.0"
+ unist-util-stringify-position "^2.0.0"
+
+vfile@^4.0.0:
+ version "4.2.1"
+ resolved "https://registry.yarnpkg.com/vfile/-/vfile-4.2.1.tgz#03f1dce28fc625c625bc6514350fbdb00fa9e624"
+ integrity sha512-O6AE4OskCG5S1emQ/4gl8zK586RqA3srz3nfK/Viy0UPToBc5Trp9BVFb1u0CjsKrAWwnpr4ifM/KBXPWwJbCA==
+ dependencies:
+ "@types/unist" "^2.0.0"
+ is-buffer "^2.0.0"
+ unist-util-stringify-position "^2.0.0"
+ vfile-message "^2.0.0"
+
+wait-on@^6.0.1:
+ version "6.0.1"
+ resolved "https://registry.yarnpkg.com/wait-on/-/wait-on-6.0.1.tgz#16bbc4d1e4ebdd41c5b4e63a2e16dbd1f4e5601e"
+ integrity sha512-zht+KASY3usTY5u2LgaNqn/Cd8MukxLGjdcZxT2ns5QzDmTFc4XoWBgC+C/na+sMRZTuVygQoMYwdcVjHnYIVw==
+ dependencies:
+ axios "^0.25.0"
+ joi "^17.6.0"
+ lodash "^4.17.21"
+ minimist "^1.2.5"
+ rxjs "^7.5.4"
+
+watchpack@^2.4.0:
+ version "2.4.0"
+ resolved "https://registry.yarnpkg.com/watchpack/-/watchpack-2.4.0.tgz#fa33032374962c78113f93c7f2fb4c54c9862a5d"
+ integrity sha512-Lcvm7MGST/4fup+ifyKi2hjyIAwcdI4HRgtvTpIUxBRhB+RFtUh8XtDOxUfctVCnhVi+QQj49i91OyvzkJl6cg==
+ dependencies:
+ glob-to-regexp "^0.4.1"
+ graceful-fs "^4.1.2"
+
+wbuf@^1.1.0, wbuf@^1.7.3:
+ version "1.7.3"
+ resolved "https://registry.yarnpkg.com/wbuf/-/wbuf-1.7.3.tgz#c1d8d149316d3ea852848895cb6a0bfe887b87df"
+ integrity sha512-O84QOnr0icsbFGLS0O3bI5FswxzRr8/gHwWkDlQFskhSPryQXvrTMxjxGP4+iWYoauLoBvfDpkrOauZ+0iZpDA==
+ dependencies:
+ minimalistic-assert "^1.0.0"
+
+wcwidth@^1.0.1:
+ version "1.0.1"
+ resolved "https://registry.yarnpkg.com/wcwidth/-/wcwidth-1.0.1.tgz#f0b0dcf915bc5ff1528afadb2c0e17b532da2fe8"
+ integrity sha512-XHPEwS0q6TaxcvG85+8EYkbiCux2XtWG2mkc47Ng2A77BQu9+DqIOJldST4HgPkuea7dvKSj5VgX3P1d4rW8Tg==
+ dependencies:
+ defaults "^1.0.3"
+
+web-namespaces@^1.0.0:
+ version "1.1.4"
+ resolved "https://registry.yarnpkg.com/web-namespaces/-/web-namespaces-1.1.4.tgz#bc98a3de60dadd7faefc403d1076d529f5e030ec"
+ integrity sha512-wYxSGajtmoP4WxfejAPIr4l0fVh+jeMXZb08wNc0tMg6xsfZXj3cECqIK0G7ZAqUq0PP8WlMDtaOGVBTAWztNw==
+
+webidl-conversions@^3.0.0:
+ version "3.0.1"
+ resolved "https://registry.yarnpkg.com/webidl-conversions/-/webidl-conversions-3.0.1.tgz#24534275e2a7bc6be7bc86611cc16ae0a5654871"
+ integrity sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ==
+
+webpack-bundle-analyzer@^4.5.0:
+ version "4.6.1"
+ resolved "https://registry.yarnpkg.com/webpack-bundle-analyzer/-/webpack-bundle-analyzer-4.6.1.tgz#bee2ee05f4ba4ed430e4831a319126bb4ed9f5a6"
+ integrity sha512-oKz9Oz9j3rUciLNfpGFjOb49/jEpXNmWdVH8Ls//zNcnLlQdTGXQQMsBbb/gR7Zl8WNLxVCq+0Hqbx3zv6twBw==
+ dependencies:
+ acorn "^8.0.4"
+ acorn-walk "^8.0.0"
+ chalk "^4.1.0"
+ commander "^7.2.0"
+ gzip-size "^6.0.0"
+ lodash "^4.17.20"
+ opener "^1.5.2"
+ sirv "^1.0.7"
+ ws "^7.3.1"
+
+webpack-dev-middleware@^5.3.1:
+ version "5.3.3"
+ resolved "https://registry.yarnpkg.com/webpack-dev-middleware/-/webpack-dev-middleware-5.3.3.tgz#efae67c2793908e7311f1d9b06f2a08dcc97e51f"
+ integrity sha512-hj5CYrY0bZLB+eTO+x/j67Pkrquiy7kWepMHmUMoPsmcUaeEnQJqFzHJOyxgWlq746/wUuA64p9ta34Kyb01pA==
+ dependencies:
+ colorette "^2.0.10"
+ memfs "^3.4.3"
+ mime-types "^2.1.31"
+ range-parser "^1.2.1"
+ schema-utils "^4.0.0"
+
+webpack-dev-server@^4.9.3:
+ version "4.11.0"
+ resolved "https://registry.yarnpkg.com/webpack-dev-server/-/webpack-dev-server-4.11.0.tgz#290ee594765cd8260adfe83b2d18115ea04484e7"
+ integrity sha512-L5S4Q2zT57SK7tazgzjMiSMBdsw+rGYIX27MgPgx7LDhWO0lViPrHKoLS7jo5In06PWYAhlYu3PbyoC6yAThbw==
+ dependencies:
+ "@types/bonjour" "^3.5.9"
+ "@types/connect-history-api-fallback" "^1.3.5"
+ "@types/express" "^4.17.13"
+ "@types/serve-index" "^1.9.1"
+ "@types/serve-static" "^1.13.10"
+ "@types/sockjs" "^0.3.33"
+ "@types/ws" "^8.5.1"
+ ansi-html-community "^0.0.8"
+ bonjour-service "^1.0.11"
+ chokidar "^3.5.3"
+ colorette "^2.0.10"
+ compression "^1.7.4"
+ connect-history-api-fallback "^2.0.0"
+ default-gateway "^6.0.3"
+ express "^4.17.3"
+ graceful-fs "^4.2.6"
+ html-entities "^2.3.2"
+ http-proxy-middleware "^2.0.3"
+ ipaddr.js "^2.0.1"
+ open "^8.0.9"
+ p-retry "^4.5.0"
+ rimraf "^3.0.2"
+ schema-utils "^4.0.0"
+ selfsigned "^2.0.1"
+ serve-index "^1.9.1"
+ sockjs "^0.3.24"
+ spdy "^4.0.2"
+ webpack-dev-middleware "^5.3.1"
+ ws "^8.4.2"
+
+webpack-merge@^5.8.0:
+ version "5.8.0"
+ resolved "https://registry.yarnpkg.com/webpack-merge/-/webpack-merge-5.8.0.tgz#2b39dbf22af87776ad744c390223731d30a68f61"
+ integrity sha512-/SaI7xY0831XwP6kzuwhKWVKDP9t1QY1h65lAFLbZqMPIuYcD9QAW4u9STIbU9kaJbPBB/geU/gLr1wDjOhQ+Q==
+ dependencies:
+ clone-deep "^4.0.1"
+ wildcard "^2.0.0"
+
+webpack-sources@^3.2.2, webpack-sources@^3.2.3:
+ version "3.2.3"
+ resolved "https://registry.yarnpkg.com/webpack-sources/-/webpack-sources-3.2.3.tgz#2d4daab8451fd4b240cc27055ff6a0c2ccea0cde"
+ integrity sha512-/DyMEOrDgLKKIG0fmvtz+4dUX/3Ghozwgm6iPp8KRhvn+eQf9+Q7GWxVNMk3+uCPWfdXYC4ExGBckIXdFEfH1w==
+
+webpack@^5.73.0:
+ version "5.74.0"
+ resolved "https://registry.yarnpkg.com/webpack/-/webpack-5.74.0.tgz#02a5dac19a17e0bb47093f2be67c695102a55980"
+ integrity sha512-A2InDwnhhGN4LYctJj6M1JEaGL7Luj6LOmyBHjcI8529cm5p6VXiTIW2sn6ffvEAKmveLzvu4jrihwXtPojlAA==
+ dependencies:
+ "@types/eslint-scope" "^3.7.3"
+ "@types/estree" "^0.0.51"
+ "@webassemblyjs/ast" "1.11.1"
+ "@webassemblyjs/wasm-edit" "1.11.1"
+ "@webassemblyjs/wasm-parser" "1.11.1"
+ acorn "^8.7.1"
+ acorn-import-assertions "^1.7.6"
+ browserslist "^4.14.5"
+ chrome-trace-event "^1.0.2"
+ enhanced-resolve "^5.10.0"
+ es-module-lexer "^0.9.0"
+ eslint-scope "5.1.1"
+ events "^3.2.0"
+ glob-to-regexp "^0.4.1"
+ graceful-fs "^4.2.9"
+ json-parse-even-better-errors "^2.3.1"
+ loader-runner "^4.2.0"
+ mime-types "^2.1.27"
+ neo-async "^2.6.2"
+ schema-utils "^3.1.0"
+ tapable "^2.1.1"
+ terser-webpack-plugin "^5.1.3"
+ watchpack "^2.4.0"
+ webpack-sources "^3.2.3"
+
+webpackbar@^5.0.2:
+ version "5.0.2"
+ resolved "https://registry.yarnpkg.com/webpackbar/-/webpackbar-5.0.2.tgz#d3dd466211c73852741dfc842b7556dcbc2b0570"
+ integrity sha512-BmFJo7veBDgQzfWXl/wwYXr/VFus0614qZ8i9znqcl9fnEdiVkdbi0TedLQ6xAK92HZHDJ0QmyQ0fmuZPAgCYQ==
+ dependencies:
+ chalk "^4.1.0"
+ consola "^2.15.3"
+ pretty-time "^1.1.0"
+ std-env "^3.0.1"
+
+websocket-driver@>=0.5.1, websocket-driver@^0.7.4:
+ version "0.7.4"
+ resolved "https://registry.yarnpkg.com/websocket-driver/-/websocket-driver-0.7.4.tgz#89ad5295bbf64b480abcba31e4953aca706f5760"
+ integrity sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==
+ dependencies:
+ http-parser-js ">=0.5.1"
+ safe-buffer ">=5.1.0"
+ websocket-extensions ">=0.1.1"
+
+websocket-extensions@>=0.1.1:
+ version "0.1.4"
+ resolved "https://registry.yarnpkg.com/websocket-extensions/-/websocket-extensions-0.1.4.tgz#7f8473bc839dfd87608adb95d7eb075211578a42"
+ integrity sha512-OqedPIGOfsDlo31UNwYbCFMSaO9m9G/0faIHj5/dZFDMFqPTcx6UwqyOy3COEaEOg/9VsGIpdqn62W5KhoKSpg==
+
+whatwg-url@^5.0.0:
+ version "5.0.0"
+ resolved "https://registry.yarnpkg.com/whatwg-url/-/whatwg-url-5.0.0.tgz#966454e8765462e37644d3626f6742ce8b70965d"
+ integrity sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==
+ dependencies:
+ tr46 "~0.0.3"
+ webidl-conversions "^3.0.0"
+
+which@^1.2.14, which@^1.3.1:
+ version "1.3.1"
+ resolved "https://registry.yarnpkg.com/which/-/which-1.3.1.tgz#a45043d54f5805316da8d62f9f50918d3da70b0a"
+ integrity sha512-HxJdYWq1MTIQbJ3nw0cqssHoTNU267KlrDuGZ1WYlxDStUtKUhOaJmh112/TZmHxxUfuJqPXSOm7tDyas0OSIQ==
+ dependencies:
+ isexe "^2.0.0"
+
+which@^2.0.1:
+ version "2.0.2"
+ resolved "https://registry.yarnpkg.com/which/-/which-2.0.2.tgz#7c6a8dd0a636a0327e10b59c9286eee93f3f51b1"
+ integrity sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==
+ dependencies:
+ isexe "^2.0.0"
+
+widest-line@^3.1.0:
+ version "3.1.0"
+ resolved "https://registry.yarnpkg.com/widest-line/-/widest-line-3.1.0.tgz#8292333bbf66cb45ff0de1603b136b7ae1496eca"
+ integrity sha512-NsmoXalsWVDMGupxZ5R08ka9flZjjiLvHVAWYOKtiKM8ujtZWr9cRffak+uSE48+Ob8ObalXpwyeUiyDD6QFgg==
+ dependencies:
+ string-width "^4.0.0"
+
+widest-line@^4.0.1:
+ version "4.0.1"
+ resolved "https://registry.yarnpkg.com/widest-line/-/widest-line-4.0.1.tgz#a0fc673aaba1ea6f0a0d35b3c2795c9a9cc2ebf2"
+ integrity sha512-o0cyEG0e8GPzT4iGHphIOh0cJOV8fivsXxddQasHPHfoZf1ZexrfeA21w2NaEN1RHE+fXlfISmOE8R9N3u3Qig==
+ dependencies:
+ string-width "^5.0.1"
+
+wildcard@^2.0.0:
+ version "2.0.0"
+ resolved "https://registry.yarnpkg.com/wildcard/-/wildcard-2.0.0.tgz#a77d20e5200c6faaac979e4b3aadc7b3dd7f8fec"
+ integrity sha512-JcKqAHLPxcdb9KM49dufGXn2x3ssnfjbcaQdLlfZsL9rH9wgDQjUtDxbo8NE0F6SFvydeu1VhZe7hZuHsB2/pw==
+
+word-wrap@^1.0.3:
+ version "1.2.3"
+ resolved "https://registry.yarnpkg.com/word-wrap/-/word-wrap-1.2.3.tgz#610636f6b1f703891bd34771ccb17fb93b47079c"
+ integrity sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ==
+
+wordwrap@^1.0.0:
+ version "1.0.0"
+ resolved "https://registry.yarnpkg.com/wordwrap/-/wordwrap-1.0.0.tgz#27584810891456a4171c8d0226441ade90cbcaeb"
+ integrity sha512-gvVzJFlPycKc5dZN4yPkP8w7Dc37BtP1yczEneOb4uq34pXZcvrtRTmWV8W+Ume+XCxKgbjM+nevkyFPMybd4Q==
+
+wrap-ansi@^7.0.0:
+ version "7.0.0"
+ resolved "https://registry.yarnpkg.com/wrap-ansi/-/wrap-ansi-7.0.0.tgz#67e145cff510a6a6984bdf1152911d69d2eb9e43"
+ integrity sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==
+ dependencies:
+ ansi-styles "^4.0.0"
+ string-width "^4.1.0"
+ strip-ansi "^6.0.0"
+
+wrap-ansi@^8.0.1:
+ version "8.0.1"
+ resolved "https://registry.yarnpkg.com/wrap-ansi/-/wrap-ansi-8.0.1.tgz#2101e861777fec527d0ea90c57c6b03aac56a5b3"
+ integrity sha512-QFF+ufAqhoYHvoHdajT/Po7KoXVBPXS2bgjIam5isfWJPfIOnQZ50JtUiVvCv/sjgacf3yRrt2ZKUZ/V4itN4g==
+ dependencies:
+ ansi-styles "^6.1.0"
+ string-width "^5.0.1"
+ strip-ansi "^7.0.1"
+
+wrappy@1:
+ version "1.0.2"
+ resolved "https://registry.yarnpkg.com/wrappy/-/wrappy-1.0.2.tgz#b5243d8f3ec1aa35f1364605bc0d1036e30ab69f"
+ integrity sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==
+
+write-file-atomic@^3.0.0:
+ version "3.0.3"
+ resolved "https://registry.yarnpkg.com/write-file-atomic/-/write-file-atomic-3.0.3.tgz#56bd5c5a5c70481cd19c571bd39ab965a5de56e8"
+ integrity sha512-AvHcyZ5JnSfq3ioSyjrBkH9yW4m7Ayk8/9My/DD9onKeu/94fwrMocemO2QAJFAlnnDN+ZDS+ZjAR5ua1/PV/Q==
+ dependencies:
+ imurmurhash "^0.1.4"
+ is-typedarray "^1.0.0"
+ signal-exit "^3.0.2"
+ typedarray-to-buffer "^3.1.5"
+
+ws@^7.3.1:
+ version "7.5.9"
+ resolved "https://registry.yarnpkg.com/ws/-/ws-7.5.9.tgz#54fa7db29f4c7cec68b1ddd3a89de099942bb591"
+ integrity sha512-F+P9Jil7UiSKSkppIiD94dN07AwvFixvLIj1Og1Rl9GGMuNipJnV9JzjD6XuqmAeiswGvUmNLjr5cFuXwNS77Q==
+
+ws@^8.4.2:
+ version "8.8.1"
+ resolved "https://registry.yarnpkg.com/ws/-/ws-8.8.1.tgz#5dbad0feb7ade8ecc99b830c1d77c913d4955ff0"
+ integrity sha512-bGy2JzvzkPowEJV++hF07hAD6niYSr0JzBNo/J29WsB57A2r7Wlc1UFcTR9IzrPvuNVO4B8LGqF8qcpsVOhJCA==
+
+xdg-basedir@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/xdg-basedir/-/xdg-basedir-4.0.0.tgz#4bc8d9984403696225ef83a1573cbbcb4e79db13"
+ integrity sha512-PSNhEJDejZYV7h50BohL09Er9VaIefr2LMAf3OEmpCkjOi34eYyQYAXUTjEQtZJTKcF0E2UKTh+osDLsgNim9Q==
+
+xml-js@^1.6.11:
+ version "1.6.11"
+ resolved "https://registry.yarnpkg.com/xml-js/-/xml-js-1.6.11.tgz#927d2f6947f7f1c19a316dd8eea3614e8b18f8e9"
+ integrity sha512-7rVi2KMfwfWFl+GpPg6m80IVMWXLRjO+PxTq7V2CDhoGak0wzYzFgUY2m4XJ47OGdXd8eLE8EmwfAmdjw7lC1g==
+ dependencies:
+ sax "^1.2.4"
+
+xtend@^4.0.0, xtend@^4.0.1, xtend@~4.0.1:
+ version "4.0.2"
+ resolved "https://registry.yarnpkg.com/xtend/-/xtend-4.0.2.tgz#bb72779f5fa465186b1f438f674fa347fdb5db54"
+ integrity sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==
+
+y18n@^5.0.5:
+ version "5.0.8"
+ resolved "https://registry.yarnpkg.com/y18n/-/y18n-5.0.8.tgz#7f4934d0f7ca8c56f95314939ddcd2dd91ce1d55"
+ integrity sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==
+
+yallist@^4.0.0:
+ version "4.0.0"
+ resolved "https://registry.yarnpkg.com/yallist/-/yallist-4.0.0.tgz#9bb92790d9c0effec63be73519e11a35019a3a72"
+ integrity sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==
+
+yaml@^1.10.0, yaml@^1.10.2, yaml@^1.7.2:
+ version "1.10.2"
+ resolved "https://registry.yarnpkg.com/yaml/-/yaml-1.10.2.tgz#2301c5ffbf12b467de8da2333a459e29e7920e4b"
+ integrity sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg==
+
+yargs-parser@^20.2.2, yargs-parser@^20.2.3:
+ version "20.2.9"
+ resolved "https://registry.yarnpkg.com/yargs-parser/-/yargs-parser-20.2.9.tgz#2eb7dc3b0289718fc295f362753845c41a0c94ee"
+ integrity sha512-y11nGElTIV+CT3Zv9t7VKl+Q3hTQoT9a1Qzezhhl6Rp21gJ/IVTW7Z3y9EWXhuUBC2Shnf+DX0antecpAwSP8w==
+
+yargs@^16.0.0, yargs@^16.2.0:
+ version "16.2.0"
+ resolved "https://registry.yarnpkg.com/yargs/-/yargs-16.2.0.tgz#1c82bf0f6b6a66eafce7ef30e376f49a12477f66"
+ integrity sha512-D1mvvtDG0L5ft/jGWkLpG1+m0eQxOfaBvTNELraWj22wSVUMWxZUvYgJYcKh6jGGIkJFhH4IZPQhR4TKpc8mBw==
+ dependencies:
+ cliui "^7.0.2"
+ escalade "^3.1.1"
+ get-caller-file "^2.0.5"
+ require-directory "^2.1.1"
+ string-width "^4.2.0"
+ y18n "^5.0.5"
+ yargs-parser "^20.2.2"
+
+yn@3.1.1:
+ version "3.1.1"
+ resolved "https://registry.yarnpkg.com/yn/-/yn-3.1.1.tgz#1e87401a09d767c1d5eab26a6e4c185182d2eb50"
+ integrity sha512-Ux4ygGWsu2c7isFWe8Yu1YluJmqVhxqK2cLXNQA5AcC3QfbGNpM7fu0Y8b/z16pXLnFxZYvWhd3fhBY9DLmC6Q==
+
+yocto-queue@^0.1.0:
+ version "0.1.0"
+ resolved "https://registry.yarnpkg.com/yocto-queue/-/yocto-queue-0.1.0.tgz#0294eb3dee05028d31ee1a5fa2c556a6aaf10a1b"
+ integrity sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==
+
+zwitch@^1.0.0:
+ version "1.0.5"
+ resolved "https://registry.yarnpkg.com/zwitch/-/zwitch-1.0.5.tgz#d11d7381ffed16b742f6af7b3f223d5cd9fe9920"
+ integrity sha512-V50KMwwzqJV0NpZIZFwfOD5/lyny3WlSzRiXgA0G7VUnRlqttta1L6UQIHzd6EuBY/cHGfwTIck7w1yH6Q5zUw==