| = Installation on Amazon Elastic Kubernetes Service (EKS) Quick Start |
| :page-toclevels: 3 |
| |
| include::partial$include.adoc[] |
| |
NOTE: To install Akka Cloud Platform on Amazon Elastic Kubernetes Service (EKS), you must have an AWS account and a subscription to the {aws-marketplace}[Akka Cloud Platform {tab-icon}, window="tab"].
| |
| == Expected Time to Install |
| |
A new installation takes approximately 40 minutes to 1 hour, depending on which components are installed.
| By default, the following steps will be executed: |
| |
| 1. Create an Amazon EKS cluster |
| 2. Install the Kubernetes Metrics Server |
| 3. Install the Akka Cloud Platform Helm chart |
| 4. Install the Grafana Helm chart |
5. Install the Prometheus Helm chart
| 6. Set up an Amazon Managed Streaming for Apache Kafka (MSK) cluster |
| 7. Set up an Amazon Aurora Relational Database Service (RDS) cluster |
| 8. Optional: Install the https://aws-otel.github.io/[AWS OTel Collector {tab-icon}, window="tab"] to see application traces in the X-Ray Console |
| |
| :sectnums: |
| |
| == Verify Prerequisites |
| |
| Before installation, you must subscribe to the {aws-marketplace}[Akka Cloud Platform {tab-icon}, window="tab"] on AWS Marketplace by clicking the Subscribe button on the product page and accepting the terms and conditions. |
| |
| To install and use the Akka Cloud Platform, you must have the following tools installed. We recommend using the latest versions of these tools. If you do not have them yet or need to update, we provide links to installation instructions: |
| |
| * The https://git-scm.com/book/en/v2/Getting-Started-Installing-Git[`git` {tab-icon}, window="tab"] command-line tool to clone the Pulumi Playbook. |
| |
* The https://docs.npmjs.com/downloading-and-installing-node-js-and-npm[`npm` {tab-icon}, window="tab"] command-line tool to download and set up the NPM dependencies of the Pulumi Playbook.
| |
| * The https://www.pulumi.com/[Pulumi {tab-icon}, window="tab"] cloud engineering (provisioning) tool to run the Playbook. |
| |
In addition, to operate the cluster, you need to set up:
| |
| * The Kubernetes command-line tool, `kubectl`, allows you to run commands against Kubernetes clusters. Follow the instructions in the https://kubernetes.io/docs/tasks/tools/#kubectl[Kubernetes documentation {tab-icon}, window="tab"] to install `kubectl`. |
| |
| NOTE: We recommend Node 14 or later. |
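
As a quick sanity check, you can confirm the tools are available on your `PATH` (the exact versions reported will vary):

[source,shell script]
----
git --version
node --version   # should report v14 or later
npm --version
pulumi version
kubectl version --client
----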
| |
== Log in to your AWS account
| |
If you are a first-time AWS user, register an account at https://aws.amazon.com/[https://aws.amazon.com/ {tab-icon}, window="tab"].
| |
. Navigate to the https://console.aws.amazon.com/iam/home?#/users[AWS Identity and Access Management (IAM) console {tab-icon}, window="tab"] to create a user, and create access keys under the Security Credentials tab.
| |
. Open a terminal and, from a command prompt, use the `aws` tool to configure the credentials:
| |
| [source,shell script] |
| ---- |
| aws configure |
| ---- |
| |
| == Quick Start with Pulumi Playbook |
| |
| Pulumi is a cloud provisioning tool used to set up cloud infrastructure and Kubernetes cluster resources. |
| The fastest way to get started with Akka Cloud Platform on EKS is to run the Akka Cloud Platform Deployment Pulumi playbook. |
| |
| To begin, download and install the https://www.pulumi.com/docs/get-started/install/[Pulumi {tab-icon}, window="tab"] cloud engineering (provisioning) tool for your platform. |
| |
| Download (clone) the https://github.com/lightbend/akka-cloud-platform-deploy[Akka Cloud Platform Deployment Pulumi Playbook {tab-icon}, window="tab"] with https://git-scm.com/book/en/v2/Getting-Started-Installing-Git[`git` {tab-icon}, window="tab"]. |
| |
| [source,shell script] |
| ---- |
| git clone https://github.com/lightbend/akka-cloud-platform-deploy |
| ---- |
| |
| Navigate to the base directory of the cloned repository in a terminal. |
| |
| [source,shell script] |
| ---- |
| cd akka-cloud-platform-deploy |
| ---- |
| |
| The same repository contains the scripts for AWS and GCP deployment. Navigate to the `aws` folder. |
| |
| [source,shell script] |
| ---- |
| cd aws |
| ---- |
| |
| Install `npm` dependencies: |
| |
| [source,shell script] |
| ---- |
| npm install |
| ---- |
| |
| Next you will initialize a https://www.pulumi.com/docs/intro/concepts/stack/[Pulumi Stack {tab-icon}, window="tab"]. |
| A stack is an instance of the deployed Pulumi playbook and its associated state. |
| For example, you might use different stacks to represent different environments such as `dev`, `testing`, and `staging`. |
By default, state is managed using Pulumi's state management service (a remote service), but you can choose a https://www.pulumi.com/docs/intro/concepts/state/#backends[self-managed state backend {tab-icon}, window="tab"] if you prefer.
Using Pulumi's service is the fastest way to get started and provides several other benefits, such as sharing state with others, viewing the progress of a deployment, and viewing the output variables and the resulting cloud resources that were created.
| To learn more about https://www.pulumi.com/docs/intro/concepts/stack/[Pulumi Stack {tab-icon}, window="tab"] and https://www.pulumi.com/docs/intro/concepts/state/[Pulumi State Management {tab-icon}, window="tab"] see the Pulumi documentation. |
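
For example, to switch to a self-managed backend that keeps state on your local filesystem, log in to it before creating the stack:

[source,shell script]
----
pulumi login --local
----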
| |
| Initialize a new Pulumi stack. |
| |
| [source,shell script] |
| ---- |
| pulumi stack init |
| ---- |
| |
| Choose an AWS region to use, for example: |
| |
| [source,shell script] |
| ---- |
| pulumi config set aws:region eu-central-1 |
| ---- |
| |
When you run `pulumi up` (deploy), all the resources defined in the playbook are provisioned to your target environment.
In almost all cases resource names start with the prefix `acp-<STACK NAME>`, where `acp` stands for "Akka Cloud Platform" and `<STACK NAME>` is the name you chose for the Pulumi stack.
Most resource names end with a randomly generated alphanumeric string.
| |
| When you run `pulumi up` you will be prompted with a list of resources that the playbook will create. |
| Select `yes` to deploy the stack to AWS. |
| |
| Preview and provision the playbook. |
| |
| [source,shell script] |
| ---- |
| pulumi up |
| ---- |
| |
While the cluster is provisioning, the terminal reports which resources are being created or have already been created.
If you are using the Pulumi service for state management, the `pulumi up` command includes a "View Live" link where you can follow the progress of the deployment.
The `pulumi up` command exits when all resources are fully deployed and online.
| |
If a failure occurs, the command exits early and reports the reason.
If you can correct the issue yourself, rerun `pulumi up` to resume the update where it left off.
If the error can't be resolved, you can use the `pulumi destroy` command to roll back any partially deployed resources.
| |
| If the deployment is successful, all of the pertinent details of the cluster will be output to the console (except for passwords). |
| |
| After executing `pulumi up` you will see something like the following: |
| |
| [source,shell script] |
| ---- |
| Previewing update (<STACK NAME>) |
| |
| View Live: https://app.pulumi.com/<USER NAME>/akka-cloud-platform-deploy/<STACK NAME>/previews/f8c74175-2815-462a-9925-78022ad1c49d |
| |
| Type Name Plan |
| + pulumi:pulumi:Stack akka-cloud-platform-deploy-<STACK NAME> create |
| |
| ... |
| |
| Resources: |
| + 140 to create |
| |
| Do you want to perform this update? yes |
| |
| Updating (<STACK NAME>) |
| |
| View Live: https://app.pulumi.com/<USER NAME>/akka-cloud-platform-deploy/<STACK NAME>/updates/18 |
| |
| Type Name Status |
| + pulumi:pulumi:Stack akka-cloud-platform-deploy-<STACK NAME> created |
| |
| ... |
| ---- |
| |
When the stack is deployed successfully, a list of outputs is presented. Example outputs for a stack named `dev`:
| |
| [source,shell script] |
| ---- |
| Outputs: |
| clusterName : "acp-<STACK NAME>-eks-eksCluster-6b31eca" |
| jdbcClusterId : "cluster-ZJODO3OADYNKFPZVVEMAOMTYLA" |
| jdbcEndpoint : "acp-<STACK NAME>-rds.cluster-c4wajgyncibh.us-west-2.rds.amazonaws.com" |
| jdbcPassword : "[secret]" |
| jdbcReaderEndpoint : "acp-<STACK NAME>-rds.cluster-ro-c4wajgyncibh.us-west-2.rds.amazonaws.com" |
| jdbcSecret : "acp-<STACK NAME>-jdbc-secret" |
| jdbcUsername : "postgres" |
| kafkaBootstrapBrokers : "b-1.acp-<STACK NAME>.9fhrqh.c5.kafka.us-west-2.amazonaws.com:9092,b-2.acp-<STACK NAME>.9fhrqh.c5.kafka.us-west-2.amazonaws.com:9092" |
| kafkaBootstrapBrokersTls : "b-1.acp-<STACK NAME>.9fhrqh.c5.kafka.us-west-2.amazonaws.com:9094,b-2.acp-<STACK NAME>.9fhrqh.c5.kafka.us-west-2.amazonaws.com:9094" |
| kafkaBootstrapServerSecret : "acp-<STACK NAME>-kafka-secret" |
| kafkaZookeeperConnectString: "z-1.acp-<STACK NAME>.9fhrqh.c5.kafka.us-west-2.amazonaws.com:2181,z-2.acp-<STACK NAME>.9fhrqh.c5.kafka.us-west-2.amazonaws.com:2181,z-3.acp-<STACK NAME>.9fhrqh.c5.kafka.us-west-2.amazonaws.com:2181" |
| kubeconfig : { |
| ... |
| } |
| operatorNamespace : "lightbend" |
| |
| Resources: |
| + 150 created |
| |
| Duration: 40m12s |
| ---- |
| |
These output variables are required to access your cluster and associated resources.
You can reference the variables manually, or access their values at the command line with `pulumi stack output <OUTPUT KEY>`.
The full list of xref:deployment:aws-install-quickstart.adoc#output-variables[output variables] appears later on this page.
| |
| For example, to return the Kubernetes namespace that the Akka Cloud Platform Operator is installed in: |
| |
[source,shell script]
| ---- |
| $ pulumi stack output operatorNamespace |
| lightbend |
| ---- |
| |
| To destroy the cluster and all its dependencies: |
| |
| [source,shell script] |
| ---- |
| pulumi destroy |
| ---- |
| |
| NOTE: You may need to run `pulumi destroy` a couple of times to destroy all resources effectively. You can run with a `--refresh` flag to refresh the state of the stack's resources before destroying resources. For example, `pulumi destroy --refresh`. |
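
After `pulumi destroy` completes, you can optionally remove the now-empty stack and its state from the backend:

[source,shell script]
----
pulumi stack rm <STACK NAME>
----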
| |
| === Connect to the Amazon EKS Cluster |
| |
To set up your local `kubectl` configuration, export the Pulumi `kubeconfig` output to a file and point the `KUBECONFIG` environment variable at it:
| |
| [source,shell script] |
| ---- |
| pulumi stack output kubeconfig > kubeconfig.yml |
export KUBECONFIG=$PWD/kubeconfig.yml
| ---- |
| |
Or, if you have the `aws` command-line tool, you can use `aws eks update-kubeconfig` to update your `\~/.kube/config`.
| |
| [source,shell script] |
| ---- |
| aws eks update-kubeconfig --region $(pulumi config get "aws:region") --name $(pulumi stack output clusterName) |
| ---- |
| |
| === Connect to the MSK Kafka cluster |
| |
| The `confluentinc/cp-kafka` Docker image can be used to manage Kafka state. |
| |
| For example, to create a Kafka `shopping-cart-events` topic: |
| |
| [source,shell script] |
| ---- |
| kubectl run -i --tty kafka-mgmt --image=confluentinc/cp-kafka --restart=Never --rm -- \ |
| kafka-topics \ |
| --bootstrap-server="$(pulumi stack output kafkaBootstrapBrokers)" \ |
| --create \ |
| --topic shopping-cart-events \ |
| --replication-factor 2 \ |
| --partitions 4 |
| ---- |
| |
| [NOTE] |
| ==== |
| Kafka Kubernetes secrets are created in the Akka Cloud Platform Operator namespace. |
| These must be copied to each microservice namespace that references them. |
| Here is one way to copy a resource from one namespace to another: |
| |
| [source,shell script] |
| ---- |
| OPERATOR_NAMESPACE=$(pulumi stack output operatorNamespace) |
| KAFKA_SECRET=$(pulumi stack output kafkaBootstrapServerSecret) |
| APP_NAMESPACE=<APP NAMESPACE> |
APP_KAFKA_SECRET=<APP SECRET NAME>
| kubectl get secret $KAFKA_SECRET -n $OPERATOR_NAMESPACE -o yaml | \ |
| sed s/"namespace: $OPERATOR_NAMESPACE"/"namespace: $APP_NAMESPACE"/ | \ |
| sed s/"name: $KAFKA_SECRET"/"name: $APP_KAKFA_SECRET"/ | \ |
| kubectl apply -n $APP_NAMESPACE -f - |
| ---- |
| ==== |
| |
| [#connect-aurora-database] |
| === Connect to the Aurora RDS database |
| |
To open an interactive Postgres shell connected to the Aurora database:
| |
| [source,shell script] |
| ---- |
| kubectl run -i --tty rds-mgmt --image=postgres --restart=Never --rm \ |
| --env=PGPASSWORD=$(pulumi stack output jdbcPassword --show-secrets) -- \ |
| psql -h $(pulumi stack output jdbcEndpoint) -U postgres |
| ---- |
| |
| To import a DDL script: |
| |
| [source,shell script] |
| ---- |
| kubectl run -i rds-mgmt --image=postgres --restart=Never --rm \ |
| --env=PGPASSWORD=$(pulumi stack output jdbcPassword --show-secrets) -- \ |
| psql -h $(pulumi stack output jdbcEndpoint) -U postgres -t < create_tables.sql |
| ---- |
| |
| [NOTE] |
| ==== |
| JDBC (RDS) Kubernetes secrets are created in the Akka Cloud Platform Operator namespace. |
| These must be copied to each microservice namespace that references them. |
| Here is one way to copy a resource from one namespace to another: |
| |
| [source,shell script] |
| ---- |
| OPERATOR_NAMESPACE=$(pulumi stack output operatorNamespace) |
| JDBC_SECRET=$(pulumi stack output jdbcSecret) |
| APP_NAMESPACE=<APP NAMESPACE> |
| APP_JDBC_SECRET=<APP SECRET NAME> |
| kubectl get secret $JDBC_SECRET -n $OPERATOR_NAMESPACE -o yaml | \ |
| sed s/"namespace: $OPERATOR_NAMESPACE"/"namespace: $APP_NAMESPACE"/ | \ |
| sed s/"name: $JDBC_SECRET"/"name: $APP_JDBC_SECRET"/ | \ |
| kubectl apply -n $APP_NAMESPACE -f - |
| ---- |
| ==== |
| |
| [#install-aws-otel-collector] |
| === Install AWS OTel Collector to see application traces in X-Ray Console |
| |
| The AWS OTel Collector collects traces from applications instrumented with xref:telemetry:index.adoc[Lightbend Telemetry] 2.16.1+ and sends them to X-Ray. |
| |
NOTE: If you don't use Pulumi, follow the https://aws-otel.github.io/docs/getting-started/collector[Getting Started with the AWS Distro for OpenTelemetry Collector {tab-icon}, window="tab"] instructions.
| |
Opt in to install the AWS OTel Collector into your EKS cluster:
| |
| [source,shell script] |
| ---- |
| pulumi config set otel.collector.install true |
| ---- |
| |
| Choose an AWS region for X-Ray if it differs from `aws:region`: |
| |
| [source,shell script] |
| ---- |
| pulumi config set xray.region eu-central-1 |
| ---- |
| |
| Set AWS X-Ray credentials: |
| |
| [source,shell script] |
| ---- |
| pulumi config set xray.access-key-id <your access key id> |
| pulumi config set xray.secret-access-key <your secret access key> |
| ---- |
| |
After that, run `pulumi up` to install the AWS OTel Collector. If your cluster already exists, the playbook installs the collector only if it is not present yet or if its configuration has changed.
| |
Once the AWS OTel Collector is deployed, you can retrieve the `awsOTelCollectorServiceEndpoint` output. You need this endpoint to configure your application to report traces; traces are reported to the collector's Zipkin endpoint, which you set in the Lightbend Telemetry configuration:
| |
| [source,shell script] |
| ---- |
| $ pulumi stack output awsOTelCollectorServiceEndpoint |
| aws-otel-collector-svc.aws-otel-collector.svc.cluster.local |
| ---- |
| |
To see the traces in the X-Ray Console, you need to set up Lightbend Telemetry for your application and configure it to produce traces. See xref:telemetry:aws-xray-support.adoc[AWS X-Ray tracing support] for details.
| |
| include::partial$connect-to-grafana.adoc[] |
| |
| [#output-variables] |
| == Output Variables |
| |
To print all output variables for a deployed stack, run the following command:
| |
| [source,shell script] |
| ---- |
| pulumi stack output |
| ---- |
| |
To output the value of a single variable, use the same command and append the variable name. For example, to return the `clusterName`:
| |
| [source,shell script] |
| ---- |
| pulumi stack output clusterName |
| ---- |
| |
| The following output variables are available: |
| |
| Amazon EKS |
| |
| * `clusterName` - Cluster name |
| * `kubeconfig` - Complete contents of a `KUBECONFIG` configuration for the provisioned cluster |
| * `operatorNamespace` - Kubernetes namespace where the Akka Cloud Platform operator is running |
| |
| Amazon Aurora Cluster |
| |
| * `jdbcClusterId` - Cluster ID |
| * `jdbcUsername` - Master username |
| * `jdbcPassword` - Master password. This is redacted in standard console output. To retrieve the value use the `pulumi stack output` command. For example, to assign the password to an environment variable: `DB_PASSWD=$(pulumi stack output jdbcPassword --show-secrets)`. |
| * `jdbcEndpoint` - Database writer endpoint hostname |
| * `jdbcReaderEndpoint` - Database reader endpoint hostname |
| * `jdbcSecret` - The Kubernetes Secret (in `operatorNamespace` namespace) containing relational database connection details. |
| |
| AWS MSK Kafka Cluster |
| |
| * `kafkaBootstrapBrokers` - Kafka bootstrap servers for `PLAINTEXT` connection (no TLS encryption) |
| * `kafkaBootstrapBrokersTls` - Kafka bootstrap servers for `TLS` connection |
| * `kafkaZookeeperConnectString` - Kafka ZooKeeper quorum connection string |
| * `kafkaBootstrapServerSecret` - The Kubernetes Secret (in `operatorNamespace` namespace) containing the `PLAINTEXT` Kafka bootstrap servers connection string. |
| |
| AWS OTel Collector |
| |
| * `awsOTelCollectorServiceEndpoint` - AWS OTel Collector service endpoint. Supports Zipkin protocol for traces. |
| |
== Pulumi Playbook Configuration
| |
| You can configure the playbook with the following syntax at the command line: |
| |
| [source,shell script] |
| ---- |
pulumi config set <CONFIG KEY> <CONFIG VALUE>
| ---- |
| |
| Or by populating the stack's `Pulumi.<STACK NAME>.yaml`. |
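
For illustration, a minimal `Pulumi.dev.yaml` could look like the following. This sketch assumes the Pulumi project is named `akka-cloud-platform-deploy` (as shown in the update output above); playbook-specific keys are prefixed with the project name, while `aws:region` belongs to the AWS provider:

[source,yaml]
----
config:
  aws:region: eu-central-1
  akka-cloud-platform-deploy:akka.operator.installTelemetryBackends: "true"
----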
| |
.Configuration
| |=== |
| |Configuration Key | Description | Required | Default value |
| |
| | `aws:region` |
| | The AWS region for the Amazon EKS cluster and other services. |
| | Yes |
| | `-` |
| |
| | `akka.operator.namespace` |
| | The Kubernetes namespace to install the Akka Cloud Platform operator into. |
| | No |
| | `"lightbend"` |
| |
| | `akka.operator.version` |
| | The version of the Akka Cloud Platform operator to install. |
| | No |
| | `"1.1.22"` |
| |
| | `akka.operator.installTelemetryBackends` |
| | Whether to install Lightbend Telemetry backends (Prometheus and Grafana) for monitoring. |
| | No |
| | `true` |
| |
| | `eks.kubernetes.version` |
| | The https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html[version of the Amazon EKS Kubernetes {tab-icon}, window="tab"] cluster. |
| | No |
| | `1.20` |
| |
| | `eks.kubernetes.node.desiredCapacity` |
| | The desired number of nodes in the Amazon EKS Kubernetes cluster. |
| | No |
| | `3` |
| |
| | `eks.kubernetes.node.minSize` |
| | The minimum number of nodes in the Amazon EKS Kubernetes cluster. |
| | No |
| | `1` |
| |
| | `eks.kubernetes.node.maxSize` |
| | The maximum number of nodes in the Amazon EKS Kubernetes cluster. |
| | No |
| | `4` |
| |
| | `vpc.numberOfAvailabilityZones` |
| | The number of Availability Zones in the Amazon EKS VPC. |
| | No |
| | `3` |
| |
| | `mks.createCluster` |
| | Whether to deploy a Kafka cluster (MSK) via a cloud provider managed service. |
| | No |
| | `true` |
| |
| | `msk.kafka.version` |
| | The https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html[Kafka version {tab-icon}, window="tab"] for Amazon MSK. |
| | No |
| | `2.8.0` |
| |
| | `msk.kafka.numberOfBrokerNodes` |
| | The number of Kafka broker nodes in the Amazon MSK cluster. |
| | No |
| | `2` |
| |
| | `msk.kafka.brokerNodeGroupInfo.instanceType` |
| | The Amazon EC2 instance type for the Kafka broker nodes in the Amazon MSK cluster. |
| | No |
| | `kafka.m5.large` |
| |
| | `msk.kafka.brokerNodeGroupInfo.ebsVolumeSize` |
| | The size of the EBS volume for the Kafka broker nodes in the Amazon MSK cluster. |
| | No |
| | `1000` |
| |
| | `msk.kafka.encryptionInfo.encryptionInTransit` |
| | https://www.pulumi.com/docs/reference/pkg/aws/msk/cluster/#clusterencryptioninfoencryptionintransit[Encryption setting {tab-icon}, window="tab"] for data in transit between clients and brokers. |
| | No |
| | `TLS_PLAINTEXT` |
| |
| | `rds.createCluster` |
| Whether to deploy a JDBC database (Aurora RDS) via a cloud provider managed service.
| | No |
| | `true` |
| |
| | `otel.collector.install` |
| Whether to install the AWS OTel Collector service to send traces to the AWS X-Ray Console.
| | No |
| | `false` |
| |
| | `otel.collector.namespace` |
| | AWS OTel Collector namespace. |
| | No |
| | `"aws-otel-collector"` |
| |
| | `otel.collector.debug` |
| Enable debug-level logging for the AWS OTel Collector service. When enabled, the collector writes logs to standard output.
| | No |
| | `false` |
| |
| | `xray.region` |
| | AWS X-Ray Region. By default the same as `aws:region`. |
| No
| | `aws:region` |
| |
| | `xray.access-key-id` |
| | AWS X-Ray Access Key ID. |
| Yes, when `otel.collector.install` is enabled
| | `-` |
| |
| | `xray.secret-access-key` |
| | AWS X-Ray Secret Access Key. |
| Yes, when `otel.collector.install` is enabled
| | `-` |
| |
| |=== |