[website][upgrade]feat: website upgrade / docs migration - 2.3.2 / started/functions/concepts (#13197)

Signed-off-by: LiLi <urfreespace@gmail.com>
diff --git a/site2/website-next/docusaurus.config.js b/site2/website-next/docusaurus.config.js
index 3098043..7603852 100644
--- a/site2/website-next/docusaurus.config.js
+++ b/site2/website-next/docusaurus.config.js
@@ -200,6 +200,10 @@
               to: "docs/2.4.0/"
             },
             {
+              label: "2.3.2",
+              to: "docs/2.3.2/"
+            },
+            {
               label: "2.2.0",
               to: "docs/2.2.0/",
             },
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-architecture-overview.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-architecture-overview.md
new file mode 100644
index 0000000..6a501d2
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-architecture-overview.md
@@ -0,0 +1,172 @@
+---
+id: concepts-architecture-overview
+title: Architecture Overview
+sidebar_label: "Architecture"
+original_id: concepts-architecture-overview
+---
+
+At the highest level, a Pulsar instance is composed of one or more Pulsar clusters. Clusters within an instance can [replicate](concepts-replication) data amongst themselves.
+
+In a Pulsar cluster:
+
+* One or more brokers handle and load balance incoming messages from producers, dispatch messages to consumers, communicate with the Pulsar configuration store to handle various coordination tasks, store messages in BookKeeper instances (aka bookies), rely on a cluster-specific ZooKeeper cluster for certain tasks, and more.
+* A BookKeeper cluster consisting of one or more bookies handles [persistent storage](#persistent-storage) of messages.
+* A ZooKeeper cluster specific to that Pulsar cluster handles cluster-level configuration and coordination tasks.
+
+The diagram below provides an illustration of a Pulsar cluster:
+
+![Pulsar architecture diagram](/assets/pulsar-system-architecture.png)
+
+At the broader instance level, an instance-wide ZooKeeper cluster called the configuration store handles coordination tasks involving multiple clusters, for example [geo-replication](concepts-replication).
+
+## Brokers
+
+The Pulsar message broker is a stateless component that's primarily responsible for running two other components:
+
+* An HTTP server that exposes a {@inject: rest:REST:/} API for both administrative tasks and [topic lookup](concepts-clients.md#client-setup-phase) for producers and consumers. The producers connect to the brokers to publish messages and the consumers connect to the brokers to consume the messages.
+* A dispatcher, which is an asynchronous TCP server over a custom [binary protocol](developing-binary-protocol) used for all data transfers
+
+Messages are typically dispatched out of a [managed ledger](#managed-ledgers) cache for the sake of performance, *unless* the backlog exceeds the cache size. If the backlog grows too large for the cache, the broker will start reading entries from BookKeeper.
+
+Finally, to support geo-replication on global topics, the broker manages replicators that tail the entries published in the local region and republish them to the remote region using the Pulsar [Java client library](client-libraries-java).
+
+> For a guide to managing Pulsar brokers, see the [brokers](admin-api-brokers) guide.
+
+## Clusters
+
+A Pulsar instance consists of one or more Pulsar *clusters*. Clusters, in turn, consist of:
+
+* One or more Pulsar [brokers](#brokers)
+* A ZooKeeper quorum used for cluster-level configuration and coordination
+* An ensemble of bookies used for [persistent storage](#persistent-storage) of messages
+
+Clusters can replicate amongst themselves using [geo-replication](concepts-replication).
+
+> For a guide to managing Pulsar clusters, see the [clusters](admin-api-clusters) guide.
+
+## Metadata store
+
+The Pulsar metadata store maintains all the metadata of a Pulsar cluster, such as topic metadata, schema, broker load data, and so on. Pulsar uses [Apache ZooKeeper](https://zookeeper.apache.org/) for metadata storage, cluster configuration, and coordination. The Pulsar metadata store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster. You can use one ZooKeeper cluster for both Pulsar metadata store and [BookKeeper metadata store](https://bookkeeper.apache.org/docs/latest/getting-started/concepts/#metadata-storage). If you want to deploy Pulsar brokers connected to an existing BookKeeper cluster, you need to deploy separate ZooKeeper clusters for Pulsar metadata store and BookKeeper metadata store respectively.
+
+In a Pulsar instance:
+
+* A configuration store quorum stores configuration for tenants, namespaces, and other entities that need to be globally consistent.
+* Each cluster has its own local ZooKeeper ensemble that stores cluster-specific configuration and coordination such as which brokers are responsible for which topics as well as ownership metadata, broker load reports, BookKeeper ledger metadata, and more.
+
+## Configuration store
+
+The configuration store maintains all the configurations of a Pulsar instance, such as clusters, tenants, namespaces, partitioned topic related configurations, and so on. A Pulsar instance can have a single local cluster, multiple local clusters, or multiple cross-region clusters. Consequently, the configuration store can share the configurations across multiple clusters under a Pulsar instance. The configuration store can be deployed on a separate ZooKeeper cluster or deployed on an existing ZooKeeper cluster.
+
+## Persistent storage
+
+Pulsar provides guaranteed message delivery for applications. If a message successfully reaches a Pulsar broker, it will be delivered to its intended target.
+
+This guarantee requires that non-acknowledged messages are stored in a durable manner until they can be delivered to and acknowledged by consumers. This mode of messaging is commonly called *persistent messaging*. In Pulsar, N copies of all messages are stored and synced on disk, for example 4 copies across two servers with mirrored [RAID](https://en.wikipedia.org/wiki/RAID) volumes on each server.
+
+### Apache BookKeeper
+
+Pulsar uses a system called [Apache BookKeeper](http://bookkeeper.apache.org/) for persistent message storage. BookKeeper is a distributed [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging) (WAL) system that provides a number of crucial advantages for Pulsar:
+
+* It enables Pulsar to utilize many independent logs, called [ledgers](#ledgers). Multiple ledgers can be created for topics over time.
+* It offers very efficient storage for sequential data that handles entry replication.
+* It guarantees read consistency of ledgers in the presence of various system failures.
+* It offers even distribution of I/O across bookies.
+* It's horizontally scalable in both capacity and throughput. Capacity can be immediately increased by adding more bookies to a cluster.
+* Bookies are designed to handle thousands of ledgers with concurrent reads and writes. By using multiple disk devices---one for journal and another for general storage---bookies are able to isolate the effects of read operations from the latency of ongoing write operations.
+
+In addition to message data, *cursors* are also persistently stored in BookKeeper. Cursors are [subscription](reference-terminology.md#subscription) positions for [consumers](reference-terminology.md#consumer). BookKeeper enables Pulsar to store consumer position in a scalable fashion.
+
+At the moment, Pulsar supports persistent message storage. This accounts for the `persistent` in all topic names. Here's an example:
+
+```http
+
+persistent://my-tenant/my-namespace/my-topic
+
+```
+
+> Pulsar also supports ephemeral ([non-persistent](concepts-messaging.md#non-persistent-topics)) message storage.
+
+
+You can see an illustration of how brokers and bookies interact in the diagram below:
+
+![Brokers and bookies](/assets/broker-bookie.png)
+
+
+### Ledgers
+
+A ledger is an append-only data structure with a single writer that is assigned to multiple BookKeeper storage nodes, or bookies. Ledger entries are replicated to multiple bookies. Ledgers themselves have very simple semantics:
+
+* A Pulsar broker can create a ledger, append entries to the ledger, and close the ledger.
+* After the ledger has been closed---either explicitly or because the writer process crashed---it can then be opened only in read-only mode.
+* Finally, when entries in the ledger are no longer needed, the whole ledger can be deleted from the system (across all bookies).
+
+#### Ledger read consistency
+
+The main strength of BookKeeper is that it guarantees read consistency in ledgers in the presence of failures. Since the ledger can only be written to by a single process, that process is free to append entries very efficiently, without needing to obtain consensus. After a failure, the ledger will go through a recovery process that will finalize the state of the ledger and establish which entry was last committed to the log. After that point, all readers of the ledger are guaranteed to see the exact same content.
+
+#### Managed ledgers
+
+Given that BookKeeper ledgers provide a single log abstraction, a library was developed on top of the ledger called the *managed ledger* that represents the storage layer for a single topic. A managed ledger represents the abstraction of a stream of messages with a single writer that keeps appending at the end of the stream and multiple cursors that are consuming the stream, each with its own associated position.
+
+Internally, a single managed ledger uses multiple BookKeeper ledgers to store the data. There are two reasons to have multiple ledgers:
+
+1. After a failure, a ledger is no longer writable and a new one needs to be created.
+2. A ledger can be deleted when all cursors have consumed the messages it contains. This allows for periodic rollover of ledgers.
+
+### Journal storage
+
+In BookKeeper, *journal* files contain BookKeeper transaction logs. Before making an update to a [ledger](#ledgers), a bookie needs to ensure that a transaction describing the update is written to persistent (non-volatile) storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold (configured using the [`journalMaxSizeMB`](reference-configuration.md#bookkeeper-journalMaxSizeMB) parameter).
+
+## Pulsar proxy
+
+One way for Pulsar clients to interact with a Pulsar [cluster](#clusters) is by connecting to Pulsar message [brokers](#brokers) directly. In some cases, however, this kind of direct connection is either infeasible or undesirable because the client doesn't have direct access to broker addresses. If you're running Pulsar in a cloud environment or on [Kubernetes](https://kubernetes.io) or an analogous platform, for example, then direct client connections to brokers are likely not possible.
+
+The **Pulsar proxy** provides a solution to this problem by acting as a single gateway for all of the brokers in a cluster. If you run the Pulsar proxy (which is optional), all client connections with the Pulsar cluster will flow through the proxy rather than communicating with brokers directly.
+
+> For the sake of performance and fault tolerance, you can run as many instances of the Pulsar proxy as you'd like.
+
+Architecturally, the Pulsar proxy gets all the information it requires from ZooKeeper. When starting the proxy on a machine, you only need to provide ZooKeeper connection strings for the cluster-specific and instance-wide configuration store clusters. Here's an example:
+
+```bash
+
+$ bin/pulsar proxy \
+  --zookeeper-servers zk-0,zk-1,zk-2 \
+  --configuration-store-servers zk-0,zk-1,zk-2
+
+```
+
+> #### Pulsar proxy docs
+> For documentation on using the Pulsar proxy, see the [Pulsar proxy admin documentation](administration-proxy).
+
+
+Some important things to know about the Pulsar proxy:
+
+* Connecting clients don't need to provide *any* specific configuration to use the Pulsar proxy. You won't need to update the client configuration for existing applications beyond updating the IP used for the service URL (for example if you're running a load balancer over the Pulsar proxy).
+* [TLS encryption](security-tls-transport.md) and [authentication](security-tls-authentication) are supported by the Pulsar proxy
+
+## Service discovery
+
+[Clients](getting-started-clients) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions in the [Deploying a Pulsar instance](deploy-bare-metal.md#service-discovery-setup) guide.
+
+You can use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an endpoint, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+The diagram below illustrates Pulsar service discovery:
+
+![alt-text](/assets/pulsar-service-discovery.png)
+
+In this diagram, the Pulsar cluster is addressable via a single DNS name: `pulsar-cluster.acme.com`. A [Python client](client-libraries-python), for example, could access this Pulsar cluster like this:
+
+```python
+
+from pulsar import Client
+
+client = Client('pulsar://pulsar-cluster.acme.com:6650')
+
+```
+
+:::note
+
+In Pulsar, each topic is handled by only one broker. Initial requests from a client to read, update or delete a topic are sent to a broker that may not be the topic owner. If the broker cannot handle the request for this topic, it redirects the request to the appropriate broker.
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-authentication.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-authentication.md
new file mode 100644
index 0000000..b375ecb
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-authentication.md
@@ -0,0 +1,9 @@
+---
+id: concepts-authentication
+title: Authentication and Authorization
+sidebar_label: "Authentication and Authorization"
+original_id: concepts-authentication
+---
+
+Pulsar supports a pluggable [authentication](security-overview.md) mechanism which can be configured at the proxy and/or the broker. Pulsar also supports a pluggable [authorization](security-authorization) mechanism. These mechanisms work together to identify the client and its access rights on topics, namespaces and tenants.
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-clients.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-clients.md
new file mode 100644
index 0000000..8751fc2
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-clients.md
@@ -0,0 +1,87 @@
+---
+id: concepts-clients
+title: Pulsar Clients
+sidebar_label: "Clients"
+original_id: concepts-clients
+---
+
+Pulsar exposes a client API with language bindings for [Java](client-libraries-java.md),  [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp). The client API optimizes and encapsulates Pulsar's client-broker communication protocol and exposes a simple and intuitive API for use by applications.
+
+Under the hood, the current official Pulsar client libraries support transparent reconnection and/or connection failover to brokers, queuing of messages until acknowledged by the broker, and heuristics such as connection retries with backoff.
+
+> #### Custom client libraries
+> If you'd like to create your own client library, we recommend consulting the documentation on Pulsar's custom [binary protocol](developing-binary-protocol)
+
+
+## Client setup phase
+
+When an application wants to create a producer/consumer, the Pulsar client library will initiate a setup phase that is composed of two steps:
+
+1. The client will attempt to determine the owner of the topic by sending an HTTP lookup request to the broker. The request could reach one of the active brokers which, by looking at the (cached) ZooKeeper metadata, will know who is serving the topic or, in case nobody is serving it, will try to assign it to the least loaded broker.
+1. Once the client library has the broker address, it will create a TCP connection (or reuse an existing connection from the pool) and authenticate it. Within this connection, client and broker exchange binary commands from a custom protocol. At this point the client will send a command to create producer/consumer to the broker, which will comply after having validated the authorization policy.
+
+Whenever the TCP connection breaks, the client will immediately re-initiate this setup phase and will keep trying with exponential backoff to re-establish the producer or consumer until the operation succeeds.
+
+## Reader interface
+
+In Pulsar, the "standard" [consumer interface](concepts-messaging.md#consumers) involves using consumers to listen on [topics](reference-terminology.md#topic), process incoming messages, and finally acknowledge those messages when they've been processed.  Whenever a new subscription is created, it is initially positioned at the end of the topic (by default), and consumers associated with that subscription will begin reading with the first message created afterwards.  Whenever a consumer connects to a topic using a pre-existing subscription, it begins reading from the earliest message un-acked within that subscription.  In summary, with the consumer interface, subscription cursors are automatically managed by Pulsar in response to [message acknowledgements](concepts-messaging.md#acknowledgement).
+
+The **reader interface** for Pulsar enables applications to manually manage cursors. When you use a reader to connect to a topic---rather than a consumer---you need to specify *which* message the reader begins reading from when it connects to a topic. When connecting to a topic, the reader interface enables you to begin with:
+
+* The **earliest** available message in the topic
+* The **latest** available message in the topic
+* Some other message between the earliest and the latest. If you select this option, you'll need to explicitly provide a message ID. Your application will be responsible for "knowing" this message ID in advance, perhaps fetching it from a persistent data store or cache.
+
+The reader interface is helpful for use cases like using Pulsar to provide [effectively-once](https://streaml.io/blog/exactly-once/) processing semantics for a stream processing system. For this use case, it's essential that the stream processing system be able to "rewind" topics to a specific message and begin reading there. The reader interface provides Pulsar clients with the low-level abstraction necessary to "manually position" themselves within a topic.
+
+![The Pulsar consumer and reader interfaces](/assets/pulsar-reader-consumer-interfaces.png)
+
+> ### Non-partitioned topics only
+> The reader interface for Pulsar cannot currently be used with [partitioned topics](concepts-messaging.md#partitioned-topics).
+
+Here's a Java example that begins reading from the earliest available message on a topic:
+
+```java
+
+import org.apache.pulsar.client.api.Message;
+import org.apache.pulsar.client.api.MessageId;
+import org.apache.pulsar.client.api.Reader;
+
+// Create a reader on a topic and for a specific message (and onward)
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic("reader-api-test")
+    .startMessageId(MessageId.earliest)
+    .create();
+
+while (true) {
+    Message message = reader.readNext();
+
+    // Process the message
+}
+
+```
+
+To create a reader that will read from the latest available message:
+
+```java
+
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(MessageId.latest)
+    .create();
+
+```
+
+To create a reader that will read from some message between earliest and latest:
+
+```java
+
+byte[] msgIdBytes = // Some byte array
+MessageId id = MessageId.fromByteArray(msgIdBytes);
+Reader<byte[]> reader = pulsarClient.newReader()
+    .topic(topic)
+    .startMessageId(id)
+    .create();
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-messaging.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-messaging.md
new file mode 100644
index 0000000..c954ffa
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-messaging.md
@@ -0,0 +1,400 @@
+---
+id: concepts-messaging
+title: Messaging Concepts
+sidebar_label: "Messaging"
+original_id: concepts-messaging
+---
+
+Pulsar is built on the [publish-subscribe](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) pattern, aka pub-sub. In this pattern, [producers](#producers) publish messages to [topics](#topics). [Consumers](#consumers) can then [subscribe](#subscription-types) to those topics, process incoming messages, and send an acknowledgement when processing is complete.
+
+Once a subscription has been created, all messages will be [retained](concepts-architecture-overview.md#persistent-storage) by Pulsar, even if the consumer gets disconnected. Retained messages will be discarded only when a consumer acknowledges that they've been successfully processed.
+
+## Messages
+
+Messages are the basic "unit" of Pulsar. They're what producers publish to topics and what consumers then consume from topics (and acknowledge when the message has been processed). Messages are the analogue of letters in a postal service system.
+
+Component | Purpose
+:---------|:-------
+Value / data payload | The data carried by the message. All Pulsar messages carry raw bytes, although message data can also conform to data [schemas](concepts-schema-registry)
+Key | Messages can optionally be tagged with keys, which can be useful for things like [topic compaction](concepts-topic-compaction)
+Properties | An optional key/value map of user-defined properties
+Producer name | The name of the producer that produced the message (producers are automatically given default names, but you can apply your own explicitly as well)
+Sequence ID | Each Pulsar message belongs to an ordered sequence on its topic. A message's sequence ID is its ordering in that sequence.
+Publish time | The timestamp of when the message was published (automatically applied by the producer)
+Event time | An optional timestamp that applications can attach to the message representing when something happened, e.g. when the message was processed. The event time of a message is 0 if none is explicitly set.
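+
+As a minimal sketch, the Java client lets a producer set most of these components when building a message; the topic name, property values, and the pre-built `client` instance below are illustrative assumptions, not part of the original text.
+
+```java
+
+import java.nio.charset.StandardCharsets;
+
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .producerName("my-producer")           // optional explicit producer name
+        .create();
+
+producer.newMessage()
+        .key("sensor-42")                      // optional key, useful for topic compaction
+        .property("source", "temperature")     // user-defined property
+        .eventTime(System.currentTimeMillis()) // optional application-level timestamp
+        .value("23.5".getBytes(StandardCharsets.UTF_8))
+        .send();
+
+```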
+
+
+> For a more in-depth breakdown of Pulsar message contents, see the documentation on Pulsar's [binary protocol](developing-binary-protocol).
+
+## Producers
+
+A producer is a process that attaches to a topic and publishes messages to a Pulsar [broker](reference-terminology.md#broker) for processing.
+
+### Send modes
+
+Producers can send messages to brokers either synchronously (sync) or asynchronously (async).
+
+| Mode       | Description                                                                                                                                                                                                                                                                                                                                                              |
+|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync send  | The producer will wait for acknowledgement from the broker after sending each message. If acknowledgment isn't received then the producer will consider the send operation a failure.                                                                                                                                                                                    |
+| Async send | The producer will put the message in a blocking queue and return immediately. The client library will then send the message to the broker in the background. If the queue is full (max size [configurable](reference-configuration.md#broker)), the producer could be blocked or fail immediately when calling the API, depending on arguments passed to the producer. |
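+
+The sketch below illustrates the two modes with the Java client, assuming an existing `client` and an illustrative topic name:
+
+```java
+
+import java.util.concurrent.CompletableFuture;
+
+import org.apache.pulsar.client.api.MessageId;
+import org.apache.pulsar.client.api.Producer;
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .create();
+
+// Sync send: blocks until the broker acknowledges the message
+MessageId syncId = producer.send("sync message".getBytes());
+
+// Async send: returns immediately; the future completes once the broker acknowledges
+CompletableFuture<MessageId> asyncId = producer.sendAsync("async message".getBytes());
+asyncId.thenAccept(id -> System.out.println("Published with ID " + id));
+
+```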
+
+### Compression
+
+Messages published by producers can be compressed during transportation in order to save bandwidth. Pulsar currently supports the following types of compression:
+
+* [LZ4](https://github.com/lz4/lz4)
+* [ZLIB](https://zlib.net/)
+* [ZSTD](https://facebook.github.io/zstd/)
+* [SNAPPY](https://google.github.io/snappy/)
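+
+As a sketch, compression is enabled on the producer with the Java client; the choice of LZ4 and the `client`/topic names here are illustrative:
+
+```java
+
+import org.apache.pulsar.client.api.CompressionType;
+import org.apache.pulsar.client.api.Producer;
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .compressionType(CompressionType.LZ4) // compress message payloads on the wire
+        .create();
+
+```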
+
+### Batching
+
+If batching is enabled, the producer will accumulate and send a batch of messages in a single request. Batching size is defined by the maximum number of messages and maximum publish latency.
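+
+For example, batching can be configured on the producer as in the sketch below (assuming an existing `client`; the limits shown are illustrative, not recommended values):
+
+```java
+
+import java.util.concurrent.TimeUnit;
+
+import org.apache.pulsar.client.api.Producer;
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("my-topic")
+        .enableBatching(true)
+        .batchingMaxMessages(100)                            // flush after 100 messages...
+        .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)  // ...or after 10 ms, whichever comes first
+        .create();
+
+```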
+
+## Consumers
+
+A consumer is a process that attaches to a topic via a subscription and then receives messages.
+
+### Receive modes
+
+Messages can be received from [brokers](reference-terminology.md#broker) either synchronously (sync) or asynchronously (async).
+
+| Mode          | Description                                                                                                                                                                                                   |
+|:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Sync receive  | A sync receive will be blocked until a message is available.                                                                                                                                                  |
+| Async receive | An async receive will return immediately with a future value---a [`CompletableFuture`](http://www.baeldung.com/java-completablefuture) in Java, for example---that completes once a new message is available. |
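+
+The sketch below shows both receive modes with the Java client, assuming an existing `consumer`:
+
+```java
+
+import java.util.concurrent.CompletableFuture;
+
+import org.apache.pulsar.client.api.Message;
+
+// Sync receive: blocks until a message is available
+Message<byte[]> msg = consumer.receive();
+consumer.acknowledge(msg);
+
+// Async receive: returns a future that completes when a message arrives
+CompletableFuture<Message<byte[]>> future = consumer.receiveAsync();
+future.thenAccept(m -> consumer.acknowledgeAsync(m));
+
+```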
+
+### Listeners
+
+Client libraries provide listener implementations for consumers. For example, the [Java client](client-libraries-java) provides a {@inject: javadoc:MessageListener:/client/org/apache/pulsar/client/api/MessageListener} interface. In this interface, the `received` method is called whenever a new message is received.
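+
+A minimal sketch of registering a listener with the Java client, assuming an existing `client`; the topic and subscription names are illustrative:
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .messageListener((c, msg) -> {
+            // Called whenever a new message is received
+            System.out.println("Received: " + new String(msg.getData()));
+            c.acknowledgeAsync(msg);
+        })
+        .subscribe();
+
+```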
+
+### Acknowledgement
+
+When a consumer has consumed a message successfully, the consumer sends an acknowledgement request to the broker, so that the broker will discard the message. Otherwise, it [stores](concepts-architecture-overview.md#persistent-storage) the message.
+
+Messages can be acknowledged either one by one or cumulatively. With cumulative acknowledgement, the consumer only needs to acknowledge the last message it received. All messages in the stream up to (and including) the provided message will not be re-delivered to that consumer.
+
+
+> Cumulative acknowledgement cannot be used with [shared subscription type](#subscription-types), because shared mode involves multiple consumers having access to the same subscription.
+
+In the Shared subscription type, messages are acknowledged individually.
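+
+The sketch below shows individual and cumulative acknowledgement with the Java client, assuming an existing `consumer`:
+
+```java
+
+import org.apache.pulsar.client.api.Message;
+
+// Individual acknowledgement: only this message is marked as processed
+Message<byte[]> msg = consumer.receive();
+consumer.acknowledge(msg);
+
+// Cumulative acknowledgement: this message and all earlier messages in the
+// stream are marked as processed (not allowed on Shared subscriptions)
+Message<byte[]> latest = consumer.receive();
+consumer.acknowledgeCumulative(latest);
+
+```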
+
+### Negative acknowledgement
+
+When a consumer fails to consume a message successfully and wants to consume it again, the consumer can send a negative acknowledgement to the broker, and the broker will then redeliver the message.
+
+Messages can be negatively acknowledged either individually or cumulatively, depending on the consumption subscription type.
+
+In the exclusive and failover subscription types, consumers only negatively acknowledge the last message they have received.
+
+In the shared and Key_Shared subscription types, you can negatively acknowledge messages individually.
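+
+A minimal sketch with the Java client, assuming an existing `consumer` and a client version that exposes negative acknowledgement:
+
+```java
+
+import org.apache.pulsar.client.api.Message;
+
+Message<byte[]> msg = consumer.receive();
+try {
+    // Process the message
+    consumer.acknowledge(msg);
+} catch (Exception e) {
+    // Processing failed; ask the broker to redeliver this message later
+    consumer.negativeAcknowledge(msg);
+}
+
+```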
+
+### Acknowledgement timeout
+
+When a message is not consumed successfully and you want the broker to redeliver it automatically, you can use the automatic redelivery mechanism for unacknowledged messages. The client tracks unacknowledged messages within the entire `acktimeout` time range and automatically sends a `redeliver unacknowledged messages` request to the broker when the acknowledgement timeout expires.
+
+:::note
+
+Prefer negative acknowledgement over acknowledgement timeout. Negative acknowledgement controls the redelivery of individual messages more precisely and avoids invalid redeliveries when the message processing time exceeds the acknowledgement timeout.
+
+:::
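+
+As a sketch, an acknowledgement timeout can be set when building the consumer with the Java client (assuming an existing `client`; the 30-second value and names are illustrative):
+
+```java
+
+import java.util.concurrent.TimeUnit;
+
+import org.apache.pulsar.client.api.Consumer;
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("my-topic")
+        .subscriptionName("my-subscription")
+        .ackTimeout(30, TimeUnit.SECONDS) // unacknowledged messages are redelivered after 30 s
+        .subscribe();
+
+```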
+
+### Dead letter topic
+
+Dead letter topic enables you to continue consuming new messages even when some messages cannot be consumed successfully. In this mechanism, messages that fail to be consumed are stored in a separate topic, called the dead letter topic. You can decide how to handle the messages in the dead letter topic.
+
+The following example shows how to enable a dead letter topic in the Java client.
+
+```java
+
+Consumer<byte[]> consumer = pulsarClient.newConsumer(Schema.BYTES)
+              .topic(topic)
+              .subscriptionName("my-subscription")
+              .subscriptionType(SubscriptionType.Shared)
+              .deadLetterPolicy(DeadLetterPolicy.builder()
+                    .maxRedeliverCount(maxRedeliveryCount)
+                    .build())
+              .subscribe();
+
+```
+
+Dead letter topic depends on message redelivery. Messages are redelivered either due to negative acknowledgement or acknowledgement timeout; prefer negative acknowledgement over acknowledgement timeout.
+
+:::note
+
+Currently, dead letter topic is enabled only in Shared subscription type.
+
+:::
+
+## Topics
+
+As in other pub-sub systems, topics in Pulsar are named channels for transmitting messages from [producers](reference-terminology.md#producer) to [consumers](reference-terminology.md#consumer). Topic names are URLs that have a well-defined structure:
+
+```http
+
+{persistent|non-persistent}://tenant/namespace/topic
+
+```
+
+Topic name component | Description
+:--------------------|:-----------
+`persistent` / `non-persistent` | This identifies the type of topic. Pulsar supports two kind of topics: [persistent](concepts-architecture-overview.md#persistent-storage) and [non-persistent](#non-persistent-topics) (persistent is the default, so if you don't specify a type the topic will be persistent). With persistent topics, all messages are durably [persisted](concepts-architecture-overview.md#persistent-storage) on disk (that means on multiple disks unless the broker is standalone), whereas data for [non-persistent](#non-persistent-topics) topics isn't persisted to storage disks.
+`tenant`             | The topic's tenant within the instance. Tenants are essential to multi-tenancy in Pulsar and can be spread across clusters.
+`namespace`          | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Most topic configuration is performed at the [namespace](#namespaces) level. Each tenant can have multiple namespaces.
+`topic`              | The final part of the name. Topic names are freeform and have no special meaning in a Pulsar instance.
+
+
+> #### No need to explicitly create new topics
+> You don't need to explicitly create topics in Pulsar. If a client attempts to write or receive messages to/from a topic that does not yet exist, Pulsar will automatically create that topic under the [namespace](#namespaces) provided in the [topic name](#topics).
+> If no tenant or namespace is specified when a client creates a topic, the topic is created in the default tenant and namespace. You can also create a topic in a specified tenant and namespace, such as `persistent://my-tenant/my-namespace/my-topic`. `persistent://my-tenant/my-namespace/my-topic` means the `my-topic` topic is created in the `my-namespace` namespace of the `my-tenant` tenant.
+
+
+## Namespaces
+
+A namespace is a logical nomenclature within a tenant. A tenant can create multiple namespaces via the [admin API](admin-api-namespaces.md#create). For instance, a tenant with different applications can create a separate namespace for each application. A namespace allows the application to create and manage a hierarchy of topics. The namespace `my-tenant/app1` groups the topics of the application `app1` for the tenant `my-tenant`. You can create any number of [topics](#topics) under the namespace.
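+
+For example, a namespace can be created with the Java admin client as in the sketch below (the service URL and names are illustrative):
+
+```java
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Create the namespace "app1" under the tenant "my-tenant"
+admin.namespaces().createNamespace("my-tenant/app1");
+
+```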
+
+## Subscription types
+
+A subscription is a named configuration rule that determines how messages are delivered to consumers. There are four available subscription types in Pulsar: [exclusive](#exclusive), [shared](#shared), [failover](#failover), and [key_shared](#key_shared). These types are illustrated in the figure below.
+
+![Subscription types](/assets/pulsar-subscription-types.png)
+
+### Exclusive
+
+In *exclusive* type, only a single consumer is allowed to attach to the subscription. If more than one consumer attempts to subscribe to a topic using the same subscription, the consumer receives an error.
+
+In the diagram above, only **Consumer A-0** is allowed to consume messages.
+
+> Exclusive is the default subscription type.
+
+![Exclusive subscriptions](/assets/pulsar-exclusive-subscriptions.png)
+
+### Failover
+
+In *failover* type, multiple consumers can attach to the same subscription. Consumers are lexically sorted by consumer name, and the first consumer in that order will initially be the only one receiving messages. This consumer is called the *master consumer*.
+
+When the master consumer disconnects, all (non-acked and subsequent) messages will be delivered to the next consumer in line.
+
+In the diagram above, Consumer-C-1 is the master consumer while Consumer-C-2 would be the next in line to receive messages if Consumer-C-1 disconnected.
+
+![Failover subscriptions](/assets/pulsar-failover-subscriptions.png)
+
+### Shared
+
+In *shared* or *round robin* mode, multiple consumers can attach to the same subscription. Messages are delivered in a round robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers.
+
+In the diagram above, **Consumer-B-1** and **Consumer-B-2** are able to subscribe to the topic, but **Consumer-C-1** and others could as well.
+
+> #### Limitations of shared mode
+> There are two important things to be aware of when using shared mode:
+> * Message ordering is not guaranteed.
+> * You cannot use cumulative acknowledgment with shared mode.
+
+![Shared subscriptions](/assets/pulsar-shared-subscriptions.png)
+
+### Key_shared
+
+In *Key_Shared* mode, multiple consumers can attach to the same subscription. Messages are distributed across consumers such that messages with the same key or the same ordering key are delivered to only one consumer. No matter how many times a message is redelivered, it is delivered to the same consumer. When a consumer connects or disconnects, the consumer that serves some message keys may change.
+
+> #### Limitations of Key_Shared mode
+> There are two important things to be aware of when using Key_Shared mode:
+> * You need to specify a key or orderingKey for messages
+> * You cannot use cumulative acknowledgment with Key_Shared mode.
+
+![Key_Shared subscriptions](/assets/pulsar-key-shared-subscriptions.png)
+
+**Key_Shared subscription is a beta feature. You can disable it in `broker.conf`.**
+
+## Multi-topic subscriptions
+
+When a consumer subscribes to a Pulsar topic, by default it subscribes to one specific topic, such as `persistent://public/default/my-topic`. As of Pulsar version 1.23.0-incubating, however, Pulsar consumers can simultaneously subscribe to multiple topics. You can define a list of topics in two ways:
+
+* On the basis of a [**reg**ular **ex**pression](https://en.wikipedia.org/wiki/Regular_expression) (regex), for example `persistent://public/default/finance-.*`
+* By explicitly defining a list of topics
+
+> When subscribing to multiple topics by regex, all topics must be in the same [namespace](#namespaces)
+
+When subscribing to multiple topics, the Pulsar client will automatically make a call to the Pulsar API to discover the topics that match the regex pattern/list and then subscribe to all of them. If any of the topics don't currently exist, the consumer will auto-subscribe to them once the topics are created.
+
+> #### No ordering guarantees across multiple topics
+> When a producer sends messages to a single topic, all messages are guaranteed to be read from that topic in the same order. However, these guarantees do not hold across multiple topics. So when a producer sends message to multiple topics, the order in which messages are read from those topics is not guaranteed to be the same.
+
+Here are some multi-topic subscription examples for Java:
+
+```java
+
+import java.util.regex.Pattern;
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient pulsarClient = // Instantiate Pulsar client object
+
+// Subscribe to all topics in a namespace
+Pattern allTopicsInNamespace = Pattern.compile("persistent://public/default/.*");
+Consumer<byte[]> allTopicsConsumer = pulsarClient.newConsumer()
+        .topicsPattern(allTopicsInNamespace)
+        .subscriptionName("subscription-1")
+        .subscribe();
+
+// Subscribe to a subset of topics in a namespace, based on regex
+Pattern someTopicsInNamespace = Pattern.compile("persistent://public/default/foo.*");
+Consumer<byte[]> someTopicsConsumer = pulsarClient.newConsumer()
+        .topicsPattern(someTopicsInNamespace)
+        .subscriptionName("subscription-1")
+        .subscribe();
+
+```
+
+For code examples, see:
+
+* [Java](client-libraries-java.md#multi-topic-subscriptions)
+
+## Partitioned topics
+
+Normal topics can be served only by a single broker, which limits the topic's maximum throughput. *Partitioned topics* are a special type of topic that can be handled by multiple brokers, which allows for much higher throughput.
+
+Behind the scenes, a partitioned topic is actually implemented as N internal topics, where N is the number of partitions. When publishing messages to a partitioned topic, each message is routed to one of several brokers. The distribution of partitions across brokers is handled automatically by Pulsar.
+
+The diagram below illustrates this:
+
+![](/assets/partitioning.png)
+
+Here, the topic **Topic1** has five partitions (**P0** through **P4**) split across three brokers. Because there are more partitions than brokers, two brokers handle two partitions apiece, while the third handles only one (again, Pulsar handles this distribution of partitions automatically).
+
+Messages for this topic are broadcast to two consumers. The [routing mode](#routing-modes) determines which broker handles each partition, while the [subscription type](#subscription-types) determines which messages go to which consumers.
+
+Decisions about routing and subscription types can be made separately in most cases. In general, throughput concerns should guide partitioning/routing decisions while subscription decisions should be guided by application semantics.
+
+There is no difference between partitioned topics and normal topics in terms of how subscription types work, as partitioning only determines what happens between when a message is published by a producer and processed and acknowledged by a consumer.
+
+Partitioned topics need to be explicitly created via the [admin API](admin-api-overview). The number of partitions can be specified when creating the topic.
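+
+For example, with the Java admin client (the topic name and partition count are illustrative):
+
+```java
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Create a partitioned topic with 4 partitions
+admin.topics().createPartitionedTopic("persistent://my-tenant/my-namespace/my-topic", 4);
+
+```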
+
+### Routing modes
+
+When publishing to partitioned topics, you must specify a *routing mode*. The routing mode determines which partition---that is, which internal topic---each message should be published to.
+
+There are three {@inject: javadoc:MessageRoutingMode:/client/org/apache/pulsar/client/api/MessageRoutingMode} available:
+
+Mode     | Description 
+:--------|:------------
+`RoundRobinPartition` | If no key is provided, the producer will publish messages across all partitions in round-robin fashion to achieve maximum throughput. Please note that round-robin is not done per individual message but rather on the same boundary as the batching delay, to ensure batching is effective. If a key is specified on the message, the partitioned producer will hash the key and assign the message to a particular partition. This is the default mode.
+`SinglePartition`     | If no key is provided, the producer will randomly pick one single partition and publish all the messages into that partition. If a key is specified on the message, the partitioned producer will hash the key and assign the message to a particular partition.
+`CustomPartition`     | Use custom message router implementation that will be called to determine the partition for a particular message. User can create a custom routing mode by using the [Java client](client-libraries-java) and implementing the {@inject: javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} interface.
+
+### Ordering guarantee
+
+The ordering of messages is related to the MessageRoutingMode and the message key. Usually, users want a per-key-partition ordering guarantee.
+
+If a key is attached to a message, the message is routed to the corresponding partition based on the hashing scheme specified by {@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} in {@inject: javadoc:ProducerBuilder:/client/org/apache/pulsar/client/api/ProducerBuilder}, when using either `SinglePartition` or `RoundRobinPartition` mode.
+
+Ordering guarantee | Description | Routing Mode and Key
+:------------------|:------------|:------------
+Per-key-partition  | All the messages with the same key will be in order and be placed in same partition. | Use either `SinglePartition` or `RoundRobinPartition` mode, and Key is provided by each message.
+Per-producer       | All the messages from the same producer will be in order. | Use `SinglePartition` mode, and no Key is provided for each message.
+
+### Hashing scheme
+
+{@inject: javadoc:HashingScheme:/client/org/apache/pulsar/client/api/HashingScheme} is an enum that represents the set of standard hashing functions available when choosing the partition to use for a particular message.
+
+Two standard hashing functions are available: `JavaStringHash` and `Murmur3_32Hash`. The default hashing function for producers is `JavaStringHash`. Note that `JavaStringHash` is not useful when producers are written in multiple client languages; in that case, it is recommended to use `Murmur3_32Hash`.
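+
+A sketch of configuring the routing mode and hashing scheme on a producer, assuming an existing `client` and an illustrative topic name:
+
+```java
+
+import org.apache.pulsar.client.api.HashingScheme;
+import org.apache.pulsar.client.api.MessageRoutingMode;
+import org.apache.pulsar.client.api.Producer;
+
+Producer<byte[]> producer = client.newProducer()
+        .topic("persistent://my-tenant/my-namespace/my-partitioned-topic")
+        .messageRoutingMode(MessageRoutingMode.RoundRobinPartition)
+        .hashingScheme(HashingScheme.Murmur3_32Hash) // language-neutral hash for keyed messages
+        .create();
+
+```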
+
+
+
+## Non-persistent topics
+
+
+By default, Pulsar persistently stores *all* unacknowledged messages on multiple [BookKeeper](concepts-architecture-overview.md#persistent-storage) bookies (storage nodes). Data for messages on persistent topics can thus survive broker restarts and subscriber failover.
+
+Pulsar also, however, supports **non-persistent topics**, which are topics on which messages are *never* persisted to disk and live only in memory. When using non-persistent delivery, killing a Pulsar broker or disconnecting a subscriber to a topic means that all in-transit messages are lost on that (non-persistent) topic, meaning that clients may see message loss.
+
+Non-persistent topics have names of this form (note the `non-persistent` in the name):
+
+```http
+
+non-persistent://tenant/namespace/topic
+
+```
+
+> For more info on using non-persistent topics, see the [Non-persistent messaging cookbook](cookbooks-non-persistent).
+
+In non-persistent topics, brokers immediately deliver messages to all connected subscribers *without persisting them* in [BookKeeper](concepts-architecture-overview.md#persistent-storage). If a subscriber is disconnected, the broker will not be able to deliver those in-transit messages, and subscribers will never be able to receive those messages again. Eliminating the persistent storage step makes messaging on non-persistent topics slightly faster than on persistent topics in some cases, but with the caveat that some of the core benefits of Pulsar are lost.
+
+> With non-persistent topics, message data lives only in memory. If a message broker fails or message data can otherwise not be retrieved from memory, your message data may be lost. Use non-persistent topics only if you're *certain* that your use case requires it and can sustain it.
+
+By default, non-persistent topics are enabled on Pulsar brokers. You can disable them in the broker's [configuration](reference-configuration.md#broker-enableNonPersistentTopics). You can manage non-persistent topics using the [`pulsar-admin topics`](reference-pulsar-admin#topics) interface.
+
+### Performance
+
+Non-persistent messaging is usually faster than persistent messaging because brokers don't persist messages and immediately send acks back to the producer as soon as the message is delivered to all connected subscribers. Producers thus see comparatively low publish latency with non-persistent topics.
+
+### Client API
+
+Producers and consumers can connect to non-persistent topics in the same way as persistent topics, with the crucial difference that the topic name must start with `non-persistent`. All three subscription types---[exclusive](#exclusive), [shared](#shared), and [failover](#failover)---are supported for non-persistent topics.
+
+Here's an example [Java consumer](client-libraries-java.md#consumers) for a non-persistent topic:
+
+```java
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+String npTopic = "non-persistent://public/default/my-topic";
+String subscriptionName = "my-subscription-name";
+
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic(npTopic)
+        .subscriptionName(subscriptionName)
+        .subscribe();
+
+```
+
+Here's an example [Java producer](client-libraries-java.md#producer) for the same non-persistent topic:
+
+```java
+
+Producer<byte[]> producer = client.newProducer()
+        .topic(npTopic)
+        .create();
+
+```
+
+## Message retention and expiry
+
+By default, Pulsar message brokers:
+
+* immediately delete *all* messages that have been acknowledged by a consumer, and
+* [persistently store](concepts-architecture-overview.md#persistent-storage) all unacknowledged messages in a message backlog.
+
+Pulsar has two features, however, that enable you to override this default behavior:
+
+* Message **retention** enables you to store messages that have been acknowledged by a consumer
+* Message **expiry** enables you to set a time to live (TTL) for messages that have not yet been acknowledged
+
+> All message retention and expiry is managed at the [namespace](#namespaces) level. For a how-to, see the [Message retention and expiry](cookbooks-retention-expiry) cookbook.
+
+The diagram below illustrates both concepts:
+
+![Message retention and expiry](/assets/retention-expiry.png)
+
+With message retention, shown at the top, a <span style={{color: " #89b557"}}>retention policy</span> applied to all topics in a namespace dictates that some messages are durably stored in Pulsar even though they've already been acknowledged. Acknowledged messages that are not covered by the retention policy are <span style={{color: " #bb3b3e"}}>deleted</span>. Without a retention policy, *all* of the <span style={{color: " #19967d"}}>acknowledged messages</span> would be deleted.
+
+With message expiry, shown at the bottom, some messages are <span style={{color: " #bb3b3e"}}>deleted</span>, even though they <span style={{color: " #337db6"}}>haven't been acknowledged</span>, because they've expired according to the <span style={{color: " #e39441"}}>TTL applied to the namespace</span> (for example because a TTL of 5 minutes has been applied and the messages haven't been acknowledged but are 10 minutes old).
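+
+As a sketch, retention and TTL can be set on a namespace with the Java admin client (the values, namespace name, and service URL are illustrative):
+
+```java
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.common.policies.data.RetentionPolicies;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Retain acknowledged messages for up to 2 days (2880 minutes) and up to 10 GB (10240 MB)
+admin.namespaces().setRetention("my-tenant/my-namespace", new RetentionPolicies(2880, 10240));
+
+// Expire unacknowledged messages after 5 minutes (300 seconds)
+admin.namespaces().setNamespaceMessageTTL("my-tenant/my-namespace", 300);
+
+```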
+
+## Message deduplication
+
+Message **duplication** occurs when a message is [persisted](concepts-architecture-overview.md#persistent-storage) by Pulsar more than once. Message **deduplication** is an optional Pulsar feature that prevents unnecessary message duplication by processing each message only once, *even if the message is received more than once*.
+
+The following diagram illustrates what happens when message deduplication is disabled vs. enabled:
+
+![Pulsar message deduplication](/assets/message-deduplication.png)
+
+
+Message deduplication is disabled in the scenario shown at the top. Here, a producer publishes message 1 on a topic; the message reaches a Pulsar broker and is [persisted](concepts-architecture-overview.md#persistent-storage) to BookKeeper. The producer then sends message 1 again (in this case due to some retry logic), and the message is received by the broker and stored in BookKeeper again, which means that duplication has occurred.
+
+In the second scenario at the bottom, the producer publishes message 1, which is received by the broker and persisted, as in the first scenario. When the producer attempts to publish the message again, however, the broker knows that it has already seen message 1 and thus does not persist the message.
+
+> Message deduplication is handled at the namespace level. For more instructions, see the [message deduplication cookbook](cookbooks-deduplication).
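+
+As a sketch, deduplication can be enabled for a namespace with the Java admin client (the namespace name and service URL are illustrative):
+
+```java
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Enable broker-side deduplication for all topics in the namespace
+admin.namespaces().setDeduplicationStatus("my-tenant/my-namespace", true);
+
+```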
+
+
+### Producer idempotency
+
+The other available approach to message deduplication is to ensure that each message is *only produced once*. This approach is typically called **producer idempotency**. The drawback of this approach is that it defers the work of message deduplication to the application. In Pulsar, message deduplication is instead handled at the [broker](reference-terminology.md#broker) level, which means that you don't need to modify your Pulsar client code. Instead, you only need to make administrative changes (see the [Managing message deduplication](cookbooks-deduplication) cookbook for a guide).
+
+### Deduplication and effectively-once semantics
+
+Message deduplication makes Pulsar an ideal messaging system to be used in conjunction with stream processing engines (SPEs) and other systems seeking to provide [effectively-once](https://streaml.io/blog/exactly-once) processing semantics. Messaging systems that don't offer automatic message deduplication require the SPE or other system to guarantee deduplication, which means that strict message ordering comes at the cost of burdening the application with the responsibility of deduplication. With Pulsar, strict ordering guarantees come at no application-level cost.
+
+> More in-depth information can be found in [this post](https://streaml.io/blog/pulsar-effectively-once/) on the [Streamlio blog](https://streaml.io/blog)
+
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-multi-tenancy.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-multi-tenancy.md
new file mode 100644
index 0000000..be752cc
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-multi-tenancy.md
@@ -0,0 +1,59 @@
+---
+id: concepts-multi-tenancy
+title: Multi Tenancy
+sidebar_label: "Multi Tenancy"
+original_id: concepts-multi-tenancy
+---
+
+Pulsar was created from the ground up as a multi-tenant system. To support multi-tenancy, Pulsar has a concept of tenants. Tenants can be spread across clusters and can each have their own [authentication and authorization](security-overview) scheme applied to them. They are also the administrative unit at which storage quotas, [message TTL](cookbooks-retention-expiry.md#time-to-live-ttl), and isolation policies can be managed.
+
+The multi-tenant nature of Pulsar is reflected most visibly in topic URLs, which have this structure:
+
+```http
+
+persistent://tenant/namespace/topic
+
+```
+
+As you can see, the tenant is the most basic unit of categorization for topics (more fundamental than the namespace and topic name).
+
+## Tenants
+
+To each tenant in a Pulsar instance you can assign:
+
+* An [authorization](security-authorization) scheme
+* The set of [clusters](reference-terminology.md#cluster) to which the tenant's configuration applies
+
+## Namespaces
+
+Tenants and namespaces are two key concepts of Pulsar to support multi-tenancy.
+
+* Pulsar is provisioned for specified tenants with appropriate capacity allocated to the tenant.
+* A namespace is the administrative unit nomenclature within a tenant. The configuration policies set on a namespace apply to all the topics created in that namespace. A tenant may create multiple namespaces via self-administration using the REST API and the [`pulsar-admin`](reference-pulsar-admin) CLI tool. For instance, a tenant with different applications can create a separate namespace for each application.
+
+Names for topics in the same namespace will look like this:
+
+```http
+
+persistent://tenant/app1/topic-1
+
+persistent://tenant/app1/topic-2
+
+persistent://tenant/app1/topic-3
+
+```
+
+### Namespace change events and topic-level policies
+
+Pulsar is a multi-tenant event streaming system. Administrators can manage the tenants and namespaces by setting policies at different levels. However, the policies, such as retention policy and storage quota policy, are only available at a namespace level. In many use cases, users need to set a policy at the topic level. The namespace change events approach is proposed for supporting topic-level policies in an efficient way. In this approach, Pulsar is used as an event log to store namespace change events (such as topic policy changes). This approach has a few benefits:
+
+- It avoids putting additional load on ZooKeeper.
+- It uses Pulsar itself as an event log for propagating the policy cache, which scales efficiently.
+- It allows Pulsar SQL to query the namespace changes and audit the system.
+
+Each namespace has a system topic `__change_events`. This system topic is used for storing change events for a given namespace. The following figure illustrates how to use namespace change events to implement a topic-level policy.
+
+1. Pulsar Admin clients communicate with the Admin Restful API to update topic level policies.
+2. Any broker that receives the Admin HTTP request publishes a topic policy change event to the corresponding `__change_events` topic of the namespace.
+3. Each broker that owns one or more namespace bundles subscribes to the `__change_events` topic to receive the namespace's change events. It then applies the change events to its policy cache.
+4. Once the policy cache is updated, the broker sends the response back to the Pulsar Admin clients.
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-overview.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-overview.md
new file mode 100644
index 0000000..b903fa4
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-overview.md
@@ -0,0 +1,31 @@
+---
+id: concepts-overview
+title: Pulsar Overview
+sidebar_label: "Overview"
+original_id: concepts-overview
+---
+
+Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. Originally developed by Yahoo, Pulsar is now under the stewardship of the [Apache Software Foundation](https://www.apache.org/).
+
+Key features of Pulsar are listed below:
+
+* Native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](administration-geo) of messages across clusters.
+* Very low publish and end-to-end latency.
+* Seamless scalability to over a million topics.
+* A simple [client API](concepts-clients.md) with bindings for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python.md) and [C++](client-libraries-cpp).
+* Multiple [subscription types](concepts-messaging.md#subscription-types) ([exclusive](concepts-messaging.md#exclusive), [shared](concepts-messaging.md#shared), and [failover](concepts-messaging.md#failover)) for topics.
+* Guaranteed message delivery with [persistent message storage](concepts-architecture-overview.md#persistent-storage) provided by [Apache BookKeeper](http://bookkeeper.apache.org/).
+* A serverless light-weight computing framework [Pulsar Functions](functions-overview) offers the capability for stream-native data processing.
+* A serverless connector framework [Pulsar IO](io-overview), which is built on Pulsar Functions, makes it easier to move data in and out of Apache Pulsar.
+* [Tiered Storage](concepts-tiered-storage) offloads data from hot/warm storage to cold/long-term storage (such as S3 and GCS) when the data is aging out.
+
+## Contents
+
+- [Messaging Concepts](concepts-messaging)
+- [Architecture Overview](concepts-architecture-overview)
+- [Pulsar Clients](concepts-clients)
+- [Geo Replication](concepts-replication)
+- [Multi Tenancy](concepts-multi-tenancy)
+- [Authentication and Authorization](concepts-authentication)
+- [Topic Compaction](concepts-topic-compaction)
+- [Tiered Storage](concepts-tiered-storage)
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-replication.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-replication.md
new file mode 100644
index 0000000..6e23962
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-replication.md
@@ -0,0 +1,9 @@
+---
+id: concepts-replication
+title: Geo Replication
+sidebar_label: "Geo Replication"
+original_id: concepts-replication
+---
+
+Pulsar enables messages to be produced and consumed in different geo-locations. For instance, your application may be publishing data in one region or market and you would like to process it for consumption in other regions or markets. [Geo-replication](administration-geo) in Pulsar enables you to do that.
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-schema-registry.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-schema-registry.md
new file mode 100644
index 0000000..c2a42c1
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-schema-registry.md
@@ -0,0 +1,86 @@
+---
+id: concepts-schema-registry
+title: Schema Registry
+sidebar_label: "Schema Registry"
+original_id: concepts-schema-registry
+---
+
+Type safety is extremely important in any application built around a message bus like Pulsar. Producers and consumers need some mechanism for coordinating types at the topic level; otherwise, a wide variety of problems can arise (for example, serialization and deserialization issues). Applications typically adopt one of two basic approaches to type safety in messaging:
+
+1. A "client-side" approach in which message producers and consumers are responsible for not only serializing and deserializing messages (which consist of raw bytes) but also "knowing" which types are being transmitted via which topics. If a producer is sending temperature sensor data on the topic `topic-1`, consumers of that topic will run into trouble if they attempt to parse that data as, say, moisture sensor readings.
+2. A "server-side" approach in which producers and consumers inform the system which data types can be transmitted via the topic. With this approach, the messaging system enforces type safety and ensures that producers and consumers remain synced.
+
+Both approaches are available in Pulsar, and you're free to adopt one or the other or to mix and match on a per-topic basis.
+
+1. For the "client-side" approach, producers and consumers can send and receive messages consisting of raw byte arrays and leave all type safety enforcement to the application on an "out-of-band" basis.
+1. For the "server-side" approach, Pulsar has a built-in **schema registry** that enables clients to upload data schemas on a per-topic basis. Those schemas dictate which data types are recognized as valid for that topic.
+
+> #### Note
+>
+> Currently, the Pulsar schema registry is only available for the [Java client](client-libraries-java.md), [CGo client](client-libraries-go.md), [Python client](client-libraries-python.md), and [C++ client](client-libraries-cpp).
+
+## Basic architecture
+
+Schemas are automatically uploaded when you create a typed Producer with a Schema. Additionally, Schemas can be manually uploaded to, fetched from, and updated via Pulsar's {@inject: rest:REST:tag/schemas} API.
+
+> #### Other schema registry backends
+> Out of the box, Pulsar uses the [Apache BookKeeper](concepts-architecture-overview#persistent-storage) log storage system for schema storage. You can, however, use different backends if you wish. Documentation for custom schema storage logic is coming soon.
+
+## How schemas work
+
+Pulsar schemas are applied and enforced *at the topic level* (schemas cannot be applied at the namespace or tenant level). Producers and consumers upload schemas to Pulsar brokers.
+
+Pulsar schemas are fairly simple data structures that consist of the following (a code sketch follows the list):
+
+* A **name**. In Pulsar, a schema's name is the topic to which the schema is applied.
+* A **payload**, which is a binary representation of the schema
+* A schema [**type**](#supported-schema-formats)
+* User-defined **properties** as a string/string map. Usage of properties is wholly application specific. Possible properties might be the Git hash associated with a schema, an environment like `dev` or `prod`, etc.
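+
+In the Java client and admin APIs these pieces correspond roughly to the `SchemaInfo` structure. The sketch below is illustrative only; the field values are assumptions:
+
+```java
+
+import java.util.Collections;
+
+import org.apache.pulsar.common.schema.SchemaInfo;
+import org.apache.pulsar.common.schema.SchemaType;
+
+SchemaInfo info = new SchemaInfo();
+info.setName("sensor-data");                                 // the topic the schema applies to
+info.setSchema(new byte[0]);                                 // binary payload (empty for primitive schemas)
+info.setType(SchemaType.STRING);                             // the schema type
+info.setProperties(Collections.singletonMap("env", "dev"));  // user-defined properties
+
+```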
+
+## Schema versions
+
+In order to illustrate how schema versioning works, let's walk through an example. Imagine that the Pulsar [Java client](client-libraries-java) created using the code below attempts to connect to Pulsar and begin sending messages:
+
+```java
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+Producer<SensorReading> producer = client.newProducer(JSONSchema.of(SensorReading.class))
+        .topic("sensor-data")
+        .sendTimeout(3, TimeUnit.SECONDS)
+        .create();
+
+```
+
+The table below lists the possible scenarios when this connection attempt occurs and what will happen in light of each scenario:
+
+Scenario | What happens
+:--------|:------------
+No schema exists for the topic | The producer is created using the given schema. The schema is transmitted to the broker and stored (since no existing schema is "compatible" with the `SensorReading` schema). Any consumer created using the same schema/topic can consume messages from the `sensor-data` topic.
+A schema already exists; the producer connects using the same schema that's already stored | The schema is transmitted to the Pulsar broker. The broker determines that the schema is compatible. The broker attempts to store the schema in [BookKeeper](concepts-architecture-overview.md#persistent-storage) but then determines that it's already stored, so it's then used to tag produced messages.
+A schema already exists; the producer connects using a new schema that is compatible | The producer transmits the schema to the broker. The broker determines that the schema is compatible and stores the new schema as the current version (with a new version number).
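+
+To make the first scenario concrete, here is a minimal sketch of a consumer created with the same schema and topic as the producer above (the subscription name is an assumption for illustration):
+
+```java
+
+Consumer<SensorReading> consumer = client.newConsumer(JSONSchema.of(SensorReading.class))
+        .topic("sensor-data")
+        .subscriptionName("sensor-readers")
+        .subscribe();
+
+```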
+
+> Schemas are versioned in succession. Schema storage happens in the broker that handles the associated topic so that version assignments can be made. Once a version is assigned to or fetched for a schema, all subsequent messages produced by that producer are tagged with the appropriate version.
+
+
+## Supported schema formats
+
+The following formats are supported by the Pulsar schema registry:
+
+* None. If no schema is specified for a topic, producers and consumers will handle raw bytes.
+* `String` (used for UTF-8-encoded strings)
+* [JSON](https://www.json.org/)
+* [Protobuf](https://developers.google.com/protocol-buffers/)
+* [Avro](https://avro.apache.org/)
+
+For usage instructions, see the documentation for your preferred client library:
+
+* [Java](client-libraries-java.md#schemas)
+
+> Support for other schema formats will be added in future releases of Pulsar.
+
+## Managing Schemas
+
+You can use Pulsar admin tools to manage schemas for topics.
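+
+For example, with the Java admin client you can fetch the schema currently registered for a topic. This is only a sketch; the admin service URL and topic name are assumptions:
+
+```java
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.common.schema.SchemaInfo;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Retrieve the latest schema registered for the topic (if any)
+SchemaInfo schemaInfo = admin.schemas().getSchemaInfo("persistent://public/default/sensor-data");
+System.out.println(schemaInfo.getType());
+
+admin.close();
+
+```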
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-tiered-storage.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-tiered-storage.md
new file mode 100644
index 0000000..0b45b0a
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-tiered-storage.md
@@ -0,0 +1,18 @@
+---
+id: concepts-tiered-storage
+title: Tiered Storage
+sidebar_label: "Tiered Storage"
+original_id: concepts-tiered-storage
+---
+
+Pulsar's segment-oriented architecture allows topic backlogs to grow very large, effectively without limit. However, this can become expensive over time.
+
+One way to alleviate this cost is to use Tiered Storage. With tiered storage, older messages in the backlog can be moved from BookKeeper to a cheaper storage mechanism, while still allowing clients to access the backlog as if nothing had changed.
+
+![Tiered Storage](/assets/pulsar-tiered-storage.png)
+
+> Data written to BookKeeper is replicated to 3 physical machines by default. However, once a segment is sealed in BookKeeper it becomes immutable and can be copied to long term storage. Long term storage can achieve cost savings by using mechanisms such as [Reed-Solomon error correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction) to require fewer physical copies of data.
+
+Pulsar currently supports S3, Google Cloud Storage (GCS), and the filesystem for [long-term storage](https://pulsar.apache.org/docs/en/cookbooks-tiered-storage/). Offloading to long-term storage is triggered via a REST API or the command-line interface. The user passes in the amount of topic data they wish to retain on BookKeeper, and the broker copies the backlog data to long-term storage. The original data is then deleted from BookKeeper after a configured delay (4 hours by default).
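+
+As a sketch of what triggering an offload can look like programmatically, the Java admin client exposes an offload operation on topics. The topic name and the use of `MessageId.earliest` as a placeholder threshold below are assumptions for illustration; in practice the threshold message ID is chosen based on how much data you want to keep in BookKeeper:
+
+```java
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.client.api.MessageId;
+
+PulsarAdmin admin = PulsarAdmin.builder()
+        .serviceHttpUrl("http://localhost:8080")
+        .build();
+
+// Offload backlog data up to the given message ID to long-term storage
+admin.topics().triggerOffload("persistent://public/default/my-topic", MessageId.earliest);
+
+admin.close();
+
+```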
+
+> For a guide for setting up tiered storage, see the [Tiered storage cookbook](cookbooks-tiered-storage).
diff --git a/site2/website-next/versioned_docs/version-2.3.2/concepts-topic-compaction.md b/site2/website-next/versioned_docs/version-2.3.2/concepts-topic-compaction.md
new file mode 100644
index 0000000..c85e703
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/concepts-topic-compaction.md
@@ -0,0 +1,37 @@
+---
+id: concepts-topic-compaction
+title: Topic Compaction
+sidebar_label: "Topic Compaction"
+original_id: concepts-topic-compaction
+---
+
+Pulsar was built with highly scalable [persistent storage](concepts-architecture-overview.md#persistent-storage) of message data as a primary objective. Pulsar topics enable you to persistently store as many unacknowledged messages as you need while preserving message ordering. By default, Pulsar stores *all* unacknowledged/unprocessed messages produced on a topic. Accumulating many unacknowledged messages on a topic is necessary for many Pulsar use cases but it can also be very time intensive for Pulsar consumers to "rewind" through the entire log of messages.
+
+> For a more practical guide to topic compaction, see the [Topic compaction cookbook](cookbooks-compaction).
+
+For some use cases consumers don't need a complete "image" of the topic log. They may only need a few values to construct a more "shallow" image of the log, perhaps even just the most recent value. For these kinds of use cases Pulsar offers **topic compaction**. When you run compaction on a topic, Pulsar goes through a topic's backlog and removes messages that are *obscured* by later messages, i.e. it goes through the topic on a per-key basis and leaves only the most recent message associated with that key.
+
+Pulsar's topic compaction feature:
+
+* Allows for faster "rewind" through topic logs
+* Applies only to [persistent topics](concepts-architecture-overview.md#persistent-storage)
+* Is triggered automatically when the backlog reaches a certain size, or can be triggered manually via the command line. See the [Topic compaction cookbook](cookbooks-compaction)
+* Is conceptually and operationally distinct from [retention and expiry](concepts-messaging.md#message-retention-and-expiry). Topic compaction *does*, however, respect retention. If retention has removed a message from the message backlog of a topic, the message will also not be readable from the compacted topic ledger.
+
+> #### Topic compaction example: the stock ticker
+> An example use case for a compacted Pulsar topic would be a stock ticker topic. On a stock ticker topic, each message bears a timestamped dollar value for stocks for purchase (with the message key holding the stock symbol, e.g. `AAPL` or `GOOG`). With a stock ticker you may care only about the most recent value(s) of the stock and have no interest in historical data (i.e. you don't need to construct a complete image of the topic's sequence of messages per key). Compaction would be highly beneficial in this case because it would keep consumers from needing to rewind through obscured messages.
+
+
+## How topic compaction works
+
+When topic compaction is triggered [via the CLI](cookbooks-compaction), Pulsar iterates over the entire topic from beginning to end. For each key that it encounters, the compaction routine keeps a record of the latest occurrence of that key.
+
+After that, the broker will create a new [BookKeeper ledger](concepts-architecture-overview.md#ledgers) and make a second iteration through each message on the topic. For each message, if the key matches the latest occurrence of that key, then the key's data payload, message ID, and metadata will be written to the newly created ledger. If the message is not the latest occurrence for its key, it is skipped. If any given message has an empty payload, it will be skipped and considered deleted (akin to the concept of [tombstones](https://en.wikipedia.org/wiki/Tombstone_(data_store)) in key-value databases). At the end of this second iteration through the topic, the newly created BookKeeper ledger is closed and two things are written to the topic's metadata: the ID of the BookKeeper ledger and the message ID of the last compacted message (this is known as the **compaction horizon** of the topic). Once this metadata is written, compaction is complete.
+
+After the initial compaction operation, the Pulsar [broker](reference-terminology.md#broker) that owns the topic is notified whenever any future changes are made to the compaction horizon and compacted backlog. When such changes occur:
+
+* Clients (consumers and readers) that have compacted reads enabled will attempt to read messages from the topic (see the sketch after this list) and either:
+  * Read from the topic like normal (if the message ID is greater than or equal to the compaction horizon) or
+  * Read beginning at the compaction horizon (if the message ID is lower than the compaction horizon)
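+
+As a sketch, a Java consumer opts in to reading the compacted view of a topic through the client API (the topic and subscription names below are assumptions for illustration):
+
+```java
+
+import org.apache.pulsar.client.api.Consumer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient client = PulsarClient.builder()
+        .serviceUrl("pulsar://localhost:6650")
+        .build();
+
+// readCompacted(true) asks the broker to serve reads below the compaction
+// horizon from the compacted ledger rather than from the full backlog
+Consumer<byte[]> consumer = client.newConsumer()
+        .topic("persistent://public/default/stock-ticker")
+        .subscriptionName("ticker-reader")
+        .readCompacted(true)
+        .subscribe();
+
+```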
+
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-api.md b/site2/website-next/versioned_docs/version-2.3.2/functions-api.md
new file mode 100644
index 0000000..ee4fe90
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-api.md
@@ -0,0 +1,799 @@
+---
+id: functions-api
+title: The Pulsar Functions API
+sidebar_label: "API"
+original_id: functions-api
+---
+
+[Pulsar Functions](functions-overview) provides an easy-to-use API that developers can use to create and manage processing logic for the Apache Pulsar messaging system. With Pulsar Functions, you can write functions of any level of complexity in [Java](#functions-for-java) or [Python](#functions-for-python) and run them in conjunction with a Pulsar cluster without needing to run a separate stream processing engine.
+
+> For a more in-depth overview of the Pulsar Functions feature, see the [Pulsar Functions overview](functions-overview).
+
+## Core programming model
+
+Pulsar Functions provide a wide range of functionality but are based on a very simple programming model. You can think of Pulsar Functions as lightweight processes that
+
+* consume messages from one or more Pulsar topics and then
+* apply some user-defined processing logic to each incoming message. That processing logic could be just about anything you want, including
+  * producing the resulting, processed message on another Pulsar topic, or
+  * doing something else with the message, such as writing results to an external database.
+
+You could use Pulsar Functions, for example, to set up the following processing chain:
+
+* A [Python](#functions-for-python) function listens on the `raw-sentences` topic and "[sanitizes](#example-function)" incoming strings (removing extraneous whitespace and converting all characters to lower case) and then publishes the results to a `sanitized-sentences` topic
+* A [Java](#functions-for-java) function listens on the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic
+* Finally, a Python function listens on the `results` topic and writes the results to a MySQL table
+
+### Example function
+
+Here's an example "input sanitizer" function written in Python and stored in a `sanitizer.py` file:
+
+```python
+
+def clean_string(s):
+    return s.strip().lower()
+
+def process(input):
+    return clean_string(input)
+
+```
+
+Some things to note about this Pulsar Function:
+
+* There is no client, producer, or consumer object involved. All message "plumbing" is already taken care of for you, enabling you to worry only about processing logic.
+* No topics, subscription types, tenants, or namespaces are specified in the function logic itself. Instead, topics are specified upon [deployment](#example-deployment). This means that you can use and re-use Pulsar Functions across topics, tenants, and namespaces without needing to hard-code those attributes.
+
+### Example deployment
+
+Deploying Pulsar Functions is handled by the [`pulsar-admin`](reference-pulsar-admin) CLI tool, in particular the [`functions`](reference-pulsar-admin.md#functions) command. Here's an example command that would run our [sanitizer](#example-function) function from above in [local run](functions-deploying.md#local-run-mode) mode:
+
+```bash
+
+$ bin/pulsar-admin functions localrun \
+  --py sanitizer.py \
+  --classname sanitizer \
+  --tenant public \
+  --namespace default \
+  --name sanitizer-function \
+  --inputs dirty-strings-in \
+  --output clean-strings-out \
+  --log-topic sanitizer-logs
+
+```
+
+In this command:
+
+* `--py` points to the Python file containing the function's code
+* `--classname` names the class or function holding the processing logic
+* `--tenant` and `--namespace` set the function's tenant and namespace (derived from the topic name by default)
+* `--name` sets the name of the function (the class name by default)
+* `--inputs` and `--output` set the input topic(s) and output topic for the function
+* `--log-topic` sets the topic to which all of the function's logs are published
+
+For instructions on running functions in your Pulsar cluster, see the [Deploying Pulsar Functions](functions-deploying) guide.
+
+### Available APIs
+
+In both Java and Python, you have two options for writing Pulsar Functions:
+
+Interface | Description | Use cases
+:---------|:------------|:---------
+Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python) | Functions that don't require access to the function's [context](#context)
+Pulsar Function SDK for Java/Python | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces | Functions that require access to the function's [context](#context)
+
+In Python, for example, this language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, would have no external dependencies:
+
+```python
+
+def process(input):
+    return "{}!".format(input)
+
+```
+
+This function, however, would use the Pulsar Functions [SDK for Python](#python-sdk-functions):
+
+```python
+
+from pulsar import Function
+
+class DisplayFunctionName(Function):
+    def process(self, input, context):
+        function_name = context.function_name()
+        return "The function processing this message has the name {0}".format(function_name)
+
+```
+
+### Functions, Messages and Message Types
+
+Pulsar Functions can take byte arrays as input and produce byte arrays as output. However, in languages that support typed interfaces (currently just Java), you can write typed functions as well. In this scenario, there are two ways to bind messages to types:
+
+* [Schema Registry](#schema-registry)
+* [SerDe](#serde)
+
+### Schema Registry
+
+Pulsar has a built-in [Schema Registry](concepts-schema-registry) and comes bundled with a variety of popular schema types (Avro, JSON, and Protobuf). Pulsar Functions can leverage existing schema information from input topics to derive the input type. The same applies to the output topic as well.
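+
+For example, a Java function can simply declare typed input and output, and the schemas attached to the input and output topics are used to (de)serialize those types. The class below is only an illustrative sketch:
+
+```java
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+// The Double input and String output are bound to the schemas of the
+// function's input and output topics
+public class TemperatureLabelFunction implements Function<Double, String> {
+    @Override
+    public String process(Double celsius, Context context) {
+        return celsius > 30.0 ? "hot" : "ok";
+    }
+}
+
+```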
+
+### SerDe
+
+SerDe stands for **Ser**ialization and **De**serialization. All Pulsar Functions use SerDe for message handling. How SerDe works by default depends on the language you're using for a particular function:
+
+* In [Python](#python-serde), the default SerDe is identity, meaning that the type is serialized as whatever type the producer function returns
+* In [Java](#java-serde), a number of commonly used types (`String`s, `Integer`s, etc.) are supported by default
+
+In both languages, however, you can write your own custom SerDe logic for more complex, application-specific types. See the docs for [Java](#java-serde) and [Python](#python-serde) for language-specific instructions.
+
+### Context
+
+Both the [Java](#java-sdk-functions) and [Python](#python-sdk-functions) SDKs provide access to a **context object** that can be used by the function. This context object provides a wide variety of information and functionality to the function:
+
+* The name and ID of the Pulsar Function
+* The message ID of each message. Each Pulsar message is automatically assigned an ID.
+* The key, event time, properties and partition key of each message
+* The name of the topic on which the message was sent
+* The names of all input topics as well as the output topic associated with the function
+* The name of the class used for [SerDe](#serialization-and-deserialization-serde)
+* The [tenant](reference-terminology.md#tenant) and namespace associated with the function
+* The ID of the Pulsar Functions instance running the function
+* The version of the function
+* The [logger object](functions-overview.md#logging) used by the function, which can be used to create function log messages
+* Access to arbitrary [user config](#user-config) values supplied via the CLI
+* An interface for recording [metrics](functions-metrics)
+* An interface for storing and retrieving state in [state storage](functions-overview.md#state-storage)
+* A function to publish new messages onto arbitrary topics.
+* A function to acknowledge the message being processed (if auto-acknowledgement is disabled).
+
+### User config
+
+When you run or update Pulsar Functions created using the [SDK](#available-apis), you can pass arbitrary key/values to them via the command line with the `--userConfig` flag. Key/values must be specified as JSON. Here's an example of a function creation command that passes a user config key/value to a function:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --name word-filter \
+  # Other function configs
+  --user-config '{"forbidden-word":"rosebud"}'
+
+```
+
+If the function were a Python function, that config value could be accessed like this:
+
+```python
+
+from pulsar import Function
+
+class WordFilter(Function):
+    def process(self, input, context):
+        forbidden_word = context.user_config()["forbidden-word"]
+
+        # Don't publish the message if it contains the user-supplied
+        # forbidden word
+        if forbidden_word in input:
+            pass
+        # Otherwise publish the message
+        else:
+            return input
+
+```
+
+## Functions for Java
+
+Writing Pulsar Functions in Java involves implementing one of two interfaces:
+
+* The [`java.util.function.Function`](https://docs.oracle.com/javase/8/docs/api/java/util/function/Function.html) interface
+* The {@inject: javadoc:Function:/pulsar-functions/org/apache/pulsar/functions/api/Function} interface. This interface works much like the `java.util.Function` interface, but with the important difference that it provides a {@inject: javadoc:Context:/pulsar-functions/org/apache/pulsar/functions/api/Context} object that you can use in a [variety of ways](#context)
+
+### Get started
+
+In order to write Pulsar Functions in Java, you'll need to install the proper [dependencies](#dependencies) and package your function [as a JAR](#packaging).
+
+#### Dependencies
+
+How you get started writing Pulsar Functions in Java depends on which API you're using:
+
+* If you're writing a [Java native function](#java-native-functions), you won't need any external dependencies.
+* If you're writing a [Java SDK function](#java-sdk-functions), you'll need to import the `pulsar-functions-api` library.
+
+  Here's an example for a Maven `pom.xml` configuration file:
+
+  ```xml
+  
+  <dependency>
+    <groupId>org.apache.pulsar</groupId>
+    <artifactId>pulsar-functions-api</artifactId>
+    <version>2.1.1-incubating</version>
+  </dependency>
+  
+  ```
+
+  Here's an example for a Gradle `build.gradle` configuration file:
+
+  ```groovy
+  
+  dependencies {
+  compile group: 'org.apache.pulsar', name: 'pulsar-functions-api', version: '2.1.1-incubating'
+  }
+  
+  ```
+
+#### Packaging
+
+Whether you're writing Java Pulsar Functions using the [native](#java-native-functions) Java `java.util.function.Function` interface or using the [Java SDK](#java-sdk-functions), you'll need to package your function(s) as a "fat" JAR.
+
+> #### Starter repo
+> If you'd like to get up and running quickly, you can use [this repo](https://github.com/streamlio/pulsar-functions-java-starter), which contains the necessary Maven configuration to build a fat JAR as well as some example functions.
+
+### Java native functions
+
+If your function doesn't require access to its [context](#context), you can create a Pulsar Function by implementing the [`java.util.function.Function`](https://docs.oracle.com/javase/8/docs/api/java/util/function/Function.html) interface, which has this very simple, single-method signature:
+
+```java
+
+public interface Function<I, O> {
+    O apply(I input);
+}
+
+```
+
+Here's an example function that takes a string as its input, adds an exclamation point to the end of the string, and then publishes the resulting string:
+
+```java
+
+import java.util.function.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String apply(String input) {
+        return String.format("%s!", input);
+    }
+}
+
+```
+
+In general, you should use native functions when you don't need access to the function's [context](#context). If you *do* need access to the function's context, then we recommend using the [Pulsar Functions Java SDK](#java-sdk-functions).
+
+#### Java native examples
+
+There is one example Java native function in this {@inject: github:folder:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples}:
+
+* {@inject: github:JavaNativeExclamationFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java}
+
+### Java SDK functions
+
+To get started developing Pulsar Functions using the Java SDK, you'll need to add a dependency on the `pulsar-functions-api` artifact to your project. Instructions can be found [above](#dependencies).
+
+> An easy way to get up and running with Pulsar Functions in Java is to clone the [`pulsar-functions-java-starter`](https://github.com/streamlio/pulsar-functions-java-starter) repo and follow the instructions there.
+
+
+#### Java SDK examples
+
+There are several example Java SDK functions in this {@inject: github:folder:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples}:
+
+Function name | Description
+:-------------|:-----------
+[`ContextFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ContextFunction.java) | Illustrates [context](#context)-specific functionality like [logging](#java-logging) and [metrics](#java-metrics)
+[`WordCountFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java) | Illustrates usage of Pulsar Function [state-storage](functions-overview.md#state-storage)
+[`ExclamationFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java) | A basic string manipulation function for the Java SDK
+[`LoggingFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/LoggingFunction.java) | A function that shows how [logging](#java-logging) works for Java
+[`PublishFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/PublishFunction.java) | Publishes results to a topic specified in the function's [user config](#java-user-config) (rather than on the function's output topic)
+[`UserConfigFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/UserConfigFunction.java) | A function that consumes [user-supplied configuration](#java-user-config) values
+[`UserMetricFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/UserMetricFunction.java) | A function that records metrics
+[`VoidFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/UserMetricFunction.java) | A simple [void function](#void-functions)
+
+### Java context object
+
+The {@inject: javadoc:Context:/client/org/apache/pulsar/functions/api/Context} interface provides a number of methods that you can use to access the function's [context](#context). The various method signatures for the `Context` interface are listed below:
+
+```java
+
+public interface Context {
+    Record<?> getCurrentRecord();
+    Collection<String> getInputTopics();
+    String getOutputTopic();
+    String getOutputSchemaType();
+    String getTenant();
+    String getNamespace();
+    String getFunctionName();
+    String getFunctionId();
+    String getInstanceId();
+    String getFunctionVersion();
+    Logger getLogger();
+    void incrCounter(String key, long amount);
+    void incrCounterAsync(String key, long amount);
+    long getCounter(String key);
+    long getCounterAsync(String key);
+    void putState(String key, ByteBuffer value);
+    void putStateAsync(String key, ByteBuffer value);
+    ByteBuffer getState(String key);
+    ByteBuffer getStateAsync(String key);
+    Map<String, Object> getUserConfigMap();
+    Optional<Object> getUserConfigValue(String key);
+    Object getUserConfigValueOrDefault(String key, Object defaultValue);
+    void recordMetric(String metricName, double value);
+    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
+    <O> CompletableFuture<Void> publish(String topicName, O object);
+}
+
+```
+
+Here's an example function that uses several methods available via the `Context` object:
+
+```java
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.stream.Collectors;
+
+public class ContextFunction implements Function<String, Void> {
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
+        String functionName = context.getFunctionName();
+
+        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
+                input,
+                inputTopics);
+
+        LOG.info(logMessage);
+
+        String metricName = String.format("function-%s-messages-received", functionName);
+        context.recordMetric(metricName, 1);
+
+        return null;
+    }
+}
+
+```
+
+### Void functions
+
+Pulsar Functions can publish results to an output topic, but this isn't required. You can also have functions that simply produce a log, write results to a database, etc. Here's a function that writes a simple log every time a message is received:
+
+```java
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LogFunction implements Function<String, Void> {
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        LOG.info("The following message was received: {}", input);
+        return null;
+    }
+}
+
+```
+
+> When using Java functions in which the output type is `Void`, the function must *always* return `null`.
+
+### Java SerDe
+
+Pulsar Functions use [SerDe](#serialization-and-deserialization-serde) when publishing data to and consuming data from Pulsar topics. When you're writing Pulsar Functions in Java, the following basic Java types are built in and supported by default:
+
+* `String`
+* `Double`
+* `Integer`
+* `Float`
+* `Long`
+* `Short`
+* `Byte`
+
+These built-in types require no custom SerDe. For more complex, application-specific types, you need to implement the following interface:
+
+```java
+
+public interface SerDe<T> {
+    T deserialize(byte[] input);
+    byte[] serialize(T input);
+}
+
+```
+
+#### Java SerDe example
+
+Imagine that you're writing Pulsar Functions in Java that are processing tweet objects. Here's a simple example `Tweet` class:
+
+```java
+
+public class Tweet {
+    private String username;
+    private String tweetContent;
+
+    public Tweet(String username, String tweetContent) {
+        this.username = username;
+        this.tweetContent = tweetContent;
+    }
+
+    // Standard setters and getters
+}
+
+```
+
+In order to be able to pass `Tweet` objects directly between Pulsar Functions, you'll need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
+
+```java
+
+package com.example.serde;
+
+import org.apache.pulsar.functions.api.SerDe;
+
+import java.util.regex.Pattern;
+
+public class TweetSerde implements SerDe<Tweet> {
+    public Tweet deserialize(byte[] input) {
+        String s = new String(input);
+        String[] fields = s.split(Pattern.quote("|"));
+        return new Tweet(fields[0], fields[1]);
+    }
+
+    public byte[] serialize(Tweet input) {
+        return "%s|%s".format(input.getUsername(), input.getTweetContent()).getBytes();
+    }
+}
+
+```
+
+To apply this custom SerDe to a particular Pulsar Function, you would need to:
+
+* Package the `Tweet` and `TweetSerde` classes into a JAR
+* Specify a path to the JAR and SerDe class name when deploying the function
+
+Here's an example [`create`](reference-pulsar-admin.md#create-1) operation:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --jar /path/to/your.jar \
+  --output-serde-classname com.example.serde.TweetSerde \
+  # Other function attributes
+
+```
+
+> #### Custom SerDe classes must be packaged with your function JARs
+> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. That means that you'll need to always include your SerDe classes in your function JARs. If not, Pulsar will return an error.
+
+### Java logging
+
+Pulsar Functions that use the [Java SDK](#java-sdk-functions) have access to an [SLF4j](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. Here's a simple example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`:
+
+```java
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LoggingFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String messageId = new String(context.getMessageId());
+
+        if (input.contains("danger")) {
+            LOG.warn("A warning was received in message {}", messageId);
+        } else {
+            LOG.info("Message {} received\nContent: {}", messageId, input);
+        }
+
+        return null;
+    }
+}
+
+```
+
+If you want your function to produce logs, you need to specify a log topic when creating or running the function. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --jar my-functions.jar \
+  --classname my.package.LoggingFunction \
+  --log-topic persistent://public/default/logging-function-logs \
+  # Other function configs
+
+```
+
+Now, all logs produced by the `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.
+
+### Java user config
+
+The Java SDK's [`Context`](#context) object enables you to access key/value pairs provided to the Pulsar Function via the command line (as JSON). Here's an example function creation command that passes a key/value pair:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  # Other function configs
+  --user-config '{"word-of-the-day":"verdure"}'
+
+```
+
+To access that value in a Java function:
+
+```java
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.Optional;
+
+public class UserConfigFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
+        if (wotd.isPresent()) {
+            LOG.info("The word of the day is {}", wotd.get());
+        } else {
+            LOG.warn("No word of the day provided");
+        }
+        return null;
+    }
+}
+
+```
+
+The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (i.e. every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line.
+
+You can also access the entire user config map or set a default value in case no value is present:
+
+```java
+
+// Get the whole config map
+Map<String, String> allConfigs = context.getUserConfigMap();
+
+// Get value or resort to default
+String wotd = (String) context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
+
+```
+
+> For all key/value pairs passed to Java Pulsar Functions, both the key *and* the value are `String`s. If you'd like the value to be of a different type, you will need to deserialize from the `String` type.
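+
+For example, a numeric setting could be parsed from its string form like this (a sketch; the `max-length` key is purely illustrative):
+
+```java
+
+// User config values are handled as strings; parse them into the type you need
+String maxLengthStr = (String) context.getUserConfigValueOrDefault("max-length", "128");
+int maxLength = Integer.parseInt(maxLengthStr);
+
+```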
+
+### Java metrics
+
+You can record metrics using the [`Context`](#context) object on a per-key basis. You can, for example, set a metric for the key `process-count` and a different metric for the key `elevens-count` every time the function processes a message. Here's an example:
+
+```java
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class MetricRecorderFunction implements Function<Integer, Void> {
+    @Override
+    public Void process(Integer input, Context context) {
+        // Records the metric 1 every time a message arrives
+        context.recordMetric("hit-count", 1);
+
+        // Records the metric only if the arriving number equals 11
+        if (input == 11) {
+            context.recordMetric("elevens-count", 1);
+        }
+
+        return null;
+    }
+}
+
+```
+
+> For instructions on reading and using metrics, see the [Monitoring](deploy-monitoring) guide.
+
+
+## Functions for Python
+
+Writing Pulsar Functions in Python entails implementing one of two things:
+
+* A `process` function that takes an input (message data from the function's input topic(s)), applies some kind of logic to it, and either returns an object (to be published to the function's output topic) or `pass`es and thus doesn't produce a message
+* A `Function` class that has a `process` method that provides a message input to process and a [context](#context) object
+
+### Get started
+
+Regardless of which [deployment mode](functions-deploying) you're using, the `pulsar-client` Python library has to be installed on any machine that's running Pulsar Functions written in Python.
+
+That could be your local machine for [local run mode](functions-deploying.md#local-run-mode) or a machine running a Pulsar [broker](reference-terminology.md#broker) for [cluster mode](functions-deploying.md#cluster-mode). To install the library using pip:
+
+```bash
+
+$ pip install pulsar-client
+
+```
+
+### Packaging
+
+At the moment, the code for Pulsar Functions written in Python must be contained within a single Python file. In the future, Pulsar Functions may support other packaging formats, such as [**P**ython **EX**ecutables](https://github.com/pantsbuild/pex) (PEXes).
+
+### Python native functions
+
+If your function doesn't require access to its [context](#context), you can create a Pulsar Function by implementing a `process` function, which provides a single input object that you can process however you wish. Here's an example function that takes a string as its input, adds an exclamation point at the end of the string, and then publishes the resulting string:
+
+```python
+
+def process(input):
+    return "{0}!".format(input)
+
+```
+
+In general, you should use native functions when you don't need access to the function's [context](#context). If you *do* need access to the function's context, then we recommend using the [Pulsar Functions Python SDK](#python-sdk-functions).
+
+#### Python native examples
+
+There is one example Python native function in this {@inject: github:folder:/pulsar-functions/python-examples}:
+
+* {@inject: github:`native_exclamation_function.py`:/pulsar-functions/python-examples/native_exclamation_function.py}
+
+### Python SDK functions
+
+To get started developing Pulsar Functions using the Python SDK, you'll need to install the [`pulsar-client`](/api/python) library using the instructions [above](#get-started).
+
+#### Python SDK examples
+
+There are several example Python functions in this {@inject: github:folder:/pulsar-functions/python-examples}:
+
+Function file | Description
+:-------------|:-----------
+[`exclamation_function.py`](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py) | Adds an exclamation point at the end of each incoming string
+[`logging_function.py`](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/logging_function.py) | Logs each incoming message
+[`thumbnailer.py`](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/thumbnailer.py) | Takes image data as input and outputs a 128x128 thumbnail of each image
+
+#### Python context object
+
+The [`Context`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/context.py) class provides a number of methods that you can use to access the function's [context](#context). The various methods for the `Context` class are listed below:
+
+Method | What it provides
+:------|:----------------
+`get_message_id` | The message ID of the message being processed
+`get_message_key` | The key of the message being processed
+`get_message_eventtime` | The event time of the message being processed
+`get_message_properties` | The properties of the message being processed
+`get_current_message_topic_name` | The topic of the message currently being processed
+`get_function_tenant` | The tenant under which the current Pulsar Function runs
+`get_function_namespace` | The namespace under which the current Pulsar Function runs
+`get_function_name` | The name of the current Pulsar Function
+`get_function_id` | The ID of the current Pulsar Function
+`get_instance_id` | The ID of the current Pulsar Functions instance
+`get_function_version` | The version of the current Pulsar Function
+`get_logger` | A logger object that can be used for [logging](#python-logging)
+`get_user_config_value` | Returns the value of a [user-defined config](#python-user-config) (or `None` if the config doesn't exist)
+`get_user_config_map` | Returns the entire user-defined config as a dict
+`get_secret` | The secret value associated with the name
+`get_partition_key` | The partition key of the input message
+`record_metric` | Records a per-key [metric](#python-metrics)
+`publish` | Publishes a message to the specified Pulsar topic
+`get_output_serde_class_name` | The name of the output [SerDe](#python-serde) class
+`ack` | [Acks](reference-terminology.md#acknowledgment-ack) the message being processed to Pulsar
+`incr_counter` | Increase the counter of a given key in the managed state
+`get_counter` | Get the counter of a given key in the managed state
+`del_counter` | Delete the counter of a given key in the managed state
+`put_state` | Update the value of a given key in the managed state
+`get_state` | Get the value of a given key in the managed state
+
+### Python SerDe
+
+Pulsar Functions use [SerDe](#serialization-and-deserialization-serde) when publishing data to and consuming data from Pulsar topics (this is true of both [native](#python-native-functions) functions and [SDK](#python-sdk-functions) functions). You can specify the SerDe when [creating](functions-deploying.md#cluster-mode) or [running](functions-deploying.md#local-run-mode) functions. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name my_function \
+  --py my_function.py \
+  --classname my_function.MyFunction \
+  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
+  --output-serde-classname Serde3 \
+  --output output-topic-1
+
+```
+
+In this case, there are two input topics, `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Function logic, including the processing function and SerDe classes, must be contained within a single Python file.
+
+When using Pulsar Functions for Python, you essentially have three SerDe options:
+
+1. You can use the [`IdentitySerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying a SerDe means that this option is used.
+2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python's [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
+3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
+
+The table below shows when you should use each SerDe:
+
+SerDe option | When to use
+:------------|:-----------
+`IdentitySerDe` | When you're working with simple types like strings, Booleans, integers, and the like
+`PickleSerDe` | When you're working with complex, application-specific types and are comfortable with `pickle`'s "best effort" approach
+Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes
+
+#### Python SerDe example
+
+Imagine that you're writing Pulsar Functions in Python that are processing tweet objects. Here's a simple `Tweet` class:
+
+```python
+
+class Tweet(object):
+    def __init__(self, username, tweet_content):
+        self.username = username
+        self.tweet_content = tweet_content
+
+```
+
+In order to use this class in Pulsar Functions, you'd have two options:
+
+1. You could specify `PickleSerDe`, which would apply the [`pickle`](https://docs.python.org/3/library/pickle.html) library's SerDe
+1. You could create your own SerDe class. Here's a simple example:
+
+  ```python
+  
+  from pulsar import SerDe
+
+  class TweetSerDe(SerDe):
+      def serialize(self, input):
+          return "{0}|{1}".format(input.username, input.tweet_content).encode('utf-8')
+
+      def deserialize(self, input_bytes):
+          tweet_components = input_bytes.decode('utf-8').split('|')
+          return Tweet(tweet_components[0], tweet_components[1])
+  
+  ```
+
+### Python logging
+
+Pulsar Functions that use the [Python SDK](#python-sdk-functions) have access to a logging object that can be used to produce logs at the chosen log level. Here's a simple example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`:
+
+```python
+
+from pulsar import Function
+
+class LoggingFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        msg_id = context.get_message_id()
+        if 'danger' in input:
+            logger.warn("A warning was received in message {0}".format(context.get_message_id()))
+        else:
+            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))
+
+```
+
+If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --py logging_function.py \
+  --classname logging_function.LoggingFunction \
+  --log-topic logging-function-logs \
+  # Other function configs
+
+```
+
+Now, all logs produced by the `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
+
+### Python user config
+
+The Python SDK's [`Context`](#context) object enables you to access key/value pairs provided to the Pulsar Function via the command line (as JSON). Here's an example function creation command that passes a key/value pair:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  # Other function configs \
+  --user-config '{"word-of-the-day":"verdure"}'
+
+```
+
+To access that value in a Python function:
+
+```python
+
+from pulsar import Function
+
+class UserConfigFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        wotd = context.get_user_config_value('word-of-the-day')
+        if wotd is None:
+            logger.warn('No word of the day provided')
+        else:
+            logger.info("The word of the day is {0}".format(wotd))
+
+```
+
+### Python metrics
+
+You can record metrics using the [`Context`](#context) object on a per-key basis. You can, for example, set a metric for the key `process-count` and a different metric for the key `elevens-count` every time the function processes a message. Here's an example:
+
+```python
+
+from pulsar import Function
+
+class MetricRecorderFunction(Function):
+    def process(self, input, context):
+        context.record_metric('hit-count', 1)
+
+        if input == 11:
+            context.record_metric('elevens-count', 1)
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-deploying.md b/site2/website-next/versioned_docs/version-2.3.2/functions-deploying.md
new file mode 100644
index 0000000..8ad8dbe
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-deploying.md
@@ -0,0 +1,261 @@
+---
+id: functions-deploying
+title: Deploying and managing Pulsar Functions
+sidebar_label: "Deploying functions"
+original_id: functions-deploying
+---
+
+At the moment, there are two deployment modes available for Pulsar Functions:
+
+Mode | Description
+:----|:-----------
+Local run mode | The function runs in your local environment, for example on your laptop
+Cluster mode | The function runs *inside of* your Pulsar cluster, on the same machines as your Pulsar brokers
+
+> #### Contributing new deployment modes
+> The Pulsar Functions feature was designed, however, with extensibility in mind. Other deployment options will be available in the future. If you'd like to add a new deployment option, we recommend getting in touch with the Pulsar developer community at [dev@pulsar.apache.org](mailto:dev@pulsar.apache.org).
+
+## Requirements
+
+In order to deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this:
+
+* You can run a [standalone cluster](getting-started-standalone) locally on your own machine
+* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](deploy-dcos), and more
+
+If you're running a non-[standalone](reference-terminology.md#standalone) cluster, you'll need to obtain the service URL for the cluster. How you obtain the service URL will depend on how you deployed your Pulsar cluster.
+
+If you're going to deploy and trigger Python user-defined functions, you should install [the Pulsar Python client](http://pulsar.apache.org/docs/en/client-libraries-python/) first.
+
+## Command-line interface
+
+Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions, and several others.
+
+### Fully Qualified Function Name (FQFN)
+
+Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function's tenant, namespace, and function name. FQFNs look like this:
+
+```http
+
+tenant/namespace/name
+
+```
+
+FQFNs enable you to, for example, create multiple functions with the same name provided that they're in different namespaces.
+
+### Default arguments
+
+When managing Pulsar Functions, you'll need to specify a variety of information about those functions, including tenant, namespace, input and output topics, etc. There are some parameters, however, that have default values that will be supplied if omitted. The table below lists the defaults:
+
+Parameter | Default
+:---------|:-------
+Function name | Whichever value is specified for the class name (minus org, library, etc.). The flag `--classname org.example.MyFunction`, for example, would give the function a name of `MyFunction`.
+Tenant | Derived from the input topics' names. If the input topics are under the `marketing` tenant---i.e. the topic names have the form `persistent://marketing/{namespace}/{topicName}`---then the tenant will be `marketing`.
+Namespace | Derived from the input topics' names. If the input topics are under the `asia` namespace under the `marketing` tenant---i.e. the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace will be `asia`.
+Output topic | `{input topic}-{function name}-output`. A function with an input topic name of `incoming` and a function name of `exclamation`, for example, would have an output topic of `incoming-exclamation-output`.
+Subscription type | For at-least-once and at-most-once [processing guarantees](functions-guarantees), the [`SHARED`](concepts-messaging.md#shared) subscription type is applied by default; for effectively-once guarantees, [`FAILOVER`](concepts-messaging.md#failover) is applied
+Processing guarantees | [`ATLEAST_ONCE`](functions-guarantees)
+Pulsar service URL | `pulsar://localhost:6650`
+
+#### Example use of defaults
+
+Take this `create` command:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --jar my-pulsar-functions.jar \
+  --classname org.example.MyFunction \
+  --inputs my-function-input-topic1,my-function-input-topic2
+
+```
+
+The created function would have default values supplied for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`).
+
+## Local run mode
+
+If you run a Pulsar Function in **local run** mode, it will run on the machine from which the command is run (this could be your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, etc.). Here's an example [`localrun`](reference-pulsar-admin.md#localrun) command:
+
+```bash
+
+$ bin/pulsar-admin functions localrun \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+
+```
+
+By default, the function will connect to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you'd like to use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--broker-service-url` flag. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions localrun \
+  --broker-service-url pulsar://my-cluster-host:6650 \
+  # Other function parameters
+
+```
+
+## Cluster mode
+
+When you run a Pulsar Function in **cluster mode**, the function code will be uploaded to a Pulsar broker and run *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+
+```
+
+### Updating cluster mode functions
+
+You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. This command, for example, would update the function created in the section [above](#cluster-mode):
+
+```bash
+
+$ bin/pulsar-admin functions update \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/new-input-topic \
+  --output persistent://public/default/new-output-topic
+
+```
+
+### Parallelism
+
+Pulsar Functions run as processes called **instances**. When you run a Pulsar Function, it runs as a single instance by default (and in [local run mode](#local-run-mode) you can *only* run a single instance of a function).
+
+You can also specify the *parallelism* of a function, i.e. the number of instances to run, when you create the function. You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --parallelism 3 \
+  # Other function info
+
+```
+
+You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) command. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions update \
+  --parallelism 5 \
+  # Other function info
+
+```
+
+If you're specifying a function's configuration via YAML, use the `parallelism` parameter. Here's an example config file:
+
+```yaml
+
+# function-config.yaml
+parallelism: 3
+inputs:
+- persistent://public/default/input-1
+output: persistent://public/default/output-1
+# other parameters
+
+```
+
+And here's the corresponding update command:
+
+```bash
+
+$ bin/pulsar-admin functions update \
+  --function-config-file function-config.yaml
+
+```
+
+### Function instance resources
+
+When you run Pulsar Functions in [cluster run](#cluster-mode) mode, you can specify the resources that are assigned to each function [instance](#parallelism):
+
+Resource | Specified as... | Runtimes
+:--------|:----------------|:--------
+CPU | The number of cores | Docker (coming soon)
+RAM | The number of bytes | Process, Docker
+Disk space | The number of bytes | Docker
+
+Here's an example function creation command that allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --jar target/my-functions.jar \
+  --classname org.example.functions.MyFunction \
+  --cpu 8 \
+  --ram 8589934592 \
+  --disk 10737418240
+
+```
+
+> #### Resources are *per instance*
+> The resources that you apply to a given Pulsar Function are applied to each [instance](#parallelism) of the function. If you apply 8 GB of RAM to a function with a parallelism of 5, for example, then you are applying 40 GB of RAM total for the function. You should always make sure to factor parallelism---i.e. the number of instances---into your resource calculations.
+
+## Triggering Pulsar Functions
+
+If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function's output (if any) via the command line.
+
+> Triggering a function is ultimately no different from invoking a function by producing a message on one of the function's input topics. The [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command is essentially a convenient mechanism for sending messages to functions without needing to use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
+
+To show an example of function triggering, let's start with a simple [Python function](functions-api.md#functions-for-python) that returns a string based on the input:
+
+```python
+
+# myfunc.py
+def process(input):
+    return "This function has been triggered with a value of {0}".format(input)
+
+```
+
+Let's run that function in [cluster mode](#cluster-mode):
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name myfunc \
+  --py myfunc.py \
+  --classname myfunc \
+  --inputs persistent://public/default/in \
+  --output persistent://public/default/out
+
+```
+
+Now let's make a consumer listen on the output topic for messages coming from the `myfunc` function using the [`pulsar-client consume`](reference-cli-tools.md#consume) command:
+
+```bash
+
+$ bin/pulsar-client consume persistent://public/default/out \
+  --subscription-name my-subscription \
+  --num-messages 0 # Listen indefinitely
+
+```
+
+Now let's trigger that function:
+
+```bash
+
+$ bin/pulsar-admin functions trigger \
+  --tenant public \
+  --namespace default \
+  --name myfunc \
+  --trigger-value "hello world"
+
+```
+
+The consumer listening on the output topic should then produce this in its logs:
+
+```
+
+----- got message -----
+This function has been triggered with a value of hello world
+
+```
+
+> #### Topic info not required
+> In the `trigger` command above, you may have noticed that you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you didn't need to know the function's input topic(s).
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-guarantees.md b/site2/website-next/versioned_docs/version-2.3.2/functions-guarantees.md
new file mode 100644
index 0000000..d9b1438
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-guarantees.md
@@ -0,0 +1,47 @@
+---
+id: functions-guarantees
+title: Processing guarantees
+sidebar_label: "Processing guarantees"
+original_id: functions-guarantees
+---
+
+Pulsar Functions provides three different messaging semantics that you can apply to any function:
+
+Delivery semantics | Description
+:------------------|:-------
+**At-most-once** delivery | Each message that is sent to the function is processed at most once: it may be processed, or it may not be processed at all (hence the "at most")
+**At-least-once** delivery | Each message that is sent to the function could be processed more than once (hence the "at least")
+**Effectively-once** delivery | Each message that is sent to the function will have one output associated with it
+
+## Applying processing guarantees to a function
+
+You can set the processing guarantees for a Pulsar Function when you create the Function. This [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command, for example, would apply effectively-once guarantees to the Function:
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --processing-guarantees EFFECTIVELY_ONCE \
+  # Other function configs
+
+```
+
+The available options are:
+
+* `ATMOST_ONCE`
+* `ATLEAST_ONCE`
+* `EFFECTIVELY_ONCE`
+
+> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, then the function will provide at-least-once guarantees.
+
+## Updating the processing guarantees of a function
+
+You can change the processing guarantees applied to a function once it's already been created using the [`update`](reference-pulsar-admin.md#update-1) command. Here's an example:
+
+```bash
+
+$ bin/pulsar-admin functions update \
+  --processing-guarantees ATMOST_ONCE \
+  # Other function configs
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-metrics.md b/site2/website-next/versioned_docs/version-2.3.2/functions-metrics.md
new file mode 100644
index 0000000..8add669
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-metrics.md
@@ -0,0 +1,7 @@
+---
+id: functions-metrics
+title: Metrics for Pulsar Functions
+sidebar_label: "Metrics"
+original_id: functions-metrics
+---
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-overview.md b/site2/website-next/versioned_docs/version-2.3.2/functions-overview.md
new file mode 100644
index 0000000..25dc602
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-overview.md
@@ -0,0 +1,209 @@
+---
+id: functions-overview
+title: Pulsar Functions overview
+sidebar_label: "Overview"
+original_id: functions-overview
+---
+
+**Pulsar Functions** are lightweight compute processes that
+
+* consume messages from one or more Pulsar topics,
+* apply a user-supplied processing logic to each message,
+* publish the results of the computation to another topic.
+
+
+## Goals
+With Pulsar Functions, you can create complex processing logic without deploying a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://heron.incubator.apache.org/), [Apache Flink](https://flink.apache.org/)). Pulsar Functions are the computing infrastructure of the Pulsar messaging system. The core goal is tied to a series of other goals:
+
+* Developer productivity (language-native vs Pulsar Functions SDK functions)
+* Easy troubleshooting
+* Operational simplicity (no need for an external processing system)
+
+## Inspirations
+Pulsar Functions are inspired by (and take cues from) several systems and paradigms:
+
+* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
+* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)
+
+Pulsar Functions can be described as
+
+* [Lambda](https://aws.amazon.com/lambda/)-style functions that are
+* specifically designed to use Pulsar as a message bus.
+
+## Programming model
+Pulsar Functions provide a wide range of functionality, and the core programming model is simple. Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Each time a message is received, the function will complete the following tasks.   
+
+  * Apply some processing logic to the input and write output to:
+    * An **output topic** in Pulsar
+    * [Apache BookKeeper](functions-develop.md#state-storage)
+  * Write logs to a **log topic** (potentially for debugging purposes)
+  * Increment a [counter](#word-count-example)
+
+![Pulsar Functions core programming model](/assets/pulsar-functions-overview.png)
+
+You can use Pulsar Functions to set up the following processing chain:
+
+* A Python function listens for the `raw-sentences` topic and "sanitizes" incoming strings (removing extraneous whitespace and converting all characters to lowercase) and then publishes the results to a `sanitized-sentences` topic.
+* A Java function listens for the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic
+* Finally, a Python function listens for the `results` topic and writes the results to a MySQL table.
+
+
+### Word count example
+
+If you implement the classic word count example using Pulsar Functions, it looks something like this:
+
+![Pulsar Functions word count example](/assets/pulsar-functions-word-count.png)
+
+To implement this function in Java with the [Pulsar Functions SDK for Java](functions-develop.md#available-apis), you can write the function as follows.
+
+```java
+
+package org.example.functions;
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+import java.util.Arrays;
+
+public class WordCountFunction implements Function<String, Void> {
+    // This function is invoked every time a message is published to the input topic
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Arrays.asList(input.split(" ")).forEach(word -> {
+            String counterKey = word.toLowerCase();
+            context.incrCounter(counterKey, 1);
+        });
+        return null;
+    }
+}
+
+```
+
+Bundle and build the JAR file to be deployed, and then deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --jar target/my-jar-with-dependencies.jar \
+  --classname org.example.functions.WordCountFunction \
+  --tenant public \
+  --namespace default \
+  --name word-count \
+  --inputs persistent://public/default/sentences \
+  --output persistent://public/default/count
+
+```
+
+### Content-based routing example
+
+Pulsar Functions can be used in many scenarios. The following is a more sophisticated example that involves content-based routing.
+
+For example, a function takes items (strings) as input and publishes them to either a `fruits` or `vegetables` topic, depending on the item. Or, if an item is neither fruit nor vegetable, a warning is logged to a [log topic](functions-develop.md#logger). The following is a visual representation.
+
+![Pulsar Functions routing example](/assets/pulsar-functions-routing-example.png)
+
+If you implement this routing functionality in Python, it looks something like this:
+
+```python
+
+from pulsar import Function
+
+class RoutingFunction(Function):
+    def __init__(self):
+        self.fruits_topic = "persistent://public/default/fruits"
+        self.vegetables_topic = "persistent://public/default/vegetables"
+
+    @staticmethod
+    def is_fruit(item):
+        return item in [b"apple", b"orange", b"pear", b"other fruits..."]
+
+    @staticmethod
+    def is_vegetable(item):
+        return item in [b"carrot", b"lettuce", b"radish", b"other vegetables..."]
+
+    def process(self, item, context):
+        if self.is_fruit(item):
+            context.publish(self.fruits_topic, item)
+        elif self.is_vegetable(item):
+            context.publish(self.vegetables_topic, item)
+        else:
+            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
+            context.get_logger().warn(warning)
+
+```
+
+If this code is stored in `~/router.py`, then you can deploy it in your Pulsar cluster using the [command line](functions-deploy.md#command-line-interface) as follows.
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --py ~/router.py \
+  --classname router.RoutingFunction \
+  --tenant public \
+  --namespace default \
+  --name route-fruit-veg \
+  --inputs persistent://public/default/basket-items
+
+```
+
+### Functions, messages and message types
+Pulsar Functions take byte arrays as input and produce byte arrays as output. However, in languages that support typed interfaces (such as Java), you can write typed functions and bind messages to types in either of the following ways (see the sketch after this list). 
+* [Schema Registry](functions-develop.md#schema-registry)
+* [SerDe](functions-develop.md#serde)
+
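+For illustration, the following is a minimal sketch of a typed function written against the Java SDK `Function` interface used in the word count example above (the class and package names are hypothetical). It consumes `String` messages and publishes the length of each message as an `Integer` to the output topic; the byte-level binding is handled by the Schema Registry or a SerDe implementation.
+
+```java
+
+package org.example.functions;
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+// A typed function: input messages are bound to String and the return value to Integer.
+public class MessageLengthFunction implements Function<String, Integer> {
+    @Override
+    public Integer process(String input, Context context) {
+        // The returned Integer is published to the function's configured output topic.
+        return input.length();
+    }
+}
+
+```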
+
+## Fully Qualified Function Name (FQFN)
+Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function tenant, namespace, and function name. FQFN looks like this:
+
+```http
+
+tenant/namespace/name
+
+```
+
+FQFNs enable you to create multiple functions with the same name provided that they are in different namespaces.
+
+## Supported languages
+Currently, you can write Pulsar Functions in Java, Python, and Go. For details, refer to [Develop Pulsar Functions](functions-develop).
+
+## Processing guarantees
+Pulsar Functions provide three different messaging semantics that you can apply to any function.
+
+Delivery semantics | Description
+:------------------|:-------
+**At-most-once** delivery | Each message sent to the function is processed at most once: it may be processed, or it may not be processed at all (hence the "at most").
+**At-least-once** delivery | Each message sent to the function can be processed more than once (hence the "at least").
+**Effectively-once** delivery | Each message sent to the function will have one output associated with it.
+
+
+### Apply processing guarantees to a function
+You can set the processing guarantees for a Pulsar Function when you create the Function. The following [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command creates a function with effectively-once guarantees applied.
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --name my-effectively-once-function \
+  --processing-guarantees EFFECTIVELY_ONCE \
+  # Other function configs
+
+```
+
+The available options for `--processing-guarantees` are:
+
+* `ATMOST_ONCE`
+* `ATLEAST_ONCE`
+* `EFFECTIVELY_ONCE`
+
+> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, the function provides at-least-once guarantees.
+
+### Update the processing guarantees of a function
+You can change the processing guarantees applied to a function using the [`update`](reference-pulsar-admin.md#update-1) command. The following is an example.
+
+```bash
+
+$ bin/pulsar-admin functions update \
+  --processing-guarantees ATMOST_ONCE \
+  # Other function configs
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-quickstart.md b/site2/website-next/versioned_docs/version-2.3.2/functions-quickstart.md
new file mode 100644
index 0000000..d4d4aff
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-quickstart.md
@@ -0,0 +1,458 @@
+---
+id: functions-quickstart
+title: Get started with Pulsar Functions
+sidebar_label: "Get started"
+original_id: functions-quickstart
+---
+
+This tutorial walks you through running a [standalone](reference-terminology.md#standalone) Pulsar [cluster](reference-terminology.md#cluster) on your machine, and then running your first Pulsar Function using that cluster. The first Pulsar Function runs in local run mode (outside your Pulsar [cluster](reference-terminology.md#cluster)), while the second runs in cluster mode (inside your cluster).
+
+> In local run mode, Pulsar Functions communicate with the Pulsar cluster, but run outside of the cluster.
+
+## Prerequisites
+
+Install [Maven](https://maven.apache.org/download.cgi) on your machine.
+
+## Run a standalone Pulsar cluster
+
+In order to run Pulsar Functions, you need to run a Pulsar cluster locally first. The easiest way is to run Pulsar in [standalone](reference-terminology.md#standalone) mode. Follow these steps to start up a standalone cluster.
+
+```bash
+
+$ wget pulsar:binary_release_url
+$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
+$ cd apache-pulsar-@pulsar:version@
+$ bin/pulsar standalone \
+  --advertised-address 127.0.0.1
+
+```
+
+When running Pulsar in standalone mode, the `public` tenant and the `default` namespace are created automatically. The tenant and namespace are used throughout this tutorial.
+
+## Run a Pulsar Function in local run mode
+
+You can start with a simple function that takes a string as input from a Pulsar topic, adds an exclamation point to the end of the string, and then publishes the new string to another Pulsar topic. The following is the code for the function.
+
+```java
+
+package org.apache.pulsar.functions.api.examples;
+
+import java.util.function.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String apply(String input) {
+        return String.format("%s!", input);
+    }
+}
+
+```
+
+A JAR file containing this function and several other functions (written in Java) is included with the binary distribution you have downloaded (in the `examples` folder). Run the function in local mode on your laptop but outside your Pulsar cluster with the following commands.
+
+```bash
+
+$ bin/pulsar-admin functions localrun \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --name exclamation
+
+```
+
+> #### Multiple input topics
+>
+> In the example above, a single topic is specified using the `--inputs` flag. You can also specify multiple input topics with a comma-separated list using the same flag.
+>
+> ```bash
+>
+> --inputs topic1,topic2
+>
+> ```
+
+
+You can open up another shell and use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool to listen for messages on the output topic.
+
+```bash
+
+$ bin/pulsar-client consume persistent://public/default/exclamation-output \
+  --subscription-name my-subscription \
+  --num-messages 0
+
+```
+
+> Setting the `--num-messages` flag to `0` means that consumers listen on the topic indefinitely, rather than only accepting a certain number of messages.
+
+With a listener up and running, you can open up another shell and produce a message on the input topic that you specify.
+
+```bash
+
+$ bin/pulsar-client produce persistent://public/default/exclamation-input \
+  --num-produce 1 \
+  --messages "Hello world"
+
+```
+
+When the message has been successfully processed by the exclamation function, you will see the following output. To shut down the function, press **Ctrl+C**.
+
+```
+
+----- got message -----
+Hello world!
+
+```
+
+### Process explanation
+
+* The `Hello world` message you publish to the input topic (`persistent://public/default/exclamation-input`) is passed to the exclamation function.
+* The exclamation function processes the message (providing a result of `Hello world!`) and publishes the result to the output topic (`persistent://public/default/exclamation-output`).
+* If the exclamation function *does not* run, Pulsar will durably store the message data published to the input topic in [Apache BookKeeper](https://bookkeeper.apache.org) until a consumer consumes and acknowledges the message.
+
+## Run a Pulsar Function in cluster mode
+
+[Local run mode](#run-a-pulsar-function-in-local-run-mode) is useful for development and testing. However, for a real deployment, you run Pulsar Functions in **cluster mode**. In cluster mode, Pulsar Functions run *inside* of your Pulsar cluster and are managed using the same [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface.
+
+The following command deploys the same exclamation function you run locally in your Pulsar cluster, rather than outside of it.
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --name exclamation
+
+```
+
+You will see `Created successfully` in the output. Check the list of functions running in your cluster.
+
+```bash
+
+$ bin/pulsar-admin functions list \
+  --tenant public \
+  --namespace default
+
+```
+
+You will see the `exclamation` function. Check the status of your deployed function using the `getstatus` command.
+
+```bash
+
+$ bin/pulsar-admin functions getstatus \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+
+```
+
+You will see the following JSON output.
+
+```json
+
+{
+  "functionStatusList": [
+    {
+      "running": true,
+      "instanceId": "0"
+    }
+  ]
+}
+
+```
+
+As you can see, an instance with the ID `0` is currently running. With the `get` command, you can get other information about the function, for example, topics, tenant, namespace, and so on.
+
+```bash
+
+$ bin/pulsar-admin functions get \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+
+```
+
+You will see the following JSON output.
+
+```json
+
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "exclamation",
+  "className": "org.apache.pulsar.functions.api.examples.ExclamationFunction",
+  "output": "persistent://public/default/exclamation-output",
+  "autoAck": true,
+  "inputs": [
+    "persistent://public/default/exclamation-input"
+  ],
+  "parallelism": 1
+}
+
+```
+
+As you can see, only one instance of the function is running in your cluster. Update the function's parallelism to `3` using the `update` command.
+
+```bash
+
+$ bin/pulsar-admin functions update \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation \
+  --parallelism 3
+
+```
+
+You will see `Updated successfully` in the output. If you enter the `get` command, you see that the parallelism has been increased to `3`, meaning that three instances of the function are running in your cluster.
+
+```json
+
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "exclamation",
+  "className": "org.apache.pulsar.functions.api.examples.ExclamationFunction",
+  "output": "persistent://public/default/exclamation-output",
+  "autoAck": true,
+  "inputs": [
+    "persistent://public/default/exclamation-input"
+  ],
+  "parallelism": 3
+}
+
+```
+
+Shut down the running function with the `delete` command.
+
+```bash
+
+$ bin/pulsar-admin functions delete \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+
+```
+
+When you see `Deleted successfully` in the output, you've successfully run, updated, and shut down functions running in cluster mode. 
+
+## Write and run a new function
+
+In order to write and run [Python](functions-api.md#functions-for-python) functions, you need to install some dependencies.
+
+```bash
+
+$ pip install pulsar-client
+
+```
+
+In the examples above, you run and manage pre-written Pulsar Functions and learn how they work. You can also write your own functions with the Python API. In the following example, the function takes a string as input, reverses the string, and publishes the reversed string to the specified topic.
+
+First, create a new Python file.
+
+```bash
+
+$ touch reverse.py
+
+```
+
+Add the following information in the Python file.
+
+```python
+
+def process(input):
+    return input[::-1]
+
+```
+
+The `process` method defines the processing logic of Pulsar Functions. It uses Python slice magic to reverse each incoming string. You can deploy the function using the `create` command.
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --py reverse.py \
+  --classname reverse \
+  --inputs persistent://public/default/backwards \
+  --output persistent://public/default/forwards \
+  --tenant public \
+  --namespace default \
+  --name reverse
+
+```
+
+If you see `Created successfully`, the function is ready to accept incoming messages. Because the function is running in cluster mode, you can **trigger** the function using the [`trigger`](reference-pulsar-admin.md#trigger) command. This command sends a message that you specify to the function and returns the function output. The following is an example.
+
+```bash
+
+$ bin/pulsar-admin functions trigger \
+  --name reverse \
+  --tenant public \
+  --namespace default \
+  --trigger-value "sdrawrof won si tub sdrawkcab saw gnirts sihT"
+
+```
+
+You will get the following output.
+
+```
+
+This string was backwards but is now forwards
+
+```
+
+You have created a new Pulsar Function, deployed it in your Pulsar standalone cluster in [cluster mode](#run-a-pulsar-function-in-cluster-mode), and triggered the Function. 
+
+## Write and run a Go function
+Go functions depend on `pulsar-client-go`. Make sure that you have built `pulsar-client-go` before using Go functions.
+
+To write and run a Go function, complete the following steps.
+
+1. Create a new Go file.
+
+```bash
+
+touch helloFunc.go
+
+```
+
+2. Append a byte to messages from the input topic.    
+The following is a `helloFunc.go` example. Each message from the input topic is appended with the byte `110`, and then delivered to the output topic.
+
+```go
+
+package main
+
+import (
+	"context"
+
+	"github.com/apache/pulsar/pulsar-function-go/pf"
+)
+
+func HandleResponse(ctx context.Context, in []byte) ([]byte, error) {
+	res := append(in, 110)
+	return res, nil
+}
+
+func main() {
+	pf.Start(HandleResponse)
+}
+
+```
+
+3. Compile code.
+
+```bash
+
+go build -o examplepulsar helloFunc.go
+
+```
+
+4. Run Go function. 
+
+```bash
+
+$ bin/pulsar-admin functions create \
+  --go examplepulsar \
+  --inputs persistent://public/default/backwards \
+  --output persistent://public/default/forwards \
+  --tenant public \
+  --namespace default \
+  --name gofunc
+
+```
+
+If you see `Created successfully`, the function is ready to accept incoming messages. Start a producer and produce messages to the `backwards` input topic. Start a consumer and consume messages from the `forwards` output topic; you will see that `110` is appended to all messages.
+
+The `--classname` parameter is not specified when running a Go function, because there is no `Class` concept in Go, which is different from Java and Python.
+
+:::note
+
+When you use the `--go` command to specify an executable file, make sure you have executable permissions.
+
+:::
+
+## Package Python dependencies
+
+When you deploy Python functions to a cluster that has no access to the internet (offline deployment), you need to package the required dependencies in a ZIP file before deployment.
+
+### Client requirements
+
+The following programs are required to be installed on the client machine.
+
+```
+
+pip   # required for downloading Python dependencies
+zip   # for building ZIP archives
+
+```
+
+### Python dependencies
+
+A file named **requirements.txt** is needed, listing the dependencies required by the Python function. For example:
+
+```
+
+sh==1.12.14
+
+```
+
+Prepare the Pulsar Function in the **src** folder.
+
+Run the following command to gather Python dependencies in the **deps** folder.
+
+```
+
+pip download \
+--only-binary :all: \
+--platform manylinux1_x86_64 \
+--python-version 27 \
+--implementation cp \
+--abi cp27m -r requirements.txt -d deps
+
+```
+
+Sample output
+
+```
+
+Collecting sh==1.12.14 (from -r requirements.txt (line 1))
+  Using cached https://files.pythonhosted.org/packages/4a/22/17b22ef5b049f12080f5815c41bf94de3c229217609e469001a8f80c1b3d/sh-1.12.14-py2.py3-none-any.whl
+  Saved ./deps/sh-1.12.14-py2.py3-none-any.whl
+Successfully downloaded sh
+
+```
+
+:::note
+
+`pulsar-client` is not needed as a dependency because it is already installed on the worker node.
+
+:::
+
+#### Package
+Create a destination folder with the desired package name, for example, **exclamation**. Copy the **src** and **deps** folders into it, and compress the folder into a ZIP archive.
+
+Sample sequence
+
+```
+
+cp -R deps exclamation/
+cp -R src exclamation/
+
+ls -la exclamation/
+total 7
+drwxr-xr-x   5 a.ahmed  staff  160 Nov  6 17:51 .
+drwxr-xr-x  12 a.ahmed  staff  384 Nov  6 17:52 ..
+drwxr-xr-x   3 a.ahmed  staff   96 Nov  6 17:51 deps
+drwxr-xr-x   3 a.ahmed  staff   96 Nov  6 17:51 src
+
+zip -r exclamation.zip exclamation
+
+```
+
+After packaging all the required dependencies into the **exclamation.zip** file, you can deploy functions to a Pulsar worker. The Pulsar worker does not need internet connectivity to download packages, because they are all included in the ZIP file.
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-state.md b/site2/website-next/versioned_docs/version-2.3.2/functions-state.md
new file mode 100644
index 0000000..d3c7c78
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-state.md
@@ -0,0 +1,197 @@
+---
+id: functions-state
+title: Pulsar Functions State Storage (Developer Preview)
+sidebar_label: "State Storage"
+original_id: functions-state
+---
+
+Since the 2.1.0 release, Pulsar has integrated with the Apache BookKeeper [table service](https://docs.google.com/document/d/155xAwWv5IdOitHh1NVMEwCMGgB28M3FyMiQSxEpjE-Y/edit#heading=h.56rbh52koe3f)
+for storing the `state` of functions. For example, a `WordCount` function can store its `counters` state in BookKeeper's table service via the Pulsar Functions [State API](#api).
+
+## API
+
+### Java API
+
+Currently, Pulsar Functions expose the following APIs for mutating and accessing state. These APIs are available in the [Context](functions-api.md#context) object when
+you are using [Java SDK](functions-api.md#java-sdk-functions) functions.
+
+#### incrCounter
+
+```java
+
+    /**
+     * Increment the builtin distributed counter referred by key
+     * @param key The name of the key
+     * @param amount The amount to be incremented
+     */
+    void incrCounter(String key, long amount);
+
+```
+
+The application can use `incrCounter` to change the counter of a given `key` by the given `amount`.
+
+#### incrCounterAsync
+
+```java
+
+    /**
+     * Increment the builtin distributed counter referred by key
+     * but don't wait for the completion of the increment operation
+     *
+     * @param key The name of the key
+     * @param amount The amount to be incremented
+     */
+    CompletableFuture<Void> incrCounterAsync(String key, long amount);
+
+```
+
+The application can use `incrCounterAsync` to asynchronously change the counter of a given `key` by the given `amount`.
+
+#### getCounter
+
+```java
+
+    /**
+     * Retrieve the counter value for the key.
+     *
+     * @param key name of the key
+     * @return the amount of the counter value for this key
+     */
+    long getCounter(String key);
+
+```
+
+The application can use `getCounter` to retrieve the counter of a given `key` mutated by `incrCounter`.
+
+Besides the `counter` API, Pulsar also exposes a general key/value API for functions to store
+general key/value state.
+
+#### getCounterAsync
+
+```java
+
+    /**
+     * Retrieve the counter value for the key, but don't wait
+     * for the operation to be completed
+     *
+     * @param key name of the key
+     * @return the amount of the counter value for this key
+     */
+    CompletableFuture<Long> getCounterAsync(String key);
+
+```
+
+The application can use `getCounterAsync` to asynchronously retrieve the counter of a given `key` mutated by `incrCounterAsync`.
+
+#### putState
+
+```java
+
+    /**
+     * Update the state value for the key.
+     *
+     * @param key name of the key
+     * @param value state value of the key
+     */
+    void putState(String key, ByteBuffer value);
+
+```
+
+The application can use `putState` to update the state value for a given `key`.
+
+#### putStateAsync
+
+```java
+
+    /**
+     * Update the state value for the key, but don't wait for the operation to be completed
+     *
+     * @param key name of the key
+     * @param value state value of the key
+     */
+    CompletableFuture<Void> putStateAsync(String key, ByteBuffer value);
+
+```
+
+The application can use `putStateAsync` to asynchronously update the state of a given `key`.
+
+#### getState
+
+```java
+
+    /**
+     * Retrieve the state value for the key.
+     *
+     * @param key name of the key
+     * @return the state value for the key.
+     */
+    ByteBuffer getState(String key);
+
+```
+
+#### getStateAsync
+
+```java
+
+    /**
+     * Retrieve the state value for the key, but don't wait for the operation to be completed
+     *
+     * @param key name of the key
+     * @return the state value for the key.
+     */
+    CompletableFuture<ByteBuffer> getStateAsync(String key);
+
+```
+
+The application can use `getStateAsync` to asynchronously retrieve the state of a given `key`.
+
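+As an illustration of the key/value calls above, the following is a minimal sketch (not one of the bundled Pulsar examples; the class and key names are hypothetical) of a Java SDK function that stores the most recent message under a fixed key and returns the previously stored value.
+
+```java
+
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class LastValueFunction implements Function<String, String> {
+    private static final String KEY = "last-value";
+
+    @Override
+    public String process(String input, Context context) {
+        // Read the previously stored value (null if nothing has been stored yet).
+        ByteBuffer previous = context.getState(KEY);
+
+        // Store the current message as the new state value.
+        context.putState(KEY, ByteBuffer.wrap(input.getBytes(StandardCharsets.UTF_8)));
+
+        if (previous == null) {
+            return null;
+        }
+        byte[] bytes = new byte[previous.remaining()];
+        previous.get(bytes);
+        return new String(bytes, StandardCharsets.UTF_8);
+    }
+}
+
+```
+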
+### Python API
+
+State is currently not supported in the [Python SDK](functions-api.md#python-sdk-functions).
+
+## Query State
+
+A Pulsar Function can use the [State API](#api) to store state in Pulsar's state storage
+and retrieve it back later. Additionally, Pulsar provides
+CLI commands for querying the state of a function.
+
+```shell
+
+$ bin/pulsar-admin functions querystate \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <function-name> \
+    --state-storage-url <bookkeeper-service-url> \
+    --key <state-key> \
+    [--watch]
+
+```
+
+If `--watch` is specified, the CLI will watch the value of the provided `state-key`.
+
+## Example
+
+### Java Example
+
+{@inject: github:WordCountFunction:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java} is a good example
+that demonstrates how an application can easily store `state` in Pulsar Functions.
+
+```java
+
+public class WordCountFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) throws Exception {
+        Arrays.asList(input.split("\\.")).forEach(word -> context.incrCounter(word, 1));
+        return null;
+    }
+}
+
+```
+
+The logic of this `WordCount` function is pretty simple and straightforward:
+
+1. The function first splits the received `String` into multiple words using regex `\\.`.
+2. For each `word`, the function increments the corresponding `counter` by 1 (via `incrCounter(key, amount)`).
+
+### Python Example
+
+State is currently not supported in the [Python SDK](functions-api.md#python-sdk-functions).
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/functions-worker.md b/site2/website-next/versioned_docs/version-2.3.2/functions-worker.md
new file mode 100644
index 0000000..101ce6f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/functions-worker.md
@@ -0,0 +1,273 @@
+---
+id: functions-worker
+title: Deploy and manage functions worker
+sidebar_label: "Functions Worker"
+original_id: functions-worker
+---
+
+Pulsar `functions-worker` is a logic component that runs Pulsar Functions in cluster mode. Two options are available, and you can select either of them based on your requirements.
+- [run with brokers](#run-functions-worker-with-brokers)
+- [run it separately](#run-functions-worker-separately) on dedicated machines
+
+:::note
+
+The `--- Service Urls---` lines in the following diagrams represent Pulsar service URLs that Pulsar client and admin use to connect to a Pulsar cluster.
+
+:::
+
+## Run Functions-worker with brokers
+
+The following diagram illustrates the deployment of functions-workers running along with brokers.
+
+![assets/functions-worker-corun.png](/assets/functions-worker-corun.png)
+
+To enable functions-worker running as part of a broker, you need to set `functionsWorkerEnabled` to `true` in the `broker.conf` file.
+
+```conf
+
+functionsWorkerEnabled=true
+
+```
+
+When you set `functionsWorkerEnabled` to `true`, the functions-worker starts as part of the broker. You need to configure the `conf/functions_worker.yml` file to customize your functions-worker.
+
+Before you run functions-worker with brokers, you have to configure the functions-worker, and then start it with the brokers.
+
+### Configure Functions-Worker to run with brokers
+In this mode, since `functions-worker` runs as part of the broker, most of the settings are already inherited from your broker configuration (for example, configurationStore settings, authentication settings, and so on).
+
+Pay attention to the following required settings when configuring functions-worker in this mode.
+
+- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`, which is good for standalone deployment. For production deployment, to ensure high availability, set it to be more than `2` .
+- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
+
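+The following is a minimal sketch of the corresponding lines in `conf/functions_worker.yml`; the values below are placeholders for your own deployment.
+
+```yaml
+
+# Excerpt from conf/functions_worker.yml (values are placeholders)
+numFunctionPackageReplicas: 3
+pulsarFunctionsCluster: pulsar-cluster-1
+
+```
+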
+If authentication is enabled on the BookKeeper cluster, configure the following BookKeeper authentication settings.
+
+- `bookkeeperClientAuthenticationPlugin`: the BookKeeper client authentication plugin name.
+- `bookkeeperClientAuthenticationParametersName`: the BookKeeper client authentication plugin parameters name.
+- `bookkeeperClientAuthenticationParameters`: the BookKeeper client authentication plugin parameters.
+
+### Start Functions-worker with broker
+
+Once you have configured the `functions_worker.yml` file, you can start or restart your broker. 
+
+Then you can use the following command to verify whether `functions-worker` is running.
+
+```bash
+
+curl <broker-ip>:8080/admin/v2/worker/cluster
+
+```
+
+After entering the command above, a list of active function workers in the cluster is returned. The output is similar to the following.
+
+```json
+
+[{"workerId":"<worker-id>","workerHostname":"<worker-hostname>","port":8080}]
+
+```
+
+## Run Functions-worker separately
+
+This section illustrates how to run `functions-worker` as a separate process on separate machines.
+
+![assets/functions-worker-separated.png](/assets/functions-worker-separated.png)
+
+:::note
+
+In this mode, make sure `functionsWorkerEnabled` is set to `false`, so you won't start `functions-worker` with brokers by mistake.
+
+:::
+
+### Configure Functions-worker to run separately
+
+To run functions-worker separately, you have to configure the following parameters.
+
+#### Worker parameters
+
+- `workerId`: The type is string. It is unique across clusters and is used to identify a worker machine.
+- `workerHostname`: The hostname of the worker machine.
+- `workerPort`: The port that the worker server listens on. Keep it as default if you don't customize it.
+- `workerPortTls`: The TLS port that the worker server listens on. Keep it as default if you don't customize it.
+
+#### Function package parameter
+
+- `numFunctionPackageReplicas`: The number of replicas to store function packages. The default value is `1`.
+
+#### Function metadata parameter
+
+- `pulsarServiceUrl`: The Pulsar service URL for your broker cluster.
+- `pulsarWebServiceUrl`: The Pulsar web service URL for your broker cluster.
+- `pulsarFunctionsCluster`: Set the value to your Pulsar cluster name (same as the `clusterName` setting in the broker configuration).
+
+If authentication is enabled for your broker cluster, you *should* configure the authentication plugin and parameters for the functions worker to communicate with the brokers.
+
+- `clientAuthenticationPlugin`
+- `clientAuthenticationParameters`
+
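+Putting these parameters together, a minimal sketch of a `conf/functions_worker.yml` for a dedicated worker might look like the following; all host names, ports, URLs, and the cluster name are placeholders.
+
+```yaml
+
+# Excerpt from conf/functions_worker.yml (values are placeholders)
+workerId: worker-1
+workerHostname: functions-worker-1.example.com
+workerPort: 6750
+numFunctionPackageReplicas: 2
+pulsarServiceUrl: pulsar://broker.example.com:6650
+pulsarWebServiceUrl: http://broker.example.com:8080
+pulsarFunctionsCluster: pulsar-cluster-1
+
+```
+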
+#### Security settings
+
+If you want to enable security on functions workers, you *should*:
+- [Enable TLS transport encryption](#enable-tls-transport-encryption)
+- [Enable Authentication Provider](#enable-authentication-provider)
+- [Enable Authorization Provider](#enable-authorization-provider)
+
+**Enable TLS transport encryption**
+
+To enable TLS transport encryption, configure the following settings.
+
+```
+
+tlsEnabled: true
+tlsCertificateFilePath: /path/to/functions-worker.cert.pem
+tlsKeyFilePath:         /path/to/functions-worker.key-pk8.pem
+tlsTrustCertsFilePath:  /path/to/ca.cert.pem
+
+```
+
+For details on TLS encryption, refer to [Transport Encryption using TLS](security-tls-transport).
+
+**Enable Authentication Provider**
+
+To enable authentication on Functions Worker, configure the following settings.
+
+:::note
+
+Substitute the *providers list* with the providers you want to enable.
+
+:::
+
+```
+
+authenticationEnabled: true
+authenticationProviders: [ provider1, provider2 ]
+
+```
+
+For the *SASL Authentication* provider, add `saslJaasClientAllowedIds` and `saslJaasBrokerSectionName`
+under `properties` if needed.
+
+```
+
+properties:
+  saslJaasClientAllowedIds: .*pulsar.*
+  saslJaasBrokerSectionName: Broker
+
+```
+
+For the *Token Authentication* provider, add the necessary settings under `properties` if needed.
+See [Token Authentication](security-token-admin) for more details.
+
+```
+
+properties:
+  tokenSecretKey:       file://my/secret.key 
+  # If using public/private
+  # tokenPublicKey:     file:///path/to/public.key
+
+```
+
+**Enable Authorization Provider**
+
+To enable authorization on Functions Worker, you need to configure `authorizationEnabled` and `configurationStoreServers`. The authentication provider connects to `configurationStoreServers` to receive namespace policies.
+
+```yaml
+
+authorizationEnabled: true
+configurationStoreServers: <configuration-store-servers>
+
+```
+
+You should also configure a list of superuser roles. The superuser roles are able to access any admin API. The following is a configuration example.
+
+```yaml
+
+superUserRoles:
+  - role1
+  - role2
+  - role3
+
+```
+
+#### BookKeeper Authentication
+
+If authentication is enabled on the BookKeeper cluster, you should configure the BookKeeper authentication settings as follows:
+
+- `bookkeeperClientAuthenticationPlugin`: the plugin name of BookKeeper client authentication.
+- `bookkeeperClientAuthenticationParametersName`: the plugin parameters name of BookKeeper client authentication.
+- `bookkeeperClientAuthenticationParameters`: the plugin parameters of BookKeeper client authentication.
+
+### Start Functions-worker
+
+Once you have finished configuring the `functions_worker.yml` configuration file, you can use the following command to start a `functions-worker`:
+
+```bash
+
+bin/pulsar functions-worker
+
+```
+
+### Configure Proxies for Functions-workers
+
+When you are running `functions-worker` in a separate cluster, the admin rest endpoints are split across two clusters: the `functions`, `function-worker`, `source` and `sink` endpoints are served
+by the `functions-worker` cluster, while all the remaining endpoints are served by the broker cluster.
+You therefore need to configure `pulsar-admin` to use the appropriate service URL for each request.
+
+To remove this inconvenience, you can start a proxy cluster that routes the admin rest requests to the right cluster, giving you one central entry point for your admin service.
+
+If you already have a proxy cluster, continue reading. If you haven't set up a proxy cluster before, you can follow the [instructions](http://pulsar.apache.org/docs/en/administration-proxy/) to
+start proxies.
+
+![assets/functions-worker-separated.png](/assets/functions-worker-separated-proxy.png)
+
+To enable routing functions related admin requests to `functions-worker` in a proxy, you can edit the `proxy.conf` file to modify the following settings:
+
+```conf
+
+functionWorkerWebServiceURL=<pulsar-functions-worker-web-service-url>
+functionWorkerWebServiceURLTLS=<pulsar-functions-worker-web-service-url>
+
+```
+
+## Compare the Run-with-Broker and Run-separately modes
+
+As described above, you can run functions-workers with brokers, or run them separately. Running functions-workers along with brokers is more convenient, while running them in a separate cluster provides better resource isolation for functions running in `Process` or `Thread` mode.
+
+To determine which mode suits your case, refer to the following guidelines.
+
+Use the `Run-with-Broker` mode in the following cases:
+- a) if resource isolation is not required when running functions in `Process` or `Thread` mode; 
+- b) if you configure the functions-worker to run functions on Kubernetes (where the resource isolation problem is addressed by Kubernetes).
+
+Use the `Run-separately` mode in the following cases:
+-  a) you don't have a Kubernetes cluster; 
+-  b) if you want to run functions and brokers separately.
+
+## Troubleshooting
+
+**Error message: Namespace missing local cluster name in clusters list**
+
+```
+
+Failed to get partitioned topic metadata: org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException: Namespace missing local cluster name in clusters list: local_cluster=xyz ns=public/functions clusters=[standalone]
+
+```
+
+This error message appears when either of the following cases occurs:
+- a) a broker is started with `functionsWorkerEnabled=true`, but `pulsarFunctionsCluster` is not set to the correct cluster in the `conf/functions_worker.yml` file;
+- b) a geo-replicated Pulsar cluster is set up with `functionsWorkerEnabled=true`; brokers in one cluster run well, but brokers in the other cluster do not.
+
+**Workaround**
+
+If any of these cases happens, follow the instructions below to fix the problem:
+
+1. Get the current clusters list of `public/functions` namespace.
+
+```bash
+
+bin/pulsar-admin namespaces get-clusters public/functions
+
+```
+
+2. Check if the cluster is in the clusters list. If the cluster is not in the list, add it to the list and update the clusters list.
+
+```bash
+
+bin/pulsar-admin namespaces set-clusters --cluster=<existing-clusters>,<new-cluster> public/functions
+
+```
+
+3. Set the correct cluster name in `pulsarFunctionsCluster` in the `conf/functions_worker.yml` file. 
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.3.2/getting-started-clients.md b/site2/website-next/versioned_docs/version-2.3.2/getting-started-clients.md
new file mode 100644
index 0000000..aaf4bec
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/getting-started-clients.md
@@ -0,0 +1,57 @@
+---
+id: client-libraries
+title: Pulsar client libraries
+sidebar_label: "Use Pulsar with client libraries"
+original_id: client-libraries
+---
+
+Pulsar supports the following client libraries:
+
+- [Java client](#java-client)
+- [Go client](#go-client)
+- [Python client](#python-client)
+- [C++ client](#c-client)
+
+## Java client
+
+For instructions on how to use the Pulsar Java client to produce and consume messages, see [Pulsar Java client](client-libraries-java).
+
+Two independent sets of Javadoc API docs are available.
+
+Library | Purpose
+:-------|:-------
+[`org.apache.pulsar.client.api`](/api/client) | The [Pulsar Java client](client-libraries-java) is used to produce and consume messages on Pulsar topics.
+[`org.apache.pulsar.client.admin`](/api/admin) | The Java client for the [Pulsar admin interface](admin-api-overview).
+
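+As a quick orientation, the following is a minimal sketch of producing a message with `org.apache.pulsar.client.api`; the service URL, topic, and class name are placeholders, and the full walkthrough is in the [Pulsar Java client](client-libraries-java) guide.
+
+```java
+
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+import org.apache.pulsar.client.api.Schema;
+
+public class QuickstartProducer {
+    public static void main(String[] args) throws Exception {
+        // Connect to a local standalone cluster; replace with your own service URL.
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650")
+                .build();
+
+        Producer<String> producer = client.newProducer(Schema.STRING)
+                .topic("my-topic")
+                .create();
+
+        producer.send("Hello Pulsar");
+
+        producer.close();
+        client.close();
+    }
+}
+
+```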
+
+## Go client
+
+For a tutorial on using the Pulsar Go client, see [Pulsar Go client](client-libraries-go).
+
+
+## Python client
+
+For a tutorial on using the Pulsar Python client, see [Pulsar Python client](client-libraries-python).
+
+There are also [pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client [here](/api/python).
+
+## C++ client
+
+For a tutorial on using the Pulsar C++ client, see [Pulsar C++ client](client-libraries-cpp).
+
+There are also [Doxygen](http://www.stack.nl/~dimitri/doxygen/)-generated API docs for the C++ client [here](/api/cpp).
+
+## Feature Matrix
+The Pulsar client feature matrix for different languages is listed on the [Client Features Matrix](https://github.com/apache/pulsar/wiki/Client-Features-Matrix) page.
+
+## Thirdparty Clients
+
+Besides the officially released clients, there are also multiple projects developing Pulsar clients in other languages.
+
+> If you have developed a new Pulsar client, feel free to submit a pull request and add your client to the list below.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
diff --git a/site2/website-next/versioned_docs/version-2.3.2/getting-started-docker.md b/site2/website-next/versioned_docs/version-2.3.2/getting-started-docker.md
new file mode 100644
index 0000000..e39d94f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/getting-started-docker.md
@@ -0,0 +1,191 @@
+---
+id: standalone-docker
+title: Set up a standalone Pulsar in Docker
+sidebar_label: "Run Pulsar in Docker"
+original_id: standalone-docker
+---
+
+For local development and testing, you can run Pulsar in standalone
+mode on your own machine within a Docker container.
+
+If you have not installed Docker, download the [Community edition](https://www.docker.com/community-edition)
+and follow the instructions for your OS.
+
+## Start Pulsar in Docker
+
+* For MacOS and Linux:
+
+  ```shell
+  
+  $ docker run -it \
+  -p 6650:6650 \
+  -p 8080:8080 \
+  -v $PWD/data:/pulsar/data \
+  apachepulsar/pulsar:@pulsar:version@ \
+  bin/pulsar standalone
+  
+  ```
+
+* For Windows:  
+
+  ```shell
+  
+  $ docker run -it \
+  -p 6650:6650 \
+  -p 8080:8080 \
+  -v "$PWD/data:/pulsar/data".ToLower() \
+  apachepulsar/pulsar:@pulsar:version@ \
+  bin/pulsar standalone
+  
+  ```
+
+A few things to note about this command:
+ * `$PWD/data`: The Docker host directory on Windows must be lowercase. `$PWD/data` resolves to the specified directory, for example `E:/data`.
+ * `-v $PWD/data:/pulsar/data`: This makes the process inside the container store the
+   data and metadata in the filesystem outside the container, so the container does not start "fresh" every time it is restarted.
+
+If you start Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```
+
+2017-08-09 22:34:04,030 - INFO  - [main:WebService@213] - Web Service started at http://127.0.0.1:8080
+2017-08-09 22:34:04,038 - INFO  - [main:PulsarService@335] - messaging service is ready, bootstrap service on port=8080, broker url=pulsar://127.0.0.1:6650, cluster=standalone, configs=org.apache.pulsar.broker.ServiceConfiguration@4db60246
+...
+
+```
+
+:::tip
+
+When you start a local standalone cluster, a `public/default` namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces.
+For more information, see [Topics](concepts-messaging.md#topics).
+
+:::
+
+## Use Pulsar in Docker
+
+Pulsar offers client libraries for [Java](client-libraries-java.md), [Go](client-libraries-go.md), [Python](client-libraries-python) 
+and [C++](client-libraries-cpp). If you're running a local standalone cluster, you can
+use one of these root URLs to interact with your cluster:
+
+* `pulsar://localhost:6650`
+* `http://localhost:8080`
+
+The following example guides you through getting started with Pulsar quickly by using the [Python](client-libraries-python)
+client API.
+
+Install the Pulsar Python client library directly from [PyPI](https://pypi.org/project/pulsar-client/):
+
+```shell
+
+$ pip install pulsar-client
+
+```
+
+### Consume a message
+
+Create a consumer and subscribe to the topic:
+
+```python
+
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+consumer = client.subscribe('my-topic',
+                            subscription_name='my-sub')
+
+while True:
+    msg = consumer.receive()
+    print("Received message: '%s'" % msg.data())
+    consumer.acknowledge(msg)
+
+client.close()
+
+```
+
+### Produce a message
+
+Now start a producer to send some test messages:
+
+```python
+
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('my-topic')
+
+for i in range(10):
+    producer.send(('hello-pulsar-%d' % i).encode('utf-8'))
+
+client.close()
+
+```
+
+## Get the topic statistics
+
+In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system.
+For details on APIs, refer to [Admin API Overview](admin-api-overview).
+
+In the simplest example, you can use curl to probe the stats for a particular topic:
+
+```shell
+
+$ curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python -m json.tool
+
+```
+
+The output is something like this:
+
+```json
+
+{
+  "averageMsgSize": 0.0,
+  "msgRateIn": 0.0,
+  "msgRateOut": 0.0,
+  "msgThroughputIn": 0.0,
+  "msgThroughputOut": 0.0,
+  "publishers": [
+    {
+      "address": "/172.17.0.1:35048",
+      "averageMsgSize": 0.0,
+      "clientVersion": "1.19.0-incubating",
+      "connectedSince": "2017-08-09 20:59:34.621+0000",
+      "msgRateIn": 0.0,
+      "msgThroughputIn": 0.0,
+      "producerId": 0,
+      "producerName": "standalone-0-1"
+    }
+  ],
+  "replication": {},
+  "storageSize": 16,
+  "subscriptions": {
+    "my-sub": {
+      "blockedSubscriptionOnUnackedMsgs": false,
+      "consumers": [
+        {
+          "address": "/172.17.0.1:35064",
+          "availablePermits": 996,
+          "blockedConsumerOnUnackedMsgs": false,
+          "clientVersion": "1.19.0-incubating",
+          "connectedSince": "2017-08-09 21:05:39.222+0000",
+          "consumerName": "166111",
+          "msgRateOut": 0.0,
+          "msgRateRedeliver": 0.0,
+          "msgThroughputOut": 0.0,
+          "unackedMessages": 0
+        }
+      ],
+      "msgBacklog": 0,
+      "msgRateExpired": 0.0,
+      "msgRateOut": 0.0,
+      "msgRateRedeliver": 0.0,
+      "msgThroughputOut": 0.0,
+      "type": "Exclusive",
+      "unackedMessages": 0
+    }
+  }
+}
+
+```
+
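+If you prefer the Java admin client mentioned above, the following is a hedged sketch that fetches the same stats programmatically (the class name is assumed for illustration):
+
+```java
+
+import org.apache.pulsar.client.admin.PulsarAdmin;
+import org.apache.pulsar.common.policies.data.TopicStats;
+
+public class TopicStatsExample {
+    public static void main(String[] args) throws Exception {
+        // Connect to the standalone broker's HTTP service.
+        PulsarAdmin admin = PulsarAdmin.builder()
+                .serviceHttpUrl("http://localhost:8080")
+                .build();
+
+        // Fetch the same stats that the curl command above returns.
+        TopicStats stats = admin.topics().getStats("persistent://public/default/my-topic");
+        System.out.println(stats.msgRateIn);
+
+        admin.close();
+    }
+}
+
+```
+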
diff --git a/site2/website-next/versioned_docs/version-2.3.2/getting-started-standalone.md b/site2/website-next/versioned_docs/version-2.3.2/getting-started-standalone.md
new file mode 100644
index 0000000..4c35336
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/getting-started-standalone.md
@@ -0,0 +1,258 @@
+---
+slug: /
+id: standalone
+title: Set up a standalone Pulsar locally
+sidebar_label: "Run Pulsar locally"
+original_id: standalone
+---
+
+For local development and testing, you can run Pulsar in standalone mode on your machine. The standalone mode includes a Pulsar broker, the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
+
+> #### Pulsar in production? 
+> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal) guide.
+
+## Install Pulsar standalone
+
+### System requirements
+
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later versions.
+
+### Install Pulsar using binary release
+
+To get started with Pulsar, download a binary tarball release in one of the following ways:
+
+* download from the Apache mirror (<a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ binary release</a>)
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)  
+  
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+  
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:binary_release_url
+  
+  ```
+
+After you download the tarball, untar it and use the `cd` command to navigate to the resulting directory:
+
+```bash
+
+$ tar xvfz apache-pulsar-@pulsar:version@-bin.tar.gz
+$ cd apache-pulsar-@pulsar:version@
+
+```
+
+#### What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin).
+`conf` | Configuration files for Pulsar, including [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more.
+`examples` | A Java JAR file containing [Pulsar Functions](functions-overview) examples.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`licenses` | License files, in `.txt` format, for various components of the Pulsar [codebase](https://github.com/apache/pulsar).
+
+These directories are created once you begin running Pulsar.
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`instances` | Artifacts created for [Pulsar Functions](functions-overview).
+`logs` | Logs created by the installation.
+
+#### Install other optional components
+
+:::tip
+
+If you want to use builtin connectors and tiered storage offloaders, you can install them according to the following instructions:
+
+* [Install builtin connectors (optional)](#install-builtin-connectors-optional)
+* [Install tiered storage offloaders (optional)](#install-tiered-storage-offloaders-optional)
+
+Otherwise, skip this step and go on to [Start Pulsar standalone](#start-pulsar-standalone). Pulsar can be installed successfully without builtin connectors and tiered storage offloaders.
+
+:::
+
+##### Install builtin connectors (optional)
+
+Since the `2.1.0-incubating` release, Pulsar provides a separate binary distribution that contains all the `builtin` connectors.
+To enable these `builtin` connectors, download the connectors tarball release in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:connector_release_url" download>Pulsar IO Connectors @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:connector_release_url/{connector}-@pulsar:version@.nar
+  
+  ```
+
+After you download the NAR file, copy it to the `connectors` directory in the Pulsar directory.
+For example, if you download the `pulsar-io-aerospike-@pulsar:version@.nar` connector file, enter the following commands:
+
+```bash
+
+$ mkdir connectors
+$ mv pulsar-io-aerospike-@pulsar:version@.nar connectors
+
+$ ls connectors
+pulsar-io-aerospike-@pulsar:version@.nar
+...
+
+```
+
+:::note
+
+* If you are running Pulsar in a bare metal cluster, make sure the `connectors` tarball is unzipped in the Pulsar directory of every broker
+(or in the Pulsar directory of every function worker, if you run a separate worker cluster for Pulsar Functions).
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+:::
+
+##### Install tiered storage offloaders (optional)
+
+:::tip
+
+Since the `2.2.0` release, Pulsar provides a separate binary distribution that contains the tiered storage offloaders.
+To enable the tiered storage feature, follow the instructions below; otherwise, skip this section.
+
+:::
+
+To get started with [tiered storage offloaders](concepts-tiered-storage), you need to download the offloaders tarball release on every broker node in one of the following ways:
+
+* download from the Apache mirror <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage Offloaders @pulsar:version@ release</a>
+
+* download from the Pulsar [downloads page](pulsar:download_page_url)
+
+* download from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+
+* use [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:offloader_release_url
+  
+  ```
+
+After you download the tarball, untar the offloaders package and copy the resulting `offloaders` directory into the Pulsar directory:
+
+```bash
+
+$ tar xvfz apache-pulsar-offloaders-@pulsar:version@-bin.tar.gz
+
+# You will find a directory named `apache-pulsar-offloaders-@pulsar:version@` in the Pulsar directory,
+# then copy the offloaders:
+
+$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-@pulsar:version@.nar
+
+```
+
+For more information on how to configure tiered storage, see [Tiered storage cookbook](cookbooks-tiered-storage).
+
+:::note
+
+* If you are running Pulsar in a bare metal cluster, make sure that the `offloaders` tarball is unzipped in every broker's Pulsar directory.
+* If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos)),
+you can use the `apachepulsar/pulsar-all` image instead of the `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled tiered storage offloaders.
+
+:::
+
+## Start Pulsar standalone
+
+Once you have an up-to-date local copy of the release, you can start a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start Pulsar in standalone mode.
+
+```bash
+
+$ bin/pulsar standalone
+
+```
+
+If you have started Pulsar successfully, you will see `INFO`-level log messages like this:
+
+```bash
+
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Configuration Store cache started
+2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
+
+```
+
+:::tip
+
+* The service runs in your terminal, under your direct control. If you need to run other commands, open a new terminal window.
+* You can also run the service as a background process using the `pulsar-daemon start standalone` command. For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+* When you start a local standalone cluster, a `public/default` [namespace](concepts-messaging.md#namespaces) is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see [Topics](concepts-messaging.md#topics).
+
+:::
+
+## Use Pulsar standalone
+
+Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client). The pulsar-client tool enables you to consume messages from and produce messages to a Pulsar topic in a running cluster.
+
+### Consume a message
+
+The following command consumes a message from the `my-topic` topic with the subscription name `first-subscription`:
+
+```bash
+
+$ bin/pulsar-client consume my-topic -s "first-subscription"
+
+```
+
+If the message has been successfully consumed, you will see a confirmation like the following in the `pulsar-client` logs:
+
+```
+
+09:56:55.566 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.MultiTopicsConsumerImpl - [TopicsConsumerFakeTopicNamee2df9] [first-subscription] Success subscribe new topic my-topic in topics consumer, partitions: 4, allTopicPartitionsNumber: 4
+
+```
+
+:::tip
+
+As you may have noticed, we did not explicitly create the `my-topic` topic from which we consumed the message. When you consume a message from a topic that does not yet exist, Pulsar creates that topic for you automatically. Producing a message to a topic that does not exist also creates that topic automatically.
+
+:::
+
+### Produce a message
+
+The following command produces a message saying `hello-pulsar` to the `my-topic` topic:
+
+```bash
+
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
+
+```
+
+If the message has been successfully published to the topic, you will see a confirmation like the following in the `pulsar-client` logs:
+
+```
+
+13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
+
+```
+
+## Stop Pulsar standalone
+
+Press `Ctrl+C` to stop a local standalone Pulsar.
+
+:::tip
+
+If the service runs as a background process started with the `pulsar-daemon start standalone` command, use the `pulsar-daemon stop standalone` command to stop the service.
+For more information, see [pulsar-daemon](https://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-daemon).
+
+:::
+
diff --git a/site2/website-next/versioned_docs/version-2.3.2/pulsar-2.0.md b/site2/website-next/versioned_docs/version-2.3.2/pulsar-2.0.md
new file mode 100644
index 0000000..11c5e66c
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/pulsar-2.0.md
@@ -0,0 +1,72 @@
+---
+id: pulsar-2.0
+title: Pulsar 2.0
+sidebar_label: "Pulsar 2.0"
+original_id: pulsar-2.0
+---
+
+Pulsar 2.0 is a major new release for Pulsar that brings some bold changes to the platform, including [simplified topic names](#topic-names), the addition of the [Pulsar Functions](functions-overview) feature, some terminology changes, and more.
+
+## New features in Pulsar 2.0
+
+Feature | Description
+:-------|:-----------
+[Pulsar Functions](functions-overview) | A lightweight compute option for Pulsar
+
+## Major changes
+
+There are a few major changes that you should be aware of, as they may significantly impact your day-to-day usage.
+
+### Properties versus tenants
+
+Previously, Pulsar had a concept of properties. A property is essentially the same thing as a tenant, so the "property" terminology has been removed in version 2.0. The [`pulsar-admin properties`](reference-pulsar-admin.md#pulsar-admin) command-line interface, for example, has been replaced with the [`pulsar-admin tenants`](reference-pulsar-admin.md#pulsar-admin-tenants) interface. In some cases the properties terminology is still used, but it is now considered deprecated and will be removed entirely in a future release.
+
+### Topic names
+
+Prior to version 2.0, *all* Pulsar topics had the following form:
+
+```http
+
+{persistent|non-persistent}://property/cluster/namespace/topic
+
+```
+
+Several important changes have been made in Pulsar 2.0:
+
+* There is no longer a [cluster component](#no-cluster)
+* Properties have been [renamed to tenants](#tenants)
+* You can use a [flexible](#flexible-topic-naming) naming system to shorten many topic names
+* `/` is no longer allowed in topic names
+
+#### No cluster component
+
+The cluster component has been removed from topic names. Thus, all topic names now have the following form:
+
+```http
+
+{persistent|non-persistent}://tenant/namespace/topic
+
+```
+
+> Existing topics that use the legacy name format will continue to work without any change, and there are no plans to change that.
+
+
+#### Flexible topic naming
+
+All topic names in Pulsar 2.0 internally have the form shown [above](#no-cluster-component) but you can now use shorthand names in many cases (for the sake of simplicity). The flexible naming system stems from the fact that there is now a default topic type, tenant, and namespace:
+
+Topic aspect | Default
+:------------|:-------
+topic type | `persistent`
+tenant | `public`
+namespace | `default`
+
+The table below shows some example topic name translations that use implicit defaults:
+
+Input topic name | Translated topic name
+:----------------|:---------------------
+`my-topic` | `persistent://public/default/my-topic`
+`my-tenant/my-namespace/my-topic` | `persistent://my-tenant/my-namespace/my-topic`
+
+> For [non-persistent topics](concepts-messaging.md#non-persistent-topics), you need to continue to specify the entire topic name, as the default-based rules for persistent topic names do not apply. Thus you cannot use a shorthand name like `non-persistent://my-topic` and would need to use `non-persistent://public/default/my-topic` instead.
+
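+As a hedged illustration, the following minimal sketch with the Pulsar Java client against a local standalone broker shows a shorthand topic name in use (the class name is assumed for illustration):
+
+```java
+
+import org.apache.pulsar.client.api.Producer;
+import org.apache.pulsar.client.api.PulsarClient;
+
+public class ShortTopicNameExample {
+    public static void main(String[] args) throws Exception {
+        PulsarClient client = PulsarClient.builder()
+                .serviceUrl("pulsar://localhost:6650")
+                .build();
+
+        // "my-topic" resolves to persistent://public/default/my-topic.
+        Producer<byte[]> producer = client.newProducer()
+                .topic("my-topic")
+                .create();
+
+        producer.send("hello-pulsar".getBytes());
+
+        producer.close();
+        client.close();
+    }
+}
+
+```
+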
diff --git a/site2/website-next/versioned_docs/version-2.3.2/window-functions-context.md b/site2/website-next/versioned_docs/version-2.3.2/window-functions-context.md
new file mode 100644
index 0000000..f80fea5
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.3.2/window-functions-context.md
@@ -0,0 +1,581 @@
+---
+id: window-functions-context
+title: Window Functions Context
+sidebar_label: "Window Functions: Context"
+original_id: window-functions-context
+---
+
+The Java SDK provides access to a **window context object** that can be used by a window function. This context object provides a wide variety of information and functionality for Pulsar window functions, as described below.
+
+- [Spec](#spec)
+
+  * Names of all input topics and the output topic associated with the function.
+  * Tenant and namespace associated with the function.
+  * Pulsar window function name, ID, and version.
+  * ID of the Pulsar function instance running the window function.
+  * Number of instances that invoke the window function.
+  * Built-in type or custom class name of the output schema.
+  
+- [Logger](#logger)
+  
+  * Logger object used by the window function, which can be used to create window function log messages.
+
+- [User config](#user-config)
+  
+  * Access to arbitrary user configuration values.
+
+- [Routing](#routing)
+  
+  * Routing is supported in Pulsar window functions. Pulsar window functions send messages to arbitrary topics as per the `publish` interface.
+
+- [Metrics](#metrics)
+  
+  * Interface for recording metrics.
+
+- [State storage](#state-storage)
+  
+  * Interface for storing and retrieving state in [state storage](#state-storage).
+
+## Spec
+
+Spec contains the basic information of a function.
+
+### Get input topics
+
+The `getInputTopics` method gets the **name list** of all input topics.
+
+This example demonstrates how to get the name list of all input topics in a Java window function.
+
+```java
+
+public class GetInputTopicsWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        Collection<String> inputTopics = context.getInputTopics();
+        System.out.println(inputTopics);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get output topic
+
+The `getOutputTopic` method gets the **name of a topic** to which the message is sent.
+
+This example demonstrates how to get the name of an output topic in a Java window function.
+
+```java
+
+public class GetOutputTopicWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        String outputTopic = context.getOutputTopic();
+        System.out.println(outputTopic);
+
+        return null;
+    }
+}
+
+```
+
+### Get tenant
+
+The `getTenant` method gets the tenant name associated with the window function.
+
+This example demonstrates how to get the tenant name in a Java window function.
+
+```java
+
+public class GetTenantWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        String tenant = context.getTenant();
+        System.out.println(tenant);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get namespace
+
+The `getNamespace` method gets the namespace associated with the window function.
+
+This example demonstrates how to get the namespace in a Java window function.
+
+```java
+
+public class GetNamespaceWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        String ns = context.getNamespace();
+        System.out.println(ns);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get function name
+
+The `getFunctionName` method gets the window function name.
+
+This example demonstrates how to get the function name in a Java window function.
+
+```java
+
+public class GetNameOfWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        String functionName = context.getFunctionName();
+        System.out.println(functionName);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get function ID
+
+The `getFunctionId` method gets the window function ID.
+
+This example demonstrates how to get the function ID in a Java window function.
+
+```java
+
+public class GetFunctionIDWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        String functionID = context.getFunctionId();
+        System.out.println(functionID);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get function version
+
+The `getFunctionVersion` method gets the window function version.
+
+This example demonstrates how to get the function version of a Java window function.
+
+```java
+
+public class GetVersionOfWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        String functionVersion = context.getFunctionVersion();
+        System.out.println(functionVersion);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get instance ID
+
+The `getInstanceId` method gets the instance ID of a window function.
+
+This example demonstrates how to get the instance ID in a Java window function.
+
+```java
+
+public class GetInstanceIDWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        int instanceId = context.getInstanceId();
+        System.out.println(instanceId);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get num instances
+
+The `getNumInstances` method gets the number of instances that invoke the window function.
+
+This example demonstrates how to get the number of instances in a Java window function.
+
+```java
+
+public class GetNumInstancesWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        int numInstances = context.getNumInstances();
+        System.out.println(numInstances);
+
+        return null;
+    }
+
+}
+
+```
+
+### Get output schema type
+
+The `getOutputSchemaType` method gets the built-in type or custom class name of the output schema.
+
+This example demonstrates how to get the output schema type of a Java window function.
+
+```java
+
+public class GetOutputSchemaTypeWindowFunction implements WindowFunction<String, Void> {
+
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        String schemaType = context.getOutputSchemaType();
+        System.out.println(schemaType);
+
+        return null;
+    }
+}
+
+```
+
+## Logger
+
+Pulsar window functions that use the Java SDK have access to an [SLF4J](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level.
+
+This example logs each record in the window at the `INFO` level in a Java window function.
+
+```java
+
+import java.util.Collection;
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.functions.api.WindowContext;
+import org.apache.pulsar.functions.api.WindowFunction;
+import org.slf4j.Logger;
+
+public class LoggingWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        Logger log = context.getLogger();
+        for (Record<String> record : inputs) {
+            log.info(record + "-window-log");
+        }
+        return null;
+    }
+
+}
+
+```
+
+If you need your function to produce logs, specify a log topic when creating or running the function. 
+
+```bash
+
+bin/pulsar-admin functions create \
+  --jar my-functions.jar \
+  --classname my.package.LoggingFunction \
+  --log-topic persistent://public/default/logging-function-logs \
+  # Other function configs
+
+```
+
+You can access all logs produced by `LoggingFunction` via the `persistent://public/default/logging-function-logs` topic.
+
+## Metrics
+
+Pulsar window functions can publish arbitrary metrics to the metrics interface which can be queried. 
+
+:::note
+
+If a Pulsar window function uses the language-native interface for Java, that function is not able to publish metrics and stats to Pulsar.
+
+:::
+
+You can record metrics using the context object on a per-key basis. 
+
+This example records the event time of each message in the window as a user-defined metric under the `MessageEventTime` key in a Java window function.
+
+```java
+
+import java.util.Collection;
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.functions.api.WindowContext;
+import org.apache.pulsar.functions.api.WindowFunction;
+
+
+/**
+ * Example function that wants to keep track of
+ * the event time of each message sent.
+ */
+public class UserMetricWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+
+        for (Record<String> record : inputs) {
+            if (record.getEventTime().isPresent()) {
+                context.recordMetric("MessageEventTime", record.getEventTime().get().doubleValue());
+            }
+        }
+
+        return null;
+    }
+}
+
+```
+
+## User config
+
+When you run or update Pulsar Functions that are created using the SDK, you can pass arbitrary key/value pairs to them with the `--user-config` flag. Key/value pairs **must** be specified as JSON.
+
+This example passes a user configured key/value to a function.
+
+```bash
+
+bin/pulsar-admin functions create \
+  --name word-filter \
+ --user-config '{"forbidden-word":"rosebud"}' \
+  # Other function configs
+
+```
+
+### API
+
+You can use the following APIs to get user-defined information for window functions.
+
+#### getUserConfigMap
+
+The `getUserConfigMap` API gets a map of all user-defined key/value configurations for the window function.
+
+```java
+
+/**
+     * Get a map of all user-defined key/value configs for the function.
+     *
+     * @return The full map of user-defined config values
+     */
+    Map<String, Object> getUserConfigMap();
+
+```
+
+#### getUserConfigValue
+
+The `getUserConfigValue` API gets a user-defined key/value.
+
+```java
+
+/**
+     * Get any user-defined key/value.
+     *
+     * @param key The key
+     * @return The Optional value specified by the user for that key.
+     */
+    Optional<Object> getUserConfigValue(String key);
+
+```
+
+#### getUserConfigValueOrDefault
+
+The `getUserConfigValueOrDefault` API gets a user-defined key/value or a default value if none is present.
+
+```java
+
+/**
+     * Get any user-defined key/value or a default value if none is present.
+     *
+     * @param key
+     * @param defaultValue
+     * @return Either the user config value associated with a given key or a supplied default value
+     */
+    Object getUserConfigValueOrDefault(String key, Object defaultValue);
+
+```
+
+The following examples demonstrate how to access key/value pairs provided to Pulsar window functions.
+
+The Java SDK context object enables you to access key/value pairs provided to Pulsar window functions via the command line (as JSON).
+
+:::tip
+
+For all key/value pairs passed to Java window functions, both the `key` and the `value` are `String`. To set the value to be a different type, you need to deserialize it from the `String` type.
+
+:::
+
+This example passes a key/value pair in a Java window function.
+
+```bash
+
+bin/pulsar-admin functions create \
+   --user-config '{"word-of-the-day":"verdure"}' \
+  # Other function configs
+
+```
+
+This example accesses values in a Java window function.
+
+The `UserConfigWindowFunction` below returns the value of the `WhatToWrite` user config every time the function is invoked (that is, every time a window is processed). The config value changes **only** when the function is updated with a new value, for example via the command-line tool or the REST API.
+
+```java
+
+import java.util.Collection;
+import java.util.Optional;
+
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.functions.api.WindowContext;
+import org.apache.pulsar.functions.api.WindowFunction;
+
+public class UserConfigWindowFunction implements WindowFunction<String, String> {
+    @Override
+    public String process(Collection<Record<String>> input, WindowContext context) throws Exception {
+        Optional<Object> whatToWrite = context.getUserConfigValue("WhatToWrite");
+        if (whatToWrite.isPresent()) {
+            return (String)whatToWrite.get();
+        } else {
+            return "Not a nice way";
+        }
+    }
+
+}
+
+```
+
+If no value is provided, you can access the entire user config map or set a default value.
+
+```java
+
+// Get the whole config map
+Map<String, String> allConfigs = context.getUserConfigMap();
+
+// Get value or resort to default
+String wotd = context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
+
+```
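+
+Since command-line config values arrive as strings (see the tip above), the following is a brief hedged sketch of deserializing a numeric value inside a window function's `process` method (the `max-length` key is assumed for illustration):
+
+```java
+
+// Values passed via --user-config arrive as strings; parse explicitly when a
+// numeric value is needed. The "max-length" key is assumed for illustration.
+Object raw = context.getUserConfigValueOrDefault("max-length", "128");
+int maxLength = Integer.parseInt(raw.toString());
+
+```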
+
+## Routing
+
+You can use the `context.publish()` interface to publish as many results as you want.
+
+This example shows how the `PublishWindowFunction` class uses the `publish` interface on the context to publish messages to the topic specified by the `publish-topic` user config in a Java window function.
+
+```java
+
+public class PublishWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> input, WindowContext context) throws Exception {
+        String publishTopic = (String) context.getUserConfigValueOrDefault("publish-topic", "publishtopic");
+        String output = String.format("%s!", input);
+        context.publish(publishTopic, output);
+
+        return null;
+    }
+
+}
+
+```
+
+## State storage
+
+Pulsar window functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. An Apache Pulsar installation (including the standalone installation) includes the deployment of BookKeeper bookies.
+
+Apache Pulsar integrates with Apache BookKeeper `table service` to store the `state` for functions. For example, the `WordCount` function can store its `counters` state into BookKeeper table service via Pulsar Functions state APIs.
+
+States are key-value pairs, where the key is a string and the value is arbitrary binary data—counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual Pulsar Function and shared between instances of that function.
+
+Currently, Pulsar window functions expose Java API to access, update, and manage states. These APIs are available in the context object when you use Java SDK functions.
+
+| Java API| Description
+|---|---
+|`incrCounter`|Increases a built-in distributed counter referred to by the key.
+|`getCounter`|Gets the counter value for the key.
+|`putState`|Updates the state value for the key.
+
+You can use the following APIs to access, update, and manage states in Java window functions. 
+
+#### incrCounter
+
+The `incrCounter` API increases a built-in distributed counter referred to by the key.
+
+Applications use the `incrCounter` API to change the counter of a given `key` by the given `amount`. If the `key` does not exist, a new key is created.
+
+```java
+
+    /**
+     * Increment the builtin distributed counter referred by key
+     * @param key The name of the key
+     * @param amount The amount to be incremented
+     */
+    void incrCounter(String key, long amount);
+
+```
+
+#### getCounter
+
+The `getCounter` API gets the counter value for the key.
+
+Applications use the `getCounter` API to retrieve the counter value of a given `key` that was changed by the `incrCounter` API.
+
+```java
+
+    /**
+     * Retrieve the counter value for the key.
+     *
+     * @param key name of the key
+     * @return the amount of the counter value for this key
+     */
+    long getCounter(String key);
+
+```
+
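+A brief hedged usage sketch inside a window function's `process` method (the key name is assumed for illustration):
+
+```java
+
+// Read back the counter previously incremented for an assumed key.
+long count = context.getCounter("hello");
+context.getLogger().info("count for 'hello': {}", count);
+
+```
+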
+In addition to the `getCounter` API, Pulsar also exposes a general key/value API (`putState`) for functions to store arbitrary key/value state.
+
+#### putState
+
+The `putState` API updates the state value for the key.
+
+```java
+
+    /**
+     * Update the state value for the key.
+     *
+     * @param key name of the key
+     * @param value state value of the key
+     */
+    void putState(String key, ByteBuffer value);
+
+```
+
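+A brief hedged usage sketch of `putState` inside a window function's `process` method (the class and key names are assumed for illustration):
+
+```java
+
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.util.Collection;
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.functions.api.WindowContext;
+import org.apache.pulsar.functions.api.WindowFunction;
+
+public class PutStateWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        // Store the value of each record in the window under an assumed key.
+        for (Record<String> record : inputs) {
+            ByteBuffer value = ByteBuffer.wrap(record.getValue().getBytes(StandardCharsets.UTF_8));
+            context.putState("last-value", value);
+        }
+        return null;
+    }
+}
+
+```
+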
+This example demonstrates how applications store states in Pulsar window functions.
+
+The logic of the `WordCountWindowFunction` is simple and straightforward.
+
+1. The function first splits each received string into tokens using the regex `\\.`.
+
+2. For each `word`, the function increments the corresponding `counter` by 1 via `incrCounter(key, amount)`.
+
+```java
+
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.pulsar.functions.api.Record;
+import org.apache.pulsar.functions.api.WindowContext;
+import org.apache.pulsar.functions.api.WindowFunction;
+
+public class WordCountWindowFunction implements WindowFunction<String, Void> {
+    @Override
+    public Void process(Collection<Record<String>> inputs, WindowContext context) throws Exception {
+        for (Record<String> input : inputs) {
+            Arrays.asList(input.getValue().split("\\.")).forEach(word -> context.incrCounter(word, 1));
+        }
+        return null;
+
+    }
+}
+
+```
+
diff --git a/site2/website-next/versioned_sidebars/version-2.3.2-sidebars.json b/site2/website-next/versioned_sidebars/version-2.3.2-sidebars.json
new file mode 100644
index 0000000..cdaf461
--- /dev/null
+++ b/site2/website-next/versioned_sidebars/version-2.3.2-sidebars.json
@@ -0,0 +1,114 @@
+{
+  "version-2.3.2/docsSidebar": [
+    {
+      "type": "category",
+      "label": "Getting Started",
+      "items": [
+        {
+          "type": "doc",
+          "id": "version-2.3.2/pulsar-2.0"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/standalone"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/standalone-docker"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/client-libraries"
+        }
+      ]
+    },
+    {
+      "type": "category",
+      "label": "Concepts and Architecture",
+      "items": [
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-overview"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-messaging"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-architecture-overview"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-clients"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-replication"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-multi-tenancy"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-authentication"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-topic-compaction"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-tiered-storage"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/concepts-schema-registry"
+        }
+      ]
+    },
+    {
+      "type": "category",
+      "label": "Pulsar Functions",
+      "items": [
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-overview"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-quickstart"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-api"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-deploying"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-guarantees"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-state"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-metrics"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/functions-worker"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.3.2/window-functions-context"
+        }
+      ]
+    }
+  ]
+}
\ No newline at end of file
diff --git a/site2/website-next/versions.json b/site2/website-next/versions.json
index 76e17de..faf51b8 100644
--- a/site2/website-next/versions.json
+++ b/site2/website-next/versions.json
@@ -15,5 +15,6 @@
   "2.4.2",
   "2.4.1",
   "2.4.0",
+  "2.3.2",
   "2.2.0"
 ]