Phoenix-Adapters is a compatibility layer that allows applications written for Amazon DynamoDB to run against Apache Phoenix (on HBase) as the underlying storage engine -- with zero code changes on the application side.
Organizations that use DynamoDB on AWS face challenges when they need to:
Phoenix-DynamoDB Adapter provides a RESTful API server that:
Client applications using any AWS SDK (Java, Python, Node.js, Go, etc.) only need to change the endpoint URL to point to the Phoenix REST server instead of the AWS DynamoDB service -- no code changes are required.
| Module | Purpose |
|---|---|
phoenix-ddb-rest | REST server (Jetty-based), API routing, service implementations |
phoenix-ddb-utils | Shared utilities: BSON conversion, CDC/stream utils, Phoenix helpers |
phoenix-ddb-assembly | Distribution packaging (tarball) |
coverage-report | Code coverage aggregation |
┌───────────────────────────────┐
│ Client Application │
│ (AWS SDK: Java/Python/JS) │
└──────────────┬────────────────┘
│ HTTP POST (JSON)
│ X-Amz-Target: DynamoDB_20120810.<Operation>
▼
┌───────────────────────────────┐
│ Phoenix DynamoDB REST Server │
│ (Jetty + Jersey JAX-RS) │
│ │
│ ┌─────────────────────────┐ │
│ │ AccessKeyAuthFilter │ │ ← Optional authentication
│ │ (if configured) │ │
│ └────────────┬────────────┘ │
│ ▼ │
│ ┌─────────────────────────┐ │
│ │ RootResource (Router) │ │ ← Single POST endpoint at /
│ │ Routes by X-Amz-Target │ │
│ └────────────┬────────────┘ │
│ ▼ │
│ ┌─────────────────────────┐ │
│ │ Service Layer │ │ ← CreateTableService, PutItemService, etc.
│ └────────────┬────────────┘ │
│ ▼ │
│ ┌─────────────────────────┐ │
│ │ BSON Conversion Layer │ │ ← DDB attributes ↔ BSON documents
│ └────────────┬────────────┘ │
│ ▼ │
│ ┌─────────────────────────┐ │
│ │ Phoenix JDBC Driver │ │ ← SQL execution
│ └────────────┬────────────┘ │
└───────────────┼───────────────┘
▼
┌───────────────────────────────┐
│ Apache Phoenix / HBase │
│ (Persistent Storage) │
└───────────────────────────────┘
Key design points:

- Every operation is served through a single endpoint: POST /. The operation is determined by the X-Amz-Target header (e.g., DynamoDB_20120810.CreateTable).
- Non-key attributes are stored as a BSON document in a single column named COL. Primary key and secondary key columns are stored separately for indexing.
- Phoenix BSON functions (BSON_VALUE(), BSON_CONDITION_EXPRESSION(), and BSON_UPDATE_EXPRESSION()) operate on documents at the database level.

Every API call is an HTTP POST to the root path / with:
| Component | Value | Example |
|---|---|---|
| Method | POST | |
| URL | http://<host>:<port>/ | http://localhost:8842/ |
| Content-Type | application/x-amz-json-1.0 or application/json | |
| X-Amz-Target | DynamoDB_20120810.<Operation> | DynamoDB_20120810.CreateTable |
| Body | JSON request payload | {"TableName": "MyTable", ...} |
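The request anatomy above can be sketched as a small client-side helper. This is a minimal illustration, not part of the adapter; the function name build_ddb_request and the example endpoint are assumptions.

```python
import json

def build_ddb_request(endpoint: str, operation: str, payload: dict) -> dict:
    """Return the method, URL, headers, and body for a DynamoDB-style call."""
    return {
        "method": "POST",
        "url": endpoint.rstrip("/") + "/",  # every call targets the root path
        "headers": {
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": f"DynamoDB_20120810.{operation}",
        },
        "body": json.dumps(payload),
    }

req = build_ddb_request("http://localhost:8842", "CreateTable",
                        {"TableName": "MyTable"})
```

Any HTTP client can then send req["body"] with req["headers"] to req["url"]; no AWS SDK is strictly required.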
Responses:

- 200 OK with JSON body on success
- 400 Bad Request with error body on validation failure
- 400 Bad Request with ResourceNotFoundException when the table or item does not exist
- 400 Bad Request with ConditionalCheckFailedException when a condition check fails
- 400 Bad Request with ResourceInUseException when the table already exists

Error response format:
{ "__type": "com.amazonaws.dynamodb.v20120810#ValidationException", "message": "Error description here" }
These are the types allowed for primary key (partition key and sort key) attributes:
| DynamoDB Type | DynamoDB Code | Phoenix SQL Type | Description |
|---|---|---|---|
| String | S | VARCHAR | UTF-8 encoded string |
| Number | N | DOUBLE | Numeric values |
| Binary | B | VARBINARY_ENCODED | Binary data |
Non-key attributes are stored inside the BSON COL column and support the full DynamoDB type system:
| DynamoDB Type | Code | Description |
|---|---|---|
| String | S | UTF-8 string |
| Number | N | Numeric value |
| Binary | B | Binary data (Base64-encoded) |
| Boolean | BOOL | true or false |
| Null | NULL | Null value |
| List | L | Ordered collection of values |
| Map | M | Unordered collection of key-value pairs |
| String Set | SS | Set of unique strings |
| Number Set | NS | Set of unique numbers |
| Binary Set | BS | Set of unique binary values |
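A serializer from native Python values to the attribute-value maps above might look like the following sketch. It is illustrative only (binary sets are omitted for brevity), and to_attr is not part of the adapter.

```python
import base64

def to_attr(v):
    """Convert a native Python value to a DynamoDB attribute-value map."""
    if v is None:
        return {"NULL": True}
    if isinstance(v, bool):  # must check bool before int: bool subclasses int
        return {"BOOL": v}
    if isinstance(v, str):
        return {"S": v}
    if isinstance(v, (int, float)):
        return {"N": str(v)}  # numbers travel as strings on the wire
    if isinstance(v, bytes):
        return {"B": base64.b64encode(v).decode()}
    if isinstance(v, set):
        if all(isinstance(x, str) for x in v):
            return {"SS": sorted(v)}
        return {"NS": sorted(str(x) for x in v)}
    if isinstance(v, list):
        return {"L": [to_attr(x) for x in v]}
    if isinstance(v, dict):
        return {"M": {k: to_attr(x) for k, x in v.items()}}
    raise TypeError(f"unsupported type: {type(v)}")
```

Note the bool-before-int ordering: without it, True would serialize as {"N": "True"}.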
Supported by: PutItem, UpdateItem, DeleteItem
Conditional expressions allow you to specify conditions that must be met for the operation to succeed.
Modern syntax (preferred):
{ "ConditionExpression": "attribute_exists(#pk) AND #status = :val", "ExpressionAttributeNames": {"#pk": "id", "#status": "status"}, "ExpressionAttributeValues": {":val": {"S": "active"}} }
Legacy syntax (auto-converted to modern):
{ "Expected": { "status": { "ComparisonOperator": "EQ", "AttributeValueList": [{"S": "active"}] } }, "ConditionalOperator": "AND" }
Supported comparison operators in legacy Expected: EQ, NE, LT, LE, GT, GE, BETWEEN, IN, BEGINS_WITH, CONTAINS, NOT_CONTAINS, NULL, NOT_NULL
When a condition check fails, a ConditionalCheckFailedException is returned.
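The legacy-to-modern auto-conversion described above can be sketched as follows. Only a subset of the listed comparison operators is shown; the function name and placeholder scheme (#e0, :e0) are assumptions about an illustrative converter, not the adapter's internals.

```python
OPS = {"EQ": "=", "NE": "<>", "LT": "<", "LE": "<=", "GT": ">", "GE": ">="}

def convert_expected(expected: dict, conditional_operator: str = "AND"):
    """Turn a legacy Expected map into (ConditionExpression, names, values)."""
    clauses, names, values = [], {}, {}
    for i, (attr, spec) in enumerate(expected.items()):
        op = spec["ComparisonOperator"]
        name_ph, val_ph = f"#e{i}", f":e{i}"
        names[name_ph] = attr
        if op in OPS:
            values[val_ph] = spec["AttributeValueList"][0]
            clauses.append(f"{name_ph} {OPS[op]} {val_ph}")
        elif op == "NOT_NULL":
            clauses.append(f"attribute_exists({name_ph})")
        elif op == "NULL":
            clauses.append(f"attribute_not_exists({name_ph})")
        elif op == "BEGINS_WITH":
            values[val_ph] = spec["AttributeValueList"][0]
            clauses.append(f"begins_with({name_ph}, {val_ph})")
        else:
            raise ValueError(f"operator {op} not shown in this sketch")
    return f" {conditional_operator} ".join(clauses), names, values

expr, names, values = convert_expected(
    {"status": {"ComparisonOperator": "EQ",
                "AttributeValueList": [{"S": "active"}]}})
```

Generating placeholders rather than splicing attribute names directly keeps reserved words and special characters safe in the resulting expression.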
Supported by: GetItem, BatchGetItem, Query, Scan
Projection expressions specify which attributes to include in the response.
Modern syntax (preferred):
{ "ProjectionExpression": "#n, age, address.city", "ExpressionAttributeNames": {"#n": "name"} }
Legacy syntax (auto-converted):
{ "AttributesToGet": ["name", "age"] }
Note: ProjectionExpression and AttributesToGet are mutually exclusive -- using both in the same request throws 400.
ExpressionAttributeNames allows you to use #alias placeholders in expressions to reference attribute names that:
{ "ExpressionAttributeNames": { "#s": "status", "#d": "date" } }
ExpressionAttributeValues provides typed value placeholders for use in expressions:
{ "ExpressionAttributeValues": { ":status": {"S": "active"}, ":minAge": {"N": "18"}, ":data": {"B": "base64encodeddata"} } }
Supported by: PutItem, UpdateItem, DeleteItem
Controls what data is returned after a write operation.
| ReturnValues Value | PutItem | UpdateItem | DeleteItem | Description |
|---|---|---|---|---|
NONE | Yes (default) | Yes (default) | Yes (default) | Returns nothing (only ConsumedCapacity) |
ALL_OLD | Yes | Yes | Yes | Returns the item as it was before the operation |
ALL_NEW | No | Yes | No | Returns the item as it is after the operation |
UPDATED_OLD | -- | Not supported (throws 400), use ALL_OLD instead | -- | Only applicable to UpdateItem |
UPDATED_NEW | -- | Not supported (throws 400), use ALL_NEW instead | -- | Only applicable to UpdateItem |
Supported by: PutItem, UpdateItem, DeleteItem
Controls whether the existing item is returned when a condition check fails.
| Value | Description |
|---|---|
NONE | No item returned on failure (default) |
ALL_OLD | Returns the existing item that caused the condition to fail |
Supported by: Query, Scan
Filter expressions are applied after items are read from the database but before they are returned to the client. They do not reduce the amount of data scanned.
Modern syntax:
{ "FilterExpression": "#status = :active AND age > :minAge", "ExpressionAttributeNames": {"#status": "status"}, "ExpressionAttributeValues": {":active": {"S": "active"}, ":minAge": {"N": "21"}} }
Legacy syntax (auto-converted):
For Query:
{ "QueryFilter": { "status": { "ComparisonOperator": "EQ", "AttributeValueList": [{"S": "active"}] } }, "ConditionalOperator": "AND" }
For Scan:
{ "ScanFilter": { "status": { "ComparisonOperator": "EQ", "AttributeValueList": [{"S": "active"}] } } }
All list/query/scan operations support pagination:
| API | Cursor Parameter (Request) | Cursor Parameter (Response) |
|---|---|---|
| ListTables | ExclusiveStartTableName | LastEvaluatedTableName |
| Query | ExclusiveStartKey | LastEvaluatedKey |
| Scan | ExclusiveStartKey | LastEvaluatedKey |
| ListStreams | ExclusiveStartStreamArn | LastEvaluatedStreamArn |
| DescribeStream | ExclusiveStartShardId | LastEvaluatedShardId |
| BatchGetItem | (via UnprocessedKeys) | UnprocessedKeys |
| GetRecords | ShardIterator | NextShardIterator |
Pagination rules:
| Limit | Value | APIs Affected |
|---|---|---|
| Query/Scan response size | 1 MB | Query, Scan |
| ListTables response size | 1 MB | ListTables |
| BatchGetItem response size | 16 MB | BatchGetItem |
| GetRecords response size | 1 MB | GetRecords |
| Query result limit (max per page) | 100 items OR 1 MB, whichever comes first | Query |
| Scan result limit (max per page) | 100 items OR 1 MB, whichever comes first | Scan |
| GetRecords limit (max per page) | 50 records OR 1 MB, whichever comes first | GetRecords |
| ListTables default limit | 100 tables | ListTables |
| ListStreams default limit | 100 streams | ListStreams |
| DescribeStream shard limit | 100 shards | DescribeStream |
| BatchWriteItem max items | 25 items | BatchWriteItem |
| BatchGetItem max keys | 100 keys | BatchGetItem |
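The cursor parameters above imply the usual client-side loop: keep feeding LastEvaluatedKey back as ExclusiveStartKey until it disappears. A minimal sketch, with the HTTP call stubbed out (query_page stands in for an actual Query request):

```python
def scan_all(query_page, request: dict):
    """Collect items across pages; query_page(request) -> response dict."""
    items = []
    while True:
        resp = query_page(request)
        items.extend(resp.get("Items", []))
        cursor = resp.get("LastEvaluatedKey")
        if cursor is None:
            return items  # no cursor means this was the last page
        request = {**request, "ExclusiveStartKey": cursor}

# Stubbed two-page response sequence for illustration:
pages = iter([
    {"Items": [{"id": {"S": "1"}}], "LastEvaluatedKey": {"id": {"S": "1"}}},
    {"Items": [{"id": {"S": "2"}}]},
])
result = scan_all(lambda req: next(pages), {"TableName": "MyTable", "Limit": 100})
```

The same loop shape works for ListTables, ListStreams, and DescribeStream with their respective cursor names.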
Creates a new table in Phoenix with the specified key schema, attributes, optional indexes, and optional change streams.
X-Amz-Target: DynamoDB_20120810.CreateTable
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Name of the table to create |
KeySchema | List | Yes | Key elements (1 or 2 elements, see below) |
AttributeDefinitions | List | Yes | Type definitions for key attributes |
GlobalSecondaryIndexes | List | No | Global secondary indexes to create |
LocalSecondaryIndexes | List | No | Local secondary indexes to create |
StreamSpecification | Map | No | Enable change data capture stream |
KeySchema element structure:
{ "AttributeName": "id", "KeyType": "HASH" }
- KeyType must be HASH (partition key) or RANGE (sort key)
- The first element must be HASH; the second (if present) should be RANGE

AttributeDefinitions element structure:
{ "AttributeName": "id", "AttributeType": "S" }
AttributeType: S (String/VARCHAR), N (Number/DOUBLE), B (Binary/VARBINARY_ENCODED)

GlobalSecondaryIndexes / LocalSecondaryIndexes element structure:
{ "IndexName": "status-index", "KeySchema": [ {"AttributeName": "status", "KeyType": "HASH"}, {"AttributeName": "created_at", "KeyType": "RANGE"} ] }
Indexes are created as Phoenix UNCOVERED INDEX with BSON_VALUE() expressions.

StreamSpecification structure:
{ "StreamEnabled": true, "StreamViewType": "NEW_AND_OLD_IMAGES" }
StreamViewType values: NEW_IMAGE, OLD_IMAGE, NEW_AND_OLD_IMAGES; required when StreamEnabled is true.

Response:

{ "TableDescription": { "TableName": "MyTable", "TableStatus": "ACTIVE", "KeySchema": [...], "AttributeDefinitions": [...], "CreationDateTime": 1700000000.000, "BillingModeSummary": {"BillingMode": "PROVISIONED"}, "GlobalSecondaryIndexes": [...], "LocalSecondaryIndexes": [...], "StreamSpecification": {...}, "LatestStreamArn": "...", "LatestStreamLabel": "..." } }
Validation:

- Every attribute in KeySchema must have a matching AttributeDefinitions entry
- AttributeType must be S, N, or B
- When StreamEnabled is true, StreamViewType must be non-empty
- If the table already exists, a ResourceInUseException is returned

Generated SQL:

CREATE TABLE IF NOT EXISTS "SCHEMA"."MyTable" ( "id" VARCHAR NOT NULL, "COL" BSON, CONSTRAINT pk PRIMARY KEY ("id") ) IS_STRICT_TTL=false, UPDATE_CACHE_FREQUENCY=60000, ...
For tables with a sort key:
CREATE TABLE IF NOT EXISTS "SCHEMA"."MyTable" ( "id" VARCHAR NOT NULL, "sort_key" DOUBLE NOT NULL, "COL" BSON, CONSTRAINT pk PRIMARY KEY ("id", "sort_key") ) ...
For indexes:
CREATE UNCOVERED INDEX IF NOT EXISTS "status-index" ON "SCHEMA"."MyTable" (BSON_VALUE("COL", 'status', 'VARCHAR'), BSON_VALUE("COL", 'created_at', 'DOUBLE')) WHERE BSON_VALUE("COL", 'status', 'VARCHAR') IS NOT NULL
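The key-column portion of the DDL generation above can be sketched like this. The function create_table_ddl is illustrative (table options such as IS_STRICT_TTL are elided); it is not the adapter's actual code path.

```python
# DynamoDB key attribute type -> Phoenix SQL type, per the mapping table above.
SQL_TYPES = {"S": "VARCHAR", "N": "DOUBLE", "B": "VARBINARY_ENCODED"}

def create_table_ddl(schema: str, table: str, key_schema, attr_defs) -> str:
    """Emit a Phoenix CREATE TABLE for the given DynamoDB-style key schema."""
    types = {a["AttributeName"]: SQL_TYPES[a["AttributeType"]] for a in attr_defs}
    key_cols = [k["AttributeName"] for k in key_schema]
    col_defs = ", ".join(f'"{c}" {types[c]} NOT NULL' for c in key_cols)
    pk = ", ".join(f'"{c}"' for c in key_cols)
    return (f'CREATE TABLE IF NOT EXISTS "{schema}"."{table}" '
            f'( {col_defs}, "COL" BSON, CONSTRAINT pk PRIMARY KEY ({pk}) )')

ddl = create_table_ddl(
    "SCHEMA", "MyTable",
    [{"AttributeName": "id", "KeyType": "HASH"}],
    [{"AttributeName": "id", "AttributeType": "S"}])
```

With a RANGE key added, the sort-key column and the composite PRIMARY KEY constraint fall out of the same loop.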
Drops a table and all its indexes (CASCADE).
X-Amz-Target: DynamoDB_20120810.DeleteTable
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Name of the table to delete |
{ "TableDescription": { "TableName": "MyTable", "TableStatus": "ACTIVE", "KeySchema": [...], "AttributeDefinitions": [...], "CreationDateTime": 1700000000.000, ... } }
The response contains the table description as it was before deletion.
DROP TABLE "SCHEMA"."MyTable" CASCADE
Returns the full description of a table including its schema, indexes, stream configuration, and status.
X-Amz-Target: DynamoDB_20120810.DescribeTable
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Name of the table to describe |
{ "Table": { "TableName": "MyTable", "TableStatus": "ACTIVE", "KeySchema": [ {"AttributeName": "pk", "KeyType": "HASH"}, {"AttributeName": "sk", "KeyType": "RANGE"} ], "AttributeDefinitions": [ {"AttributeName": "pk", "AttributeType": "S"}, {"AttributeName": "sk", "AttributeType": "N"} ], "CreationDateTime": 1700000000.000, "BillingModeSummary": {"BillingMode": "PROVISIONED"}, "ProvisionedThroughput": { "ReadCapacityUnits": 0, "WriteCapacityUnits": 0 }, "GlobalSecondaryIndexes": [ { "IndexName": "gsi-name", "KeySchema": [...], "IndexStatus": "ACTIVE", "Projection": {"ProjectionType": "ALL"} } ], "LocalSecondaryIndexes": [...], "StreamSpecification": { "StreamEnabled": true, "StreamViewType": "NEW_AND_OLD_IMAGES" }, "LatestStreamArn": "phoenix/cdc/stream/...", "LatestStreamLabel": "2024-01-15T10:30:00Z" } }
Index Status values: ACTIVE, CREATING, DELETING
Returns a list of all table names. Supports pagination.
X-Amz-Target: DynamoDB_20120810.ListTables
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
ExclusiveStartTableName | String | No | null | Pagination cursor: returns tables after this name (lexicographic order) |
Limit | Integer | No | 100 | Max table names to return |
{ "TableNames": ["Table1", "Table2", "Table3"], "LastEvaluatedTableName": "Table3" }
LastEvaluatedTableName is only present when there are more results (limit reached or 1 MB size limit hit).

Modifies an existing table: add/remove indexes or enable change streams.
X-Amz-Target: DynamoDB_20120810.UpdateTable
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Table to update |
GlobalSecondaryIndexUpdates | List | No | Index operations (Create or Delete) |
AttributeDefinitions | List | Conditional | Required when creating a new index |
StreamSpecification | Map | No | Enable streams (only Disabled -> Enabled supported) |
GlobalSecondaryIndexUpdates element structure:
To create an index:
{ "Create": { "IndexName": "new-index", "KeySchema": [ {"AttributeName": "field1", "KeyType": "HASH"} ] } }
To delete (disable) an index:
{ "Delete": { "IndexName": "old-index" } }
{ "TableDescription": { ... } }
Notes:

- Create and Delete operations are supported to create or drop indexes
- Delete executes ALTER INDEX ... DISABLE (disables rather than drops)
- New indexes are created in the CREATE_DISABLE state
- Enabling streams sets MERGE_ENABLED=false on the table

Enables or disables Time To Live (TTL) on a table.
X-Amz-Target: DynamoDB_20120810.UpdateTimeToLive
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Table name |
TimeToLiveSpecification | Map | Yes | TTL configuration (see below) |
TimeToLiveSpecification structure:
{ "AttributeName": "expiry_time", "Enabled": true }
{ "TimeToLiveSpecification": { "AttributeName": "expiry_time", "Enabled": true } }
Enable TTL:
ALTER TABLE "SCHEMA"."MyTable" SET TTL = '<ttl_expression based on attribute>'
Disable TTL:
ALTER TABLE "SCHEMA"."MyTable" SET TTL = 'FOREVER'
Returns the TTL configuration for a table.
X-Amz-Target: DynamoDB_20120810.DescribeTimeToLive
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Table name |
When TTL is enabled:
{ "TimeToLiveDescription": { "TimeToLiveStatus": "ENABLED", "AttributeName": "expiry_time" } }
When TTL is disabled:
{ "TimeToLiveDescription": { "TimeToLiveStatus": "DISABLED" } }
Returns the continuous backup/PITR configuration. This is a stub -- Phoenix does not support this feature, so it always returns DISABLED.
X-Amz-Target: DynamoDB_20120810.DescribeContinuousBackups
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Table name (validated for existence) |
{ "ContinuousBackupsDescription": { "ContinuousBackupsStatus": "DISABLED", "PointInTimeRecoveryDescription": { "PointInTimeRecoveryStatus": "DISABLED" } } }
Creates a new item or replaces an existing item with the same primary key. Supports conditional writes and returning the old item.
X-Amz-Target: DynamoDB_20120810.PutItem
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Target table |
Item | Map | Yes | The full item to write (must include PK attributes) |
ConditionExpression | String | No | Condition that must be satisfied |
ExpressionAttributeNames | Map | No | Name aliases for the expression |
ExpressionAttributeValues | Map | No | Typed value placeholders |
ReturnValues | String | No | NONE (default) or ALL_OLD |
ReturnValuesOnConditionCheckFailure | String | No | NONE or ALL_OLD |
Expected | Map | No | Legacy conditional (auto-converted if ConditionExpression is null) |
ConditionalOperator | String | No | AND or OR (used with Expected) |
Item structure:
{ "Item": { "id": {"S": "user-123"}, "name": {"S": "John Doe"}, "age": {"N": "30"}, "active": {"BOOL": true}, "tags": {"SS": ["admin", "user"]}, "metadata": {"M": {"key1": {"S": "value1"}}} } }
Without ReturnValues:
{ "ConsumedCapacity": { "ReadCapacityUnits": 1.0, "WriteCapacityUnits": 1.0, "CapacityUnits": 2.0 } }
With ReturnValues: ALL_OLD (when old item existed):
{ "ConsumedCapacity": { ... }, "Attributes": { "id": {"S": "user-123"}, "name": {"S": "Old Name"}, ... } }
Validation:

- ReturnValues must be NONE or ALL_OLD
- ConditionExpression and Expected are mutually exclusive (throws 400; use one or the other)

When a ConditionExpression is provided, the service evaluates whether the condition can be satisfied on an empty/non-existing item:

- If it can (e.g., attribute_not_exists(id)): uses ON DUPLICATE KEY UPDATE, which allows both insert and conditional update
- If it cannot (e.g., attribute_exists(id)): uses ON DUPLICATE KEY UPDATE_ONLY, which only updates existing items

Modifies specific attributes of an existing item (or creates it if using SET operations without a restricting condition).
X-Amz-Target: DynamoDB_20120810.UpdateItem
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Target table |
Key | Map | Yes | Primary key of the item to update |
UpdateExpression | String | No* | Modern update expression |
AttributeUpdates | Map | No* | Legacy update format (mutually exclusive with UpdateExpression) |
ConditionExpression | String | No | Condition that must be satisfied |
ExpressionAttributeNames | Map | No | Name aliases |
ExpressionAttributeValues | Map | No | Value placeholders |
ReturnValues | String | No | NONE, ALL_OLD, or ALL_NEW |
ReturnValuesOnConditionCheckFailure | String | No | NONE or ALL_OLD |
Expected | Map | No | Legacy conditional |
ConditionalOperator | String | No | AND or OR |
Key structure:
{ "Key": { "id": {"S": "user-123"}, "sort_key": {"N": "1"} } }
UpdateExpression syntax:
SET #name = :newName, age = :newAge REMOVE obsolete_field ADD view_count :increment DELETE tags :tagsToRemove
Supported clauses:
- SET -- Set attribute values
- REMOVE -- Remove attributes
- ADD -- Add to a number or add elements to a set
- DELETE -- Remove elements from a set

Legacy AttributeUpdates format:
{ "AttributeUpdates": { "name": {"Action": "PUT", "Value": {"S": "New Name"}}, "counter": {"Action": "ADD", "Value": {"N": "1"}}, "old_field": {"Action": "DELETE"} } }
| Legacy Action | BSON Equivalent | Description |
|---|---|---|
PUT | $SET | Set attribute value |
ADD | $ADD | Add to number or set |
DELETE (with value) | $DELETE_FROM_SET | Remove elements from set |
DELETE (no value) | $UNSET | Remove the attribute |
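The legacy-action mapping in the table above can be sketched as a converter. The BSON operator names ($SET, $ADD, $DELETE_FROM_SET, $UNSET) come from the table; the function itself is an illustration, not the adapter's implementation.

```python
def attribute_updates_to_bson(updates: dict) -> dict:
    """Map a legacy AttributeUpdates payload to a BSON-style update document."""
    doc = {}
    for attr, spec in updates.items():
        action, value = spec.get("Action", "PUT"), spec.get("Value")
        if action == "PUT":
            doc.setdefault("$SET", {})[attr] = value
        elif action == "ADD":
            doc.setdefault("$ADD", {})[attr] = value
        elif action == "DELETE" and value is not None:
            doc.setdefault("$DELETE_FROM_SET", {})[attr] = value
        elif action == "DELETE":  # DELETE with no value removes the attribute
            doc.setdefault("$UNSET", {})[attr] = None
        else:
            raise ValueError(f"unknown action: {action}")
    return doc

bson_update = attribute_updates_to_bson({
    "name": {"Action": "PUT", "Value": {"S": "New Name"}},
    "counter": {"Action": "ADD", "Value": {"N": "1"}},
    "old_field": {"Action": "DELETE"},
})
```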
{ "ConsumedCapacity": { ... }, "Attributes": { ... } }
Notes and validation:

- Attributes is present only when ReturnValues is ALL_OLD or ALL_NEW
- UpdateExpression and AttributeUpdates are mutually exclusive (throws 400; use one or the other)
- ConditionExpression and Expected are mutually exclusive (throws 400; use one or the other)
- ReturnValues must be NONE, ALL_OLD, or ALL_NEW (UPDATED_OLD and UPDATED_NEW throw 400; use ALL_OLD or ALL_NEW instead)
- Updating a nested document path whose parent does not exist throws ValidationException("Invalid document path used for update")

Deletes a single item by primary key. Supports conditional deletes and returning the old item.
X-Amz-Target: DynamoDB_20120810.DeleteItem
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Target table |
Key | Map | Yes | Primary key of the item to delete |
ConditionExpression | String | No | Condition that must be satisfied |
ExpressionAttributeNames | Map | No | Name aliases |
ExpressionAttributeValues | Map | No | Value placeholders |
ReturnValues | String | No | NONE (default) or ALL_OLD |
ReturnValuesOnConditionCheckFailure | String | No | NONE or ALL_OLD |
Expected | Map | No | Legacy conditional |
ConditionalOperator | String | No | AND or OR |
Without ReturnValues:
{ "ConsumedCapacity": { ... } }
With ReturnValues: ALL_OLD:
{ "ConsumedCapacity": { ... }, "Attributes": { "id": {"S": "user-123"}, "name": {"S": "Deleted User"}, ... } }
Validation:

- ReturnValues must be NONE or ALL_OLD
- ConditionExpression and Expected are mutually exclusive (throws 400; use one or the other)

Simple delete:
DELETE FROM "SCHEMA"."MyTable" WHERE "id" = ?
Conditional delete:
DELETE FROM "SCHEMA"."MyTable" WHERE "id" = ? AND BSON_CONDITION_EXPRESSION(COL, ?)
Performs up to 25 put or delete operations across one or more tables in a single call. All operations are committed atomically.
X-Amz-Target: DynamoDB_20120810.BatchWriteItem
| Parameter | Type | Required | Description |
|---|---|---|---|
RequestItems | Map | Yes | Map of table name to list of write requests |
RequestItems structure:
{ "RequestItems": { "Table1": [ { "PutRequest": { "Item": { "id": {"S": "1"}, "name": {"S": "Alice"} } } }, { "DeleteRequest": { "Key": { "id": {"S": "2"} } } } ], "Table2": [ { "PutRequest": { "Item": { "pk": {"S": "A"}, "data": {"S": "hello"} } } } ] } }
Each write request must contain exactly one of:
- PutRequest with Item (full item to put)
- DeleteRequest with Key (primary key to delete)

Response:

{ "UnprocessedItems": {} }
UnprocessedItems is always empty (all items succeed or the entire batch fails atomically).
Notes:

- Each write request must contain exactly one of PutRequest or DeleteRequest (throws 400 otherwise)
- All writes run in a single JDBC transaction via connection.setAutoCommit(false) + connection.commit(). All operations succeed or fail together.
- Individual write requests do not support ConditionExpression or ReturnValues

Retrieves a single item by its primary key.
X-Amz-Target: DynamoDB_20120810.GetItem
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Target table |
Key | Map | Yes | Primary key attributes |
ProjectionExpression | String | No | Attributes to return |
ExpressionAttributeNames | Map | No | Name aliases for projection |
AttributesToGet | List | No | Legacy projection (mutually exclusive with ProjectionExpression; throws 400 if both specified) |
Item found:
{ "Item": { "id": {"S": "user-123"}, "name": {"S": "John Doe"}, "age": {"N": "30"} }, "ConsumedCapacity": { ... } }
Item not found (no Item key in response):
{ "ConsumedCapacity": { ... } }
Retrieves multiple items by primary key across one or more tables.
X-Amz-Target: DynamoDB_20120810.BatchGetItem
| Parameter | Type | Required | Description |
|---|---|---|---|
RequestItems | Map | Yes | Map of table name to keys-and-attributes config |
RequestItems structure:
{ "RequestItems": { "Table1": { "Keys": [ {"id": {"S": "1"}}, {"id": {"S": "2"}}, {"id": {"S": "3"}} ], "ProjectionExpression": "name, #s", "ExpressionAttributeNames": {"#s": "status"} }, "Table2": { "Keys": [ {"pk": {"S": "A"}, "sk": {"N": "1"}} ] } } }
Each table's configuration supports:
| Field | Type | Description |
|---|---|---|
Keys | List | List of primary key maps (required) |
ProjectionExpression | String | Attributes to return |
ExpressionAttributeNames | Map | Name aliases |
AttributesToGet | List | Legacy projection |
{ "Responses": { "Table1": [ {"id": {"S": "1"}, "name": {"S": "Alice"}, "status": {"S": "active"}}, {"id": {"S": "2"}, "name": {"S": "Bob"}, "status": {"S": "active"}} ], "Table2": [ {"pk": {"S": "A"}, "sk": {"N": "1"}, "data": {"S": "hello"}} ] }, "UnprocessedKeys": { "Table1": { "Keys": [{"id": {"S": "3"}}], "ProjectionExpression": "name, #s", "ExpressionAttributeNames": {"#s": "status"} } } }
Notes:

- Keys that could not be processed are returned in UnprocessedKeys along with their original projection/expression metadata.
- Items are fetched using WHERE pk IN (?, ?, ...)

Retrieves items from a table or index based on primary key conditions. Items are always returned sorted by sort key.
X-Amz-Target: DynamoDB_20120810.Query
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Target table |
IndexName | String | No | Secondary index to query |
KeyConditionExpression | String | Yes* | Key condition expression |
ExpressionAttributeNames | Map | No | Name aliases |
ExpressionAttributeValues | Map | No | Value placeholders |
FilterExpression | String | No | Post-read filter |
ProjectionExpression | String | No | Attributes to return |
Select | String | No | What to return (see below) |
Limit | Integer | No | Max items to return (capped at 100 items OR 1 MB, whichever comes first) |
ScanIndexForward | Boolean | No | true (default) = ASC, false = DESC |
ExclusiveStartKey | Map | No | Pagination cursor from previous response |
KeyConditions | Map | No | Legacy key conditions (mutually exclusive with KeyConditionExpression) |
QueryFilter | Map | No | Legacy filter (mutually exclusive with FilterExpression) |
ConditionalOperator | String | No | Used with QueryFilter |
Select values:
| Value | Description |
|---|---|
ALL_ATTRIBUTES | Return all attributes (default) |
SPECIFIC_ATTRIBUTES | Return only projected attributes (requires ProjectionExpression) |
COUNT | Return only the count, no items |
KeyConditionExpression patterns supported:
| Pattern | Example |
|---|---|
| Partition key only | pk = :pk_val |
| Partition + sort key equality | pk = :pk_val AND sk = :sk_val |
| Partition + sort key comparison | pk = :pk_val AND sk > :sk_val |
| Partition + sort key range | pk = :pk_val AND sk BETWEEN :lo AND :hi |
| Partition + sort key prefix | pk = :pk_val AND begins_with(sk, :prefix) |
Supported sort key operators: =, <, >, <=, >=, BETWEEN, begins_with
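Translating these patterns into SQL WHERE fragments might look like the sketch below. Column quoting mirrors the generated-SQL examples elsewhere in this document; the functions, and the assumption that begins_with maps to a LIKE prefix match, are illustrative only.

```python
def sort_key_predicate(col: str, op: str) -> str:
    """SQL fragment for one sort-key operator from the supported list."""
    if op == "begins_with":
        return f'"{col}" LIKE ?'  # bound with an escaped prefix + '%'
    if op == "BETWEEN":
        return f'"{col}" BETWEEN ? AND ?'
    if op in ("=", "<", ">", "<=", ">="):
        return f'"{col}" {op} ?'
    raise ValueError(f"unsupported sort key operator: {op}")

def key_condition_sql(pk: str, sk: str = None, sk_op: str = None) -> str:
    """Partition-key equality, optionally ANDed with a sort-key predicate."""
    where = f'"{pk}" = ?'
    if sk and sk_op:
        where += " AND " + sort_key_predicate(sk, sk_op)
    return where
```

Note that the partition key only ever takes equality, which is exactly what the pattern table above allows.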
{ "Items": [ {"id": {"S": "user-1"}, "sort": {"N": "1"}, "name": {"S": "Alice"}}, {"id": {"S": "user-1"}, "sort": {"N": "2"}, "name": {"S": "Bob"}} ], "Count": 2, "ScannedCount": 5, "ConsumedCapacity": { ... }, "LastEvaluatedKey": { "id": {"S": "user-1"}, "sort": {"N": "2"} } }
Response fields:

- Count: Number of items returned (after filtering)
- ScannedCount: Total items scanned (before filtering)
- LastEvaluatedKey: Present when there are more results; use as ExclusiveStartKey in the next request
- With Select: COUNT, the response omits Items and only has Count and ScannedCount

Validation:

- KeyConditionExpression and KeyConditions are mutually exclusive (throws 400; use one or the other)
- FilterExpression and QueryFilter are mutually exclusive (throws 400; use one or the other)
- ProjectionExpression and AttributesToGet are mutually exclusive (throws 400; use one or the other)
- Select: SPECIFIC_ATTRIBUTES requires ProjectionExpression (throws 400 if missing)
- Select: ALL_ATTRIBUTES is incompatible with ProjectionExpression (throws 400 if both specified)

Returns all items from a table or index (full table scan). Supports filtering and parallel segment scanning.
X-Amz-Target: DynamoDB_20120810.Scan
| Parameter | Type | Required | Description |
|---|---|---|---|
TableName | String | Yes | Target table |
IndexName | String | No | Secondary index to scan |
FilterExpression | String | No | Post-scan filter |
ExpressionAttributeNames | Map | No | Name aliases |
ExpressionAttributeValues | Map | No | Value placeholders |
ProjectionExpression | String | No | Attributes to return |
Select | String | No | ALL_ATTRIBUTES, SPECIFIC_ATTRIBUTES, or COUNT |
Limit | Integer | No | Max items per page (capped at 100 items OR 1 MB, whichever comes first) |
ExclusiveStartKey | Map | No | Pagination cursor |
Segment | Integer | No | Segment number for parallel scan |
TotalSegments | Integer | No | Total segments for parallel scan |
ScanFilter | Map | No | Legacy filter (mutually exclusive with FilterExpression) |
ConditionalOperator | String | No | Used with ScanFilter |
Same structure as Query response:
{ "Items": [...], "Count": 10, "ScannedCount": 50, "ConsumedCapacity": { ... }, "LastEvaluatedKey": { ... } }
To scan a table in parallel, split the work across multiple threads/workers:
// Worker 0
{"TableName": "MyTable", "Segment": 0, "TotalSegments": 4}

// Worker 1
{"TableName": "MyTable", "Segment": 1, "TotalSegments": 4}

// Worker 2
{"TableName": "MyTable", "Segment": 2, "TotalSegments": 4}

// Worker 3
{"TableName": "MyTable", "Segment": 3, "TotalSegments": 4}
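Driving those four workers from a thread pool might look like the following sketch. scan_page stands in for an HTTP Scan call; here it is stubbed to return items tagged with their segment number so the merge step is visible.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_scan(scan_page, table: str, total_segments: int):
    """Run one Scan per segment concurrently and merge the item lists."""
    def worker(segment: int):
        return scan_page({"TableName": table,
                          "Segment": segment,
                          "TotalSegments": total_segments})["Items"]
    with ThreadPoolExecutor(max_workers=total_segments) as pool:
        results = pool.map(worker, range(total_segments))  # keeps segment order
    return [item for page in results for item in page]

def stub(req):  # stand-in for the real HTTP call
    return {"Items": [{"seg": {"N": str(req["Segment"])}}]}

items = parallel_scan(stub, "MyTable", 4)
```

In a real client each worker would also run the pagination loop per segment, since a segment can span multiple pages.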
How segments work:
- Segment boundaries are cached in a system table (PHOENIX_DDB_SEGMENT_RANGE) with TTL = 5400 seconds

When scanning a table with a composite primary key (HASH + RANGE) and an ExclusiveStartKey is provided, the scan executes two queries:
1. pk1 = ? AND pk2 > ?
2. pk1 > ?

Results from both queries are merged into a single response.
Validation:

- Segment must be >= 0 and < TotalSegments (throws 400 if out of range)
- FilterExpression and ScanFilter are mutually exclusive (throws 400; use one or the other)
- ProjectionExpression and AttributesToGet are mutually exclusive (throws 400; use one or the other)

Change streams allow you to capture item-level changes (inserts, updates, deletes) from a table. This is implemented using Phoenix's CDC (Change Data Capture) feature.
1. Enable streams on a table (CreateTable or UpdateTable with StreamSpecification)
2. List available streams (ListStreams)
3. Describe a stream to get its shards (DescribeStream)
4. Get a shard iterator for a specific shard (GetShardIterator)
5. Read records using the shard iterator (GetRecords)
6. Continue reading with NextShardIterator until null
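Steps 4-6 of the workflow above amount to a consumption loop. In this sketch get_shard_iterator and get_records stand in for the corresponding HTTP operations and are stubbed; the loop shape is the point.

```python
def drain_shard(get_shard_iterator, get_records, stream_arn, shard_id):
    """Read a shard from TRIM_HORIZON until NextShardIterator is null."""
    it = get_shard_iterator({"StreamArn": stream_arn, "ShardId": shard_id,
                             "ShardIteratorType": "TRIM_HORIZON"})["ShardIterator"]
    records = []
    while it is not None:
        resp = get_records({"ShardIterator": it, "Limit": 50})
        records.extend(resp["Records"])
        it = resp["NextShardIterator"]  # None once the shard is fully consumed
    return records

# Stub: one page of two records, then a closed shard.
pages = iter([
    {"Records": [{"eventName": "INSERT"}, {"eventName": "REMOVE"}],
     "NextShardIterator": "iter-2"},
    {"Records": [], "NextShardIterator": None},
])
records = drain_shard(lambda req: {"ShardIterator": "iter-1"},
                      lambda req: next(pages), "arn", "shard-0")
```

A long-running consumer would also re-list shards periodically, since splits close old shards and open new ones.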
| Type | OldImage | NewImage | Description |
|---|---|---|---|
NEW_IMAGE | No | Yes | Post-modification state only |
OLD_IMAGE | Yes | No | Pre-modification state only |
NEW_AND_OLD_IMAGES | Yes | Yes | Both pre- and post-modification states |
Returns a list of all streams, optionally filtered by table name.
X-Amz-Target: DynamoDB_20120810.ListStreams
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
TableName | String | No | null | Filter streams for a specific table |
ExclusiveStartStreamArn | String | No | null | Pagination cursor |
Limit | Integer | No | 100 | Max streams to return |
{ "Streams": [ { "TableName": "MyTable", "StreamArn": "phoenix/cdc/stream/MyTable/...", "StreamLabel": "2024-01-15T10:30:00Z" } ], "LastEvaluatedStreamArn": "phoenix/cdc/stream/MyTable/..." }
LastEvaluatedStreamArn is present only when the result count equals the limit.
Returns detailed information about a stream including its shards.
X-Amz-Target: DynamoDB_20120810.DescribeStream
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| StreamArn | String | Yes | | Stream ARN from ListStreams |
ExclusiveStartShardId | String | No | null | Pagination cursor for shards |
Limit | Integer | No | 100 | Max shards to return |
{ "StreamDescription": { "StreamArn": "phoenix/cdc/stream/MyTable/...", "TableName": "MyTable", "StreamLabel": "2024-01-15T10:30:00Z", "StreamViewType": "NEW_AND_OLD_IMAGES", "CreationRequestDateTime": 1700000000.000, "KeySchema": [ {"AttributeName": "id", "KeyType": "HASH"} ], "StreamStatus": "ENABLED", "Shards": [ { "ShardId": "partition-1", "ParentShardId": "parent-partition-0", "SequenceNumberRange": { "StartingSequenceNumber": "170000000000000", "EndingSequenceNumber": "170100000099999" } } ], "LastEvaluatedShardId": "partition-1" } }
Notes:

- StreamStatus is ENABLED while the stream is active
- EndingSequenceNumber is only present for closed shards (after a split)
- LastEvaluatedShardId is present only when the shard count equals the limit

Gets a shard iterator for reading records from a specific position in a shard.
X-Amz-Target: DynamoDB_20120810.GetShardIterator
| Parameter | Type | Required | Description |
|---|---|---|---|
StreamArn | String | Yes | Stream ARN |
ShardId | String | Yes | Shard/partition ID |
ShardIteratorType | String | Yes | Where to start reading (see below) |
SequenceNumber | String | Conditional | Required for AT_SEQUENCE_NUMBER and AFTER_SEQUENCE_NUMBER |
ShardIteratorType values:
| Type | Description |
|---|---|
TRIM_HORIZON | Start from the oldest available record in the shard |
LATEST | Start from new records written after this call |
AT_SEQUENCE_NUMBER | Start at the exact sequence number specified |
AFTER_SEQUENCE_NUMBER | Start at the record after the specified sequence number |
{ "ShardIterator": "shardIterator/tableName/cdcObject/streamType/shardId/12345" }
The shard iterator is an encoded string containing: table name, CDC object name, stream type, shard ID, and starting sequence number.
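Given the '/'-delimited format described above, a decoder could look like this. The field names follow the description in the text; treating the token as a plain split-on-slash string is an assumption about an opaque implementation detail, so real clients should not parse iterators.

```python
def decode_shard_iterator(token: str) -> dict:
    """Split a shard-iterator token into its described components."""
    prefix, table, cdc_object, stream_type, shard_id, seq = token.split("/")
    if prefix != "shardIterator":
        raise ValueError(f"unexpected iterator prefix: {prefix}")
    return {"table": table, "cdcObject": cdc_object, "streamType": stream_type,
            "shardId": shard_id, "sequenceNumber": seq}

info = decode_shard_iterator(
    "shardIterator/tableName/cdcObject/streamType/shardId/12345")
```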
Reads change records from a shard using a shard iterator.
X-Amz-Target: DynamoDB_20120810.GetRecords
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| ShardIterator | String | Yes | - | Shard iterator from GetShardIterator or previous GetRecords |
| Limit | Integer | No | 50 | Max records to return (capped at 50 records or 1 MB, whichever comes first) |
```json
{
  "Records": [
    {
      "eventName": "INSERT",
      "dynamodb": {
        "StreamViewType": "NEW_AND_OLD_IMAGES",
        "SequenceNumber": "170000000000001",
        "ApproximateCreationDateTime": 1700000000.123,
        "Keys": { "id": { "S": "user-123" } },
        "NewImage": {
          "id": { "S": "user-123" },
          "name": { "S": "John Doe" },
          "age": { "N": "30" }
        },
        "SizeBytes": 256
      }
    },
    {
      "eventName": "MODIFY",
      "dynamodb": {
        "StreamViewType": "NEW_AND_OLD_IMAGES",
        "SequenceNumber": "170000000000002",
        "ApproximateCreationDateTime": 1700000001.456,
        "Keys": { "id": { "S": "user-123" } },
        "OldImage": {
          "id": { "S": "user-123" },
          "name": { "S": "John Doe" },
          "age": { "N": "30" }
        },
        "NewImage": {
          "id": { "S": "user-123" },
          "name": { "S": "John Doe" },
          "age": { "N": "31" }
        },
        "SizeBytes": 512
      }
    },
    {
      "eventName": "REMOVE",
      "dynamodb": {
        "StreamViewType": "NEW_AND_OLD_IMAGES",
        "SequenceNumber": "170000000000003",
        "ApproximateCreationDateTime": 1700000002.789,
        "Keys": { "id": { "S": "user-456" } },
        "OldImage": {
          "id": { "S": "user-456" },
          "name": { "S": "Jane" }
        },
        "SizeBytes": 128
      },
      "userIdentity": {
        "Type": "Service",
        "PrincipalId": "phoenix/hbase"
      }
    }
  ],
  "NextShardIterator": "shardIterator/tableName/cdcObject/streamType/shardId/170000000000004"
}
```
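A consumer typically calls GetRecords in a loop, feeding each response's NextShardIterator into the next call until it comes back null (the shard is closed and fully consumed). A minimal sketch of that loop, with the result record and the fetch function as stand-ins for a real client call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of draining one shard. GetRecordsResult and the injected
// getRecords function are illustrative stand-ins, not the real client API.
public final class ShardDrainer {

    public record GetRecordsResult(List<String> records, String nextShardIterator) {}

    public static List<String> drain(String firstIterator,
                                     Function<String, GetRecordsResult> getRecords) {
        List<String> all = new ArrayList<>();
        String iterator = firstIterator;
        while (iterator != null) {
            GetRecordsResult page = getRecords.apply(iterator);
            all.addAll(page.records());
            iterator = page.nextShardIterator(); // null => end of shard
        }
        return all;
    }

    public static void main(String[] args) {
        // Fake two-page shard: it0 -> it1 -> end
        List<String> out = drain("it0", it ->
                it.equals("it0")
                        ? new GetRecordsResult(List.of("r1", "r2"), "it1")
                        : new GetRecordsResult(List.of("r3"), null));
        System.out.println(out);
    }
}
```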
Event types:
| eventName | Meaning | OldImage | NewImage |
|---|---|---|---|
| INSERT | New item created | Absent | Present |
| MODIFY | Existing item updated | Present | Present |
| REMOVE | Item deleted | Present | Absent |
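The images actually present on a record are the intersection of what the event type produces (the table above) and what the stream's StreamViewType is configured to carry. A sketch of that rule; the enum and method names are illustrative:

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative helper: which images appear for a given event and view type.
public final class StreamImages {
    public enum Image { OLD, NEW }

    public static Set<Image> imagesFor(String viewType, String eventName) {
        // Images the event itself produces (per the event-type table)
        Set<Image> byEvent = switch (eventName) {
            case "INSERT" -> EnumSet.of(Image.NEW);
            case "MODIFY" -> EnumSet.of(Image.OLD, Image.NEW);
            case "REMOVE" -> EnumSet.of(Image.OLD);
            default -> throw new IllegalArgumentException("Unknown event: " + eventName);
        };
        // Images the stream is configured to carry
        Set<Image> byView = switch (viewType) {
            case "NEW_IMAGE" -> EnumSet.of(Image.NEW);
            case "OLD_IMAGE" -> EnumSet.of(Image.OLD);
            case "NEW_AND_OLD_IMAGES" -> EnumSet.of(Image.OLD, Image.NEW);
            default -> throw new IllegalArgumentException("Unknown view type: " + viewType);
        };
        byEvent.retainAll(byView);
        return byEvent;
    }

    public static void main(String[] args) {
        System.out.println(imagesFor("NEW_AND_OLD_IMAGES", "MODIFY"));
    }
}
```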
Special fields:
- userIdentity: only present for TTL-based deletions (automatic expiry); indicates the deletion was performed by the system rather than a user.
- NextShardIterator: null when the shard is closed and all records have been consumed. Use this to detect the end of a shard.

Image inclusion depends on StreamViewType:
- NEW_IMAGE: only NewImage is included
- OLD_IMAGE: only OldImage is included
- NEW_AND_OLD_IMAGES: both are included (when applicable, based on event type)

Authentication is optional and can be enabled by configuring a credential store implementation.
Set the credential store class in the server configuration:
```
phoenix.ddb.rest.auth.credential.store.class=com.example.MyCredentialStore
```
The AccessKeyAuthFilter supports three ways to provide credentials (checked in order):
| Method | Header | Format |
|---|---|---|
| AWS SigV4 | Authorization | AWS4-HMAC-SHA256 Credential=AKID/... (extracts access key ID only) |
| AccessKeyId format | Authorization | AccessKeyId=AKID |
| Custom header | X-Access-Key-Id | AKID |
On each request the filter:
1. Extracts the access key ID from one of the supported headers
2. Looks it up in the configured CredentialStore
3. If credentials are found, sets userName and accessKeyId as request attributes and proceeds
4. Otherwise, rejects the request with 403 Forbidden

To implement custom authentication, create a class that implements:
```java
public interface CredentialStore {
    UserCredentials getCredentials(String accessKeyId);
}
```
The CredentialStore can use any storage mechanism: database, file, LDAP, Vault, etc.
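For illustration, here is a minimal in-memory store sketch. The real UserCredentials class ships with phoenix-ddb-rest; the record below is a stand-in, and whether null signals a missing user is an assumption of this sketch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory credential store, keyed by access key ID.
public class InMemoryCredentialStore {

    // Stand-in for the library's UserCredentials type.
    public record UserCredentials(String userName, String accessKeyId) {}

    private final Map<String, UserCredentials> byAccessKeyId = new ConcurrentHashMap<>();

    public void register(String userName, String accessKeyId) {
        byAccessKeyId.put(accessKeyId, new UserCredentials(userName, accessKeyId));
    }

    // Unknown keys yield no credentials; the auth filter then rejects the request.
    public UserCredentials getCredentials(String accessKeyId) {
        return byAccessKeyId.get(accessKeyId);
    }

    public static void main(String[] args) {
        InMemoryCredentialStore store = new InMemoryCredentialStore();
        store.register("alice", "AKIDEXAMPLE");
        System.out.println(store.getCredentials("AKIDEXAMPLE"));
    }
}
```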
When phoenix.ddb.rest.auth.credential.store.class is not configured, the auth filter is not registered and all requests are allowed without authentication. The AWS SDK still requires credentials to be set (use dummy values).
All errors are returned as HTTP 400 Bad Request with JSON body:
```json
{
  "__type": "com.amazonaws.dynamodb.v20120810#<ExceptionType>",
  "message": "<Error description>"
}
```
| Exception Type | Constant | When Thrown |
|---|---|---|
| ValidationException | com.amazonaws.dynamodb.v20120810#ValidationException | Invalid request parameters, unsupported operations |
| ResourceNotFoundException | com.amazonaws.dynamodb.v20120810#ResourceNotFoundException | Table does not exist |
| ConditionalCheckFailedException | com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException | Condition expression evaluated to false |
| ResourceInUseException | com.amazonaws.dynamodb.v20120810#ResourceInUseException | Table already exists (on CreateTable) |
When ReturnValuesOnConditionCheckFailure is set to ALL_OLD, the error response includes the existing item:
```json
{
  "__type": "com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException",
  "message": "The conditional request failed",
  "Item": {
    "id": { "S": "user-123" },
    "name": { "S": "Existing Name" },
    ...
  }
}
```
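Clients that work with the raw error body can recover the exception name from the "__type" field by taking everything after the "#". A small sketch (class name is illustrative):

```java
// Illustrative helper: extract the exception name from a "__type" value
// such as "com.amazonaws.dynamodb.v20120810#ValidationException".
public final class DdbErrors {

    public static String exceptionName(String type) {
        int idx = type.indexOf('#');
        return idx >= 0 ? type.substring(idx + 1) : type;
    }

    public static void main(String[] args) {
        System.out.println(
                exceptionName("com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException"));
    }
}
```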
```
bin/phoenix-adapters rest start -p <port> -z <zk-quorum>
```
| Flag | Description | Default |
|---|---|---|
| -p <port> | HTTP listen port | 8842 |
| -z <zk-quorum> | ZooKeeper quorum for Phoenix connection | From HBase config or ZOO_KEEPER_QUORUM env var |
| Property | Default | Description |
|---|---|---|
| phoenix.ddb.rest.port | 8842 | HTTP listen port |
| phoenix.ddb.rest.host | 0.0.0.0 | Bind address |
| phoenix.ddb.zk.quorum | (from HBase config) | ZooKeeper quorum string |
| phoenix.ddb.rest.threads.max | 125 | Max Jetty thread pool size |
| phoenix.ddb.rest.threads.min | 2 | Min Jetty thread pool size |
| phoenix.ddb.rest.task.queue.size | -1 (unbounded) | Thread pool task queue size |
| phoenix.ddb.rest.thread.idle.timeout | 60000 (60s) | Thread idle timeout (ms) |
| phoenix.ddb.rest.http.idle.timeout | 30000 (30s) | HTTP connection idle timeout (ms) |
| phoenix.ddb.rest.http.header.cache.size | 65534 | HTTP header cache size |
| phoenix.ddb.rest.connector.accept.queue.size | -1 (default) | TCP accept queue size |
| phoenix.ddb.rest.http.allow.options.method | true | Allow HTTP OPTIONS method |
| phoenix.ddb.rest.connection.cleanup-interval | 10000 (10s) | Connection cache cleanup interval (ms) |
| phoenix.ddb.rest.connection.max-idletime | 600000 (10min) | Max connection idle time (ms) |
| phoenix.ddb.rest.support.proxyuser | false | Enable proxy user support |
| phoenix.ddb.rest.auth.credential.store.class | (none) | Auth credential store class |
| phoenix.ddb.rest.dns.interface | default | DNS interface for hostname resolution |
| phoenix.ddb.rest.dns.nameserver | default | DNS nameserver |
```
IS_STRICT_TTL=false
UPDATE_CACHE_FREQUENCY=60000
phoenix.max.lookback.age.seconds=97200
hbase.hregion.majorcompaction=172800000
org.apache.hadoop.hbase.index.lazy.post_batch.write=true
```
| Variable | Description |
|---|---|
| ZOO_KEEPER_QUORUM | ZooKeeper quorum (alternative to -z flag) |
| JAVA_HOME | Path to Java installation |
| PHOENIX_ADAPTERS_HOME | Installation root directory |
| PHOENIX_ADAPTERS_CONF_DIR | Configuration directory |
| PHOENIX_ADAPTERS_LOG_DIR | Log directory |
| PHOENIX_REST_HEAPSIZE | JVM max heap size (e.g., 2g) |
| PHOENIX_REST_OFFHEAPSIZE | JVM max off-heap size (e.g., 1g) |
| PHOENIX_REST_OPTS | Additional JVM options |
| PHOENIX_DDB_REST_OPTS | Additional JVM options for REST server |
The server exposes JMX metrics at http://<host>:<port>/jmx (no authentication required even when auth is enabled).
Each API operation tracks:
- <Operation>SuccessTime -- duration in milliseconds for successful calls
- <Operation>FailureTime -- duration in milliseconds for failed calls

| Feature | Status |
|---|---|
| UPDATED_OLD return value (UpdateItem) | Not supported (throws 400); use ALL_OLD instead |
| UPDATED_NEW return value (UpdateItem) | Not supported (throws 400); use ALL_NEW instead |
| Disabling streams | Not supported once enabled |
| Continuous Backups / PITR | Stub only (always returns DISABLED) |
| Transactions (TransactWriteItems, TransactGetItems) | Not implemented |
| PartiQL (ExecuteStatement, BatchExecuteStatement) | Not implemented |
| Table auto-scaling | Not applicable |
| Global Tables | Not applicable |
| DynamoDB Accelerator (DAX) | Not applicable |
| On-demand backup/restore | Not applicable |
| Export to S3 | Not applicable |
| Aspect | AWS DynamoDB | Phoenix-Adapters |
|---|---|---|
| Table status on create | Transitions CREATING -> ACTIVE | Immediately ACTIVE |
| Index deletion | Index is dropped | Index is disabled (ALTER INDEX ... DISABLE) and dropped asynchronously later |
| Billing mode | PAY_PER_REQUEST or PROVISIONED | Always reports PROVISIONED (no actual billing) |
| Consumed capacity | Actual capacity units | Hardcoded to {ReadCapacityUnits: 1.0, WriteCapacityUnits: 1.0, CapacityUnits: 2.0} |
| Query/Scan limit | Up to 1 MB per page | Capped at 100 items or 1 MB, whichever comes first |
| Stream shard iterators | Expire after 15 minutes | No automatic expiry |
| Item storage | Native DynamoDB format | BSON document in a single Phoenix column |
| Consistency | Eventual and strong (strong for local indexes) | Depends on Phoenix/HBase configuration |
Key attributes must be of type S (String), N (Number), or B (Binary).

| # | Category | Operation | X-Amz-Target |
|---|---|---|---|
| 1 | DDL | CreateTable | DynamoDB_20120810.CreateTable |
| 2 | DDL | DeleteTable | DynamoDB_20120810.DeleteTable |
| 3 | DDL | DescribeTable | DynamoDB_20120810.DescribeTable |
| 4 | DDL | ListTables | DynamoDB_20120810.ListTables |
| 5 | DDL | UpdateTable | DynamoDB_20120810.UpdateTable |
| 6 | DDL | UpdateTimeToLive | DynamoDB_20120810.UpdateTimeToLive |
| 7 | DDL | DescribeTimeToLive | DynamoDB_20120810.DescribeTimeToLive |
| 8 | DDL | DescribeContinuousBackups | DynamoDB_20120810.DescribeContinuousBackups |
| 9 | DML | PutItem | DynamoDB_20120810.PutItem |
| 10 | DML | UpdateItem | DynamoDB_20120810.UpdateItem |
| 11 | DML | DeleteItem | DynamoDB_20120810.DeleteItem |
| 12 | DML | BatchWriteItem | DynamoDB_20120810.BatchWriteItem |
| 13 | DQL | GetItem | DynamoDB_20120810.GetItem |
| 14 | DQL | BatchGetItem | DynamoDB_20120810.BatchGetItem |
| 15 | DQL | Query | DynamoDB_20120810.Query |
| 16 | DQL | Scan | DynamoDB_20120810.Scan |
| 17 | Stream | ListStreams | DynamoDB_20120810.ListStreams |
| 18 | Stream | DescribeStream | DynamoDB_20120810.DescribeStream |
| 19 | Stream | GetShardIterator | DynamoDB_20120810.GetShardIterator |
| 20 | Stream | GetRecords | DynamoDB_20120810.GetRecords |
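Since every operation is a POST to the single root endpoint selected by the X-Amz-Target header, a raw request can be constructed without any AWS SDK at all. A sketch using the JDK HTTP client; the localhost endpoint and the Content-Type value (the JSON media type DynamoDB clients typically send) are assumptions here:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: a raw ListTables request against the REST server's single
// POST endpoint, routed by the X-Amz-Target header.
public final class DdbRequestExample {

    public static HttpRequest listTablesRequest(String endpoint) {
        return HttpRequest.newBuilder(URI.create(endpoint))
                // Assumed media type; real SDKs send application/x-amz-json-1.0
                .header("Content-Type", "application/x-amz-json-1.0")
                .header("X-Amz-Target", "DynamoDB_20120810.ListTables")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();
    }

    public static void main(String[] args) {
        // 8842 is the server's default port; the host is illustrative.
        HttpRequest request = listTablesRequest("http://localhost:8842/");
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending the request (for example with java.net.http.HttpClient) returns the JSON response body for the routed operation.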