# HugeGraph Docker Deployment

This directory contains Docker Compose files for running HugeGraph:

| File | Description |
|---|---|
| `docker-compose.yml` | Single-node cluster using pre-built images from Docker Hub |
| `docker-compose.dev.yml` | Single-node cluster built from source (for developers) |
| `docker-compose-3pd-3store-3server.yml` | 3-node distributed cluster (PD + Store + Server) |

## Prerequisites

- Docker Engine 20.10+ (or Docker Desktop 4.x+)
- Docker Compose v2 (included in Docker Desktop)
- Memory: allocate at least 12 GB to Docker Desktop (Settings → Resources → Memory). The 3-node cluster runs 9 JVM processes (3 PD + 3 Store + 3 Server), which are memory-intensive. Insufficient memory causes OOM kills that surface as silent Raft failures.

> [!IMPORTANT]
> The 12 GB minimum is for Docker Desktop. On Linux with native Docker, ensure the host has at least 12 GB of free memory.


## Single-Node Setup

Two compose files are available for running a single-node cluster (1 PD + 1 Store + 1 Server):

### Option A: Quick Start (pre-built images)

Uses pre-built images from Docker Hub. Best for end users who want to run HugeGraph quickly.

```bash
cd docker
HUGEGRAPH_VERSION=1.7.0 docker compose up -d
```

- Images: `hugegraph/pd:1.7.0`, `hugegraph/store:1.7.0`, `hugegraph/server:1.7.0`
- `pull_policy: always`: always pulls the specified image tag

> [!NOTE]
> Use release tags (e.g., `1.7.0`) for stable deployments. The `latest` tag is intended for testing or development only.

- PD healthcheck endpoint: `/v1/health`
- Single PD, single Store (`HG_PD_INITIAL_STORE_LIST: store:8500`), single Server
- Server healthcheck endpoint: `/versions`
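The wiring above can be sketched as a compose fragment like the following. This is an illustration only: the service name, healthcheck interval, and retry count are assumptions, not copied from the file — check `docker-compose.yml` for the real definitions.

```yaml
# Illustrative sketch — not the actual docker-compose.yml contents
services:
  pd:
    image: hugegraph/pd:${HUGEGRAPH_VERSION:-1.7.0}
    pull_policy: always
    healthcheck:
      # PD is considered healthy once its REST health endpoint answers
      test: ["CMD", "curl", "-fsS", "http://localhost:8620/v1/health"]
      interval: 10s
      retries: 12
```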

### Option B: Development Build (build from source)

Builds images locally from source Dockerfiles. Best for developers who want to test local changes.

```bash
cd docker
docker compose -f docker-compose.dev.yml up -d
```

- Images: built from source via `build: context: ..` with Dockerfiles
- No `pull_policy`: builds locally, doesn't pull
- Entrypoint scripts are baked into the built image (no volume mounts)
- PD healthcheck endpoint: `/v1/health`
- Otherwise identical env vars and structure to the quickstart file
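As a sketch of what the dev file's `build:` stanza looks like — the Dockerfile path and service name here are assumptions; see `docker-compose.dev.yml` for the real build configuration:

```yaml
# Illustrative sketch — paths are assumptions, not copied from the file
services:
  server:
    build:
      context: ..                              # repository root
      dockerfile: hugegraph-server/Dockerfile  # assumed location
    # no pull_policy: the locally built image is used as-is
```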

### Key Differences

| | docker-compose.yml (quickstart) | docker-compose.dev.yml (dev build) |
|---|---|---|
| Images | Pull from Docker Hub | Build from source |
| Who it's for | End users | Developers |
| `pull_policy` | `always` | not set (build) |

Verify (both options):

```bash
curl http://localhost:8080/versions
```

## 3-Node Cluster Quickstart

```bash
cd docker
HUGEGRAPH_VERSION=1.7.0 docker compose -f docker-compose-3pd-3store-3server.yml up -d

# To stop and remove all data volumes (clean restart)
docker compose -f docker-compose-3pd-3store-3server.yml down -v
```

Startup ordering is enforced via `depends_on` with `condition: service_healthy`:

1. PD nodes start first and must pass healthchecks (`/v1/health`)
2. Store nodes start after all PD nodes are healthy
3. Server nodes start after all Store nodes are healthy

This ensures PD and Store are healthy before the server starts. The server entrypoint still performs a best-effort partition wait after launch, so partition assignment may take a little longer.
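The health-gated ordering described above uses standard Compose syntax; a minimal sketch (service names assumed to match the cluster file):

```yaml
services:
  store0:
    depends_on:
      pd0:
        condition: service_healthy   # wait for PD's /v1/health to pass
  server0:
    depends_on:
      store0:
        condition: service_healthy   # wait for Store's /v1/health to pass
```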

Verify the cluster is healthy:

```bash
# Check PD health
curl http://localhost:8620/v1/health

# Check Store health
curl http://localhost:8520/v1/health

# Check Server (Graph API)
curl http://localhost:8080/versions

# List registered stores via PD
curl http://localhost:8620/v1/stores

# List partitions
curl http://localhost:8620/v1/partitions
```

## Environment Variable Reference

Configuration is injected via environment variables. The old `docker/configs/application-pd*.yml` and `docker/configs/application-store*.yml` files are no longer used.

### PD Environment Variables

| Variable | Required | Default | Maps To (application.yml) | Description |
|---|---|---|---|---|
| `HG_PD_GRPC_HOST` | Yes | – | `grpc.host` | This node's hostname/IP for gRPC |
| `HG_PD_RAFT_ADDRESS` | Yes | – | `raft.address` | This node's Raft address (e.g. `pd0:8610`) |
| `HG_PD_RAFT_PEERS_LIST` | Yes | – | `raft.peers-list` | All PD peers (e.g. `pd0:8610,pd1:8610,pd2:8610`) |
| `HG_PD_INITIAL_STORE_LIST` | Yes | – | `pd.initial-store-list` | Expected stores (e.g. `store0:8500,store1:8500,store2:8500`) |
| `HG_PD_GRPC_PORT` | No | `8686` | `grpc.port` | gRPC server port |
| `HG_PD_REST_PORT` | No | `8620` | `server.port` | REST API port |
| `HG_PD_DATA_PATH` | No | `/hugegraph-pd/pd_data` | `pd.data-path` | Metadata storage path |
| `HG_PD_INITIAL_STORE_COUNT` | No | `1` | `pd.initial-store-count` | Min stores for cluster availability |

Deprecated aliases (still work but log a warning):

| Deprecated | Use Instead |
|---|---|
| `GRPC_HOST` | `HG_PD_GRPC_HOST` |
| `RAFT_ADDRESS` | `HG_PD_RAFT_ADDRESS` |
| `RAFT_PEERS` | `HG_PD_RAFT_PEERS_LIST` |
| `PD_INITIAL_STORE_LIST` | `HG_PD_INITIAL_STORE_LIST` |
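Because `HG_PD_RAFT_PEERS_LIST` must name every PD peer, a quick sanity check before deploying is to count the entries. A minimal bash sketch — the value below is an example, not a required setting:

```bash
# Example value — substitute your own peers list
HG_PD_RAFT_PEERS_LIST="pd0:8610,pd1:8610,pd2:8610"

# Split on commas and count the peers
IFS=',' read -ra peers <<< "$HG_PD_RAFT_PEERS_LIST"
echo "peer count: ${#peers[@]}"   # → peer count: 3
```

For the 3-node cluster the count should be 3 and every entry should use a container hostname, not `localhost`.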

### Store Environment Variables

| Variable | Required | Default | Maps To (application.yml) | Description |
|---|---|---|---|---|
| `HG_STORE_PD_ADDRESS` | Yes | – | `pdserver.address` | PD gRPC addresses (e.g. `pd0:8686,pd1:8686,pd2:8686`) |
| `HG_STORE_GRPC_HOST` | Yes | – | `grpc.host` | This node's hostname (e.g. `store0`) |
| `HG_STORE_RAFT_ADDRESS` | Yes | – | `raft.address` | This node's Raft address (e.g. `store0:8510`) |
| `HG_STORE_GRPC_PORT` | No | `8500` | `grpc.port` | gRPC server port |
| `HG_STORE_REST_PORT` | No | `8520` | `server.port` | REST API port |
| `HG_STORE_DATA_PATH` | No | `/hugegraph-store/storage` | `app.data-path` | Data storage path |

Deprecated aliases (still work but log a warning):

| Deprecated | Use Instead |
|---|---|
| `PD_ADDRESS` | `HG_STORE_PD_ADDRESS` |
| `GRPC_HOST` | `HG_STORE_GRPC_HOST` |
| `RAFT_ADDRESS` | `HG_STORE_RAFT_ADDRESS` |

### Server Environment Variables

| Variable | Required | Default | Maps To | Description |
|---|---|---|---|---|
| `HG_SERVER_BACKEND` | Yes | – | `backend` in `hugegraph.properties` | Storage backend (e.g. `hstore`) |
| `HG_SERVER_PD_PEERS` | Yes | – | `pd.peers` | PD cluster addresses (e.g. `pd0:8686,pd1:8686,pd2:8686`) |
| `STORE_REST` | No | – | Used by `wait-partition.sh` | Store REST endpoint for partition verification (e.g. `store0:8520`) |
| `PASSWORD` | No | – | Enables auth mode | Optional authentication password |

Deprecated aliases (still work but log a warning):

| Deprecated | Use Instead |
|---|---|
| `BACKEND` | `HG_SERVER_BACKEND` |
| `PD_PEERS` | `HG_SERVER_PD_PEERS` |
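Put together, a server service's `environment` section might look like the fragment below. The values are examples for the 3-node cluster, not the file's actual contents:

```yaml
# Illustrative example — values match the 3-node cluster's addressing
services:
  server0:
    environment:
      HG_SERVER_BACKEND: hstore
      HG_SERVER_PD_PEERS: pd0:8686,pd1:8686,pd2:8686
      STORE_REST: store0:8520   # optional: used for partition verification
      # PASSWORD: <secret>      # optional: setting it enables auth mode
```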

## Port Reference

The table below reflects the published host ports in `docker-compose-3pd-3store-3server.yml`. The single-node compose file (`docker-compose.yml`) only publishes the REST/API ports (8620, 8520, 8080) by default.

| Service | Container Port | Host Port | Protocol | Purpose |
|---|---|---|---|---|
| pd0 | 8620 | 8620 | HTTP | REST API |
| pd0 | 8686 | 8686 | gRPC | PD gRPC |
| pd0 | 8610 | – | TCP | Raft (internal only) |
| pd1 | 8620 | 8621 | HTTP | REST API |
| pd1 | 8686 | 8687 | gRPC | PD gRPC |
| pd2 | 8620 | 8622 | HTTP | REST API |
| pd2 | 8686 | 8688 | gRPC | PD gRPC |
| store0 | 8500 | 8500 | gRPC | Store gRPC |
| store0 | 8510 | 8510 | TCP | Raft |
| store0 | 8520 | 8520 | HTTP | REST API |
| store1 | 8500 | 8501 | gRPC | Store gRPC |
| store1 | 8510 | 8511 | TCP | Raft |
| store1 | 8520 | 8521 | HTTP | REST API |
| store2 | 8500 | 8502 | gRPC | Store gRPC |
| store2 | 8510 | 8512 | TCP | Raft |
| store2 | 8520 | 8522 | HTTP | REST API |
| server0 | 8080 | 8080 | HTTP | Graph API |
| server1 | 8080 | 8081 | HTTP | Graph API |
| server2 | 8080 | 8082 | HTTP | Graph API |
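The per-node host offsets in the table come from Compose `ports` mappings of the form `host:container`. For example, pd1's two published ports correspond to entries like:

```yaml
services:
  pd1:
    ports:
      - "8621:8620"   # host 8621 → container REST port 8620
      - "8687:8686"   # host 8687 → container gRPC port 8686
```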

## Healthcheck Endpoints

| Service | Endpoint | Expected |
|---|---|---|
| PD | `GET /v1/health` | `200 OK` |
| Store | `GET /v1/health` | `200 OK` |
| Server | `GET /versions` | `200 OK` with version JSON |

## Troubleshooting

### Containers Exiting or Restarting (OOM Kills)

Symptom: Containers exit with code 137, or restart loops. Raft logs show election timeouts.

Cause: Docker Desktop does not have enough memory. The 9 JVM processes require at least 12 GB.

Fix: Docker Desktop → Settings → Resources → Memory → set to 12 GB or higher. Restart Docker Desktop.

```bash
# Check if containers were OOM killed
docker inspect hg-pd0 | grep -i oom
docker stats --no-stream
```

### Raft Leader Election Failure

Symptom: PD logs show repeated Leader election timeout. Store nodes cannot register.

Cause: PD nodes cannot reach each other on the Raft port (8610), or HG_PD_RAFT_PEERS_LIST is misconfigured.

Fix:

1. Verify all PD containers are running: `docker compose -f docker-compose-3pd-3store-3server.yml ps`
2. Check PD logs: `docker logs hg-pd0`
3. Verify network connectivity: `docker exec hg-pd0 ping pd1`
4. Ensure `HG_PD_RAFT_PEERS_LIST` is identical on all PD nodes
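Step 4 can be checked mechanically: the peers list must be byte-identical on every PD node. The sketch below uses example values; in practice, capture each value with `docker exec <container> printenv HG_PD_RAFT_PEERS_LIST` and compare:

```bash
# Example values — replace with printenv output from each PD container
pd0_peers="pd0:8610,pd1:8610,pd2:8610"
pd1_peers="pd0:8610,pd1:8610,pd2:8610"
pd2_peers="pd0:8610,pd1:8610,pd2:8610"

# Any difference (ordering, whitespace, a missing peer) breaks elections
if [ "$pd0_peers" = "$pd1_peers" ] && [ "$pd1_peers" = "$pd2_peers" ]; then
  echo "peers lists match"
else
  echo "MISMATCH: fix HG_PD_RAFT_PEERS_LIST and recreate the PD containers"
fi
```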

### Partition Assignment Not Completing

Symptom: Server starts but graph operations fail. Store logs show partition not found.

Cause: PD has not finished assigning partitions to stores, or stores did not register successfully.

Fix:

1. Check registered stores: `curl http://localhost:8620/v1/stores`
2. Check partition status: `curl http://localhost:8620/v1/partitions`
3. Wait for partition assignment (can take 1–3 minutes after all stores register)
4. Check server logs for the `wait-partition.sh` script output: `docker logs hg-server0`

### Connection Refused Errors

Symptom: Stores cannot connect to PD, or Server cannot connect to Store.

Cause: Services are using `127.0.0.1` instead of container hostnames, or the `hg-net` bridge network is misconfigured.

Fix: Ensure all `HG_*` env vars use container hostnames (`pd0`, `store0`, etc.), not `127.0.0.1` or `localhost`.