Comprehensive guide for developing, testing, and contributing to HugeGraph Store.
Required:
Optional (for testing):
```bash
# Clone HugeGraph repository
git clone https://github.com/apache/hugegraph.git
cd hugegraph

# Checkout development branch
git checkout 1.7-rebase
```
Import Project:
Import the project from the hugegraph directory.

Code Style:
To configure IDE code style:

- Ensure EditorConfig support is enabled in your IDE.
- Code style is defined in `.editorconfig` at the repository root.
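For orientation, EditorConfig files look like the fragment below. The values here are illustrative only; the authoritative rules live in the repository's `.editorconfig`.

```ini
# Illustrative EditorConfig fragment — not the project's actual file
root = true

[*.java]
charset = utf-8
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
```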
Run Configuration:
- Main class: `org.apache.hugegraph.store.node.StoreNodeApplication`
- VM options: `-Xms4g -Xmx4g -Dconfig.file=conf/application.yml`
- Working directory: `hugegraph-store/hg-store-dist/target/apache-hugegraph-store-incubating-1.7.0`
- Module: `hg-store-node`

Build entire project:
```bash
# From hugegraph root
mvn clean install -DskipTests
```
Build Store module only:
```bash
# Build hugegraph-struct first (required dependency)
mvn install -pl hugegraph-struct -am -DskipTests

# Build Store
cd hugegraph-store
mvn clean install -DskipTests
```
Build with tests:
mvn clean install
```text
hugegraph-struct (external dependency)
        ↓
hg-store-common
        ↓
  ├─→ hg-store-grpc (proto definitions)
  ├─→ hg-store-rocksdb
        ↓
hg-store-core
        ↓
  ├─→ hg-store-client
  ├─→ hg-store-node
        ↓
  ├─→ hg-store-cli
  ├─→ hg-store-dist
  └─→ hg-store-test
```
Location: hugegraph-store/hg-store-common
Purpose: Shared utilities and query abstractions
Key Packages:
- `buffer`: ByteBuffer utilities
- `constant`: Constants and enums
- `query`: Query abstraction classes
  - `Condition`: Filter conditions
  - `Aggregate`: Aggregation types
  - `QueryCondition`: Query parameters
- `term`: Term matching utilities
- `util`: General utilities

Adding New Utility:
Add the class to the appropriate package (e.g., `util`) and add its tests in `hg-store-test`.

Location: hugegraph-store/hg-store-grpc
Purpose: gRPC protocol definitions
Structure:
```text
hg-store-grpc/
├── src/main/proto/              # Protocol definitions
│   ├── store_session.proto
│   ├── query.proto
│   ├── graphpb.proto
│   ├── store_state.proto
│   ├── store_stream_meta.proto
│   ├── healthy.proto
│   └── store_common.proto
└── target/generated-sources/    # Generated Java code (git-ignored)
```
Generated Code: Excluded from source control and Apache RAT checks
Location: hugegraph-store/hg-store-core
Purpose: Core storage engine logic
Key Classes:
- `HgStoreEngine.java` (~500 lines): manages the PartitionEngine instances
- `PartitionEngine.java` (~300 lines): per-partition Raft engine; delegates storage work to the BusinessHandler
- `HgStoreStateMachine.java` (~400 lines): Raft StateMachine implementation
- `BusinessHandler.java` (interface) / `BusinessHandlerImpl.java` (~800 lines): storage business logic
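The layering among these classes can be sketched with a toy model. All names below are simplified, hypothetical stand-ins, NOT the real HugeGraph Store API: a singleton-style engine owns one partition engine per partition, and each write reaches storage through a state-machine-style apply into the business-handler layer.

```java
import java.util.HashMap;
import java.util.Map;

public class EngineSketch {

    // Stand-in for BusinessHandler/BusinessHandlerImpl: the layer that
    // actually reads and writes storage (RocksDB in the real system).
    public interface Handler {
        void doPut(byte[] key, byte[] value);
        byte[] doGet(byte[] key);
    }

    public static class MapHandler implements Handler {
        private final Map<String, byte[]> data = new HashMap<>();
        public void doPut(byte[] key, byte[] value) { data.put(new String(key), value); }
        public byte[] doGet(byte[] key) { return data.get(new String(key)); }
    }

    // Stand-in for PartitionEngine + HgStoreStateMachine: the real state
    // machine applies committed Raft log entries; here we apply directly.
    public static class Partition {
        private final Handler handler = new MapHandler();
        public void apply(byte[] key, byte[] value) { handler.doPut(key, value); }
        public byte[] get(byte[] key) { return handler.doGet(key); }
    }

    // Stand-in for HgStoreEngine: one shared registry of partition engines.
    private static final Map<Integer, Partition> PARTITIONS = new HashMap<>();

    public static Partition partition(int id) {
        return PARTITIONS.computeIfAbsent(id, k -> new Partition());
    }

    public static void main(String[] args) {
        partition(0).apply("key1".getBytes(), "value1".getBytes());
        System.out.println(new String(partition(0).get("key1".getBytes()))); // value1
    }
}
```

In the real system the apply step only runs after the Raft log entry is committed, which is why reads served from followers can briefly lag the leader.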
Key Packages:
- `business/`: Business logic layer
- `meta/`: Metadata management
- `raft/`: Raft integration
- `pd/`: PD client and integration
- `cmd/`: Command processing
- `snapshot/`: Snapshot management

Location: hugegraph-store/hg-store-client
Purpose: Java client library
Key Classes:
- `HgStoreClient`: Main client interface
- `HgStoreSession`: Session-based operations
- `HgStoreNodeManager`: Connection management
- `HgStoreQuery`: Query builder

Usage: See Integration Guide
Location: hugegraph-store/hg-store-node
Purpose: Store node server
Key Classes:
- `StoreNodeApplication`: Spring Boot main class
- `HgStoreSessionService`: gRPC service implementation
- `HgStoreQueryService`: Query service implementation

Start Server:
```bash
cd hugegraph-store/hg-store-dist/target/apache-hugegraph-store-incubating-1.7.0
bin/start-hugegraph-store.sh
```
Clean build:
mvn clean install -DskipTests
Compile only:
mvn compile
Package distribution:
```bash
mvn clean package -DskipTests

# Output: hg-store-dist/target/apache-hugegraph-store-incubating-<version>.tar.gz
```
Regenerate gRPC stubs (after modifying .proto files):
```bash
cd hugegraph-store/hg-store-grpc
mvn clean compile

# Generated files: target/generated-sources/protobuf/
```
Store tests use Maven profiles (all active by default):
```xml
<profile>
    <id>store-client-test</id>
    <activation><activeByDefault>true</activeByDefault></activation>
</profile>
<profile>
    <id>store-core-test</id>
    <activation><activeByDefault>true</activeByDefault></activation>
</profile>
<profile>
    <id>store-common-test</id>
    <activation><activeByDefault>true</activeByDefault></activation>
</profile>
<profile>
    <id>store-rocksdb-test</id>
    <activation><activeByDefault>true</activeByDefault></activation>
</profile>
<profile>
    <id>store-server-test</id>
    <activation><activeByDefault>true</activeByDefault></activation>
</profile>
<profile>
    <id>store-raftcore-test</id>
    <activation><activeByDefault>true</activeByDefault></activation>
</profile>
```
All tests:
```bash
cd hugegraph-store
mvn test
```
Specific profile:
mvn test -P store-core-test
Specific test class:
mvn test -Dtest=HgStoreEngineTest
Specific test method:
mvn test -Dtest=HgStoreEngineTest#testPartitionCreation
From IntelliJ:
Location: hugegraph-store/hg-store-test/src/main/java (non-standard location)
Packages:
- `client/`: Client library tests
- `common/`: Common utilities tests
- `core/`: Core storage tests
- `raft/`: Raft tests
- `snapshot/`: Snapshot tests
- `store/`: Storage engine tests
- `meta/`: Metadata tests
- `raftcore/`: Raft core tests
- `rocksdb/`: RocksDB tests
- `service/`: Service tests

Base Test Class: BaseTest.java
Example Test Class:
```java
package org.apache.hugegraph.store.core;

import org.apache.hugegraph.store.BaseTest;
import org.junit.Test;

import static org.junit.Assert.*;

public class HgStoreEngineTest extends BaseTest {

    @Test
    public void testEngineCreation() {
        // Arrange
        HgStoreEngineConfig config = HgStoreEngineConfig.builder()
                .dataPath("./test-data")
                .build();

        // Act
        HgStoreEngine engine = HgStoreEngine.getInstance();
        engine.init(config);

        // Assert
        assertNotNull(engine);
        assertTrue(engine.isInitialized());

        // Cleanup
        engine.shutdown();
    }
}
```
Integration Test Example:
```java
@Test
public void testRaftConsensus() throws Exception {
    // Setup 3-node Raft group
    List<PartitionEngine> engines = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
        PartitionEngine engine = createPartitionEngine(i);
        engines.add(engine);
        engine.start();
    }

    // Wait for leader election
    Thread.sleep(2000);

    // Perform write on leader
    PartitionEngine leader = findLeader(engines);
    leader.put("key1".getBytes(), "value1".getBytes());

    // Wait for replication
    Thread.sleep(1000);

    // Verify on all nodes
    for (PartitionEngine engine : engines) {
        byte[] value = engine.get("key1".getBytes());
        assertEquals("value1", new String(value));
    }

    // Cleanup
    for (PartitionEngine engine : engines) {
        engine.stop();
    }
}
```
Generate Coverage Report:
```bash
mvn clean test jacoco:report

# Report: hg-store-test/target/site/jacoco/index.html
```
View in Browser:
open hg-store-test/target/site/jacoco/index.html
Create or edit a .proto file in hg-store-grpc/src/main/proto/:
Example: my_service.proto
```protobuf
syntax = "proto3";

package org.apache.hugegraph.store.grpc;

import "store_common.proto";

service MyService {
    rpc MyOperation(MyRequest) returns (MyResponse);
}

message MyRequest {
    Header header = 1;
    string key = 2;
}

message MyResponse {
    bytes value = 1;
}
```
```bash
cd hg-store-grpc
mvn clean compile

# Generated classes:
# - MyServiceGrpc.java (service stub)
# - MyRequest.java
# - MyResponse.java
```
Create service implementation in hg-store-node/src/main/java/.../service/:
```java
package org.apache.hugegraph.store.node.service;

import com.google.protobuf.ByteString;
import io.grpc.stub.StreamObserver;
import org.apache.hugegraph.store.grpc.MyRequest;
import org.apache.hugegraph.store.grpc.MyResponse;
import org.apache.hugegraph.store.grpc.MyServiceGrpc;

public class MyServiceImpl extends MyServiceGrpc.MyServiceImplBase {

    @Override
    public void myOperation(MyRequest request,
                            StreamObserver<MyResponse> responseObserver) {
        try {
            // Extract request parameters
            String key = request.getKey();

            // Perform operation (delegate to HgStoreEngine)
            byte[] value = performOperation(key);

            // Build response
            MyResponse response = MyResponse.newBuilder()
                    .setValue(ByteString.copyFrom(value))
                    .build();

            // Send response
            responseObserver.onNext(response);
            responseObserver.onCompleted();
        } catch (Exception e) {
            responseObserver.onError(e);
        }
    }

    private byte[] performOperation(String key) {
        // Implementation
        return new byte[0];
    }
}
```
In StoreNodeApplication.java:
```java
@Bean
public Server grpcServer() {
    return ServerBuilder.forPort(grpcPort)
            .addService(new HgStoreSessionService())
            .addService(new HgStoreQueryService())
            .addService(new MyServiceImpl()) // Add new service
            .build();
}
```
Using grpcurl:
```bash
# List services
grpcurl -plaintext localhost:8500 list

# Call method
grpcurl -plaintext -d '{"key": "test"}' localhost:8500 \
    org.apache.hugegraph.store.grpc.MyService/MyOperation
```
Unit Test:
```java
@Test
public void testMyService() {
    // Setup gRPC channel
    ManagedChannel channel = ManagedChannelBuilder
            .forAddress("localhost", 8500)
            .usePlaintext()
            .build();

    // Create stub
    MyServiceGrpc.MyServiceBlockingStub stub =
            MyServiceGrpc.newBlockingStub(channel);

    // Build request
    MyRequest request = MyRequest.newBuilder()
            .setKey("test")
            .build();

    // Call service
    MyResponse response = stub.myOperation(request);

    // Verify
    assertNotNull(response.getValue());

    // Cleanup
    channel.shutdown();
}
```
Debug Store Node in IntelliJ:
Debug with Remote Store:
Start Store with debug port:
```bash
# Edit start-hugegraph-store.sh
JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"

bin/start-hugegraph-store.sh
```
Attach debugger in IntelliJ:
Log Configuration: hg-store-dist/src/assembly/static/conf/log4j2.xml
Enable Debug Logging:
```xml
<!-- Store core -->
<Logger name="org.apache.hugegraph.store" level="DEBUG"/>

<!-- Raft -->
<Logger name="com.alipay.sofa.jraft" level="DEBUG"/>

<!-- RocksDB -->
<Logger name="org.rocksdb" level="DEBUG"/>

<!-- gRPC -->
<Logger name="io.grpc" level="DEBUG"/>
```
Restart to apply:
bin/restart-hugegraph-store.sh
View Logs:
```bash
tail -f logs/hugegraph-store.log
tail -f logs/hugegraph-store.log | grep ERROR
```
Check Raft State:
```bash
# Raft logs location
ls -lh storage/raft/partition-*/log/

# Raft snapshots
ls -lh storage/raft/partition-*/snapshot/
```
Raft Metrics (in code):
```java
// Get Raft node status
RaftNode node = partitionEngine.getRaftNode();
NodeStatus status = node.getNodeStatus();
System.out.println("Term: " + status.getTerm());
System.out.println("State: " + status.getState()); // Leader, Follower, Candidate
System.out.println("Peers: " + status.getPeers());
```
Enable Raft Logging:
<Logger name="com.alipay.sofa.jraft" level="DEBUG"/>
RocksDB Statistics:
```java
// In code
RocksDB db = rocksDBSession.getDb();
String stats = db.getProperty("rocksdb.stats");
System.out.println(stats);
```
Dump RocksDB Data (for inspection):
```bash
# Using ldb tool (included with RocksDB)
ldb --db=storage/rocksdb scan --max_keys=100
```
JVM Profiling (using async-profiler):
```bash
# Download async-profiler
wget https://github.com/jvm-profiling-tools/async-profiler/releases/download/v2.9/async-profiler-2.9-linux-x64.tar.gz
tar -xzf async-profiler-2.9-linux-x64.tar.gz

# Start profiling
./profiler.sh -d 60 -f flamegraph.html $(pgrep -f hugegraph-store)

# View flamegraph
open flamegraph.html
```
Memory Profiling:
```bash
# Heap dump
jmap -dump:format=b,file=heap.bin $(pgrep -f hugegraph-store)

# Analyze with VisualVM or Eclipse MAT
```
Java:
Style rules are defined in .editorconfig.

Example:
```java
public class MyClass {

    private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);

    public void myMethod(String param) {
        if (param == null) {
            throw new IllegalArgumentException("param cannot be null");
        }
        // Implementation
    }
}
```
Format:
```text
<type>(<scope>): <subject>

<body>

<footer>
```
Types:
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation changes
- `refactor`: Code refactoring
- `test`: Test additions or changes
- `chore`: Build or tooling changes

Example:
```text
feat(store): add query aggregation pushdown

Implement COUNT, SUM, MIN, MAX, AVG aggregations at Store level
to reduce network traffic and improve query performance.

Closes #1234
```
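As a sketch of how this header format can be checked mechanically (a hypothetical local helper, e.g. for a git hook — not part of the HugeGraph tooling):

```java
import java.util.regex.Pattern;

public class CommitHeaderCheck {

    // Types mirror the documented list; the (scope) part is treated as optional here.
    private static final Pattern HEADER = Pattern.compile(
            "^(feat|fix|docs|refactor|test|chore)(\\([a-z0-9-]+\\))?: .+$");

    public static boolean isValid(String header) {
        return HEADER.matcher(header).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("feat(store): add query aggregation pushdown")); // true
        System.out.println(isValid("added some stuff")); // false
    }
}
```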
Fork and Clone:
```bash
# Fork on GitHub first, then:
git clone https://github.com/YOUR_USERNAME/hugegraph.git
cd hugegraph
git remote add upstream https://github.com/apache/hugegraph.git
```
Create Branch:
git checkout -b feature-my-feature
Develop and Test:
```bash
# Make changes
# Add tests
mvn clean install  # Ensure all tests pass
```
Check Code Quality:
```bash
# License header check
mvn apache-rat:check

# Code style check
mvn editorconfig:check
```
Commit:
```bash
git add .
git commit -m "feat(store): add new feature"
```
Push and Create PR:
```bash
git push origin feature-my-feature

# Create PR on GitHub
```
Code Review:
Merge:
Adding Dependencies:
When adding third-party dependencies:
- Add the dependency to `pom.xml`
- Add its license file under `install-dist/release-docs/licenses/`
- Update `install-dist/release-docs/LICENSE`
- Update `install-dist/release-docs/NOTICE`
- Update `install-dist/scripts/dependency/known-dependencies.txt`

Run Dependency Check:
```bash
cd install-dist/scripts/dependency
./regenerate_known_dependencies.sh
```
When to Update Docs:
Documentation Location:
- `hugegraph-store/README.md`
- `hugegraph-store/docs/`

Official Documentation:
Community:
Related Projects:
For operational procedures, see Operations Guide.
For production best practices, see Best Practices.