
# Local LLM API

## Usage

If you want hugegraph-llm to use a local LLM, configure it as follows.

Run the program:

```bash
python main.py \
    --model_name_or_path "Qwen/Qwen1.5-0.5B-Chat" \
    --device "cuda" \
    --port 7999
```
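
Once the server is up, you can smoke-test the endpoint directly. A minimal sketch, assuming the server exposes an OpenAI-compatible chat completions schema (the `model` and `messages` fields below are assumptions, not confirmed by this README):

```python
import requests

# Hypothetical smoke test against the local endpoint started above.
# Assumes an OpenAI-compatible request/response format.
url = "http://localhost:7999/v1/chat/completions"
payload = {
    "model": "Qwen/Qwen1.5-0.5B-Chat",
    "messages": [{"role": "user", "content": "Hello!"}],
}

resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```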

The `[LLM]` section of `config.ini` can be configured as follows:

```ini
[LLM]
type = local_api
llm_url = http://localhost:7999/v1/chat/completions
```
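
For reference, a client could read this section with Python's standard `configparser` and route requests to the local endpoint. This is only an illustrative sketch, not hugegraph-llm's actual config-loading code:

```python
import configparser

# Illustrative only: hugegraph-llm's real config handling may differ.
config = configparser.ConfigParser()
config.read("config.ini")

llm_type = config.get("LLM", "type")    # "local_api"
llm_url = config.get("LLM", "llm_url")  # "http://localhost:7999/v1/chat/completions"

if llm_type == "local_api":
    print(f"Routing LLM requests to {llm_url}")
```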