English | 中文
This example demonstrates how to use the Push and Pull modes of Prometheus Pushgateway to monitor a Dubbo-Go application and visualize the data with Grafana.
The monitoring data flow is as follows:
Push Mode: Application (go-client / go-server) -> Prometheus Pushgateway -> Prometheus -> Grafana
Pull Mode: Application (go-client / go-server) -> Prometheus -> Grafana
| Component | Port | Description |
|---|---|---|
| Grafana | 3000 | A dashboard for visualizing metrics. |
| Prometheus | 9090 | Responsible for storing and querying metric data. It pulls data from the Pushgateway. |
| Pushgateway | 9091 | Used to receive metrics pushed from the Dubbo-Go application. |
| go-server | N/A | Dubbo-Go service provider (Provider) example. |
| go-client | N/A | Dubbo-Go service consumer (Consumer) example that continuously calls the server. |
Both client and server use the same configuration method:
```shell
# Pushgateway address (required)
export PUSHGATEWAY_URL="127.0.0.1:9091"
# Job name identifier (required)
export JOB_NAME="dubbo-service"
# Pushgateway authentication username (optional)
export PUSHGATEWAY_USER="username"
# Pushgateway authentication password (optional)
export PUSHGATEWAY_PASS="1234"
# ZooKeeper address (required)
export ZK_ADDRESS="127.0.0.1:2181"
```
```shell
# Use Push mode (default)
go run ./go-client/cmd/main.go
go run ./go-server/cmd/main.go

# Use Pull mode (do not push metrics to Pushgateway)
go run ./go-client/cmd/main.go --push=false
go run ./go-server/cmd/main.go --push=false
```
Please follow the steps below to run this example.
First, adjust the addresses in prometheus_pull.yml, prometheus_push.yml, go-client/cmd/main.go, and go-server/cmd/main.go according to your actual network environment. To run in Push mode, change the Prometheus volume mapping in docker-compose.yml from `- ./prometheus_pull.yml:/etc/prometheus/prometheus.yml` to `- ./prometheus_push.yml:/etc/prometheus/prometheus.yml`, and then restart the services.

Next, start the Grafana, Prometheus, and Pushgateway services. We use docker-compose to do this with a single command.
```shell
# Enter the metrics directory
cd metrics

# Start all monitoring services in the background
docker-compose up -d
```
You can now access the web UI for each service at the following addresses:
- Grafana: http://localhost:3000
- Prometheus: http://localhost:9090
- Pushgateway: http://localhost:9091

In the metrics directory, open a new terminal window and run the server program.
```shell
go run ./go-server/cmd/main.go
```
You will see logs indicating that the server has started successfully and registered its services.
In the metrics directory, open another new terminal window and run the client program. The client will continuously call the server's methods, with random failures to generate monitoring metrics.
```shell
go run ./go-client/cmd/main.go
```
The client will start printing call results while pushing monitoring metrics to the Pushgateway. You can see the pushed metrics on the Pushgateway UI (http://localhost:9091/metrics).
Now that all services are running, let's configure Grafana to display the data.
Open Grafana at http://localhost:3000 (default username/password: admin/admin), then add a Prometheus data source with the URL http://host.docker.internal:9090.

Note: `host.docker.internal` is a special DNS name that allows Docker containers (like Grafana) to reach the host machine's network. You can adjust it according to your actual situation.
Next, import the dashboard: paste the contents of grafana.json into the Import via panel json text box, or click the Upload JSON file button to upload the grafana.json file. After a successful import, you will see a complete Dubbo observability dashboard! The data in the panels (such as QPS, success rate, and latency) will update dynamically as the client continues to make calls.
Enjoy!
Original design purpose of Pushgateway: provide a temporary metric transit point for short-lived processes (batch jobs, cron jobs) to facilitate Prometheus scraping.
Default behavior: Pushgateway does not automatically delete metrics that have been reported but are no longer updated.
That is, once a job reports metrics, even if the job stops, the metrics corresponding to that set of labels (job/instance) will persist.
Implementation Principle:
- The application pushes a `job_pushed_at_seconds` timestamp metric on startup
- For details, see `tools/pgw-cleaner`
Grafana dashboard shows "No Data"

- Verify that the data source URL (http://host.docker.internal:9090) is correct and that the connection test succeeded.
- Open the Prometheus UI (http://localhost:9090) and check the Status -> Targets page to ensure the pushgateway job has a status of UP.
- In the Prometheus UI, query `dubbo_consumer_requests_succeed_total` to confirm that data can be queried.

Cannot connect to host.docker.internal
`host.docker.internal` is a built-in feature of Docker. If this address is not accessible, replace the IP address in metrics/prometheus.yml and the Grafana data source address with your actual IP address.

To install Prometheus in Kubernetes (k8s), please refer to the kube-prometheus project.
Set the service type in prometheus-service.yaml to NodePort.
Add the dubboPodMonitor.yaml file to the manifests directory of kube-prometheus with the following content:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: podmonitor
  labels:
    app: podmonitor
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
      - dubbo-system
  selector:
    matchLabels:
      app-type: dubbo
  podMetricsEndpoints:
    - port: metrics # Reference the port name 'metrics' of the dubbo-app
      path: /prometheus
---
# Role-Based Access Control (RBAC)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dubbo-system
  name: pod-reader
rules:
  - apiGroups: [ "" ]
    resources: [ "pods" ]
    verbs: [ "get", "list", "watch" ]
---
# Role-Based Access Control (RBAC)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: dubbo-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: monitoring
```
Execute `kubectl apply -f Deployment.yaml`.
Open the Prometheus web interface, for example http://localhost:9090/targets, to confirm the targets are being scraped.