Merge pull request #1255 from merico-dev/release-v0.8.0

chore: release v0.8.0

DevLake


English | 中文

What is DevLake?

DevLake brings your DevOps data into one practical, customized, extensible view. With our open-source product, you can ingest, analyze, and visualize data from an ever-growing list of developer tools.

DevLake is designed for developer teams looking to make better sense of their development process and to bring a more data-driven approach to their own practices. You can ask DevLake many questions regarding your development process. Just connect and query.

Get started with just a few clicks

Why DevLake?

  1. Gain a comprehensive understanding of the software development lifecycle and dig out workflow bottlenecks
  2. Review team iteration performance in a timely fashion, with rapid feedback for agile adjustment
  3. Quickly build scenario-based data dashboards and drill down to the root cause of problems

What can be accomplished with DevLake?

  1. Collect DevOps performance data across the whole process
  2. Share an abstraction layer with similar tools to output standardized performance data
  3. Use 20+ built-in performance metrics with drill-down analysis capability
  4. Run custom SQL analysis and build scenario-based data views via drag and drop
  5. Integrate new data sources quickly thanks to a flexible architecture and plugin design

See Demo

Click here to see the demo. The demo is based on data from this repo.
Username/Password: test/test


Data Sources We Currently Support

Below is a list of data source plugins used to collect & enrich data from specific sources. Each has a README.md file with basic setup, troubleshooting, and metrics info.

For more information on building a new data source plugin, see Build a Plugin.

| Section      | Section Info                                       | Docs |
| ------------ | -------------------------------------------------- | ---- |
| Jira         | Summary, Data & Metrics, Configuration, Plugin API | Link |
| GitLab       | Summary, Data & Metrics, Configuration, Plugin API | Link |
| Jenkins      | Summary, Data & Metrics, Configuration, Plugin API | Link |
| GitHub       | Summary, Data & Metrics, Configuration, Plugin API | Link |
| GitExtractor | Summary, Data & Metrics, Configuration, Plugin API | Link |
| RefDiff      | Summary, Data & Metrics, Configuration, Plugin API | Link |


Setup Guide

There are three ways to set up DevLake: user setup, developer setup, and cloud setup.

User setup

  • If you only plan to run the product locally, this is the ONLY section you should need.
  • Commands written like this are to be run in your terminal.

Required Packages to Install

  • Docker
  • docker-compose

NOTE: After installing Docker, you may need to run the Docker application and restart your terminal

Commands to run in your terminal

IMPORTANT: DevLake doesn't support database schema migration yet. Upgrading an existing instance is likely to break it, so we recommend deploying a new instance instead.

  1. Download docker-compose.yml and env.example from the latest release page into a folder

  2. Rename env.example to .env

  3. Start Docker on your machine, then run docker-compose up -d to start the services.

  4. Visit localhost:4000 to set up the configuration.

    • Navigate to the desired plugin pages on the Integrations page
    • Enter the required information for the plugins you intend to use.
    • Please reference the following for more details on how to configure each one: -> Jira -> GitLab -> Jenkins -> GitHub
    • Submit the form to update the values by clicking the Save Connection button on each form page
    • DevLake takes a while to fully boot up. If config-ui complains that the API is unreachable, please wait a few seconds and try refreshing the page.
    • To collect this repo for a quick preview, provide a GitHub personal token on the Data Integrations / GitHub page.
  5. Visit localhost:4000/create-pipeline to RUN a Pipeline and trigger data collection.

    Pipeline runs can be initiated from the new “Create Run” interface. Simply enable the data source providers you wish to collect from, and specify the data you want to collect, for instance the Project ID for GitLab and the repository name for GitHub.

    Once a valid pipeline configuration has been created, press Create Run to start the pipeline. After the pipeline starts, you will be automatically redirected to the Pipeline Activity screen to monitor collection activity.

    Pipelines are accessible from the main menu of the config-ui for easy access.

    • Manage All Pipelines http://localhost:4000/pipelines
    • Create Pipeline RUN http://localhost:4000/create-pipeline
    • Track Pipeline Activity http://localhost:4000/pipelines/activity/[RUN_ID]

    For advanced use cases and complex pipelines, use the Raw JSON API to manually initiate a run with cURL or a graphical API tool such as Postman (a hedged cURL sketch follows this list). POST the following request to the DevLake API Endpoint.

    [
        [
            {
                "plugin": "github",
                "options": {
                    "repo": "lake",
                    "owner": "merico-dev"
                }
            }
        ]
    ]
    

    Please refer to the wiki page How to trigger data collection.

  6. Click the View Dashboards button when done (username: admin, password: admin). The button appears on the Trigger Collection page once data collection has finished.
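
For reference, here is a minimal cURL sketch of the Raw JSON API call described in step 5. The endpoint path (/pipelines) and port (8080) are assumptions, not confirmed by this README; consult the wiki page linked above for the authoritative request format.

    # Endpoint path and port are assumptions -- adjust to your deployment.
    curl -s -X POST http://localhost:8080/pipelines \
      -H 'Content-Type: application/json' \
      -d '[[{"plugin": "github", "options": {"repo": "lake", "owner": "merico-dev"}}]]'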

Set up a cron job

To synchronize data periodically, we provide lake-cli for easily sending data collection requests, along with a cron job to trigger the CLI tool on a schedule.
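
For example, the pipeline request from the previous section could be re-posted nightly straight from cron. This is a sketch only: the endpoint path and port are the same assumptions as above, and lake-cli offers an equivalent interface (see its documentation for exact flags).

    # Run at midnight every day; endpoint path/port are assumptions.
    0 0 * * * curl -s -X POST http://localhost:8080/pipelines -H 'Content-Type: application/json' -d '[[{"plugin": "github", "options": {"repo": "lake", "owner": "merico-dev"}}]]'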


Developer Setup

Requirements

  • Docker
  • Golang v1.17+
  • Make
    • Mac (Already installed)
    • Windows: Download
    • Ubuntu: sudo apt-get install build-essential

How to set up the dev environment

  1. Navigate to where you would like to install this project and clone the repository:

    git clone https://github.com/merico-dev/lake.git
    cd lake
    
  2. Install dependencies for plugins:

  3. Install Go packages

    	go get
    
  4. Copy the sample config file to a new local file:

    cp .env.example .env
    
  5. Update the following variables in the file .env:

    • DB_URL: Replace mysql:3306 with 127.0.0.1:3306
  6. Start the MySQL and Grafana containers:

    Make sure the Docker daemon is running before this step.

    docker-compose up -d mysql grafana
    
  7. Run lake and config UI in dev mode in two separate terminals (a quick API sanity check is sketched after this list):

    # run lake
    make dev
    # run config UI
    make configure-dev
    
  8. Visit the config UI at localhost:4000 to configure data sources.

    • Navigate to the desired plugin pages on the Integrations page
    • Enter the required information for the plugins you intend to use.
    • Please reference the following for more details on how to configure each one: -> Jira -> GitLab -> Jenkins -> GitHub
    • Submit the form to update the values by clicking the Save Connection button on each form page
  9. Visit localhost:4000/create-pipeline to RUN a Pipeline and trigger data collection. This works exactly as described in step 5 of the User setup above, including the Raw JSON API for advanced use cases.

  10. Click the View Dashboards button when done (username: admin, password: admin). The button is shown in the top left.
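
Once both services are up, you can sanity-check the REST API from a third terminal before relying on the config UI. A minimal sketch: the port and the /pipelines path are assumptions (mirroring the pipelines pages above), so adjust both to your configuration.

    # Port and path are assumptions -- a non-error JSON response means the API is up.
    curl -s http://localhost:8080/pipelines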


Cloud setup

If you want to run DevLake in a cloud environment, you can set up DevLake with Tin. See the detailed setup guide.

Disclaimer:

To protect your information, it is critical that users of the Tin hosting set passwords to protect their DevLake applications. We built DevLake as a self-hosted product in part to ensure users have total protection and ownership of their data. While the same remains true for the Tin hosting, this risk can only be eliminated by the end user.

Tests

To run the tests:

make test
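
make test runs the entire suite. While iterating on a single plugin, you can scope the run with plain go test instead; a sketch, assuming the repo's standard Go package layout (the jira path is just an example):

    # Run one plugin's tests verbosely; swap in the package you are working on.
    go test -v ./plugins/jira/...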

Make Contribution

This section lists all the documents that help you contribute to the repo.

Understand the Architecture of DevLake

devlake-architecture

Add a Plugin

plugins/README.md

Add Plugin Metrics

plugins/HOW-TO-ADD-METRICS.md

Contributing Spec

CONTRIBUTING.md

User Guide, Help and more

Grafana

We use Grafana as a visualization tool to build charts for the data stored in our database. Using SQL queries, we can add panels to build, save, and edit customized dashboards.

All the details on provisioning and customizing a dashboard can be found in the Grafana Doc.
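
For instance, you can prototype a panel's SQL directly against the MySQL container before saving it in a Grafana dashboard. A sketch only: the credentials and database name (merico/merico, lake) are assumed from .env.example, and the commits table with its authored_date column is an assumption about the collected schema; verify all of these against your own setup.

    # Credentials, database, table, and column names are assumptions.
    docker-compose exec mysql mysql -umerico -pmerico lake \
      -e "SELECT DATE(authored_date) AS day, COUNT(*) AS commits FROM commits GROUP BY day ORDER BY day;"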

Need help?

Message us on Discord

FAQ

Q: When I run docker-compose up -d I get this error: “qemu: uncaught target signal 11 (Segmentation fault) - core dumped”. How do I fix this?

A: M1 Mac users need to download a specific version of Docker on their machine. You can find it here.

License

This project is licensed under Apache License 2.0 - see the LICENSE file for details.