docs: update the DORA metric definition docs
diff --git a/docs/Metrics/CFR.md b/docs/Metrics/CFR.md
index 6f05a8e..eb2255b 100644
--- a/docs/Metrics/CFR.md
+++ b/docs/Metrics/CFR.md
@@ -1,5 +1,5 @@
 ---
-title: "DORA - Change Failure Rate(WIP)"
+title: "DORA - Change Failure Rate"
 description: >
   DORA - Change Failure Rate
 sidebar_position: 21
@@ -12,20 +12,20 @@
 Unlike Deployment Frequency and Lead Time for Changes, which measure throughput, Change Failure Rate measures the stability and quality of software delivery. A high CFR reflects a poor end-user experience, as production failures are relatively frequent.
 
 ## Which dashboard(s) does it exist in
-N/A
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
 
 
 ## How is it calculated?
 The number of failures divided by the number of deployments. For example, if there are five deployments in a day and one causes a failure, that is a 20% change failure rate.
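That arithmetic can be sketched in Python (an illustrative check only; DevLake itself computes this in SQL):

```python
def change_failure_rate(deployments: int, failures: int) -> float:
    """Fraction of deployments that caused a production failure."""
    if deployments == 0:
        raise ValueError("no deployments in the period")
    return failures / deployments

# five deployments in a day, one causing a failure -> 20%
rate = change_failure_rate(5, 1)
print(f"{rate:.0%}")
```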
 
-As you can see, there is not much distinction between performance benchmarks for CFR:
+Below are the benchmarks for different development teams from Google's report. However, it's difficult to tell which group a team falls into when the team's change failure rate is `18%` or `40%`. Therefore, DevLake provides its own benchmarks to address this problem:
 
-| Groups           | Benchmarks      |
-| -----------------| ----------------|
-| Elite performers | 0%-15%          |
-| High performers  | 16%-30%         |
-| Medium performers| 16%-30%         |
-| Low performers   | 16%-30%         |
+| Groups           | Benchmarks      | DevLake Benchmarks |
+| -----------------| ----------------| -------------------|
+| Elite performers | 0%-15%          | 0%-15%             |
+| High performers  | 16%-30%         | 16%-20%            |
+| Medium performers| 16%-30%         | 21%-30%            |
+| Low performers   | 16%-30%         | > 30%              |
 
 <p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
 
@@ -47,6 +47,102 @@
 - Deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as `Deployments`.
 - Incident configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Incidents`.
 
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of change failure rate, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/cfr-monthly.jpeg)
+
+```sql
+with _deployments as (
+-- get the deployment count each month
+	SELECT
+		date_format(finished_date,'%y/%m') as month,
+		COUNT(distinct id) AS deployment_count
+	FROM
+		cicd_tasks
+	WHERE
+		type = 'DEPLOYMENT'
+		and result = 'SUCCESS'
+	GROUP BY 1
+),
+
+_incidents as (
+-- get the incident count each month
+	SELECT
+		date_format(created_date,'%y/%m') as month,
+		COUNT(distinct id) AS incident_count
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+	GROUP BY 1
+),
+
+_calendar_months as(
+-- deal with the month with no incidents
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month,
+	case 
+		when d.deployment_count is null or i.incident_count is null then 0 
+		else i.incident_count/d.deployment_count end as change_failure_rate
+FROM 
+	_calendar_months cm
+	left join _incidents i on cm.month = i.month
+	left join _deployments d on cm.month = d.month
+ORDER BY 1
+```
+
+If you want to measure which category your team falls into, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/cfr-text.jpeg)
+
+```sql
+with _deployment_count as (
+-- get the deployments finished within the selected time period in the top-right corner
+	SELECT
+		COUNT(distinct id) AS deployment_count
+	FROM
+		cicd_tasks
+	WHERE
+		type = 'DEPLOYMENT'
+		and result = 'SUCCESS'
+		and $__timeFilter(finished_date)
+),
+
+_incident_count as (
+-- get the incidents created within the selected time period in the top-right corner
+	SELECT
+		COUNT(distinct id) AS incident_count
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+		and $__timeFilter(created_date)
+)
+
+SELECT 
+	case 
+		when deployment_count is null or incident_count is null or deployment_count = 0 then NULL 
+		when incident_count/deployment_count <= .15 then "0%-15%"
+		when incident_count/deployment_count <= .20 then "16%-20%"
+		when incident_count/deployment_count <= .30 then "21%-30%"
+		else "> 30%"
+		end as change_failure_rate
+FROM 
+	_deployment_count, _incident_count
+```
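The `CASE` expression in the query above buckets the incident/deployment ratio into DevLake's benchmark groups. A hypothetical Python rendering of the same thresholds (the function name is made up for illustration):

```python
def cfr_benchmark(incident_count, deployment_count):
    """Mirror the CASE expression: map a change failure rate to a
    DevLake benchmark group; None when there are no deployments."""
    if deployment_count == 0:
        return None
    rate = incident_count / deployment_count
    if rate <= 0.15:
        return "0%-15%"
    if rate <= 0.20:
        return "16%-20%"
    if rate <= 0.30:
        return "21%-30%"
    return "> 30%"

print(cfr_benchmark(1, 5))
```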
+
 ## How to improve?
 - Add unit tests for all new features
 - "Shift left", start QA early and introduce more automated tests
diff --git a/docs/Metrics/DeploymentFrequency.md b/docs/Metrics/DeploymentFrequency.md
index 6b31853..90459ad 100644
--- a/docs/Metrics/DeploymentFrequency.md
+++ b/docs/Metrics/DeploymentFrequency.md
@@ -1,5 +1,5 @@
 ---
-title: "DORA - Deployment Frequency(WIP)"
+title: "DORA - Deployment Frequency"
 description: >
   DORA - Deployment Frequency
 sidebar_position: 18
@@ -12,18 +12,20 @@
 Deployment frequency reflects the efficiency of a team's deployment. A team that deploys more frequently can deliver the product faster and meet users' feature requests sooner.
 
 ## Which dashboard(s) does it exist in
-N/A
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
 
 
 ## How is it calculated?
 Deployment frequency is calculated based on the number of deployment days, not the number of deployments, e.g., daily, weekly, monthly, or yearly.
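For instance, counting deployment days rather than individual deployments (the dates below are made up for illustration):

```python
from datetime import date

# three deployments, but only two distinct deployment days
finished_dates = [date(2022, 8, 1), date(2022, 8, 1), date(2022, 8, 3)]
deployment_days = len(set(finished_dates))
print(deployment_days)
```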
 
-| Groups           | Benchmarks                           |
-| -----------------| -------------------------------------|
-| Elite performers | Multiple times a day                 |
-| High performers  | Once a week to once a month          |
-| Medium performers| Once a month to once every six months|
-| Low performers   | Less than once every six months      |
+Below are the benchmarks for different development teams from Google's report. DevLake uses the same benchmarks.
+
+| Groups           | Benchmarks                                    | DevLake Benchmarks                             |
+| -----------------| --------------------------------------------- | ---------------------------------------------- |
+| Elite performers | On-demand (multiple deploys per day)          | On-demand                                      |
+| High performers  | Between once per week and once per month      | Between once per week and once per month       |
+| Medium performers| Between once per month and once every 6 months| Between once per month and once every 6 months |
+| Low performers   | Fewer than once per six months                | Fewer than once per six months                 |
 
 <p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
 
@@ -39,6 +41,128 @@
 
 This metric relies on the deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as deployments.
 
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of deployment count, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/deployment-frequency-monthly.jpeg)
+
+```sql
+with _deployments as (
+-- get the deployment count each month
+	SELECT
+		date_format(finished_date,'%y/%m') as month,
+		COUNT(distinct id) AS deployment_count
+	FROM
+		cicd_tasks
+	WHERE
+		type = 'DEPLOYMENT'
+		and result = 'SUCCESS'
+	GROUP BY 1
+),
+
+_calendar_months as(
+-- deal with the month with no deployments
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month, 
+	case when d.deployment_count is null then 0 else d.deployment_count end as deployment_count
+FROM 
+	_calendar_months cm
+	left join _deployments d on cm.month = d.month
+ORDER BY 1
+```
+
+If you want to measure which category your team falls into, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/deployment-frequency-text.jpeg)
+
+```sql
+with last_few_calendar_months as(
+-- get the last few months within the selected time period in the top-right corner
+	SELECT CAST((SYSDATE()-INTERVAL (H+T+U) DAY) AS date) day
+	FROM ( SELECT 0 H
+			UNION ALL SELECT 100 UNION ALL SELECT 200 UNION ALL SELECT 300
+		) H CROSS JOIN ( SELECT 0 T
+			UNION ALL SELECT  10 UNION ALL SELECT  20 UNION ALL SELECT  30
+			UNION ALL SELECT  40 UNION ALL SELECT  50 UNION ALL SELECT  60
+			UNION ALL SELECT  70 UNION ALL SELECT  80 UNION ALL SELECT  90
+		) T CROSS JOIN ( SELECT 0 U
+			UNION ALL SELECT   1 UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4 UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7 UNION ALL SELECT   8 UNION ALL SELECT   9
+		) U
+	WHERE
+		(SYSDATE()-INTERVAL (H+T+U) DAY) > $__timeFrom()
+),
+
+_days_weeks_deploy as(
+	SELECT
+			date(DATE_ADD(last_few_calendar_months.day, INTERVAL -WEEKDAY(last_few_calendar_months.day) DAY)) as week,
+			MAX(if(deployments.day is not null, 1, 0)) as week_deployed,
+			COUNT(distinct deployments.day) as days_deployed
+	FROM 
+		last_few_calendar_months
+		LEFT JOIN(
+			SELECT
+				DATE(finished_date) AS day,
+				id
+			FROM cicd_tasks
+			WHERE
+				type = 'DEPLOYMENT'
+				and result = 'SUCCESS') deployments ON deployments.day = last_few_calendar_months.day
+	GROUP BY week
+	),
+
+_monthly_deploy as(
+	SELECT
+			date(DATE_ADD(last_few_calendar_months.day, INTERVAL -DAY(last_few_calendar_months.day)+1 DAY)) as month,
+			MAX(if(deployments.day is not null, 1, 0)) as months_deployed
+	FROM 
+		last_few_calendar_months
+		LEFT JOIN(
+			SELECT
+				DATE(finished_date) AS day,
+				id
+			FROM cicd_tasks
+			WHERE
+				type = 'DEPLOYMENT'
+				and result = 'SUCCESS') deployments ON deployments.day = last_few_calendar_months.day
+	GROUP BY month
+	),
+
+_median_number_of_deployment_days_per_week as (
+	SELECT x.days_deployed as median_number_of_deployment_days_per_week from _days_weeks_deploy x, _days_weeks_deploy y
+	GROUP BY x.days_deployed
+	HAVING SUM(SIGN(1-SIGN(y.days_deployed-x.days_deployed)))/COUNT(*) > 0.5
+	LIMIT 1
+),
+
+_median_number_of_deployment_days_per_month as (
+	SELECT x.months_deployed as median_number_of_deployment_days_per_month from _monthly_deploy x, _monthly_deploy y
+	GROUP BY x.months_deployed
+	HAVING SUM(SIGN(1-SIGN(y.months_deployed-x.months_deployed)))/COUNT(*) > 0.5
+	LIMIT 1
+)
+
+SELECT 
+	CASE  
+		WHEN median_number_of_deployment_days_per_week >= 3 THEN 'On-demand'
+		WHEN median_number_of_deployment_days_per_week >= 1 THEN 'Between once per week and once per month'
+		WHEN median_number_of_deployment_days_per_month >= 1 THEN 'Between once per month and once every 6 months'
+		ELSE 'Fewer than once per six months' END AS 'Deployment Frequency'
+FROM _median_number_of_deployment_days_per_week, _median_number_of_deployment_days_per_month
+```
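The self-join with `SUM(SIGN(1-SIGN(y.x-x.x)))/COUNT(*) > 0.5` in the CTEs above is a MySQL idiom for picking a median-like value: the smallest value that at least half of all rows do not exceed. (The SQL has no `ORDER BY` before `LIMIT 1`, so this sketch assumes the smallest qualifying value is the one intended.) A rough Python equivalent of that selection rule:

```python
def sql_style_median(values):
    """Smallest x such that more than half of the values are <= x,
    mirroring the SUM(SIGN(1 - SIGN(y - x)))/COUNT(*) > 0.5 idiom."""
    for x in sorted(set(values)):
        if sum(1 for y in values if y <= x) / len(values) > 0.5:
            return x
    return None

print(sql_style_median([0, 1, 1, 2, 5]))
```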
+
 ## How to improve?
 - Trunk development. Work in small batches and often merge their work into shared trunks.
 - Integrate CI/CD tools for automated deployment
diff --git a/docs/Metrics/LeadTimeForChanges.md b/docs/Metrics/LeadTimeForChanges.md
index b964f20..6183ac9 100644
--- a/docs/Metrics/LeadTimeForChanges.md
+++ b/docs/Metrics/LeadTimeForChanges.md
@@ -1,5 +1,5 @@
 ---
-title: "DORA - Lead Time for Changes(WIP)"
+title: "DORA - Lead Time for Changes"
 description: >
   DORA - Lead Time for Changes
 sidebar_position: 19
@@ -12,7 +12,7 @@
 This metric measures the time it takes a commit to get into production, and reflects the speed of software delivery. A shorter lead time means that your team is more efficient at coding and deploying changes.
 
 ## Which dashboard(s) does it exist in
-N/A
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
 
 
 ## How is it calculated?
@@ -27,14 +27,14 @@
    - Calculate each commit's lead time for changes, which equals `deploy-2`'s deployed_at - the commit's authored_date
    - `Deploy-2`'s lead time for changes = the average lead time for changes of `commit-1`, `commit-2` and `commit-3`
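The per-deployment averaging described above can be sketched as follows (timestamps are hypothetical):

```python
from datetime import datetime

# deploy-2 ships commit-1, commit-2 and commit-3
deployed_at = datetime(2022, 8, 10, 12, 0)
authored = [
    datetime(2022, 8, 10, 10, 0),  # commit-1: 2h before deploy
    datetime(2022, 8, 10, 9, 0),   # commit-2: 3h before deploy
    datetime(2022, 8, 10, 8, 0),   # commit-3: 4h before deploy
]
# each commit's lead time = deployed_at - authored_date, in hours
lead_times_h = [(deployed_at - a).total_seconds() / 3600 for a in authored]
deploy_lead_time_h = sum(lead_times_h) / len(lead_times_h)
print(deploy_lead_time_h)
```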
 
-Below are the benchmarks for different development teams:
+Below are the benchmarks for different development teams from Google's report. However, it's difficult to tell which group a team falls into when the team's median lead time for changes is `between one week and one month`. Therefore, DevLake provides its own benchmarks to address this problem:
 
-| Groups           | Benchmarks                           |
-| -----------------| -------------------------------------|
-| Elite performers | Less than one hour                   |
-| High performers  | Between one day and one week         |
-| Medium performers| Between one month and six months     |
-| Low performers   | More than six months                 |
+| Groups           | Benchmarks                           | DevLake Benchmarks              |
+| -----------------| -------------------------------------| --------------------------------|
+| Elite performers | Less than one hour                   | Less than one hour              |
+| High performers  | Between one day and one week         | Less than one week              |
+| Medium performers| Between one month and six months     | Between one week and six months |
+| Low performers   | More than six months                 | More than six months            |
 
 <p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
 
@@ -49,6 +49,120 @@
 
 This metric relies on the deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as deployments.
 
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of median lead time for changes, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/lead-time-for-changes-monthly.jpeg)
+
+```sql
+with _deployment_change_lead_time as (
+-- to get each deployment's change lead time
+	SELECT
+		ct.id as deployment_id,
+		ct.name as deployment_name,
+		date_format(ct.finished_date,'%y/%m') as month,
+		avg(pr.change_timespan) as change_lead_time_of_a_deployment
+	FROM
+		cicd_tasks ct 
+		join cicd_pipeline_commits cpc on ct.pipeline_id = cpc.pipeline_id
+		join pull_requests pr on cpc.commit_sha = pr.merge_commit_sha
+	WHERE
+		ct.type = 'DEPLOYMENT'
+		and ct.result = 'SUCCESS'
+		and $__timeFilter(ct.finished_date)
+	GROUP BY 1,2,3
+),
+
+_find_median_clt_each_month as (
+	SELECT 
+		x.month, x.change_lead_time_of_a_deployment
+	from _deployment_change_lead_time x join _deployment_change_lead_time y on x.month = y.month
+	WHERE x.change_lead_time_of_a_deployment is not null and y.change_lead_time_of_a_deployment is not null
+	GROUP BY x.month, x.change_lead_time_of_a_deployment
+	HAVING SUM(SIGN(1-SIGN(y.change_lead_time_of_a_deployment-x.change_lead_time_of_a_deployment)))/COUNT(*) > 0.5
+),
+
+_find_clt_rank_each_month as (
+	SELECT
+		*,
+		rank() over(PARTITION BY month ORDER BY change_lead_time_of_a_deployment) as _rank 
+	FROM
+		_find_median_clt_each_month
+),
+
+_clt as (
+	SELECT
+		month,
+		change_lead_time_of_a_deployment as med_change_lead_time
+	from _find_clt_rank_each_month
+	WHERE _rank = 1
+),
+
+_calendar_months as(
+-- deal with months with no deployments
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month,
+	case 
+		when _clt.med_change_lead_time is null then 0 
+		else _clt.med_change_lead_time/60 end as med_change_lead_time_in_hour
+FROM 
+	_calendar_months cm
+	left join _clt on cm.month = _clt.month
+ORDER BY 1
+```
+
+If you want to measure which category your team falls into, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/lead-time-for-changes-text.jpeg)
+
+```sql
+with _deployment_change_lead_time as (
+-- get one deployment's change lead time
+	SELECT
+		ct.id as deployment_id,
+		ct.name as deployment_name,
+		ct.finished_date as deployed_at,
+		avg(pr.change_timespan) as change_lead_time_of_a_deployment
+	FROM
+		cicd_tasks ct 
+		join cicd_pipeline_commits cpc on ct.pipeline_id = cpc.pipeline_id
+		join pull_requests pr on cpc.commit_sha = pr.merge_commit_sha
+	WHERE
+		ct.type = 'DEPLOYMENT'
+		and ct.result = 'SUCCESS'
+		and $__timeFilter(ct.finished_date)
+	GROUP BY 1,2,3
+),
+
+_median_change_lead_time as (
+	SELECT x.change_lead_time_of_a_deployment as median_change_lead_time from _deployment_change_lead_time x, _deployment_change_lead_time y
+	WHERE x.change_lead_time_of_a_deployment is not null and y.change_lead_time_of_a_deployment is not null
+	GROUP BY x.change_lead_time_of_a_deployment
+	HAVING SUM(SIGN(1-SIGN(y.change_lead_time_of_a_deployment-x.change_lead_time_of_a_deployment)))/COUNT(*) > 0.5
+	LIMIT 1
+)
+
+SELECT 
+	CASE
+		WHEN median_change_lead_time < 60 THEN "Less than one hour"
+		WHEN median_change_lead_time < 7 * 24 * 60 THEN "Less than one week"
+		WHEN median_change_lead_time < 180 * 24 * 60 THEN "Between one week and six months"
+		ELSE "More than six months"
+	END as median_change_lead_time
+FROM _median_change_lead_time
+```
+
 ## How to improve?
 - Break requirements into smaller, more manageable deliverables
 - Optimize the code review process
diff --git a/docs/Metrics/MTTR.md b/docs/Metrics/MTTR.md
index 690c4ef..d2fb407 100644
--- a/docs/Metrics/MTTR.md
+++ b/docs/Metrics/MTTR.md
@@ -1,7 +1,7 @@
 ---
-title: "DORA - Mean Time to Restore Service"
+title: "DORA - Median Time to Restore Service"
 description: >
-  DORA - Mean Time to Restore Service
+  DORA - Median Time to Restore Service
 sidebar_position: 20
 ---
 
@@ -12,7 +12,7 @@
 This metric is essential to measure the disaster control capability of your team and the robustness of the software.
 
 ## Which dashboard(s) does it exist in
-N/A
+DORA dashboard. See [live demo](https://grafana-lake.demo.devlake.io/grafana/d/qNo8_0M4z/dora?orgId=1).
 
 
 ## How is it calculated?
@@ -20,14 +20,14 @@
 
 If three incidents happened in the given date range, lasting 1 hour, 2 hours and 3 hours respectively, your MTTR will be: (1 + 2 + 3) / 3 = 2 hours.
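As a quick check of that arithmetic:

```python
# durations of the three incidents, in hours
incident_hours = [1, 2, 3]
mttr = sum(incident_hours) / len(incident_hours)
print(mttr)
```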
 
-Below are the benchmarks for different development teams:
+Below are the benchmarks for different development teams from Google's report. However, it's difficult to tell which group a team falls into when the team's median time to restore service is `between one week and six months`. Therefore, DevLake provides its own benchmarks to address this problem:
 
-| Groups           | Benchmarks                           |
-| -----------------| -------------------------------------|
-| Elite performers | Less than one hour                   |
-| High performers  | Less one day                         |
-| Medium performers| Between one day and one week         |
-| Low performers   | More than six months                 |
+| Groups           | Benchmarks                           | DevLake Benchmarks             |
+| -----------------| -------------------------------------| -------------------------------|
+| Elite performers | Less than one hour                   | Less than one hour             |
+| High performers  | Less than one day                    | Less than one day              |
+| Medium performers| Between one day and one week         | Between one day and one week   |
+| Low performers   | More than six months                 | More than one week             |
 
 <p><i>Source: 2021 Accelerate State of DevOps, Google</i></p>
 
@@ -49,6 +49,109 @@
 - Deployment configuration in Jenkins, GitLab or GitHub transformation rules to let DevLake know what CI builds/jobs can be regarded as `Deployments`.
 - Incident configuration in Jira, GitHub or TAPD transformation rules to let DevLake know what issues can be regarded as `Incidents`.
 
+<b>SQL Queries</b>
+
+If you want to measure the monthly trend of median time to restore service, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/mttr-monthly.jpeg)
+
+```sql
+with _incidents as (
+-- get each incident's lead time in minutes, by month
+	SELECT
+		date_format(created_date,'%y/%m') as month,
+		cast(lead_time_minutes as signed) as lead_time_minutes
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+),
+
+_find_median_mttr_each_month as (
+	SELECT 
+		x.*
+	from _incidents x join _incidents y on x.month = y.month
+	WHERE x.lead_time_minutes is not null and y.lead_time_minutes is not null
+	GROUP BY x.month, x.lead_time_minutes
+	HAVING SUM(SIGN(1-SIGN(y.lead_time_minutes-x.lead_time_minutes)))/COUNT(*) > 0.5
+),
+
+_find_mttr_rank_each_month as (
+	SELECT
+		*,
+		rank() over(PARTITION BY month ORDER BY lead_time_minutes) as _rank 
+	FROM
+		_find_median_mttr_each_month
+),
+
+_mttr as (
+	SELECT
+		month,
+		lead_time_minutes as med_time_to_resolve
+	from _find_mttr_rank_each_month
+	WHERE _rank = 1
+),
+
+_calendar_months as(
+-- deal with the month with no incidents
+	SELECT date_format(CAST((SYSDATE()-INTERVAL (month_index) MONTH) AS date), '%y/%m') as month
+	FROM ( SELECT 0 month_index
+			UNION ALL SELECT   1  UNION ALL SELECT   2 UNION ALL SELECT   3
+			UNION ALL SELECT   4  UNION ALL SELECT   5 UNION ALL SELECT   6
+			UNION ALL SELECT   7  UNION ALL SELECT   8 UNION ALL SELECT   9
+			UNION ALL SELECT   10 UNION ALL SELECT  11
+		) month_index
+	WHERE (SYSDATE()-INTERVAL (month_index) MONTH) > SYSDATE()-INTERVAL 6 MONTH	
+)
+
+SELECT 
+	cm.month,
+	case 
+		when m.med_time_to_resolve is null then 0 
+		else m.med_time_to_resolve/60 end as med_time_to_resolve_in_hour
+FROM 
+	_calendar_months cm
+	left join _mttr m on cm.month = m.month
+ORDER BY 1
+```
+
+If you want to measure which category your team falls into, as shown in the picture below, run the following SQL in Grafana.
+
+![](/img/Metrics/mttr-text.jpeg)
+
+```sql
+with _incidents as (
+-- get the incidents created within the selected time period in the top-right corner
+	SELECT
+		cast(lead_time_minutes as signed) as lead_time_minutes
+	FROM
+		issues
+	WHERE
+		type = 'INCIDENT'
+		and $__timeFilter(created_date)
+),
+
+_median_mttr as (
+	SELECT 
+		x.lead_time_minutes as med_time_to_resolve
+	from _incidents x, _incidents y
+	WHERE x.lead_time_minutes is not null and y.lead_time_minutes is not null
+	GROUP BY x.lead_time_minutes
+	HAVING SUM(SIGN(1-SIGN(y.lead_time_minutes-x.lead_time_minutes)))/COUNT(*) > 0.5
+	LIMIT 1
+)
+
+SELECT 
+	CASE
+		WHEN med_time_to_resolve < 60 THEN "Less than one hour"
+		WHEN med_time_to_resolve < 24 * 60 THEN "Less than one day"
+		WHEN med_time_to_resolve < 7 * 24 * 60 THEN "Between one day and one week"
+		ELSE "More than one week"
+	END as med_time_to_resolve
+FROM 
+	_median_mttr
+```
+
 ## How to improve?
 - Use automated tools to quickly report failure
 - Prioritize recovery when a failure happens
diff --git a/static/img/Metrics/cfr-monthly.jpeg b/static/img/Metrics/cfr-monthly.jpeg
new file mode 100644
index 0000000..63f6309
--- /dev/null
+++ b/static/img/Metrics/cfr-monthly.jpeg
Binary files differ
diff --git a/static/img/Metrics/cfr-text.jpeg b/static/img/Metrics/cfr-text.jpeg
new file mode 100644
index 0000000..b0556cc
--- /dev/null
+++ b/static/img/Metrics/cfr-text.jpeg
Binary files differ
diff --git a/static/img/Metrics/deployment-frequency-monthly.jpeg b/static/img/Metrics/deployment-frequency-monthly.jpeg
new file mode 100644
index 0000000..673cae9
--- /dev/null
+++ b/static/img/Metrics/deployment-frequency-monthly.jpeg
Binary files differ
diff --git a/static/img/Metrics/deployment-frequency-text.jpeg b/static/img/Metrics/deployment-frequency-text.jpeg
new file mode 100644
index 0000000..ac0c725
--- /dev/null
+++ b/static/img/Metrics/deployment-frequency-text.jpeg
Binary files differ
diff --git a/static/img/Metrics/lead-time-for-changes-monthly.jpeg b/static/img/Metrics/lead-time-for-changes-monthly.jpeg
new file mode 100644
index 0000000..3d92e0f
--- /dev/null
+++ b/static/img/Metrics/lead-time-for-changes-monthly.jpeg
Binary files differ
diff --git a/static/img/Metrics/lead-time-for-changes-text.jpeg b/static/img/Metrics/lead-time-for-changes-text.jpeg
new file mode 100644
index 0000000..6e1ae4e
--- /dev/null
+++ b/static/img/Metrics/lead-time-for-changes-text.jpeg
Binary files differ
diff --git a/static/img/Metrics/mttr-monthly.jpeg b/static/img/Metrics/mttr-monthly.jpeg
new file mode 100644
index 0000000..4c1c0fd
--- /dev/null
+++ b/static/img/Metrics/mttr-monthly.jpeg
Binary files differ
diff --git a/static/img/Metrics/mttr-text.jpeg b/static/img/Metrics/mttr-text.jpeg
new file mode 100644
index 0000000..1733622
--- /dev/null
+++ b/static/img/Metrics/mttr-text.jpeg
Binary files differ