Add archetypes (#184)

diff --git a/CONTRIBUTE.md b/CONTRIBUTE.md
index 84ffc20..2c1254b 100644
--- a/CONTRIBUTE.md
+++ b/CONTRIBUTE.md
@@ -77,7 +77,33 @@
 
 ### How to add a new blogpost
 
-In order to add a new blogpost, create a markdown file in `<ROOT DIRECTORY>/landing-pages/site/content/<LANGUAGE VERSION>/blog/<filename>.md`.
+To add a new blogpost with pre-filled frontmatter, run the following command in `<ROOT DIRECTORY>/landing-pages/site`:
+
+    hugo new blog/my-new-blogpost.md
+
+That will create a markdown file `<ROOT DIRECTORY>/landing-pages/site/content/<LANGUAGE VERSION>/blog/my-new-blogpost.md`
+with the following content:
+
+    ---
+    title: "My New Blogpost"
+    linkTitle: "My New Blogpost"
+    author: "Your Name"
+    twitter: "Your Twitter ID (optional, remove if not needed)"
+    github: "Your Github ID (optional, remove if not needed)"
+    linkedin: "Your LinkedIn ID (optional, remove if not needed)"
+    description: "Description"
+    tags: []
+    date: "2019-11-19"
+    draft: true
+    ---
+
+Below frontmatter, put your blogpost content.
+
+When you finish writing your blogpost, remember to **remove `draft: true`** from the frontmatter.
+
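+To preview your draft locally before publishing, you can run Hugo's development server with drafts enabled:
+
+    hugo server -D
+
+The `-D` (`--buildDrafts`) flag includes content marked `draft: true` in the local build.
+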
+---
+
+To add a new blogpost manually, create a markdown file in `<ROOT DIRECTORY>/landing-pages/site/content/<LANGUAGE VERSION>/blog/<filename>.md`.
 The filename will also serve as URL for your blogpost.
 
 Then, **at the top of the file**, add frontmatter in following format:
@@ -96,10 +122,39 @@
 
 Below frontmatter, put your blogpost content.
 
-
 ### How to add a new case study
 
-In order to add a new case study, create a markdown file in `<ROOT DIRECTORY>/landing-pages/site/content/<LANGUAGE VERSION>/use-cases/<filename>.md`.
+To add a new case study with pre-filled frontmatter, run the following command in `<ROOT DIRECTORY>/landing-pages/site`:
+
+    hugo new use-cases/my-use-case.md
+
+That will create a markdown file `<ROOT DIRECTORY>/landing-pages/site/content/<LANGUAGE VERSION>/use-cases/my-use-case.md`
+with the following content:
+
+    ---
+    title: "My Use Case"
+    linkTitle: "My Use Case"
+    quote:
+        text: "Quote text"
+        author: "Quote's author"
+    logo: "logo-name-in-static-icons-directory.svg"
+    draft: true
+    ---
+
+    ##### What was the problem?
+    text
+
+    ##### How did Apache Airflow help to solve this problem?
+    text
+
+    ##### What are the results?
+    text
+
+When you finish writing your case study, remember to **remove `draft: true`** from the frontmatter.
+
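+As with blogposts, you can preview a draft case study locally by running `hugo server -D`.
+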
+---
+
+To add a new case study manually, create a markdown file in `<ROOT DIRECTORY>/landing-pages/site/content/<LANGUAGE VERSION>/use-cases/<filename>.md`.
 The filename will also serve as URL for the case study.
 
 Then, **at the top of the file**, add frontmatter in following format:
@@ -124,6 +179,8 @@
     #### What are the results?
     <text>
 
+---
+
 **Important** - put the logo file in `<ROOT DIRECTORY>/landing-pages/site/static/icons/` directory. Then, in the frontmatter,
 refer to it just by filename.
 
diff --git a/landing-pages/site/archetypes/blog.md b/landing-pages/site/archetypes/blog.md
new file mode 100644
index 0000000..09708c4
--- /dev/null
+++ b/landing-pages/site/archetypes/blog.md
@@ -0,0 +1,12 @@
+---
+title: "{{ replace .Name "-" " " | title }}"
+linkTitle: "{{ replace .Name "-" " " | title }}"
+author: "Your Name"
+twitter: "Your Twitter ID (optional, remove if not needed)"
+github: "Your Github ID (optional, remove if not needed)"
+linkedin: "Your LinkedIn ID (optional, remove if not needed)"
+description: "Description"
+tags: []
+date: "{{ now.Format "2006-01-02" }}"
+draft: true
+---
diff --git a/landing-pages/site/archetypes/use-cases.md b/landing-pages/site/archetypes/use-cases.md
new file mode 100644
index 0000000..a55989a
--- /dev/null
+++ b/landing-pages/site/archetypes/use-cases.md
@@ -0,0 +1,18 @@
+---
+title: "{{ replace .Name "-" " " | title }}"
+linkTitle: "{{ replace .Name "-" " " | title }}"
+quote:
+    text: "Quote text"
+    author: "Quote's author"
+logo: "logo-name-in-static-icons-directory.svg"
+draft: true
+---
+
+##### What was the problem?
+text
+
+##### How did Apache Airflow help to solve this problem?
+text
+
+##### What are the results?
+text
diff --git a/landing-pages/site/content/en/case-studies/example-case10.md b/landing-pages/site/content/en/case-studies/example-case10.md
deleted file mode 100644
index 6ea82d7..0000000
--- a/landing-pages/site/content/en/case-studies/example-case10.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Example 9"
-linkTitle: "Example 9"
-quote:
-    text: "A great ecosystem and community that comes together to address about any batch data pipeline need."
-    author: "Austin Benett, CTO at Spotify"
-logo_path: "icons/dish-logo.svg"
----
-
-##### What was the problem?
-We faced increasing complexity managing lengthy crontabs with scheduling being an issue, this required carefully planning timing due to resource constraints, usage patterns, and especially custom code needed for retry logic.  In the last case, having to verify success of previous jobs and/or steps prior to running the next.  Furthermore, time to results is important, but we were increasingly relying on buffers for processing, where things were effectively sitting idle and not processing, waiting for the next stage.
-
-##### How did Apache Airflow help to solve this problem?
-Relying on community built and existing hooks and operators to the majority of cloud services we use has allowed us to focus on business outcomes.
-
-##### What are the results?
-Airflow helps us manage many of our pain-points, letting us benefit from the overall ecosystem and
-community.  We are able to reduce time-to-end delivery of data products by being event-driven in our
-processing flows (in our first usage, for example, we were able to take out over 2 hours - on average - of various
-waiting between stages).  Furthermore, we are able to arrive at and iterate on products quicker as a result of
-not needing as much custom or roll-our-own solutions.  For Our code base is smaller and simpler, it is easier to
-follow, and to a large extent our DAGs serve as sufficient documentation for new contributors to understand
-what is going on.
diff --git a/landing-pages/site/content/en/case-studies/example-case11.md b/landing-pages/site/content/en/case-studies/example-case11.md
deleted file mode 100644
index 6ea82d7..0000000
--- a/landing-pages/site/content/en/case-studies/example-case11.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Example 9"
-linkTitle: "Example 9"
-quote:
-    text: "A great ecosystem and community that comes together to address about any batch data pipeline need."
-    author: "Austin Benett, CTO at Spotify"
-logo_path: "icons/dish-logo.svg"
----
-
-##### What was the problem?
-We faced increasing complexity managing lengthy crontabs with scheduling being an issue, this required carefully planning timing due to resource constraints, usage patterns, and especially custom code needed for retry logic.  In the last case, having to verify success of previous jobs and/or steps prior to running the next.  Furthermore, time to results is important, but we were increasingly relying on buffers for processing, where things were effectively sitting idle and not processing, waiting for the next stage.
-
-##### How did Apache Airflow help to solve this problem?
-Relying on community built and existing hooks and operators to the majority of cloud services we use has allowed us to focus on business outcomes.
-
-##### What are the results?
-Airflow helps us manage many of our pain-points, letting us benefit from the overall ecosystem and
-community.  We are able to reduce time-to-end delivery of data products by being event-driven in our
-processing flows (in our first usage, for example, we were able to take out over 2 hours - on average - of various
-waiting between stages).  Furthermore, we are able to arrive at and iterate on products quicker as a result of
-not needing as much custom or roll-our-own solutions.  For Our code base is smaller and simpler, it is easier to
-follow, and to a large extent our DAGs serve as sufficient documentation for new contributors to understand
-what is going on.
diff --git a/landing-pages/site/content/en/case-studies/example-case12.md b/landing-pages/site/content/en/case-studies/example-case12.md
deleted file mode 100644
index 6ea82d7..0000000
--- a/landing-pages/site/content/en/case-studies/example-case12.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Example 9"
-linkTitle: "Example 9"
-quote:
-    text: "A great ecosystem and community that comes together to address about any batch data pipeline need."
-    author: "Austin Benett, CTO at Spotify"
-logo_path: "icons/dish-logo.svg"
----
-
-##### What was the problem?
-We faced increasing complexity managing lengthy crontabs with scheduling being an issue, this required carefully planning timing due to resource constraints, usage patterns, and especially custom code needed for retry logic.  In the last case, having to verify success of previous jobs and/or steps prior to running the next.  Furthermore, time to results is important, but we were increasingly relying on buffers for processing, where things were effectively sitting idle and not processing, waiting for the next stage.
-
-##### How did Apache Airflow help to solve this problem?
-Relying on community built and existing hooks and operators to the majority of cloud services we use has allowed us to focus on business outcomes.
-
-##### What are the results?
-Airflow helps us manage many of our pain-points, letting us benefit from the overall ecosystem and
-community.  We are able to reduce time-to-end delivery of data products by being event-driven in our
-processing flows (in our first usage, for example, we were able to take out over 2 hours - on average - of various
-waiting between stages).  Furthermore, we are able to arrive at and iterate on products quicker as a result of
-not needing as much custom or roll-our-own solutions.  For Our code base is smaller and simpler, it is easier to
-follow, and to a large extent our DAGs serve as sufficient documentation for new contributors to understand
-what is going on.
diff --git a/landing-pages/site/content/en/case-studies/example-case13.md b/landing-pages/site/content/en/case-studies/example-case13.md
deleted file mode 100644
index 6ea82d7..0000000
--- a/landing-pages/site/content/en/case-studies/example-case13.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Example 9"
-linkTitle: "Example 9"
-quote:
-    text: "A great ecosystem and community that comes together to address about any batch data pipeline need."
-    author: "Austin Benett, CTO at Spotify"
-logo_path: "icons/dish-logo.svg"
----
-
-##### What was the problem?
-We faced increasing complexity managing lengthy crontabs with scheduling being an issue, this required carefully planning timing due to resource constraints, usage patterns, and especially custom code needed for retry logic.  In the last case, having to verify success of previous jobs and/or steps prior to running the next.  Furthermore, time to results is important, but we were increasingly relying on buffers for processing, where things were effectively sitting idle and not processing, waiting for the next stage.
-
-##### How did Apache Airflow help to solve this problem?
-Relying on community built and existing hooks and operators to the majority of cloud services we use has allowed us to focus on business outcomes.
-
-##### What are the results?
-Airflow helps us manage many of our pain-points, letting us benefit from the overall ecosystem and
-community.  We are able to reduce time-to-end delivery of data products by being event-driven in our
-processing flows (in our first usage, for example, we were able to take out over 2 hours - on average - of various
-waiting between stages).  Furthermore, we are able to arrive at and iterate on products quicker as a result of
-not needing as much custom or roll-our-own solutions.  For Our code base is smaller and simpler, it is easier to
-follow, and to a large extent our DAGs serve as sufficient documentation for new contributors to understand
-what is going on.
diff --git a/landing-pages/site/content/en/case-studies/example-case14.md b/landing-pages/site/content/en/case-studies/example-case14.md
deleted file mode 100644
index 6ea82d7..0000000
--- a/landing-pages/site/content/en/case-studies/example-case14.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Example 9"
-linkTitle: "Example 9"
-quote:
-    text: "A great ecosystem and community that comes together to address about any batch data pipeline need."
-    author: "Austin Benett, CTO at Spotify"
-logo_path: "icons/dish-logo.svg"
----
-
-##### What was the problem?
-We faced increasing complexity managing lengthy crontabs with scheduling being an issue, this required carefully planning timing due to resource constraints, usage patterns, and especially custom code needed for retry logic.  In the last case, having to verify success of previous jobs and/or steps prior to running the next.  Furthermore, time to results is important, but we were increasingly relying on buffers for processing, where things were effectively sitting idle and not processing, waiting for the next stage.
-
-##### How did Apache Airflow help to solve this problem?
-Relying on community built and existing hooks and operators to the majority of cloud services we use has allowed us to focus on business outcomes.
-
-##### What are the results?
-Airflow helps us manage many of our pain-points, letting us benefit from the overall ecosystem and
-community.  We are able to reduce time-to-end delivery of data products by being event-driven in our
-processing flows (in our first usage, for example, we were able to take out over 2 hours - on average - of various
-waiting between stages).  Furthermore, we are able to arrive at and iterate on products quicker as a result of
-not needing as much custom or roll-our-own solutions.  For Our code base is smaller and simpler, it is easier to
-follow, and to a large extent our DAGs serve as sufficient documentation for new contributors to understand
-what is going on.
diff --git a/landing-pages/site/content/en/case-studies/example-case15.md b/landing-pages/site/content/en/case-studies/example-case15.md
deleted file mode 100644
index 6ea82d7..0000000
--- a/landing-pages/site/content/en/case-studies/example-case15.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Example 9"
-linkTitle: "Example 9"
-quote:
-    text: "A great ecosystem and community that comes together to address about any batch data pipeline need."
-    author: "Austin Benett, CTO at Spotify"
-logo_path: "icons/dish-logo.svg"
----
-
-##### What was the problem?
-We faced increasing complexity managing lengthy crontabs with scheduling being an issue, this required carefully planning timing due to resource constraints, usage patterns, and especially custom code needed for retry logic.  In the last case, having to verify success of previous jobs and/or steps prior to running the next.  Furthermore, time to results is important, but we were increasingly relying on buffers for processing, where things were effectively sitting idle and not processing, waiting for the next stage.
-
-##### How did Apache Airflow help to solve this problem?
-Relying on community built and existing hooks and operators to the majority of cloud services we use has allowed us to focus on business outcomes.
-
-##### What are the results?
-Airflow helps us manage many of our pain-points, letting us benefit from the overall ecosystem and
-community.  We are able to reduce time-to-end delivery of data products by being event-driven in our
-processing flows (in our first usage, for example, we were able to take out over 2 hours - on average - of various
-waiting between stages).  Furthermore, we are able to arrive at and iterate on products quicker as a result of
-not needing as much custom or roll-our-own solutions.  For Our code base is smaller and simpler, it is easier to
-follow, and to a large extent our DAGs serve as sufficient documentation for new contributors to understand
-what is going on.
diff --git a/landing-pages/site/content/en/case-studies/example-case16.md b/landing-pages/site/content/en/case-studies/example-case16.md
deleted file mode 100644
index 6ea82d7..0000000
--- a/landing-pages/site/content/en/case-studies/example-case16.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Example 9"
-linkTitle: "Example 9"
-quote:
-    text: "A great ecosystem and community that comes together to address about any batch data pipeline need."
-    author: "Austin Benett, CTO at Spotify"
-logo_path: "icons/dish-logo.svg"
----
-
-##### What was the problem?
-We faced increasing complexity managing lengthy crontabs with scheduling being an issue, this required carefully planning timing due to resource constraints, usage patterns, and especially custom code needed for retry logic.  In the last case, having to verify success of previous jobs and/or steps prior to running the next.  Furthermore, time to results is important, but we were increasingly relying on buffers for processing, where things were effectively sitting idle and not processing, waiting for the next stage.
-
-##### How did Apache Airflow help to solve this problem?
-Relying on community built and existing hooks and operators to the majority of cloud services we use has allowed us to focus on business outcomes.
-
-##### What are the results?
-Airflow helps us manage many of our pain-points, letting us benefit from the overall ecosystem and
-community.  We are able to reduce time-to-end delivery of data products by being event-driven in our
-processing flows (in our first usage, for example, we were able to take out over 2 hours - on average - of various
-waiting between stages).  Furthermore, we are able to arrive at and iterate on products quicker as a result of
-not needing as much custom or roll-our-own solutions.  For Our code base is smaller and simpler, it is easier to
-follow, and to a large extent our DAGs serve as sufficient documentation for new contributors to understand
-what is going on.