delete old versions of static pages
diff --git a/community/cla.md b/community/cla.md
deleted file mode 100644
index 86ba640..0000000
--- a/community/cla.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Druid Contributor License Agreement
-layout: simple_page
-canonical: 'https://druid.apache.org/community/cla'
----
-
-Per Apache policy, all Druid committers are required to have a contributor license agreement (CLA) on file with
-the Apache Software Foundation.
-
-Other contributors are free to submit patches without having a CLA on file. You should only submit patches to the
-project when you intend to license your contribution under the Apache 2.0 license, and have the legal right to do so.
diff --git a/community/index.md b/community/index.md
deleted file mode 100644
index cdfcbe9..0000000
--- a/community/index.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-title: Apache Druid Community
-sectionid: community
-layout: simple_page
-canonical: 'https://druid.apache.org/community/'
----
-
-## Community
-
-Most discussion about Druid happens over [Slack](/community/join-slack?v=1), [GitHub](https://github.com/apache/druid), and the [Apache Dev list](https://lists.apache.org/list.html?dev@druid.apache.org), but those aren't the only ways to interact with the Druid
-community. We also do chat, meetups, and more.
-
-Check out the following resources if you're looking for help, want to discuss Druid development, or just want to stay up to date:
-
-* **Slack:** Many users and committers are present on Apache Druid Slack. Use this link to join and invite others: [https://druid.apache.org/community/join-slack](/community/join-slack?v=1). This is the perfect place to ask for help if you need it!
-* **GitHub:** Star us at [apache/druid](https://github.com/apache/druid) and use this to follow Druid development, raise
-issues, or contribute pull requests. If you're interested in development, please see the [Contributing](#contributing)
-section below for details on our development process.
-* **Development mailing list:** [dev@druid.apache.org](https://lists.apache.org/list.html?dev@druid.apache.org) for
-discussion about project development.
-* **Twitter:** Follow us on Twitter at [@druidio](https://twitter.com/druidio).
-
-Also check out:
-
-* **User mailing list:** [druid-user@googlegroups.com](https://groups.google.com/forum/#!forum/druid-user) for general
-discussion, questions, and announcements.
-* **LinkedIn:** Connect with other Apache Druid professionals in the [LinkedIn group](https://www.linkedin.com/groups/8791983/).
-* **Meetups:** Check out [Apache Druid on meetup.com](https://www.meetup.com/topics/apache-druid/) for links to regular
-meetups in cities all over the world.
-* **StackOverflow:** While the user mailing list is the primary resource for asking questions, if you prefer
-StackOverflow, make sure to tag your question with `druid` or `apache-druid`.
-
-## Getting help
-
-The best place to get a wide variety of help with Druid is the `#druid` channel on the Apache Druid Slack. There is also a Druid user
-Google group, [druid-user@googlegroups.com](https://groups.google.com/forum/#!forum/druid-user), although Slack is the preferred way to get help. You can also report issues and problems, or suggest
-new features, on [GitHub](https://github.com/apache/druid).
-
-Third-party companies also provide commercial support and services for Druid, including:
-
-* [Cloudera](https://www.cloudera.com/)
-* [Datumo](https://datumo.io/)
-* [Deep.BI](https://www.deep.bi/solutions/apache-druid)
-* [Imply](https://imply.io/)
-* [Rill Data](https://www.rilldata.com/)
-
-## Contributing
-
-Druid is a community-led project and we are delighted to receive contributions
-of anything from minor fixes to big new features.
-
-### What to work on
-
-If you have an itch to scratch, then by all means do that! Fixing bugs you run
-into and adding features you need are both immensely helpful.
-
-If you're looking for some starter projects, we maintain a [list of issues](https://github.com/apache/druid/issues?q=is%3Aopen+is%3Aissue+label%3AStarter) suitable
-for new developers.
-
-There are plenty of ways to help beyond writing Druid code. *Code review of pull requests*
-(even if you are not a committer), feature suggestions, reporting bugs, [documentation](/docs/{{ site.druid_stable_version }}/)
-and usability feedback all matter immensely. Another big way to help is
-through [client libraries](/docs/latest/development/libraries.html), which are
-available in a variety of languages. If you develop a new one, we'll be happy
-to include it in the list.
-
-### Getting your changes accepted
-
-Patches to Druid are done through [GitHub pull requests](https://github.com/apache/druid/pulls).
-
-Pull requests require one approval (+1) from an established committer at the code and text (for documentation) level of review. The
-exception is major architectural or API changes: changes to
-
- - HTTP requests and responses (e.g. a new HTTP endpoint)
- - [Interfaces for extensions](/docs/latest/development/modules.html)
- - Server configuration (e.g. altering the behavior of a config property)
- - Emitted metrics
- - Other major changes, judged at the discretion of Druid committers
-
-warrant additional design and compatibility review. Such pull requests require design approvals from three different
-committers (one of whom may be the author of the pull request). For those, it can help to discuss things
-on the Druid development list [dev@druid.apache.org](https://lists.apache.org/list.html?dev@druid.apache.org) or a GitHub issue beforehand.
-
-In general please follow the [contributing guidelines](https://github.com/apache/druid/blob/master/CONTRIBUTING.md)
-when sending in pull requests. This will help review proceed as quickly as
-possible.
-
-### Testing
-
-All Pull Requests are automatically tested on [Travis CI](https://travis-ci.com/apache/druid) on both AMD64 and ARM64 architectures.
-
-### Committers
-
-Committers are collectively responsible for Druid's technical management. This involves
-setting the direction of the project, contributing code, and reviewing code contributed
-by others.
-
-You don't need to be a committer to contribute; pull requests are welcome from anyone.
-
-| Name                                                      | Organization                                       |
-| -----------------------------------------------------     | -------------------------------------------------- |
-| [Abhishek Agarwal](https://github.com/abhishekagarwal87)  | [Imply](https://imply.io/)                         |
-| [Alexander Saydakov](https://github.com/AlexanderSaydakov)| [Verizon Media](https://www.verizonmedia.com/)     |
-| [Atul Mohan](https://github.com/a2l007)                   | [Yahoo](https://www.yahoo.com/)                    |
-| [Benedict Jin](https://github.com/asdf2014)               | [Alibaba](https://www.alibaba.com/)                |
-| [Charles Allen](https://github.com/drcrallen)             | [Snap](https://www.snap.com/en-US/)                |
-| [Chi Cao Minh](https://github.com/ccaominh)               | [Imply](https://imply.io/)                         |
-| [Clint Wylie](https://github.com/clintropolis)            | [Imply](https://imply.io/)                         |
-| [David Glasser](https://github.com/glasser)               | [Apollo GraphQL](https://apollographql.com/)       |
-| [David Lim](https://github.com/dclim)                     | [Imply](https://imply.io/)                         |
-| [Daoyue Gao](https://github.com/gaodayue)                 | [Meituan](http://www.meituan.com/)                 |
-| [Dylan Wylie](https://github.com/Dylan1312)               | [SpotX](https://www.spotx.tv/)                     |
-| [Egor Rashin](https://github.com/egor-ryashin)            | [Rill Data](https://www.rilldata.com/)             |
-| [Eric Tschetter](https://github.com/cheddar)              | [Splunk](https://www.splunk.com/)                  |
-| [Fangjin Yang](https://github.com/fjy)                    | [Imply](https://imply.io/)                         |
-| [Fokko Driesprong](https://github.com/Fokko)              | [GoDataDriven](https://godatadriven.com/)          |
-| [Frank Chen](https://github.com/FrankChen021)             | [Shopee](https://shopee.sg/)                       |
-| [Furkan Kamaci](https://github.com/kamaci)                | [Lagom](https://lagom.ai/)                         |
-| [Gian Merlino](https://github.com/gianm)                  | [Imply](https://imply.io/)                         |
-| [Himanshu Gupta](https://github.com/himanshug)            | [Splunk](https://www.splunk.com/)                  |
-| [Jihoon Son](https://github.com/jihoonson)                | [Imply](https://imply.io/)                         |
-| [Jonathan Wei](https://github.com/jon-wei)                | [Imply](https://imply.io/)                         |
-| [Julian Hyde](https://github.com/julianhyde)              | [Looker](https://looker.com/)                      |
-| [Jun Rao](https://github.com/junrao)                      | [Confluent](https://www.confluent.io/)             |
-| [Kaijian Ding](https://github.com/kaijianding)            | [Alibaba](https://www.alibaba.com/)                |
-| [Kashif Faraz](https://github.com/kfaraz)                 | [Imply](https://imply.io/)                         |
-| [Kurt Young](https://github.com/kurtyoung)                | [Alibaba](https://www.alibaba.com/)                |
-| [Lijin Bin](https://github.com/binlijin)                  | [Alibaba](https://www.alibaba.com/)                |
-| [Lucas Capistrant](https://github.com/capistrant)         | [Target](https://www.target.com/)                  |
-| [Maggie Brewster](https://github.com/mcbrewster)          | [Imply](https://www.imply.io/)                     |
-| [Maxime Beauchemin](https://github.com/mistercrunch)      | [Preset](http://preset.io/)                        |
-| [Maytas Monsereenusorn](https://github.com/maytasm)      | [Imply](https://www.imply.io/)                     |
-| [Michael Schiff](https://github.com/michaelschiff)        | [Adobe](https://www.adobe.com/)                    |
-| [Mingming Qiu](https://github.com/QiuMM)                  | [Bytedance](https://bytedance.com/)                |
-| [Mohamed Slim Bouguerra](https://github.com/b-slim)       | [LinkedIn](https://www.linkedin.com/)              |
-| [Navis Ryu](https://github.com/navis)                     | [SK Telecom](https://www.sktelecom.com/index.html) |
-| [Niketh Sabbineni](https://github.com/niketh)             | [Verizon Media](https://www.verizonmedia.com/)     |
-| [Nishant Bangarwa](https://github.com/nishantmonu51)      | [Rill Data](https://www.rilldata.com/)             |
-| [Parag Jain](https://github.com/pjain1)                   | [Rill Data](https://www.rilldata.com/)             |
-| [P. Taylor Goetz](https://github.com/ptgoetz)             | [EPAM](https://www.epam.com/)                      |
-| [Roman Leventov](https://github.com/leventov)             | [Snap](https://www.snap.com/en-US/)                |
-| [Samarth Jain](https://github.com/samarthjain)            | [Netflix](https://www.netflix.com/)                |
-| [Steve Hetland](https://github.com/sthetland)             | [Imply](https://imply.io/)                         |
-| [Suneet Saldanha](https://github.com/suneet-s)            | [Imply](https://imply.io/)                         |
-| [Surekha Saharan](https://github.com/surekhasaharan)      | [Imply](https://imply.io/)                         |
-| [Vadim Ogievetsky](https://github.com/vogievetsky)        | [Imply](https://imply.io/)                         |
-| [Xavier Léauté](https://github.com/xvrl)                  | [Confluent](https://www.confluent.io/)             |
-| [Xinyu Zhang](https://github.com/zhangxinyu1)             | [Qihoo 360](https://www.360.cn/)                   |
-| [Yue Zhang](https://github.com/zhangyue19921010)          |                                                    |
-| [Zach Sherman](https://github.com/zachjsh)                | [Imply](https://imply.io/)                         |
-
-### Becoming a committer
-
-If you'd like to become a committer, that's great! Please contact one of the
-existing committers for a walk-through of the process. Basically, what we're
-looking for is an interest in ongoing contributions to Druid.
-
-### General committer guidelines
-If you are an official Druid committer then congratulations! You are part of a fantastic group of people. Here are some guidelines to follow to help ensure the Druid project continues to grow and improve:
-
-1. You can merge your own pull request if it fits the rest of the criteria. A common thing to see is "+1 after travis" from other committers.
-1. A pull request should have at least one +1 from a committer who is not the author, on the "code/textual" level of review.
-1. A pull request with just one +1 from a committer should not be merged earlier than 3 working days after it was submitted.
-1. A pull request with just one +1 should be merged only by (or in coordination with) the committer who provided the review, because the reviewer may feel that the PR is complex or risky enough to need another pair of eyes. If this is the case, the first reviewer should indicate it in the PR approval message.
-1. If a pull request has two or more +1's from committers who are not the author, it may be merged immediately and by any committer. Still, enough time should pass after submission to give folks a reasonable chance to indicate a desire to comment on the pull request. In other words: don't merge a pull request that was submitted Friday evening until at least 1-2 regular work days have passed. Use good judgement here.
-1. Major architectural and backwards-incompatible changes, or changes which have long-term maintenance consequences (see examples in [the "Getting your changes accepted" section above](#getting-your-changes-accepted)), should have at least three +1's from committers, at the "design" level of review. One approval may be from the author of the PR. The first committer who indicates that a PR needs design review should add the `Design Review` tag to such a pull request.
-1. Travis CI should pass for a pull request, or there should be some **very** good reason why it won't.
-1. You reasonably believe that all comments have been addressed.
-1. You are expected to be the champion for your own pull requests.
-1. Being a champion on a pull request can be a significant undertaking depending on the size of the code change and what parts of the code it touches. It may require communicating with other developers, reconciling differences, organizing community feedback, and/or following up with people who have commented in a pull request to ensure comments have been addressed.
-1. Sometimes code is presented as a work-in-progress or as a point of discussion. Use the `WIP` or `Discuss` tags on a pull request in such a case.
-1. If a pull request you are championing is taking longer than expected to merge, be sure to raise the issue in the developer sync.
-1. Limit the number of pull requests you are championing at the same time.
-1. Prioritize code reviews to look at pull requests that are blockers for the next release (see the Milestone marker on the pull request).
-1. Help serve as champion for pull requests that originate from new committers.
-1. If you feel a pull request is required for the next release, mark it as such in the Milestone of the pull request.
-1. Do not comment on a pull request unless you are willing to follow up on the edits.
-1. Give priority to getting older pull requests merged (either as their champion or as an active commenter).
-1. And most importantly: the PMC desires to ensure a positive and effective developer experience! If you find that things are not functioning to your expectations, please raise the issue.
-
-Remember, we all want to see this project thrive!
-
-## Governance
-
-The PMC (Project Management Committee) is responsible for the administrative
-aspects of the Druid project. The responsibilities of the PMC include:
-
-- Approving releases
-- Nominating new committers
-- Maintaining the project's shared resources, including the GitHub account,
-  mailing lists, websites, social media channels, etc.
-- Maintaining guidelines for the project
diff --git a/community/join-slack.md b/community/join-slack.md
deleted file mode 100644
index 38a959a..0000000
--- a/community/join-slack.md
+++ /dev/null
@@ -1,4 +0,0 @@
----
-layout: redirect_page
-redirect_target: https://join.slack.com/t/apachedruidworkspace/shared_invite/zt-143m5afbr-4ebnAPJPaQfNOUu_g1MvSw
----
diff --git a/downloads.md b/downloads.md
deleted file mode 100644
index baa3d3f..0000000
--- a/downloads.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: Download
-layout: simple_page
-sectionid: download
-canonical: 'https://druid.apache.org/downloads.html'
----
-
-## Latest release
-
-{% for branch in site.druid_versions limit:1 %}
-{% for release in branch.versions limit:1 %}
-The latest Apache Druid stable release is {{ release.version }}.
-
-<p>
-<a class="large-button download" href="https://www.apache.org/dyn/closer.cgi?path=/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz" download onclick="trackDownload('button', 'https://www.apache.org/dyn/closer.cgi?path=/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz');"><span class="fa fa-download"></span> Download {{release.version}} release</a><br>
-</p>
-
-* Release date: {{ release.date | date: "%b %e %Y" }}
-* Binary download: [apache-druid-{{ release.version }}-bin.tar.gz](https://www.apache.org/dyn/closer.cgi?path=/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz) ([sha512](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz.sha512), [pgp](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz.asc))
-* Source download: [apache-druid-{{ release.version }}-src.tar.gz](https://www.apache.org/dyn/closer.cgi?path=/druid/{{ release.version }}/apache-druid-{{ release.version }}-src.tar.gz) ([sha512](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-src.tar.gz.sha512), [pgp](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-src.tar.gz.asc))
-* Hadoop3 compatible artifacts (use these only if you use Hadoop3 with your Druid cluster):
-  * Binary download: [apache-druid-{{ release.version }}-hadoop3-bin.tar.gz](https://www.apache.org/dyn/closer.cgi?path=/druid/{{ release.version }}/apache-druid-{{ release.version }}-hadoop3-bin.tar.gz) ([sha512](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-hadoop3-bin.tar.gz.sha512), [pgp](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-hadoop3-bin.tar.gz.asc))
-* Release notes: [{{ release.version }}](https://github.com/apache/druid/releases/tag/druid-{{ release.version }})
-{% endfor %}
-{% endfor %}
-
-You should verify your download by following these [procedures](https://www.apache.org/info/verification.html) and using these [KEYS](https://www.apache.org/dist/druid/KEYS).
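-
-As a rough illustration only (the Apache verification page and KEYS file linked above remain the authoritative procedure), the sha512 part of that check amounts to comparing the digest of the downloaded archive with the published `.sha512` file. A hypothetical Python sketch is shown below; it does not cover PGP (`.asc`) signature verification, and it assumes the `.sha512` file's first whitespace-separated token is the hex digest.
-
-```python
-# Hedged sketch: compare a downloaded artifact's SHA-512 digest against the
-# published .sha512 file. PGP signature verification still requires gpg and
-# the Apache KEYS file and is not covered here.
-import hashlib
-import sys
-
-def sha512_of(path, chunk_size=1 << 20):
-    digest = hashlib.sha512()
-    with open(path, "rb") as f:
-        for chunk in iter(lambda: f.read(chunk_size), b""):
-            digest.update(chunk)
-    return digest.hexdigest()
-
-if __name__ == "__main__":
-    # e.g. python verify_sha512.py apache-druid-<version>-bin.tar.gz apache-druid-<version>-bin.tar.gz.sha512
-    artifact, checksum_file = sys.argv[1], sys.argv[2]
-    expected = open(checksum_file).read().split()[0].lower()
-    print("OK" if sha512_of(artifact) == expected else "MISMATCH")
-```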
-
-To learn more about releases, please visit the [versioning page](/docs/latest/development/versioning.html).
-
-## Get started
-
-To get started with Druid, visit the [quickstart](/docs/latest/tutorials/index.html).
-
-## Optional dependencies
-
-### MySQL JDBC Driver
-
-Due to licensing considerations, the MySQL metadata store extension does not include the required MySQL JDBC driver which
-must be downloaded separately. Please see the [documentation](/docs/latest/development/extensions-core/mysql.html) for instructions on how to include this library.
-
-## Release candidates
-
-Release candidates are announced on the [Druid dev mailing list](https://lists.apache.org/list.html?dev@druid.apache.org).
-
-If you are interested in being notified of upcoming release candidates and being involved in the dev community (e.g. validating and approving RC builds), you can subscribe to the list by emailing: **dev-subscribe@druid.apache.org**
-
-Previous discussions are available in the [Apache Mail Archives](https://lists.apache.org/list.html?dev@druid.apache.org).
-
-## Older releases
-
-{% for branch in site.druid_versions limit:1 offset:1 %}
-The previous major release branch of Apache Druid was {{ branch.release }}.
-{% for release in branch.versions limit:1 %}
-
-##### {{ release.version }}
-
-* Release date: {{ release.date | date: "%b %e %Y" }}
-* Binary download: [apache-druid-{{ release.version }}-bin.tar.gz](https://www.apache.org/dyn/closer.cgi?path=/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz) ([sha512](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz.sha512), [pgp](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-bin.tar.gz.asc))
-* Source download: [apache-druid-{{ release.version }}-src.tar.gz](https://www.apache.org/dyn/closer.cgi?path=/druid/{{ release.version }}/apache-druid-{{ release.version }}-src.tar.gz) ([sha512](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-src.tar.gz.sha512), [pgp](https://www.apache.org/dist/druid/{{ release.version }}/apache-druid-{{ release.version }}-src.tar.gz.asc))
-* Release notes: [{{ release.version }}](https://github.com/apache/druid/releases/tag/druid-{{ release.version }})
-{% endfor %}
-{% endfor %}
-
-All other releases of Apache Druid are available at the [Apache Druid release archives](https://archive.apache.org/dist/druid/).
-
-Archived versions of releases made while Druid was under the Apache Incubator are available at [Apache Druid (incubating) release archives](https://archive.apache.org/dist/incubator/druid/).
-
diff --git a/druid-powered.md b/druid-powered.md
deleted file mode 100644
index 519670e..0000000
--- a/druid-powered.md
+++ /dev/null
@@ -1,989 +0,0 @@
----
-title: Powered by Apache Druid
-subtitle: <a href="https://github.com/apache/druid-website-src/blob/master/druid-powered.md"><span class="fa fa-plus"></span> Add Your Company</a>
-layout: simple_page
-sectionid: powered-by
-canonical: 'https://druid.apache.org/druid-powered'
----
-
-Numerous companies of various sizes run Druid in production. Some of them are listed below.
-
-## Adikteev
-
-[Adikteev](https://www.adikteev.com) is the leading mobile app re-engagement platform for performance-driven marketers, and is consistently ranked in the top 5 of the AppsFlyer Performance Index. By using Druid instead of relying on slow and stale dashboards, we have been able to achieve internal productivity gains, make better decisions faster, provide our external clients with strategic advice to improve the performance and effectiveness of their retargeting marketing campaigns, and notify clients quickly of potentially serious problems. 
-
-* [How Adikteev helps customers succeed using self-service analytics](https://imply.io/post/how-adikteev-helps-customers-succeed-using-self-service-analytics) (Margot Miller), *Imply Blog*, 20 Aug 2020.
-
-## AdsWizz
-
-[AdsWizz](https://www.adswizz.com/) is the leading global digital audio advertising solution provider, enabling the effective monetization and personalization of digital audio through a complete suite of advertising and analytics solutions. 
-
-We use Druid as the core analytics engine, which powers a multitude of real-time dashboards and widgets, used both by customers and internal staff. Sub-second query response times, the removal of obsolete data cubes, ad-hoc analytics and the seamless integration of historical and real-time data are the main reasons we have chosen Druid. Advertisers, agencies and publishers can get insights on hundreds of metrics and dimensions specific to the advertising industry, having the possibility to slice and dice the data according to their needs.
-
-## Airbnb
-
-Druid powers slice-and-dice analytics on both historical and real-time metrics. It significantly reduces the latency of analytic queries and helps people get insights more interactively.
-
-* [How Druid enables analytics at Airbnb](https://medium.com/airbnb-engineering/druid-airbnb-data-platform-601c312f2a4c) (Pala Muthiah, Jinyang Li), *The AirBnb Tech Blog [Medium]*, 13 Nov 2018.
-
-* [How Airbnb Achieved Metric Consistency at Scale](https://medium.com/airbnb-engineering/how-airbnb-achieved-metric-consistency-at-scale-f23cc53dea70) (Amit Pahwa, Cristian Figueroa, Donghan Zhang, Haim Grosman, John Bodley, Jonathan Parks, Maggie Zhu, Philip Weiss, Robert Chang, Shao Xie, Sylvia Tomiyama, Xiaohui Sun), *The AirBnb Tech Blog [Medium]*, 30 Apr 2021.
-
-## Airbridge
-
-[Airbridge](https://airbridge.io) is a people-based attribution and omni-channel campaign analytics platform helping marketers measure and optimize their marketing campaigns. Thanks to Druid's data aggregation technology, marketers using Airbridge receive real-time granular reports on the performance of campaigns executed across a variety of devices, platforms, and channels.
-
-## Alibaba
-
-Druid is widely used in different business units of Alibaba Group.
-
-- At Alibaba Search Group, Druid is used to power real-time analytics of users' interaction with its popular e-commerce site.
-
-- At Alibaba Cloud, Druid powers its [E-MapReduce](https://www.alibabacloud.com/help/doc-detail/201882.html) cloud service, which is a big data platform based on open source engines such as Hadoop. See [E-MapReduce Druid](https://www.alibabacloud.com/help/doc-detail/72987.html) for more details.
-
-- At its Local Services Business Unit, Druid is responsible for realtime application performance monitoring and alerting. More details can be found in [this blog](https://druid.org.cn/?p=91) (Chinese).
-
-## Allegro
-
-[Allegro](https://allegro.tech/) is the most popular Polish shopping destination, with about 14 million users monthly and over 1.5 million items sold on the platform daily.
-We have been using Druid since 2016 as a business intelligence platform to power our clickstream analytics, marketing automation, anomaly detection, technical metrics and more.
-Our cluster (432 CPUs, 1300GB of RAM on historicals) processes billions of realtime events loaded from [Kafka](https://kafka.apache.org/)
-and hundreds of batch indexing jobs on a daily basis.
-It's a pleasure to work with Druid, an extraordinarily efficient and exceptionally stable platform with a vibrant community.
-
-## Alphaa AI
-
-We leverage Apache Druid in conjunction with the Alphaa AI Super AI NLP Engine to implement Natural Language Query on billions of rows of data for Alphaa Cloud customers.
-
-## Amobee
-
-Today’s consumers move across screens with ease. But advertisers and media companies alike struggle to unify audiences and optimize media investment. Only [Amobee](https://www.amobee.com/) brings it all together. We give you the power to bridge these silos with end-to-end solutions that help brands, agencies, and media companies optimize the consumer experience across linear TV, connected TV and digital, including social. You are in control, finding the audiences you seek and getting the performance that you need to drive meaningful results for your business. Our loyal customers have achieved dramatic performance improvements and efficiencies by holistically optimizing their full portfolio of media against strategic, in-market audiences. Together, these beloved household brands and premium media providers have made Amobee a world-leading advertising & premium media management platform. Why? Like us, they believe that when all media investment can be mapped to business outcomes, everyone wins, including the consumer.
-
-Amobee provides advertising solutions across TV, digital and social platforms. Amobee has massive incoming data, with customers requiring reporting analytics across unbounded combinations of use cases. Amobee uses Druid to power such intense customer needs via quick-response dashboards and offline reports in Amobee’s Platform UI, as well as queries from demanding automated systems. Specifically, we love that Druid allows convergence of many data streams via its robust ingestion, parallel aggregation, redundant storage and concurrent query mechanisms.
-
-## Apollo GraphQL
-
-Apollo uses Druid to power our "Graph Manager" SaaS service, which helps application developers manage, validate and secure their organization’s data graph. In addition to managing the evolution of their GraphQL schema, users can draw insights from the operations metadata and execution trace data that are sent to our service, indexed, and stored in Druid.
-
-## Appsflyer
-
-Druid is the major player in the real-time analytics pipeline at Appsflyer, and it serves as a customer-facing analytics database for dashboard reporting.
-
-## Archive-It.org
-
-Druid is used to power dynamic analytics and charting in Archive-It Reports. Reports help Archive-It partners understand what content they captured, why some content did not get captured, and assists with quality assurance and crawl scoping to ensure they are building the highest quality collections.
-
-## Athena Health
-At [Athena Health](https://www.athenahealth.com/), we are creating a new performance management application for our clients, and one of its key components is Apache Druid. 
-
-* [Automating CI/CD for Druid Clusters at Athena Health](https://youtu.be/XGzMJBM8xeg) (Shyam Mudambi, Ramesh Kempanna, Karthik Urs), *Imply [YouTube]*, 16 Apr 2020.
-
-## Atomx
-
-Atomx is a new media exchange that connects networks, DSPs, SSPs, and other parties. Atomx uses Druid for its advanced realtime reporting system. Using the Google Cloud modifications Atomx contributed to Druid, it can easily scale Druid with the fast-growing platform.
-
-## Autonomic
-
-Autonomic Transportation Mobility Cloud (TMC) is the industry's first open cloud platform for transportation and mobility data. One of our missions is to enable our customers to easily explore a tremendous amount of data and draw valuable insights in a timely fashion. To meet these needs, we deploy Apache Druid in our platform to aggregate and process time series data. Each component in Druid runs as an individual microservice, and can be easily deployed, managed, and scaled independently in our Kubernetes clusters.
-
-## Avesta Technologies
-
-At [Avesta](https://avestatechnologies.com/), we use Druid as a central component in our cloud data platform to provide real-time analytics solutions to our clients. We are using Druid for Customer Data Analytics and extending it for Industrial IoT and Market Automation use cases. Druid has not only proven itself to be resilient and performant but has also helped our clients save enormous amounts in cloud and licensing costs.
-
-* [Community Spotlight: staying true to open-source roots at Avesta](https://imply.io/post/community-spotlight-staying-true-to-open-source-roots-at-avesta) (Dharam Gajera,Peter Marshall, Jelena Zanko), *Imply Blog*, 25 May 2021.
-
-## BIGO
-
-BIGO's big data team selected Druid as an OLAP engine to analyze app (Like, Bigolive, IMO, etc.) data. Typical analyses in Druid include APM, A/B testing, push, and funnel models. In addition, Superset is used as a dashboard with deep integration with Druid.
-
-## Billy Mobile
-
-Billy Mobile is a mobile advertising platform, excelling in the performance-based optimisation segment. We use Druid to power our real-time analytics dashboards, in which our publishers, advertisers and staff can get insights on how their campaigns, offers and traffic are performing, with sub-second query times and minute granularity. We are using a lambda-architecture approach, ingesting the traffic in real time with Tranquility and Storm, plus a batch layer via a tight integration with Hive and Hadoop, our Master Data Set. This way we can provide crucial, fast access to data, while making sure to have the right numbers.
-
-## Blis
-
-[Blis](https://www.blis.com) is the trusted leader in location-powered advertising and analytics, helping brands understand, reach and engage consumers globally to deliver measurable results. The operations team uses Druid to manage media purchases and make intelligent pricing decisions. The analytic insights team, which helps customers maximize their value from Blis, uses Druid to discover opportunities and tell data-driven stories about their campaigns. Engineering and product teams use Druid to keep an eye on revenue and margin, with the ability to drill into unexpected changes.
-
-## Blueshift
-
-Blueshift is an AI-powered customer data activation platform enabling CRM and product marketers to intelligently manage their audiences and orchestrate personalized messaging campaigns at scale. Blueshift offers real-time campaign analytics in the product powered by Druid. Additionally, we use the same analytics backend to power automatic traffic allocation capabilities using Bayesian bandits, perform cohort analysis over multiple dimensions, and generate internal reports to measure the ROI of AI across different types of campaigns and industry verticals.
-
-> "We operate a multi-tenant Druid cluster in AWS. Additionally, we have integrated our data lake (Hive/Hbase) with Druid to manage data backfills."
-
-* [Blueshift: Scaling real-time campaign analytics with Apache Druid](https://imply.io/post/blueshift-scaling-real-time-campaign-analytics-apache-druid) (Anuraj Pandey), *Imply blog*, 8 Aug 2019.
-
-## Booking.com
-
-* [Data Engineering At Booking.com: a case study](https://youtu.be/9GE3yiVo1FM) (Andreas Kretz), *YouTube*, 15 Mar 2019.
-
-## Branch
-
-Branch uses Druid as their trusted analytics engine to power all of their data analysis needs. This ranges from the user-facing Branch Dashboard analytics that our partners rely on to gain insight into the performance of their links, to the data-driven business decisions that we need to make internally to build a sustainable business.
-
-## British Telecom (BT)
-
-* [How Druid powers real-time analytics at BT](https://youtu.be/QZUunUDQ8p4) (Pankaj Tiwari), *Imply [YouTube]*, 17 Apr 2020.
-
-## Central Bank of the Republic of Turkey
-
-The Central Bank of the Republic of Turkey addresses the challenge of analyzing and interpreting real-time tick data related to money market instruments using Apache Druid and Superset.
-
-* [Developing High Frequency Indicators Using Real-Time Tick Data on Apache Superset and Druid](https://youtu.be/o0JsVwFEdtA) (Zekeriya Besiroglu, Emre Tokel, Kerem Başol, M. Yağmur. Şahin), *DataWorks Summit [YouTube]*, 21 Mar 2019.
-
-## Charter Communications
-
-* [Druid at Charter Spectrum](https://speakerdeck.com/implydatainc/druid-at-charter) (Nate Vogel, Andy Amick), *Imply [SlideShare]*, 24 Jan 2019.
-
-## CircleHD
-
-[CircleHD](https://www.circlehd.com) is an enterprise video enablement platform, used by businesses for training, sales enablement and digital learning experience. CircleHD uses Druid to power the analytics system that provides real-time insight into adoption, usage and engagement.
-
-## Cisco
-
-Cisco uses Druid to power a real-time analytics platform for network flow data.
-
-* [Under the hood of Cisco’s Tetration Analytics platform](https://www.networkworld.com/article/3086250/under-the-hood-of-cisco-s-tetration-analytics-platform.html) (Brandon Butler), *Network World*, 20 Jun 2016.
-
-## Condé Nast
-
-Condé Nast uses Druid to track billions of events across our two dozen brands, both in real time and historically. Druid helps power dashboards, site performance trackers, and many other internal applications.
-
-## Confluent
-
-Druid has helped push operational visibility to the next level. Operating multi-tenant services requires fine-grained visibility down to the individual tenant, user, or application behavior, where most traditional monitoring stacks fail to scale or become cost-prohibitive. Leveraging Druid as part of our stack means we don't shy away from high-cardinality data. As a result, our teams can not only quickly troubleshoot issues but also glean detailed understanding to help improve the product.	
-
-* [Scaling Apache Druid for Real-Time Cloud Analytics at Confluent](https://www.confluent.io/blog/scaling-apache-druid-for-real-time-cloud-analytics-at-confluent/) (Zohreh Karimi, Harini Rajendran), *Confluent Blog*, 8 Nov 2021.
-
-## Contiamo
-
-[Contiamo](https://www.contiamo.com) uses cutting edge technology to enable agile development and deployment of data-driven frontends and automations. Druid powers various dashboards and frontends that deal with large amounts of event based data.
-
-## Criteo
-
-Criteo is using Druid to provide its customers with user-centric analytics & reporting.
-With more than 1 billion unique users reached per month, 3 billion ads displayed per day, and 70% growth in 2014, Criteo's previous stack was hard pressed to keep up with the load. Replacing it with Druid helped us achieve linear scalability while letting our customers explore their data in new and interactive ways.
-
-* [Real Time Metrics on Tracker Calls](http://labs.criteo.com/2016/02/real-time-metrics-on-tracker-calls/) (Camille Coueslant, Benoit Jehanno), *Criteo Engineering*, 18 Feb 2016.
-
-## CrunchMetrics
-
-CrunchMetrics has fully integrated its platform with Apache Druid, which is designed to quickly ingest massive quantities of event data and provide low-latency queries on top of that data. With the new integration, customers can gain real-time, intelligent insights from streaming data. This development is in line with the current scalability requirements of 5G and the digital intensification we are all experiencing.
-
-* [CrunchMetrics Year in Review 2020: A Year That Was an Anomaly in Itself!](https://www.crunchmetrics.ai/blog/crunchmetrics-year-in-review-2020-a-year-that-was-an-anomaly-in-itself/) (Rohit Maheshwari), *CrunchMetrics website*, 30 Dec 2020.
-
-## Cuebook
-
-At [Cuebook](https://cuebook.ai/), Druid serves as the analytics database for our augmented analytics product.
-
-* [Augmented analytics on business metrics by Cuebook with Apache Druid](https://imply.io/post/augmented-analytics-on-business-metrics-by-cuebook) (Sachin Bansal, Peter Marshall), *Imply blog*, 18 Aug 2021.
-
-## DBS Bank
-
-[DBS](https://www.dbs.com) is using Apache Druid to handle AML investigations for the compliance team. The AML (anti-money laundering) workflow generates alerts which are tracked within Druid. The transactional data is ingested from an RDBMS to S3 and ingested back to Druid at regular intervals. Investigators can now slice and dice over millions of records with low latency.
-
-* [Holistic AML compliance using Apache Druid](https://youtu.be/Ly3HFbbEihQ) (Arpit Dubey), *Imply [YouTube]*, 15 Apr 2020.
-
-## Dataroid
-
-[Dataroid](https://www.dataroid.com/) is a digital analytics and customer engagement platform helping brands understand their users' interactions with their digital assets in detail and deliver a better experience with data-driven insights.
-
-Druid powers Dataroid analytics dashboards allowing our customers to interactively deep dive into behavioral and performance data and explore hundreds of metrics in real time.
-
-## Datumo
-
-[Datumo](https://datumo.io/) is a 100% remote Big Data software house from Poland.
-We are co-authors of [Turnilo](https://allegro.tech/2018/10/turnilo-lets-change-the-way-people-explore-big-data.html), an open-source data exploration tool for Druid.
-Our analytics platform [Storyteller](https://datumo.io/#product) uses the power of Apache Druid and Turnilo to help our clients analyze large datasets in an interactive manner.
-Our biggest Druid deployment is responsible for real-time monitoring and data processing of a large video streaming platform (60,000 events per second).
-
-## Deep.BI
-
-At [Deep.BI](https://www.deep.bi) we track user habits, engagement, product and content performance — processing up to terabytes of data, or billions of events, daily. Our goal is to provide real-time insights based on custom metrics from a variety of self-created dimensions. To accomplish this, our system collects every user interaction. We use Apache Flink for event enrichment, custom transformations, aggregations and serving machine learning models. The processed data is then indexed by Apache Druid for real-time analytics and our custom UI built on top of Druid and Apache Cassandra for delivery of the scores.
-
-Our talk from FlinkForward conference in Berlin 2019:
-
-* [Real-time Stream Analytics and User Scoring Using Apache Druid, Flink & Cassandra at Deep.BI](https://medium.com/deep-bi/real-time-stream-analytics-and-user-scoring-using-apache-druid-flink-cassandra-at-deep-bi-38e5dc40958d) (Hisham Itani), *Medium*, 24 Mar 2020.
-
-## Deema.agency
-
-Deema Agency delivers results.
-As a leading digital advertising agency in Iran, we design, run and optimize performance- and awareness-based digital campaigns for businesses and help them achieve their goals. With state-of-the-art programmatic advertising technology, an extensive ad network, and advanced advertising platforms, we deliver your brand message to your target audience in Iran in the most effective way.
-We face the challenge of handling intensive real-time data streams for analytics and delivering a better experience with data-driven insights.
-Druid powers our application-related metrics and helps us slice and dice analytics on both historical and realtime metrics.
-
-[Deema.agency](https://deema.agency)
-
-## Delta Projects
-
-[Delta Projects](https://www.deltaprojects.com) uses Druid to power real-time audience and advertising analytics interfaces.
-
-## Didi Chuxing
-
-Didi Chuxing is the world’s largest mobile transportation platform, offering a full range of commuting options in 400 cities across China.
-Didi uses Druid as a core component of our real-time big data processing pipeline. Druid powers Didi's real-time monitoring system, which has hundreds of key business metrics. We are deeply impressed by Druid's fast aggregations and sub-second OLAP queries. With Druid, we can get insights from data in real time.
-
-## Dream11
-
-Dream11 is consuming data from Apache Druid, Amazon Redshift, and Athena and building models on Looker to report user concurrency, track user journeys, and build funnels based on interaction events.
-
-* [Data Highway — Dream11’s Inhouse Analytics Platform — The Burden and Benefits](https://medium.com/@D11Engg/data-highway-dream11s-inhouse-analytics-platform-the-burden-and-benefits-90b8777d282) (Lavanya Pulijala), *Dream11 Engineering [Medium]*, 27 Jan 2020.
-
-## Easemob
-
-Easemob Technologies, Inc. is the leading provider of an open IM platform and cloud computing customer services. We provide PaaS services for Instant Messaging and SaaS services for Mobile Customer Service.
-
-## eBay
-
-eBay uses Druid to aggregate multiple data streams for real-time user behavior analytics, ingesting at a very high rate (over 100,000 events/sec), with the ability to query or aggregate data by any random combination of dimensions, and to support over 100 concurrent queries without impacting ingest rate and query latencies.
-
-* [Monitoring at eBay with Druid](https://tech.ebayinc.com/engineering/monitoring-at-ebay-with-druid/) (Garadi Mohan), *eBay Big Data Engineering*, 29 May 2019.
-* [embedded-druid: Leveraging Druid Capabilities in Stand-alone Applications](https://tech.ebayinc.com/engineering/embedded-druid-leveraging-druid-capabilities-in-stand-alone-applications/) (Ruchir Shah), *eBay Big Data Engineering*, 5 Feb 2016.
-* [A glance of Pulsar and Druid](https://blog.csdn.net/ebay/article/details/50205611) (Xiaoming Zhang), *eBay Big Data Engineering*, 7 Dec 2015.
-
-## Expedia
-
-Expedia built real-time customer segmentation into the Expedia Data Platform using Druid as its core component. It helps business and data science teams identify customers using many different criteria. 
-
-* [Fast Approximate Counting Using Druid and Datasketch](https://medium.com/expedia-group-tech/fast-approximate-counting-using-druid-and-datasketch-f5f163131acd) (Elan Halfin, Aravind Sethurathnam), *Expedia Group Technology [Medium]*, 10 Dec 2020.
-
-## FacilityConneX
-
-One of our key goals at [FacilityConneX](https://www.facilityconnex.com/) is to offer real-time insights that help our customers optimize their equipment or processes, reduce cost, or prevent incidents, to ultimately improve our customers' business. This *real-time* requirement has always been a major technical challenge to scale within our SaaS environment. FacilityConneX has looked to **Druid** to help solve many of these challenging performance and growth issues.
-
-## Fanatics
-
-Druid powers our application-related metrics and helps us slice and dice analytics on both historical and real-time metrics. It significantly reduces the latency of analytic queries and helps people get insights more interactively. Through Druid data we can now do anomaly detection as well.
-
-## Finin
-
-[Finin](https://finin.in/) is India’s first-ever consumer-facing neobank. They're using Apache Druid as a backend for analytics, helping the company with clickstream analytics, user activity and behaviour, as well as application performance management.
-
-## Flipkart
-
-At Flipkart, business and analytics teams need constant visibility into how the users are interacting with the platform across all channels – website, mobile apps and m-site. They are using Druid to power their real-time clickstream analytics reports that enable their business teams to make critical decisions based on current traffic patterns and plan automated as well as manual interventions for traffic shaping.
-
-* [Clickstream Analytics with Apache Druid](https://www.youtube.com/watch?v=vCk1zRxbAnM)
-
-## Flurry Analytics
-
-[Flurry](https://www.flurry.com) is a mobile app analytics platform that empowers product, development and growth experts to build better apps that users love. Mobile data hits the Flurry backend at a huge rate, updates statistics across hundreds of dimensions, and becomes queryable immediately. Flurry harnesses Druid to ingest data and serve queries at this massive rate.
-
-* [Druid @ Flurry](https://youtu.be/6c_hc3Qca5s) (Eric Tschetter), *Sift Science [YouTube]*, 16 Dec 2016.
-
-## FullContact
-
-* [Enrich API Brings Higher Match Rates with Multi-Field Enrichment Capabilities](https://www.fullcontact.com/blog/2019/06/27/enrich-api-brings-higher-match-rates-with-multi-field-enrichment-capabilities/) (Ken Michie), *Full Contact*, 27 Jun 2019.
-
-## Fyber
-
-[Fyber](https://www.fyber.com) is an app monetization company, developing a next-generation monetization platform for app game developers. Fyber uses Apache Druid for customer data analysis.
-
-## GameAnalytics
-
-* [Why GameAnalytics migrated to Apache Druid](https://imply.io/post/why-gameanalytics-migrated-to-druid) (Ramón Lastres Guerrero), *Imply blog*, 14 Feb 2019.
-
-## Glia
-
-[Glia](https://www.glia.com/) uses Druid to provide application usage insights to its clients.
-
-## GumGum
-
-Druid powers a dashboard used internally to visualize real-time analytics on GumGum's Real Time Bidding platform. GumGum runs Druid on Amazon EC2 and can ingest up to 300,000 events per second at peak time.
-
-* [Lambda Architecture with Druid at GumGum](https://whynosql.com/2015/11/06/lambda-architecture-with-druid-at-gumgum/) (Vaibhav Puranik), *WhyNoSql [Wordpress]*, 6 Nov 2015.
-* [Optimized Real-time Analytics using Spark Streaming and Apache Druid](https://medium.com/gumgum-tech/optimized-real-time-analytics-using-spark-streaming-and-apache-druid-d872a86ed99d) (Jatinder Assi), *GumGum Tech Blog [Medium]*, 2 Jun 2020.
-
-## Hawk
-
-[Hawk](https://hawk-tech.io/) is the first independent European platform to offer a transparent and technological advertising experience across all screens.
-
-Hawk's customers harness the power of Imply Druid to access an all-in-one solution to follow their campaign KPIs throughout their lifecycle; build their business dashboards and visualize the evolution of their KPIs; set up tailored reports; and accelerate the Time-To-Market thanks to a simplified data pipeline with Druid directly connected to their Kafka cluster and RDS database.
-
-* [Data Revolution at Hawk Powered by Imply](https://imply.io/customer-stories/data-revolution-at-hawk-powered-by-imply/) (Laura Manzanilla), *Imply blog*, 12 Oct 2021.
-
-## Hexaglobe
-
-[Hexaglobe](https://www.hexaglobe.com/) uses Druid for network and CDN analytics, and gleans intelligence from video services metrics for engineering and marketing purposes.
-
-* [How Druid can Help Analyse Video Services Metrics for QoS & Marketing Purposes](https://www.youtube.com/watch?v=vooDECOqfy0) (Pierre-Alexandre Entraygues, Anthony Courcoux ), *Imply [YouTube]*, 10 Dec 2021.
-
-## Hubvisor
-
-Druid powers [Hubvisor](https://www.hubvisor.io/)'s realtime insights into their customers' programmatic advertising auctions.
-
-## Hulu
-
-At Hulu, we use Druid to power our analytics platform that enables us to interactively deep dive into the behaviors of our users and applications in real-time.
-
-## ICSOC Co., Ltd.
-
-[ICSOC Co., Ltd.](https://www.icsoc.net) is the leading call center service provider in China. In our big data department, traditional approaches could not meet our requirements; eventually we found that Druid, designed around a lambda architecture, fully satisfies them. It simplified the development of statistics and real-time products and greatly sped up our OLAP queries.
-
-## Ibotta
-
-[Ibotta](https://home.ibotta.com/) is a free cash back rewards and payments app. The company has partnered with more than 1,500 brands and retailers to pay consumers cash for their purchases. Ibotta has paid out over $600 million in cash rewards to more than 35 million users since its founding in 2012.
-
-## Imply
-
-Imply uses Druid to power public demos and to power our internal monitoring and metrics.
-
-* [Pivot: An Open Source Data Exploration UI for Druid](https://imply.io/post/hello-pivot) (Vadim Ogievetsky), *Imply blog*, 26 Oct 2015.
-* [A Tour Through the "Big Data" Zoo](https://imply.io/post/big-data-zoo) (Fangjin Yang), *Imply blog*, 4 Nov 2015.
-* [Architecting Distributed Databases for Failure](https://imply.io/post/failure) (Fangjin Yang), *Imply blog*, 10 Dec 2015.
-* [Building a Streaming Analytics Stack with Apache Kafka and Druid](http://www.confluent.io/blog/building-a-streaming-analytics-stack-with-apache-kafka-and-druid) (Fangjin Yang), *Confluent blog*, 14 Jun 2016.
-* [Compressing Longs in Druid](https://imply.io/post/compressing-longs) (David Li), *Imply blog*, 7 Dec 2016.
-* [Securing Druid](https://imply.io/post/securing-druid) (Jon Wei), *Imply blog*, 13 Sep 2018.
-* [Druid query view: An elegant SQL interface for a more civilized age](https://imply.io/post/druid-console-query-view) (Margaret Brewster), *Imply blog*, 16 Oct 2019.
-
-## Inke
-
-[Inke](https://www.inke.cn) is an online live-streaming company principally engaged in online live streaming and technology. We use Druid for business intelligence (BI) analytics.
-
-## Inmobi
-
-Inmobi is a mobile advertising and discovery platform. We use Druid mainly for internal realtime reporting and analysis. We also use Caravel backed by Druid, which allows users to build interactive dashboards. Apart from that, we use Druid as a datastore for faster ingestion of large amounts of data and to query this data at sub-second latencies.
-
-## Innowatts
-
-Innowatts uses Druid to quickly work with massive data sets, and differentiates its platform with its planned direction of reading meters and forecasting usage on the fly.
-
-* [Community Spotlight: Innowatts provides AI-driven analytics for the power industry](https://imply.io/post/innowatts-innovates-power-utilities-analytics) (Daniel Hernandez, Matt Sarrel), *Imply blog*, 30 Sep 2020.
-
-## Interactive Intelligence
-
-At ININ we're using Druid within a Lambda architecture to drive cloud-based call center analytics. Many of our realtime dashboards, downloadable reports, and public APIs utilize Druid on the backend.
-
-## ironSource
-
-At ironSource, we use Druid to power the real-time capabilities of our in-app bidding solution, which programmatically auctions off publisher ad impressions to a range of ad sources. Using Druid enables us to process and query billions of data points in real time.
-
-* [Performance Tuning of Druid Cluster at High Scale](https://www.youtube.com/watch?v=_co3nPOh7YM) (Elad Eldor), *Imply [YouTube]*, 10 Dec 2021.
-
-## Italiaonline
-
-Italiaonline uses Druid for Internet trends and analytics management inside its new [Data-Driven Contents Management System](https://www.italiaonline.it/corporate/en/2016/02/17/supereva-a-new-online-publishing-experience/).
-Italiaonline is the first Italian internet company, with the two most visited web portals, Libero and Virgilio, and the most used email service in the country, @libero.it. Italiaonline features 16.8 million unique users per month\*, 4.8 billion impressions per month\*\*, 10.2 million active email accounts\*\* and a 58% active reach\*.
-
-*\* Source: Audiweb View, powered by Nielsen, TDA 2H 2015; \*\* internal data, December 2015*
-
-## Jolata
-
-Jolata leverages Druid as the analytics data store for its realtime network performance management platform. Ingesting over 35 billion events per day, Jolata calculates a billion metrics every minute to visualize precise network metrics in real time, and enables operators to quickly drill down and perform root cause analysis.
-
-* [Realtime Analytics Powered by Druid](https://www.linkedin.com/pulse/realtime-analytics-powered-druid-kiran-patchigolla) (Kiran Patchigolla), *LinkedIn Pulse*, 11 Aug 2015.
-
-## Kering
-
-Thanks to Druid, Kering empowers its sales teams (more than 20,000 people worldwide) to quickly understand their sales performance, compare it to their objectives and identify new sales opportunities within their customer portfolio.
-Kering was already providing data on demand via a search engine (Apache Solr), and the implementation of Apache Druid has enabled us to enrich the offering by:
-- Adding access to real-time updated KPIs (based on streaming ingestion)
-- Providing a sub-second response time
-- Offering linear scalability that allows the service to be deployed to tens of thousands of end users
-- Allowing massive data updates in a very short period of time (using Parquet files and MapReduce) without any impact on the quality of service
-
-## KT NexR
-
-KT NexR is the leading provider of the data analytics platform. We use Druid as a real-time analysis tool to help our customers to analyze multidimensional data in interactive ways.
-
-## LDMobile
-
-LDMobile is a mobile DSP for RTB. We use Druid to aggregate metrics in order to offer our customers a real-time dashboard showing performance indicators for their campaigns.
-
-## Libra AI Technologies & PaperGo
-
-EMA (Explainable Marketing Analytics) is a marketing analytics assistant powered by Druid. She helps marketers and ecommerce owners leverage users' behavioral data and make data-driven decisions. The assistant has been implemented on top of the PaperGo loyalty platform and LibraAI's proprietary machine learning innovations and marketing analytics.
-
-## LifeBuzz
-
-LifeBuzz is a popular web property that serves tens of millions of pageviews per day. We use Druid for all our advanced analytics needs, including in-house analysis and professional realtime analytics for our sponsored media partners.
-
-## LiquidM
-
-LiquidM uses Druid for real-time drill-down reporting. LiquidM is also contributing back to the community by creating and maintaining a ruby client library for interacting with Druid located at <http://github.com/liquidm/ruby-druid>.
-
-## Lohika
-
-Lohika uses Druid to run analytics over large amounts of event-based data. They are using real-time ingestion with Apache Kafka to be able to better respond to changes in advertising campaigns. They are managing their cluster with Kubernetes.
-
-* [Apache Druid: Interactive Analytics at Scale](https://www.lohika.com/apache-druid-interactive-analytics-at-scale) (Volodymyr Iordanov), *Lohika website*, 18 May 2021.
-
-## Lyft
-
-* [Streaming SQL and Druid](https://youtu.be/ovZ9iAkQllo) (Arup Malakar), *SF Big Analytics [YouTube]*, 30 Aug 2018.
-
-
-## Lynx Analytics
-
-[Lynx Analytics](https://www.lynxanalytics.com/) helps B2B clients unlock social, behavioral and experience insights
-on hundreds of millions of customers. We build interactive dashboards powered by Druid to make the data science
-results easy to digest. Our clients love the flexibility and being able to explore the data in detail. The speed at
-which they can do that is in large part thanks to Druid!
-
-## MakeMyTrip
-
-* [Data Insights Engine @ MakeMyTrip](https://medium.com/makemytrip-engineering/data-insights-engine-makemytrip-900bd353d99c) (Aditya Banerjee), *MakeMyTrip [Medium]*, 4 Jan 2019.
-
-## MAKESENS
-
-[MAKESENS](http://www.battery-doctor.cn/) uses Druid to store and analyze battery data.
-
-## Marchex
-
-Marchex uses Druid to provide data for Marchex Call Analytics' new customer facing Speech Analytics dashboards.
-Druid's fast aggregation is critical for providing deep insights into call performance for its customers,
-enabling them to spot trends and improve performance of both marketing campaigns and call centers.
-
-## Metamarkets
-
-In 2011, Metamarkets originally created Druid to power its SaaS analytics platform.
-The technology platform processes trillions of real-time transactions monthly, across thousands of servers.
-With Druid, Metamarkets is able to provide insight to its customers using complex ad-hoc queries at a 95th percentile query time of less than 1 second.
-
-* [Introducing Druid: Real-Time Analytics at a Billion Rows Per Second](https://metamarkets.com/2011/druid-part-i-real-time-analytics-at-a-billion-rows-per-second/)
-* [Scaling the Druid Data Store](https://metamarkets.com/2012/scaling-druid/)
-* [Metamarkets Open Sources Druid](https://metamarkets.com/2012/metamarkets-open-sources-druid/)
-* [Druid Query Optimization with FIFO: Lessons from Our 5000-Core Cluster](https://metamarkets.com/2016/impact-on-query-speed-from-forced-processing-ordering-in-druid/)
-* [Distributing Data in Druid at Petabyte Scale](https://metamarkets.com/2016/distributing-data-in-druid-at-petabyte-scale/)
-
-## Millersoft
-
-Druid is the [Millersoft](https://www.millersoft.ltd.uk/) platform of choice for operational analytics. The seamless integration of real time and historical data sets within Druid has been a real boon for our data processing clients.
-The flexibility of the API via REST/SQL/native against a single data source also means that our dashboards, ad-hoc queries and pivot tables are all consistent. The ability to drill down in Druid to the transactions underpinning the aggregations also means that we can reconcile the results directly against operational systems. Druid destroys legacy database cubes at the end of long data integration pipelines.
-
-## Mindhouse
-
-Mindhouse streams data from Apache Kafka into Druid and runs dashboards as well as ad-hoc SQL queries to gain insights from user behavior and pinpoint potential issues in their meditation app.
-
-* [Mindhouse Achieves Monitoring Nirvana with Apache Druid](https://imply.io/post/mindhouse-achieves-monitoring-nirvana-with-apache-druid)
-
-## Mobiauto
-
-Druid has helped Mobiauto ingest and analyze millions of real-time events generated on our online car sales website. We use this data to generate real-time analytics for the commercial team, and to create APIs for our backend team that feed real-time data into the website, helping our clients make informed decisions.
-
-## Monetate
-
-Druid is a critical component in Monetate's personalization platform, where it
-acts as the serving layer of a lambda architecture.  As such, Druid powers
-numerous real-time dashboards that provide marketers valuable insights into
-campaign performance and customer behavior.
-
-* [Gone Monetate : Personalizing Marketing at 100K events/second](http://brianoneill.blogspot.com/2015/09/gone-monetate-personalizing-marketing.html)
-* [Druid : Vagrant Up (and Tranquility!)](http://brianoneill.blogspot.com/2015/09/druid-vagrant-up-and-tranquility.html)
-* [Kinesis -> Druid : Options Analysis (to Push? to Pull? to Firehose? to Nay Nay?)](http://brianoneill.blogspot.com/2015/09/kinesis-druid-options-analysis-to-push.html)
-
-## mParticle
-
-At [mParticle](https://www.mparticle.com/) we have deployed Druid across our entire reporting stack. Druid has enabled us to ingest over 8.5 billion events daily while supporting reliably low latency queries. We have also given our support team and customers even greater insight into their data by exposing (controlled) queries to Druid via Looker.
-
-## MZ
-
-* Priyadarshi, Pushkar; Yurinok, Igor; Nepune, Bikrant. "Druid @ MZ." *YouTube*, uploaded by Imply, 5 Mar 2018, <https://youtu.be/zCk2BV9mQ0Y>
-
-## N3twork
-
-N3TWORK uses Druid for real-time analysis of its Internet of Interests social
-entertainment network. It uses Druid analytics both to optimize user
-experiences and to guide the evolution of its product.
-
-## Naver
-
-* [Web Analytics at Scale with Druid](https://www.slideshare.net/JasonJungsuHEO/web-analytics-at-scale-with-druid-at-navercom)
-
-## Netflix
-
-Netflix engineers use Druid to aggregate multiple data streams, ingesting up to
-two terabytes per hour, with the ability to query data as it's being ingested.
-They use Druid to pinpoint anomalies within their infrastructure, endpoint
-activity and content flow.
-
-* Merlino, Gian; Herman, Matt; Pasari, Vivek; Jain, Samarth. "Druid @ Netflix". *YouTube*, uploaded by Netflix Data, 14 Nov 2018. <https://youtu.be/Qvhqe4yUKpw>
-* [Netflix & Amazon Case Study](https://aws.amazon.com/solutions/case-studies/netflix-kinesis-streams/)
-* [Announcing Suro: Backbone of Netflix's Data Pipeline](http://techblog.netflix.com/2013/12/announcing-suro-backbone-of-netflixs.html)
-* [How Netflix uses Druid for Real-time Insights to Ensure a High-Quality Experience](https://netflixtechblog.com/how-netflix-uses-druid-for-real-time-insights-to-ensure-a-high-quality-experience-19e1e8568d06)
-
-## Netsil
-
-Netsil is an observability and analytics company for modern cloud applications. The Netsil Application Operations Center (AOC) uses Druid for real-time queries on sharded data along with support for dynamic and multi-valued attributes. The AOC processes live service interactions in large-scale production applications and also stores massive amounts of historical metrics data. Druid was able to support these and several other AOC requirements allowing the AOC to be scalable and fault-tolerant.
-
-You can learn more about the AOC at <http://netsil.com/download/>.
-
-* [A Comparison of Time Series Databases and Netsil’s Use of Druid](https://blog.netsil.com/a-comparison-of-time-series-databases-and-netsils-use-of-druid-db805d471206)
-
-## Nielsen (Nielsen Marketing Cloud)
-
-Nielsen Marketing Cloud uses Druid as its core real-time analytics tool to help its clients monitor, test, and improve their audience targeting capabilities. With Druid, Nielsen provides its clients with in-depth consumer insights leveraging world-class Nielsen audience data.
-
-* [How Nielsen Marketing Cloud Uses Druid for Audience and Marketing Performance Analysis](https://imply.io/post/nielsen-marketing-audience-analysis)
-* [Counting Unique Users in Real-Time: Here’s a Challenge for You!](https://youtu.be/MLUhzuQ0DHY) (Yakir Buskilla, Itai Yaffe) *YouTube*, uploaded by DataWorks Summit, 1 Apr 2019.
-* Buskilla, Yakir; Yaffe, Itai. "Counting Unique Users in Real-Time: Here’s a Challenge for You!" *SlideShare*, uploaded by DataWorks Summit, 1 Apr 2019. <https://www.slideshare.net/Hadoop_Summit/counting-unique-users-in-realtime-heres-a-challenge-for-you-139142580>
-* [Data Retention and Deletion in Apache Druid](https://medium.com/nmc-techblog/data-retention-and-deletion-in-apache-druid-74ffd12398a8)
-
-## Nodex
-
-[Nodex](https://www.nodex.co.uk/services/recruitment-crm-software) uses Druid in its real-time analytics pipeline to deliver insights and analytics across a wide range of recruitment CRM software, job boards, and recruitment career portals. We ingest large amounts of data and needed something capable of real-time metrics to offer KPIs to our clients, allowing them to make better business decisions based on actual usage data.
-
-## Noon
-
-At Noon – The Social Learning Platform, we process close to 100M audio and sketch samples every day from more than 80K students to help measure the voice and sketch quality of our online classrooms. We built a real-time analytics platform on Apache Druid and Apache Flink to provide real-time feedback on classroom quality and engagement metrics.
-
-## NTT Com
-
-* [Kappa architecture at NTT Com: Building a streaming analytics stack with Druid and Kafka](https://imply.io/post/kappa-architecture-at-ntt)
-
-## Ona
-
-Ona <https://ona.io> is a software engineering and design firm based in Nairobi, Kenya and Washington, DC. Our mission is to improve the effectiveness of humanitarian and development aid by empowering organizations worldwide with collective and actionable intelligence. We use Druid to power dashboards and disseminate global health and service delivery data pulled from diverse sources.
-
-## OneAPM
-
-OneAPM <http://oneapm.com> is an IT service company focusing on Application Performance Management (APM). At OneAPM, Druid is used to power clients' interactive queries on performance data collected from their applications in real time.
-
-## Oppo
-
-Oppo is one of the world's largest mobile phone manufacturers. Druid is used for real-time data analysis.
-
-## Optimizely
-
-Optimizely uses Druid to power the results dashboard for Optimizely Personalization. Druid enables Optimizely to provide our customers with in-depth, customizable metrics in real time, allowing them to monitor, test and improve their Personalization campaigns with greater ease and flexibility than before.
-
-* [Slicing and dicing data with druid](https://medium.com/engineers-optimizely/slicing-and-dicing-data-with-druid-f61cca8a63d2)
-* [The anatomy of a Druid segment file](https://medium.com/engineers-optimizely/the-anatomy-of-a-druid-segment-file-bed89a93af1e)
-
-## Outbrain
-
-* [Understanding Spark Streaming with Kafka and Druid](https://medium.com/outbrain-engineering/understanding-spark-streaming-with-kafka-and-druid-25b69e28dcb7)
-
-## Paypal
-
-The Druid production deployment at PayPal processes a very large volume of data
-and is used for internal exploratory analytics by business analytic teams. Here
-is what they have to say:
-
-> Around early February 2014, the PayPal Tracking Platform team, led by Suresh Kumar, stumbled upon an article talking about a new
-> up-and-coming kid in the real-time analytics world. At first glance it seemed just like any other
-> cool-looking new technology. But after reading a little deeper into the papers (they had
-> referenced) and a few blogs, it was clear this was different. The fundamental approach to querying
-> the data itself looked very different and refreshing.
->
-> Coincidentally, at the same time, the team was struggling to create a very high-volume real-time data
-> query system. We had already explored Drill, Hive, Cassandra, TSDB, Shark, etc.
-> Over at least the previous year, none of these technologies had fulfilled our low-latency needs for very high
-> volumes of data.
->
-> So, as an option, we started a Druid prototype, and within a couple of weeks it was looking like
-> a very promising alternative. Very soon, with great help from the core Druid development team,
-> our prototype was doing great.
->
-> We then loaded the prototype with 7-10 billion records and looked at the query response
-> times. They were quite amazing.
->
-> Today our Druid implementation at PayPal processes a very large volume of data and is
-> used for our internal exploratory analytics by business analytic teams.
->
-> The thing we liked the most was the amazing support provided by the core Druid team. I have never
-> seen an open source community provide such a high level of responsiveness for ANY
-> issue related to Druid setup and tuning.
-
-* [PayPal merchant ecosystem using Apache Spark, Hive, Druid, and HBase](https://www.youtube.com/watch?v=HJvuU0CQS44)
-
-## PayU
-
-[PayU](https://payu.in) is a payment service provider. We have several services that publish our transactional data into Kafka. We transform the data using Kafka Streams and write it, flattened and enriched, to a different topic. The Kafka indexing service then reads the data from that topic and writes it to Druid. We provide dashboards to our merchants that refresh in real time.
-
-## Pinterest
-
-* [Powering Ad Analytics with Apache Druid](https://www.youtube.com/watch?v=LBXtK0DSQIw)
-
-## Play Games24x7
-
-Play Games24x7 offers multiple casual as well as real-money cash games to its users. To maintain a healthy environment for its players, it constantly looks out for any elements that could lead to a win by unethical means in the game. It uses Druid to identify users who follow such patterns and mark them as fraudulent.
-
-* [Real Time, High Concurrency Use Case in Fraud Detection](https://www.youtube.com/watch?v=5RbJL8vHfi0)
-
-## Plumbr
-
-* [Introduction to Druid, fast distributed data store](https://youtu.be/rx6c7D9QmyU)
-* [Storing and accessing trillions of events](https://plumbr.eu/blog/programming/storing-and-querying-trillions-of-events)
-
-## Pollfish
-
-The [Pollfish](https://www.pollfish.com/) real-time analytics platform enables researchers to analyze survey data from over half-a-billion consumers across the world, informing advertising, branding, market research, content, and product strategies. Druid is used alongside Apache Kafka, Spark, Flink, Akka Streams, Finatra / Finagle microservices, Cassandra, PostgreSQL, Hive, and Scruid – an open source Scala library for Apache Druid.
-
-## Poshmark
-
-Poshmark uses Druid for reporting to monitor key business metrics and data exploration in real-time.
-
-## Preset
-
-Apache Superset was built by Maxime Beauchemin when he was at Airbnb, to visualize data from Apache Druid. Druid enables highly interactive and responsive Superset dashboards, massively shortening the time from question to answer. 
-
-* [Building Lightning Fast Dashboards Using Apache Druid & Apache Superset](https://www.youtube.com/watch?v=O0ESEZaZR88)
-
-## PubNative
-
-PubNative uses Druid for its real-time reports and analysis of millions of daily ad views, clicks, conversions and other events.
-
-## Quantiply
-
-Quantiply uses Druid for the feature learning and governance layer of its Compliance and Regulatory (CoRe) platform.
-
-## Raygun
-
-Raygun is a full-stack software intelligence platform that monitors your applications for errors, crashes, and performance issues. They use Druid to run complex queries over large data sets. Having seen similar stories from others, including those who invested in technologies such as the Hadoop ecosystem or key-value stores, Raygun researched more purpose-built analytics databases. Druid is the one they settled on, as it offers several properties you want in an analytics database.
-
-* [Using the Druid GroupBy V2 engine for real-time analytics](https://raygun.com/blog/2016/12/druid-groupby-v2-engine/)
-
-## Razorpay
-
-We have successfully deployed Druid at [Razorpay](https://razorpay.com/) for our use cases and see continued growth in its footprint. We were able to achieve p90 and p95 values of less than 5s and 10s respectively, and our dashboard performance has improved by at least 10x compared to Presto.
-
-* [How Razorpay uses Druid for seamless analytics and product insights?](https://medium.com/@birendra.sahu_77409/how-razorpay-uses-druid-for-seamless-analytics-and-product-insights-364c01b87f1e)
-
-## Redborder
-
-redBorder is an open source, scale-out cybersecurity analytics platform based on Druid. We hope its full-blown web interface, dashboard and report systems, and ready-to-use real-time pipeline encourage other Druid users to build a strong community around it. To see more, please visit redborder.org.
-
-## Reddit
-
-Druid is a critical component in our advertising infrastructure, where it serves as the backend for our external reporting dashboards.
-
-* [Scaling Reporting at Reddit](https://redditblog.com/2021/02/26/scaling-reporting-at-reddit/)
-
-## Retargetly
-
-Retargetly is a Data Management Platform that enables publishers and advertisers to manage their first-party user data, mix it with second- and third-party data from other providers, and activate it in advertising campaigns (direct, programmatic, etc.). Druid enables us to show real-time audience insights. It also provides a lot of flexibility for ad-hoc queries with low latency. We provide default graphs and metrics to our clients, but they can also make their own interactive queries in real time.
-
-## Rill Data
-[Rill Data](https://rilldata.com) uses Druid to power its truly elastic, fully managed cloud service. 
-Rill uses Druid to deliver operational intelligence to business stakeholders with zero DevOps overhead.
-Rill's team operates the first and longest continuously running Druid service.
-
-## Roambee
-
-We introduced Druid to our technology stack while developing our new supply chain and logistics visibility and intelligence solution. Combining Druid with other data storage, streaming, and processing platforms for IoT sensor data and third-party data helped us design an effective solution for our customers worldwide.
-
-## Rovio
-
-We use Druid as the low-latency backend for pivot & time-series dashboards. We typically feed data to Druid in daily batches. Data is used at Rovio to continually improve our games and provide incredible experiences for the millions of players who play our games every day.
-
-* [Unlocking interactive dashboards at Rovio with Druid and Spark](https://medium.com/@Rovio_Tech/unlocking-interactive-dashboards-at-rovio-with-druid-and-spark-40f8fe6a0b05)
-
-## Rubicon Project
-
-Rubicon Project is the world’s largest independent sell-side advertising platform that helps premium websites and mobile apps sell ads easily and safely. Rubicon Project’s flagship reporting platform, Performance Analytics, is built on Apache Druid and is used by thousands of companies across the globe. Performance Analytics processed more than two trillion events per day during Black Friday. Druid delivers analytics from this data with average response times under 600 ms, helping publishers maximize the yield of their digital ad inventory across the open internet and building on our commitment to trust and transparency in the programmatic marketplace.
-
-* [Modernizing Rubicon Project’s Analytics Stack for Programmatic Advertising](https://imply.io/post/modernizing-rubicon-projects-analytics-stack)
-
-## Sage + Archer
-
-We are using Druid as our single source of truth for both realtime statistics and data analysis. Our clients can see and filter detailed metrics in our [self-service DSP](https://www.sage-archer.com). The DSP is used for mobile and digital-out-of-home advertising campaigns. Druid is powering both the front-end and the optimization algorithms within the system.
-
-* [Community Spotlight: smart advertising from Sage+Archer + Apache Druid](https://imply.io/post/community-spotlight-smart-advertising-from-sage-archer-apache-druid)
-
-## Salesforce
-
-Salesforce's Edge Intelligence team uses Apache Druid as a real-time analytical database to store application performance metrics extracted from log lines. Our customers within Salesforce, including engineers, product owners, customer service representatives, etc., use our service to define the way they want to ingest and query their data and obtain insights such as performance analysis, trend analysis, release comparison, issue triage, and troubleshooting. We chose Druid because it gives us the flexibility to define pre-aggregations, the ability to easily manage ingestion tasks, the ability to query data effectively, and the means to create a highly scalable architecture.
-
-* [Delivering High-Quality Insights Interactively Using Apache Druid at Salesforce](https://engineering.salesforce.com/delivering-high-quality-insights-interactively-using-apache-druid-at-salesforce-7a038f0fe3d1)
-
-## Scalac
-
-Scalac migrated to Apache Druid and Apache Kafka (from Apache Cassandra and Apache Spark) for their Blockchain Observability application. The analytical dashboard they created is used to visualize data from different blockchains. It is built on React with Hooks, Redux, and Saga on the front end, and Node.js, Apache Druid, and Apache Kafka on the back end.
-
-* [How We Reduced Costs and Simplified Solution by Using Apache Druid](https://scalac.io/blog/how-we-reduced-costs-and-simplified-solution-by-using-apache-druid/) (Adrian Juszczak), *Scalac website*, 4 Feb 2020.
-
-## Shopee
-
-At the [Shopee](http://shopee.com/) Data Engineer Group, we use Druid for real-time analytics of users' (including sellers' and buyers') interactions with its popular e-commerce site.
-
-* [Apache Druid in Shopee](https://mp.weixin.qq.com/s/WFiPDPvxMUi_QVfBDqgVcQ) (in Chinese). Published on 2022-01-13 by [yuanlihan](https://github.com/yuanlihan), a contributor to Apache Druid.
-
-## Shopify
-
-Shopify uses Druid, along with Apache Flink and Airflow, to create an interactive analytics platform for realtime sales data with enormous volume and variety.
-
-* [The Lessons Learnt From Building Star Schema Type-1 Dimensions With Druid Joins in Shopify](https://www.youtube.com/watch?v=_xluVN75AaE)
-
-## Sift Science
-
-Sift Science provides an online trust platform that online businesses use to prevent fraud and abuse. We use Druid as a tool to gain real-time insights about our data and machine learning models.
-
-## SigNoz
-[SigNoz](https://signoz.io) is an open source observability platform. SigNoz uses distributed tracing to gain visibility into your systems and powers its data pipeline with Kafka (to handle a high ingestion rate and backpressure) and Apache Druid (a high-performance real-time analytics database), both proven in industry to handle scale.
-
-Druid powers aggregations after slicing and dicing of high-dimensional trace data.
-
-## Sina Weibo
-
-The Weibo Advertising Platform deploys Druid as a real-time data tool for online advertising analytics and business intelligence (BI). Druid processes TBs of real-time data per day with latency within one minute.
-
-The Weibo UVE (Unified Value Evaluation) team of the Advertising Platform uses Druid as the real-time analysis tool of its data insight system, which processes billions of events every day.
-
-## Singular
-
-Druid is the primary data store used for ad-hoc analytics in Singular, enabling our customers to generate insights based on real-time and historical data.
-
-* [How Singular Delivers Blazing Fast App Analytics](https://www.singular.net/singular-app-roi-analytics/)
-* [Achieving fast upserts for Apache Druid](https://singular-engineering-blog.medium.com/achieving-fast-upserts-for-apache-druid-db6c33fba466)
-
-## SK Telecom
-
-SK Telecom is a leading telecommunications and platform solution company. Druid enables us to interactively discover business insights from telecommunications and manufacturing big data.
-
-* [Apache Druid on Kubernetes: Elastic scalable cloud-based system for real-time OLAP on high velocity data](https://feathercast.apache.org/2018/09/27/apache-druid-on-kubernetes-elastic-scalable-cloud-based-system-for-real-time-olap-on-high-velocity-data-jinchul-kim-eui-heo/)
-
-## Skimlinks
-
-[Skimlinks](https://www.skimlinks.com) is the leading commerce content monetization platform. Its technology automatically monetizes product links in commerce-related content.
-
-## Skyport Systems
-
-Skyport Systems provides zero-effort, low-touch secure servers that help organizations to rapidly deploy and compartmentalize security-critical workloads. We use Druid as part of our analytics backend to provide real-time insight to our customers about their workload behavior.
-
-## Slack
-
-* [Measuring Slack API performance using Druid](https://speakerdeck.com/vananth22/measuring-api-performance-using-druid)
-
-## Smart Bid
-
-Smart Bid is a unique marketing solution platform empowering advertising teams.
-A one-stop shop, it takes advantage of proprietary technology to analyze and reach the right audience with the right creative at the right time. We use Druid to gain real-time insights into real-time bidding using our machine learning algorithms.
-
-## Smyte
-
-Smyte provides an API and UI for detecting and blocking bad actors on the internet. Druid powers the analytics portion of our user interface providing insight into what users are doing on the website, and specifically which features are unique between different sets of users.
-
-## Snap, Inc.
-
-* [Data Analytics and Processing at Snap](https://www.slideshare.net/CharlesAllen9/data-analytics-and-processing-at-snap-druid-meetup-la-september-2018)
-
-## Societe Generale
-
-Societe Generale, one of Europe's leading financial services groups and a major player in the economy for over 150 years, supports 29 million clients every day with 138,000 staff in 62 countries.
-
-Within the Societe Generale IT department, Apache Druid is used as a time series database to store performance metrics generated in real time by thousands of servers, databases, and middleware components. This data is stored in multiple Druid clusters across multiple regions (840+ vCPUs, 7,000+ GB of RAM, 300+ billion events) and is used for many purposes, such as dashboarding and predictive maintenance use cases.
-
-## Splunk
-
-We went through the journey of deploying Apache Druid clusters on Kubernetes and created a [druid-operator](https://github.com/druid-io/druid-operator). We use this operator to deploy Druid clusters at Splunk.
-
-* [Druid on Kubernetes with Druid-operator](https://imply.io/virtual-druid-summit/druid-on-kubernetes-with-druid-operator)
-
-## SpotX
-
-* [BelFOSS 2018 SpotX: Hadoop, Spark & Druid](https://www.youtube.com/watch?v=OlmbuyCH4lQ)
-
-## Streamlyzer
-
-Streamlyzer uses Druid as a next-generation online video analytics platform for online video companies and publishers. Streamlyzer gathers information from our customers' real end users and provides visualized real-time analytics in a dashboard showing how video content is delivered and how end users experience the streaming service.
-
-## Sugo
-
-Sugo is a company that focuses on real-time multi-dimensional analytics and mining of big data. We built our platform on Druid and developed our own extensions to make it more powerful.
-
-## SuperAwesome
-
-[SuperAwesome](https://www.superawesome.com)’s mission is to make the internet safer for kids. At the core of SuperAwesome’s analytics is Apache Druid, which helps us relay key insights and data points back to our stakeholders and customers, as well as use this data to power our products themselves. This all happens in a kid-safe way and enables us to deliver the best level of service to billions of children and teens every month.
-
-* [How we use Apache Druid’s real-time analytics to power kidtech at SuperAwesome](https://medium.com/superawesome-engineering/how-we-use-apache-druids-real-time-analytics-to-power-kidtech-at-superawesome-8da6a0fb28b1)
-* [Virtual Apache Druid Meetup featuring SuperAwesome](https://imply.io/videos/virtual-apache-druid-meetup-featuring-superswesome)
-
-## Sweet Couch
-
-Sweet Couch was a place to discover unique products that are buyable online. Druid powered Sweet Couch Harvest, an open analytics platform for tracking the performance of online shops based in India. All end-user events were tracked and analysed using Druid for business insights.
-
-## Swisscom
-
-We are providing machine-learning-driven anomaly detection services to our operations teams, using Druid to cover all our storage and query needs. These anomalies currently help engineers during maintenance windows to correct possible outages before they even happen. Our dataset includes over 3,000 core-network devices and more than double that amount in wireless transmission equipment. We ingest and process over 25 million records every minute, but we’re just getting started with onboarding our platforms and services onto Druid.
-
-## TalkingData
-
-[TalkingData](http://www.talkingdata.com/) is China’s largest independent Big Data service platform. TalkingData uses Druid with Atomcube, an extension for enhancement, to power analysis of online application and advertising data.
-
-## Target
-
-Apache Druid’s speed and flexibility allow us to provide interactive analytics to front-line, edge-of-business consumers to address hundreds of unique use-cases across several business units.
-
-* [Enterprise Scale Analytics Platform Powered by Druid at Target](https://imply.io/virtual-druid-summit/enterprise-scale-analytics-platform-powered-by-druid-at-target) 
-
-## Tencent
-
-Tencent's SCRM product uses Druid for customer behavior analysis.
-
-## Time Warner Cable
-
-TWC uses Druid for exploratory analytics.
-
-## TrafficGuard
-
-TrafficGuard detects, mitigates and reports on ad fraud before it hits digital advertising budgets. Three formidable layers of protection block both general invalid traffic (GIVT) and sophisticated invalid traffic (SIVT) to ensure that digital advertising results in legitimate advertising engagement. With visibility of traffic across thousands of campaigns, TrafficGuard’s machine learning can identify emerging patterns, trends and indicators of fraud quickly and reliably.
-
-Druid is a key component of our Big Data Operation Insight Platform. We use imply.io.
-Learn more about TrafficGuard’s comprehensive fraud mitigation at <https://www.trafficguard.ai>.
-
-* [Using Druid to fight ad fraud](https://imply.io/post/using-druid-to-fight-ad-fraud)
-
-## Travelmykashmir
-
-[Travelmykashmir](https://www.travelmykashmir.com/) is a reliable cab and [bike rental service in Srinagar](https://www.bikerentalsrinagar.com). We offer comfortable, well-maintained cabs and bikes for your local or long-distance travel. If you are looking for a safe, secure, and comfortable journey in the city, then book a cab with us.
-
-## Trendyol.com - Alibaba Group Company
-
-Trendyol, the largest e-commerce company in Turkey, uses Druid for real-time analytics. It is mostly used to provide insights to its suppliers.
-
-## TripleLift
-
-TripleLift uses Druid to provide insights into performance aspects of its native programmatic exchange for sales/business development opportunities, and to provide reporting used by advertisers and publishers.
-
-## TripStack
-
-[TripStack](https://www.tripstack.com/) uses Druid to help gain business insight into the tens of billions of flight combinations that TripStack processes every day.
-
-## Triton Digital
-
-[Triton Digital](https://www.tritondigital.com/) is the global technology and services leader to the audio streaming and podcast industry. Operating in more than 80 countries, Triton Digital provides innovative technology that enables broadcasters, podcasters, and online music services to build their audience, maximize their revenue, and streamline their operations. 
-
-Triton Digital uses Druid to [power their programmatic analytics](https://www.rilldata.com/powering-programmatic-analytics-at-triton-digital) (*Rill Data*, Oct 2021).
-
-## TrueCar
-
-[TrueCar](https://www.truecar.com) is a leading automotive digital marketplace that enables car buyers to connect to our nationwide network of Certified Dealers. TrueCar uses Druid and Imply Cloud to make its dashboards real-time, detect anomalies, and do so while minimizing engineering and operational overhead. Druid enables TrueCar to unlock insights from digital interaction data, further empowers its data scientists and product teams to improve services with increased agility, and delivers a higher-quality experience.
-
-## Trustpilot
-
-Trustpilot used Apache Druid to create an interactive application based on D3 that their entire company uses to put real-time numbers behind every business decision. They replaced siloed applications built with Looker, Amplitude, and others with Druid to better respond to requests and provide more functionality to their end users.
-
-## Twitch
-
-In order to continue empowering decision making as Twitch scaled, we turned to Druid and Imply to provide self-service analytics to both our technical and non-technical staff, allowing them to drill into high-level metrics in lieu of reading generated reports.
-
-* [Self Service Analytics at Twitch](https://imply.io/virtual-druid-summit/self-service-analytics-at-twitch)
-
-## Twitter
-
-* [Interactive Analytics at MoPub: Querying Terabytes of Data in Seconds](https://blog.twitter.com/engineering/en_us/topics/infrastructure/2019/interactive-analytics-at-mopub.html)
-
-* [Powering real-time data analytics with Druid at Twitter](https://blog.twitter.com/engineering/en_us/topics/infrastructure/2022/powering-real-time-data-analytics-with-druid-at-twitter)
-
-## Unity
-
-Unity's monetization business generates billions of in-game events in a multi-sided marketplace, which creates complexity, slowness, and overhead for reporting. To work around these issues, Unity deploys a Kafka, Spark, and Druid-based ingestion and aggregation pipeline.
-
-* [Getting Started with Druid (Imply)](https://medium.com/@linda0511ny/getting-started-with-druid-imply-17d194762574)
-
-## Verizon
-
-Verizon’s network analytics platform leverages Druid as a real-time analytics engine to enable interactive analytics and performance metrics, support use cases like traffic capacity management using Netflow and network statistics, and provide a feature store for machine learning and service key performance indicators to monitor and quantify the health of Verizon’s global networks. We chose Druid because it enables us to achieve our mission with sub-second latency on large datasets.
-
-## VideoAmp
-
-At VideoAmp, Druid is a key component of our Big Data stack. It powers our real-time video advertising analytics at low granularity and huge scale. Druid has helped us minimize the time between event, insight, and action.
-
-## Vigiglobe
-
-Vigiglobe turns the noise of Social Media into real-time Smart Content. To this end, Druid enables us to maintain high request throughput coupled with huge data absorption capacity.
-
-## ViralGains
-
-ViralGains uses Druid for real-time analysis of millions of viral video views, shares, and conversations.
-
-## Virool
-
-Druid powers Virool’s real-time analytics of over 1 billion raw events per day. We query this data to gain a deep understanding of all of our inventory sources; from exchanges to direct partners, everything is available with lightning-fast query times. Druid puts the power and flexibility of big data in each of our Viroolians’ hands.
-
-## Vserv
-
-[Vserv](https://www.vserv.com) is the leading authentic data platform for mobile marketing in India. We have successfully implemented Druid for analysis of digital marketing campaign data. 
-
-## WalkMe
-
-* [How WalkMe uses Druid and Imply Cloud to Analyze Clickstreams and User Behavior](https://imply.io/post/how-walkme-uses-druid-and-imply-cloud)
-
-## Walmart
-
-* [Event Stream Analytics at Walmart with Druid](https://medium.com/walmartlabs/event-stream-analytics-at-walmart-with-druid-dcf1a37ceda7)
-
-## WeQ
-
-[WeQ](http://weq.com) is a leading mobile performance and branding ad-tech company headquartered in Berlin. Druid is one of the core services that helps WeQ drive user acquisition and engagement to deliver mobile performance and branding campaigns.
-
-We are using Druid for real-time analytics that deliver business insights and power our anti-fraud tools. We are ingesting several billion events per day using the Kafka real-time connector. Druid is a rock-solid and flexible foundation for our data system, delivering blazing-fast analytics.
-
-## WhyLabs
-
-[WhyLabs](https://whylabs.ai/) is on a mission to build the interface between humans and AI applications. Our Data and AI Observability platform enables teams to monitor all of their ML models, regardless of scale, with zero configuration necessary. Druid allows WhyLabs to combine stream and batch processing to interactively explore and visualize statistically profiled datasets. 
-
-* [Migrating Our ML Insights Platform To Druid](https://www.youtube.com/watch?v=XV-rFsv9FTc) (Drew Dahlke), *Imply [YouTube]*, 10 Dec 2021.
-
-## Wikimedia Foundation
-
-We're serving pageview data via Druid and Pivot. Our internal customers love it, and we're working on allowing public access to sanitized data, both editing and pageview. We like Druid because it's open source, the folks who work on it have built a good community, and it's about five times faster than Hive for us right now, without any tuning or optimization (and the Hadoop cluster is beefier than the Druid one); we just dumped lots of data into it and Pivot was immediately useful to our analysts. We wrote a [puppet](https://github.com/wikimedia/operations-puppet/tree/production/modules/druid) module that others might find helpful.
-
-* [Analytics at Wikipedia](https://www.oreilly.com/library/view/strata-data-conference/9781491976326/video316954.html)
-* [Analytics at Wikipedia with Druid](https://speakerdeck.com/druidio/analytics-at-wikipedia-with-druid)
-
-## Wipro
-
-Wipro Limited is an Indian multinational corporation that provides information technology, consulting and business process services. Wipro is using Druid to track and monitor the performance of internal applications and to gain insights from data in real-time. 
-
-## Wiremind
-
-Wiremind develops solutions for optimizing sold capacity (passenger transportation, airfreight and event/sport ticketing) that blend UX, software and data science.
-
-We process billions of rows of data to serve our machine learning models and ingest tens of millions of rows by batch indexing each day. We chose Druid because it gives us the ability to query data effectively, the capacity to manage ingestion tasks easily, and the capability to split data across our diverse clients while providing a highly scalable architecture for our growth.
-
-## Xiaomi
-
-Xiaomi uses Druid as an analytics tool to analyze online advertising data.
-
-## Yahoo
-
-Yahoo uses Druid to power various customer-facing audience and advertising analytics products.
-
-* [Druid Ecosystem at Yahoo](https://speakerdeck.com/implydatainc/druid-ecosystem-at-yahoo)
-* [Complementing Hadoop at Yahoo: Interactive Analytics with Druid](http://yahooeng.tumblr.com/post/125287346011/complementing-hadoop-at-yahoo-interactive)
-* [Combining Druid and DataSketches for Real-time, Robust Behavioral Analytics](https://yahooeng.tumblr.com/post/147711922956/combining-druid-and-datasketches-for-real-time)
-
-## YeahMobi
-
-YeahMobi uses Druid to power a dashboard used for ad-tech analytics such as impression and conversion tracking, unique IP statistics, and aggregating metrics such as costs and revenues.
-
-## Yieldr
-
-[Yieldr](https://www.yieldr.com) uses Druid to power real-time web and mobile analytics for airlines across the globe.
-
-## Youku Tudou
-
-Youku Tudou employs Druid for real-time advertising analysis of huge volumes of data.
-
-## Youzan
-
-China Youzan is a SaaS company principally engaged in retail science and technology. We use Druid for business intelligence (BI) analytics and application performance management (APM) metrics.
-
-## Zapr
-
-Zapr leverages Druid to analyze TV viewership data and to power analytical dashboards that report real-time user behavior.
-
-* [Apache Druid: Sub-second Slice and Dice your Data!](https://www.meetup.com/Delhi-Apache-Druid-Meetup-Group/events/267001583/)
-
-## Zeotap
-
-* [Apache Druid helps Zeotap Master Multi-Channel Attribution at Scale](https://imply.io/post/apache-druid-helps-zeotap-master-multi-channel-attribution)
-
-## Zhihu
-
-[Zhihu](https://www.zhihu.com/) is a Chinese question-and-answer website. At Zhihu, Druid is used to power clients' interactive queries, data reports, A/B testing, and performance monitoring. Almost 1 TB of data per day is ingested into the Druid cluster, and we rely heavily on the thetaSketch aggregator for computing cardinality and retention; we look forward to further improvements in DataSketches.
-
-## Zilingo
-
-Zilingo's data collection infrastructure, processing pipeline, and analytics stack are based on Druid. Data is collected from various IoT devices/sensors, mobile and tablet devices, and third-party sources, and is streamed in near real time. This allows our customers to get a view of the supply chain with thousands of data points via dashboards, reports, and the ability to slice and dice data.
-
-## Zscaler
-* [Druid @ Zscaler - A Retrospective](https://imply.io/post/druid-at-zscaler-security-log-analytics)
-
-## Zuoyebang
-
-[Zuoyebang](https://www.zybang.com/) is the most used K12 education platform; 7 out of every 10 K12 users use Zuoyebang. At the Zuoyebang Data Platform Group, we use Druid for advertising scenarios, mainly related to ad display, clicks, billing, and other functions. Druid's performance and timeliness meet our OLAP query needs very well.
-
-[Add Your Company](https://github.com/apache/druid-website-src/blob/master/druid-powered.md)
diff --git a/faq.md b/faq.md
deleted file mode 100644
index 7f17eca..0000000
--- a/faq.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Frequently Asked Questions
-subtitle: Don't see your question here? <a href='/community/'>Ask us</a>
-layout: simple_page
-sectionid: faq
-canonical: 'https://druid.apache.org/faq'
----
-
-### Is Druid a data warehouse? When should I use Druid over Redshift/BigQuery/Snowflake?
-
-Apache Druid is a new type of database to power real-time analytic workloads for
-event-driven data, and isn’t a traditional data warehouse.  Although Druid
-incorporates architecture ideas from data warehouses such as column-oriented
-storage, Druid also incorporates designs from search systems and timeseries
-databases. Druid's architecture is designed to handle many use cases that
-traditional data warehouses cannot.
-
-Druid offers the following advantages over traditional data warehouses:
-
-* Much lower latency for OLAP-style queries
-* Much lower latency for data ingest (both streaming and batch)
-* Out-of-the-box integration with Apache Kafka, AWS Kinesis, HDFS, AWS S3, and more
-* Time-based partitioning, which enables performant time-based queries
-* Fast search and filter, for fast slice and dice
-* Minimal schema design and native support for semi-structured and nested data
-
-Consider using Druid to augment your data warehouse if your use case requires:
-
-* Powering a user-facing application
-* Low-latency query response with high concurrency
-* Instant data visibility
-* Fast ad-hoc slice and dice
-* Streaming data
-
-To summarize, Druid shines when the use case involves real-time analytics and
-where the end user (technical or not) wants to apply numerous queries in rapid
-succession to explore or better understand data trends.
-
-### Is Druid a log aggregation/log search system? When should I use Druid over Elastic/Splunk?
-
-Druid uses inverted indexes (in particular, compressed bitmaps) for fast
-searching and filtering, but it is not generally considered a search system.
-While Druid contains many features commonly found in search systems, such as
-the ability to stream in structured and semi-structured data and the ability to
-search and filter the data, Druid isn’t commonly used to ingest text logs and
-run full text search queries over the text logs.  However, Druid is often used
-to ingest and analyze semi-structured data such as JSON.
-
-Druid at its core is an analytics engine and as such, it can support numerical
-aggregations, groupBys (including multi-dimensional groupBys), and other
-analytic workloads faster and more efficiently than search systems.
-
-### Is Druid a timeseries database? When should I use Druid over InfluxDB/OpenTSDB/Prometheus?
-
-Druid does share some characteristics with timeseries databases, but also
-combines ideas from analytic databases and search systems.  Like in timeseries
-databases, Druid is optimized for data where a timestamp is present.  Druid
-partitions data by time, and queries that include a time filter will be
-significantly faster than those that do not.  Aggregating metrics and filtering
-on dimensions (which are roughly equivalent to TSDBs' tags) are also very fast when a
-time filter is present.  However, because Druid incorporates many architectural designs
-from analytics databases and search systems, it can significantly
-outperform TSDBs when grouping, searching, and filtering on tags that are
-not time, or when computing complex metrics such as histograms and quantiles.
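-
-As a rough illustration of that last point, here is a minimal sketch of a Druid SQL query that combines a time filter with a group-by on a non-time "tag" dimension and an approximate quantile. The datasource and column names are hypothetical, and the Broker address assumes a local quickstart:
-
-```python
-# Sketch only: "metrics", "service", and "latency_ms" are hypothetical names.
-import requests
-
-query = """
-SELECT
-  TIME_FLOOR(__time, 'PT1H') AS "hour",
-  service,                                              -- a dimension, roughly a TSDB "tag"
-  APPROX_QUANTILE_DS(latency_ms, 0.95) AS p95_latency   -- requires the druid-datasketches extension
-FROM metrics
-WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
-GROUP BY 1, 2
-ORDER BY 1
-"""
-
-# The Broker exposes Druid SQL over HTTP at /druid/v2/sql.
-resp = requests.post("http://localhost:8082/druid/v2/sql",
-                     json={"query": query}, timeout=60)
-resp.raise_for_status()
-for row in resp.json():
-    print(row)
-```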
-
-### Does Druid separate storage and compute?
-
-Druid creates an indexed copy of raw data that is highly optimized for
-analytic queries. Druid runs queries over this indexed data, called a ['segment'](/docs/latest/design/segments.html)
-in Druid, and does not pull raw data from an external storage system as needed
-by queries. 
-
-### How is Druid deployed?
-
-Druid can be deployed on commodity hardware in any *NIX based environment.
-A Druid cluster consists of several different services, each designed to do a small set of things very well (ingestion, querying, coordination, etc).
-Many of these services can be co-located and deployed together on the same hardware as described [here](/docs/latest/tutorials/quickstart).
-
-Druid was designed for the cloud, and runs well in AWS, GCP, Azure, and other cloud environments.
-
-### Where does Druid fit in my big data stack?
-
-Druid typically connects to a source of raw data, such as a message bus like Apache Kafka or a filesystem like HDFS.
-Druid ingests an optimized, column-oriented, indexed copy of your data and serves analytics workloads on top of it.
-
-A common streaming data oriented setup involving Druid looks like this:
-Raw data → Kafka → Stream processor (optional, typically for ETL) → Kafka (optional) → Druid → Application/user
-
-A common batch/static file oriented setup involving Druid looks like this:
-Raw data → Kafka (optional) → HDFS → ETL process (optional) → Druid → Application/user
-
-The same Druid cluster can serve both the streaming and batch path.
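-
-To make the streaming path above concrete, the sketch below submits a Kafka supervisor spec to the Overlord API. This is a minimal illustration rather than a complete spec: the topic, datasource, column names, and host/port are hypothetical.
-
-```python
-# Sketch only: topic, datasource, columns, and the Overlord address are hypothetical.
-import requests
-
-supervisor_spec = {
-    "type": "kafka",
-    "spec": {
-        "ioConfig": {
-            "type": "kafka",
-            "consumerProperties": {"bootstrap.servers": "localhost:9092"},
-            "topic": "clickstream",
-            "inputFormat": {"type": "json"},
-            "useEarliestOffset": True,
-        },
-        "dataSchema": {
-            "dataSource": "clickstream",
-            "timestampSpec": {"column": "timestamp", "format": "iso"},
-            "dimensionsSpec": {"dimensions": ["user_id", "page", "country"]},
-            "granularitySpec": {"segmentGranularity": "hour", "queryGranularity": "none"},
-        },
-        "tuningConfig": {"type": "kafka"},
-    },
-}
-
-# Supervisors are submitted to the Overlord at /druid/indexer/v1/supervisor.
-resp = requests.post("http://localhost:8090/druid/indexer/v1/supervisor",
-                     json=supervisor_spec, timeout=60)
-resp.raise_for_status()
-print(resp.json())  # e.g. {"id": "clickstream"}
-```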
-
-
-### Is Druid in-memory?
-
-The earliest iterations of Druid didn’t allow for data to be paged in from
-and out to disk, so it was often called an “in-memory” database. As Druid
-evolved, this limitation was removed. To provide a balance between hardware
-cost and query performance, Druid leverages memory-mapping to page data between
-disk and memory and extend the amount of data a single node can load up to the
-size of its disks.
-
-Individual Historicals can be configured with the maximum amount of data
-they should be given. Coupled with the Coordinator’s ability to assign data to
-different “tiers” based on different query requirements, Druid is essentially a
-system that can be configured across a wide spectrum of performance
-requirements. All data can be in memory and processed, or data can be heavily
-over-committed compared to the amount of memory available. Druid can also
-support complex configurations, such as configuring the most recent month of
-data in memory, while everything else is over-committed.
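-
-As one sketch of the "recent month in memory" pattern described above, the Coordinator's retention rules can pin recent data to a memory-rich "hot" tier of Historicals while leaving older data on the default tier. The tier name, datasource, and Coordinator address below are hypothetical:
-
-```python
-# Sketch only: "clickstream" and the "hot" tier name are hypothetical.
-import requests
-
-rules = [
-    # Keep the most recent month replicated onto a "hot" tier.
-    {"type": "loadByPeriod", "period": "P1M",
-     "tieredReplicants": {"hot": 2, "_default_tier": 1}},
-    # Keep everything else only on the default tier.
-    {"type": "loadForever", "tieredReplicants": {"_default_tier": 1}},
-]
-
-# The Coordinator manages retention/load rules per datasource.
-resp = requests.post("http://localhost:8081/druid/coordinator/v1/rules/clickstream",
-                     json=rules, timeout=60)
-resp.raise_for_status()
-```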
diff --git a/libraries.md b/libraries.md
deleted file mode 100644
index 9928570..0000000
--- a/libraries.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: Community and Third Party Software
-layout: simple_page
-canonical: 'https://druid.apache.org/libraries'
----
-
-Query Libraries
----------------
-
-Some great folks have written their own libraries to interact with Apache Druid.
-
-#### Python
-
-* [druid-io/pydruid](https://github.com/druid-io/pydruid) - A Python client for Druid (see the usage sketch below)
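-
-For instance, here is a minimal pydruid sketch using its DB API cursor against a local Broker; the datasource and column names are hypothetical:
-
-```python
-# Sketch only: assumes a local Broker and a hypothetical "wikipedia_edits" datasource.
-from pydruid.db import connect
-
-conn = connect(host="localhost", port=8082, path="/druid/v2/sql/", scheme="http")
-curs = conn.cursor()
-curs.execute("""
-    SELECT channel, COUNT(*) AS edits
-    FROM wikipedia_edits
-    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
-    GROUP BY channel
-    ORDER BY edits DESC
-    LIMIT 10
-""")
-for row in curs:
-    print(row)
-```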
-
-#### R
-
-* [druid-io/RDruid](https://github.com/druid-io/RDruid) - An R connector for Druid
-
-#### JavaScript
-
-* [implydata/plywood](https://github.com/implydata/plywood) - A higher level API for Druid. An extension of the work that was started in facet.js.
-* [7eggs/node-druid-query](https://github.com/7eggs/node-druid-query) - A Node.js client for Druid
-
-#### Clojure
-
-* [y42/clj-druid](https://github.com/y42/clj-druid) - A Clojure client for Druid
-
-#### Elixir
-
-* [GameAnalytics/panoramix](https://github.com/GameAnalytics/panoramix) - An Elixir client for Druid
-
-#### Ruby
-
-* [ruby-druid/ruby-druid](https://github.com/ruby-druid/ruby-druid) - A ruby client for Druid
-* [redBorder/druid_config](https://github.com/redBorder/druid_config) - A ruby client to configure and check the status of a Druid Cluster
-* [andremleblanc/druiddb-ruby](https://github.com/andremleblanc/druiddb-ruby) - A Ruby client for Druid using the Kafka Indexing Service
-
-#### SQL
-
-* [Apache Calcite](http://calcite.apache.org/) - SQL parser, planner and query engine whose [Druid adapter](http://calcite.apache.org/docs/druid_adapter.html) can query data residing in Druid, and combine it with data in other locations; has local and remote JDBC drivers [powered by Avatica](http://calcite.apache.org/avatica/)
-* [implydata/plyql](https://github.com/implydata/plyql) - A command line and HTTP interface for issuing SQL queries to Druid
-
-#### PHP
-
-* [pixelfederation/druid-php](https://github.com/pixelfederation/druid-php) - A PHP client for Druid
-* [Neeke/PHP-Druid](https://github.com/Neeke/PHP-Druid) - A Druid driver for PHP with PECL extension
-* [level23/druid-client](https://github.com/level23/druid-client) - Druid-client library with a query-builder. This library also allows you to query metadata, reindex datasources and more. 
-
-#### Scala
-
-* [ing-bank/scruid](https://github.com/ing-bank/scruid) - A Scala client for Druid
-
-#### Java
-
-* [zapr/druidry](https://github.com/zapr-oss/druidry) - A Java Client and query generator for Druid
-
-#### .NET
-
-* [raygun/druid4net](https://github.com/MindscapeHQ/druid4net) - A .NET client for Druid written in C#. Supports the .NET full framework and .NET Core.
-
-#### Rust
-
-* [bugzmanov/druid-io-rs](https://github.com/bugzmanov/druid-io-rs) - Fully asynchronous, future-enabled Apache Druid client library for the Rust programming language.
-
-Other Druid Distributions
--------------------------
-* [eBay/embedded-druid](https://github.com/eBay/embedded-druid) - Leveraging Druid capabilities in a standalone application
-* [Hortonworks Data platform](https://www.cloudera.com/products/hdp.html) - Hortonworks Data Platform includes Druid and Apache Hive to enable real-time data warehousing capabilities.   
-* [Imply Analytics Platform](http://imply.io/download) - The Imply Analytics platform includes Druid bundled with all its dependencies, an exploratory analytics UI, and a SQL layer.
-* [Rill Data](https://www.rilldata.com/) - Rill provides a truly elastic, fully managed cloud service for Apache Druid.
-
-UIs
----
-
-* [airbnb/superset](https://github.com/airbnb/superset) - A web application to slice, dice and visualize data out of Druid. Formerly Caravel and Panoramix
-* [Deep.Explorer](https://www.deep.bi/solutions/apache-druid) - A UI built for slice & dice analytics, adhoc queries and powerful, easy data visualizations
-* [Grafana](https://github.com/societe-generale/druidplugin) - A plugin for [Grafana](http://grafana.org/)
-* [grafana](https://github.com/Quantiply/grafana-plugins/tree/master/features/druid) - A plugin for [Grafana](http://grafana.org/)
-* [Pivot](https://github.com/implydata/pivot) - An exploratory analytics UI for Druid
-* [Metabase](https://github.com/metabase/metabase) - Simple dashboards, charts and query tool for your Druid DB
-* [Metatron](https://github.com/metatron-app/metatron-discovery) - All-in-one analytics with Druid from easy data preparation to fast visualization
-
-Tools
------
-
-* [Insert Segments](/docs/latest/operations/insert-segment-to-db.html) - A tool that can insert segments' metadata into Druid metadata storage.
-* [liquidm/druid-dumbo](https://github.com/liquidm/druid-dumbo) - Scripts to help generate batch configs for the ingestion of data into Druid
-* [housejester/druid-test-harness](https://github.com/housejester/druid-test-harness) - A set of scripts to simplify standing up some servers and seeing how things work
-* [spaghettifunk/druid-prometheus-exporter](https://github.com/spaghettifunk/druid-prometheus-exporter) - An HTTP service for collecting Druid metrics and exposing them as Prometheus metrics
-* [rovio-ingest](https://github.com/rovio/rovio-ingest) - An implementation of the DatasourceV2 interface of Apache Spark™ for writing Spark Datasets to Apache Druid™.
-
-Community Extensions
---------------------
-
-These are extensions from the community, beyond those included in the Druid repository itself.
-
-* [acesinc/druid-cors-filter-extension](https://github.com/acesinc/druid-cors-filter-extension) - An extension to enable CORS headers in http requests.
-
-Add Your Software
------------------
-
-If you've written software that uses Druid and want it included on this page,
-[edit it on GitHub](https://github.com/apache/druid-website-src/blob/master/libraries.md) to create a pull request!
diff --git a/licensing.md b/licensing.md
deleted file mode 100644
index 0ba34a5..0000000
--- a/licensing.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: License
-layout: simple_page
-canonical: 'https://druid.apache.org/licensing'
----
-
-## License
-
-Apache Druid and the Druid documentation are licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
-
-The Druid website is licensed under [CC-BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/)
-
-## Copyright and Trademark
-
-Contributions to Druid are subject to a [contributor license agreement](/community/cla.html)
-
-## Does Druid use or embed any other open source software?
-
-Druid uses
-
-- Hadoop
-- Kafka
-- MySQL
-- ZooKeeper
-
-
-Druid *embeds*
-
-- Curator
-- Jackson
-- Jetty
-- Joda Time
-- Ning Compress
-
-See the [NOTICE file](https://github.com/apache/druid/blob/master/NOTICE)
-for other software included as part of Druid.
diff --git a/technology.md b/technology.md
deleted file mode 100644
index 5fb422e..0000000
--- a/technology.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: Technology
-layout: simple_page
-sectionid: technology
-canonical: 'https://druid.apache.org/technology'
----
-
-Apache Druid is used to power real-time analytics applications that require fast queries at scale and under load on streaming and batch data. Druid features a unique distributed architecture across its ingestion, storage, and query layer to handle the scale needed for large aggregations with the performance needed for applications.
-
-## Architecture
-
-<div class="image-large">
-  <img src="img/diagram-7.png" style="max-width: 800px;">
-</div>
-
-Druid is a services-based architecture that consists of independently scalable services for ingestion, querying, and orchestration, each of which can be fine-tuned to optimize cluster resources for mixed use cases and workloads. For example, more resources can be directed to Druid’s query service while providing fewer resources to ingestion as workloads change. Druid services can fail without impacting the operations of other services.
-
-A Druid deployment is a scalable cluster of commodity hardware with node types that serve specific functions.  In a small configuration, all of these nodes can run together on a single server (or even a laptop). For larger deployments, one or more servers are dedicated to each node type and can scale to thousands of nodes for higher throughput requirements.
-
-<ul style="margin-left: 20px;">
-  <li>Master Nodes govern data availability and ingestion</li>
-  <li>Query Nodes accept queries, manage execution across the system, and return the results</li>
-  <li>Data Nodes execute ingestion workloads and queries as well as store queryable data</li>
-</ul>
-
-In addition, Druid utilizes a deep storage layer - cloud object storage or HDFS - that contains an additional copy of each data segment. It enables background data movement between Druid processes and also provides a highly durable data source to recover from system failures.
-
-For more information, please visit [our docs page](/docs/latest/design/index.html).
-
-## Ingestion Layer
-
-In Druid, ingestion, sometimes called indexing, is loading data into tables. Druid reads data from source systems, whether files or streams, and stores the data in segments.
-
-When data is ingested into Druid, it is automatically indexed, partitioned, and, optionally, partially pre-aggregated (known as <a href="https://druid.apache.org/docs/latest/tutorials/tutorial-rollup.html">"rollup"</a>). Compressed bitmap indexes enable fast filtering and searching across multiple columns. Data is partitioned by time and, optionally, by other dimensions.
-
-<div class="image-large">
-  <img alt="Stream Ingestion Layer" src="img/ingestion_layer_stream_batch.png" style="max-width: 580px;">
-</div>
-
-<h3>Stream Data</h3>
-Druid was designed from the outset for rapid ingestion and immediate querying of stream data as it arrives. No connectors are needed, as Druid includes built-in, exactly-once ingestion for data streams using the Apache Kafka® and Amazon Kinesis APIs. Druid’s continuous backup into deep storage also ensures a zero recovery point objective (RPO) for stream data.
-
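-As an illustration, the sketch below starts streaming ingestion by submitting a supervisor spec to the Overlord API. It is a minimal, hedged example: the table name, topic, broker address, and column list are placeholders, and a real spec has more fields than shown here (see the ingestion docs for the complete reference).
-
-```python
-# Minimal sketch: start exactly-once Kafka ingestion by POSTing a supervisor spec.
-# All names below (data source, topic, columns, hosts) are hypothetical.
-import requests
-
-DRUID = "http://localhost:8888"  # assumption: quickstart Router, which proxies Overlord APIs
-
-supervisor_spec = {
-    "type": "kafka",
-    "spec": {
-        "dataSchema": {
-            "dataSource": "clickstream",
-            "timestampSpec": {"column": "ts", "format": "iso"},
-            "dimensionsSpec": {"dimensions": ["channel", "page", "user"]},
-            "granularitySpec": {"segmentGranularity": "hour"},
-        },
-        "ioConfig": {
-            "topic": "clickstream-events",
-            "consumerProperties": {"bootstrap.servers": "kafka:9092"},
-            "inputFormat": {"type": "json"},
-        },
-    },
-}
-
-resp = requests.post(f"{DRUID}/druid/indexer/v1/supervisor", json=supervisor_spec)
-resp.raise_for_status()
-print(resp.json())  # the Overlord acknowledges with the supervisor ID
-```
-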
-<h3>Batch Data</h3>
-Druid easily ingests data from object stores and file systems including HDFS, Amazon S3, Azure Blob Storage, and Google Cloud Storage, as well as data files from databases and other sources. The data files can be in a number of common formats, including JSON, CSV, TSV, Parquet, ORC, Avro, and Protobuf. Druid supports both SQL batch import and in-database transformations.
-
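-As a sketch of what SQL batch import can look like, the example below submits an INSERT statement that reads an external JSON file into a new table. The table name, URI, and column list are hypothetical, and the statement is posted to the SQL task endpoint (assuming the quickstart Router is proxying it):
-
-```python
-# Hedged sketch of SQL-based batch ingestion; names and URIs are placeholders.
-import requests
-
-DRUID = "http://localhost:8888"  # assumption: quickstart Router endpoint
-
-ingest_sql = """
-INSERT INTO pageviews
-SELECT
-  TIME_PARSE("timestamp") AS __time,
-  channel,
-  page
-FROM TABLE(EXTERN(
-  '{"type":"http","uris":["https://example.com/pageviews.json"]}',
-  '{"type":"json"}',
-  '[{"name":"timestamp","type":"string"},{"name":"channel","type":"string"},{"name":"page","type":"string"}]'
-))
-PARTITIONED BY DAY
-"""
-
-resp = requests.post(f"{DRUID}/druid/v2/sql/task", json={"query": ingest_sql})
-resp.raise_for_status()
-print(resp.json())  # returns a task ID that can be polled for completion
-```
-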
-For more information, please visit [our docs page](/docs/latest/ingestion/index.html).
-
-## Storage Format
-
-Druid stores data in segments. Each segment is a single file, typically comprising up to a few million rows of data. Each Druid table can have anywhere from one segment to millions of segments distributed across the cluster.
-
-Within the segments, data storage is column-oriented. Queries only load the specific columns needed for each request. Each column’s storage is optimized by data type, which further improves the performance of scans and aggregations. String columns are stored using compressed dictionary encoding, while numeric columns are stored using compressed raw values.
-
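-As a conceptual illustration of dictionary encoding (not Druid's actual on-disk layout), each distinct string is mapped to a small integer ID, which keeps repeated values compact and makes per-value bitmap indexes cheap to build:
-
-```python
-# Conceptual sketch only; Druid's segment format adds compression, bitmap
-# indexes, and type-specific encodings on top of ideas like this.
-column = ["chrome", "firefox", "chrome", "safari", "chrome", "firefox"]
-
-dictionary = {value: idx for idx, value in enumerate(sorted(set(column)))}
-encoded = [dictionary[value] for value in column]
-
-print(dictionary)  # {'chrome': 0, 'firefox': 1, 'safari': 2}
-print(encoded)     # [0, 1, 0, 2, 0, 1]
-
-# A bitmap per dictionary entry turns a filter like browser = 'chrome'
-# into a cheap bitwise operation instead of per-row string comparisons.
-chrome_bitmap = [1 if v == dictionary["chrome"] else 0 for v in encoded]
-print(chrome_bitmap)  # [1, 0, 1, 0, 1, 0]
-```
-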
-<div class="image-large">
-  <img alt="Graphical User Interface, Application" src="img/graphical_ui_application_v2.png" style="max-width: 580px;">
-</div>
-
-For more information, please visit [our docs page](/docs/latest/design/segments.html).
-
-## Interactive Queries
-
-Druid's interactive query engine is used for performance-sensitive queries. The query engine and storage format were designed together to provide maximum query performance using the fewest resources possible, as well as the best price-performance for mixed workloads.
-
-With this engine, Druid only reads from segments that are pre-loaded into memory or local storage in the data nodes. This ensures fast performance as data is co-located with computing resources and does not have to move across a network. Data is then queried using scatter/gather for optimal parallelization.
-
-<div class="image-large">
-  <img alt="Interactive Querying Scatter Gather Diagram" src="img/scatter_gather_diagram.png" style="max-width: 580px;">
-</div>
-
-First, the query engine prunes the list of segments, identifying which segments are relevant to the query based on time intervals and other filters. Next, queries are divided into discrete pieces and sent in parallel to the data nodes that manage each relevant segment or copy of that segment (“scatter”). On the data nodes, the sub-queries are processed and the results are sent back to the query nodes, which merge the final result set (“gather”).
-
-Scatter/gather works from the smallest single server cluster (all of Druid on one server) to clusters with thousands of servers, enabling sub-second performance for most queries even with very large data sets of multiple petabytes.
-
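-In practice, scatter/gather is exercised through ordinary SQL. The sketch below posts a query to Druid's SQL API, assuming the hypothetical `pageviews` table from the ingestion example and the quickstart Router endpoint:
-
-```python
-# Hedged sketch: query Druid over HTTP via the SQL API.
-import requests
-
-DRUID = "http://localhost:8888"  # assumption: quickstart Router, proxying the Broker's SQL API
-
-payload = {
-    "query": """
-      SELECT channel, COUNT(*) AS views
-      FROM pageviews
-      WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
-      GROUP BY channel
-      ORDER BY views DESC
-      LIMIT 10
-    """,
-    "resultFormat": "object",
-}
-
-resp = requests.post(f"{DRUID}/druid/v2/sql", json=payload)
-resp.raise_for_status()
-for row in resp.json():
-    print(row)  # e.g. {"channel": "...", "views": 123}
-```
-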
-For more information, please visit [our docs page](/docs/latest/querying/querying.html).
\ No newline at end of file
diff --git a/use-cases.md b/use-cases.md
deleted file mode 100644
index 01e9fd9..0000000
--- a/use-cases.md
+++ /dev/null
@@ -1,138 +0,0 @@
----
-title: Use Cases
-layout: simple_page
-sectionid: use-cases
-canonical: 'https://druid.apache.org/use-cases'
----
-
-## Power real-time analytics, data applications, and more
-
-Apache Druid is a database that is most often used for powering use cases where real-time ingest, fast query performance, and high uptime are important. As such, Druid is commonly used for powering GUIs of analytical applications, or as a backend for highly-concurrent APIs that need fast aggregations. Druid works best with event-oriented data.
-
-Common application areas for Druid include:
-
-<div class="fancy-list">
-
-{{"
-* Clickstream analytics (web and mobile analytics)
-* Risk/fraud analysis
-* Network telemetry analytics (network performance monitoring)
-* Server metrics storage
-* Supply chain analytics (manufacturing metrics)
-* Application performance metrics
-* Business intelligence / OLAP
-" | markdownify }}
-
-</div>
-
-Some of these use cases are described in more detail below. For an overview of technical differentiation, please see the [FAQ](/faq).
-
-<div class="zigzag-features">
-<div class="zigzag-feature">
-<span class="fa fa-users fa"></span>
-
-{{"
-## User activity and behavior
-
-Druid is often used for clickstreams, viewstreams, and activity streams.
-Specific use cases include measuring user engagement, tracking A/B test data for product releases, and understanding usage patterns.
-
-Druid can compute user metrics such as [distinct counts](/docs/latest/querying/aggregations) both exactly and approximately.
-This means measures such as daily active users can be computed approximately in under a second (with 98% average accuracy) to view general trends, or computed exactly to present to key stakeholders (see the sketch after this panel).
-Furthermore, Druid can be used for [funnel analysis](/docs/latest/development/extensions-core/datasketches-aggregators), and to measure how many users took one action but did not take another.
-Such analysis is useful for tracking user signups for a product.
-
-Druid’s search and filter capabilities enable rapid, easy drill-downs of users along any set of attributes.
-Measure and compare user activity by age, gender, location, and much more.
-" | markdownify }}
-
-</div>
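-
-A minimal sketch of the two query styles mentioned above, assuming a hypothetical `events` table with a `user_id` column; the exact-count variant disables approximation through the query context:
-
-```python
-# Hedged sketch: approximate vs. exact distinct counts via Druid SQL over HTTP.
-import requests
-
-DRUID = "http://localhost:8888"  # assumption: quickstart Router endpoint
-
-approx = {"query": """
-  SELECT APPROX_COUNT_DISTINCT(user_id) AS dau
-  FROM events
-  WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
-"""}
-
-exact = {
-    "query": """
-      SELECT COUNT(DISTINCT user_id) AS dau
-      FROM events
-      WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
-    """,
-    # assumption: this context flag switches COUNT(DISTINCT ...) to an exact plan
-    "context": {"useApproximateCountDistinct": False},
-}
-
-for payload in (approx, exact):
-    print(requests.post(f"{DRUID}/druid/v2/sql", json=payload).json())
-```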
-
-<div class="zigzag-feature">
-<span class="fa fa-project-diagram fa"></span>
-
-{{"
-## Network flows
-
-Druid is commonly used to collect and analyze network flows.
-It can arbitrarily slice and dice flow data along any set of attributes.
-
-Druid helps with network flow analysis by ingesting large volumes of flow records and by grouping or ranking across dozens of attributes at query time, at interactive speeds.
-These attributes often include core attributes like IP and port, as well as attributes added through enrichment, such as geolocation, service, application, facility, and ASN.
-Druid's ability to handle flexible schemas means that you can add any attributes you want.
-" | markdownify }}
-
-</div>
-
-<div class="zigzag-feature">
-<span class="fa fa-comments-dollar fa"></span>
-
-{{"
-## Digital marketing
-
-Druid is commonly used to store and query online advertising data.
-This data typically comes from ad servers and is critical for measuring and understanding advertising campaign performance, click-through rates, conversion rates (attrition rates), and much more.
-
-Druid was initially designed to power a user-facing analytics application for digital advertising data.
-Druid has seen substantial production use for this type of data, and the largest users in the world have petabytes of data stored on thousands of servers.
-
-Druid can be used to compute impressions, clicks, eCPM, and key conversion metrics, filtered on publisher, campaign, user information, and dozens of other dimensions, supporting full slice-and-dice capability.
-" | markdownify }}
-
-</div>
-
-<div class="zigzag-feature">
-<span class="fa fa-tachometer-alt fa"></span>
-
-{{"
-## Application performance management
-
-Druid is often used to track the operational data generated by applications.
-Similar to the user activity use case, this data can be about how users are interacting with an application or it can be the metrics emitted by the application itself.
-Druid can be used to drill into how different components of an application are performing, identify bottlenecks, and troubleshoot issues.
-
-Unlike many traditional solutions, Druid places few limits on the volume, complexity, and throughput of the data.
-Rapidly analyze application events with thousands of attributes, and compute complex metrics on load, performance, and usage.
-For example, rank API endpoints based on 95th percentile query latency, and slice and dice how these metrics change based on any ad-hoc set of attributes such as time of day, user demographic, or datacenter location (a sketch follows this panel).
-" | markdownify }}
-
-</div>
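-
-A hedged sketch of the percentile ranking described above, assuming a hypothetical `api_requests` table with `endpoint` and `latency_ms` columns and the druid-datasketches extension loaded for the quantile function; the statement would be submitted through the same SQL API as in the earlier sketch:
-
-```python
-# Hypothetical names throughout; a sketch of ranking endpoints by p95 latency.
-p95_by_endpoint = """
-  SELECT
-    endpoint,
-    APPROX_QUANTILE_DS(latency_ms, 0.95) AS p95_latency_ms
-  FROM api_requests
-  WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
-  GROUP BY endpoint
-  ORDER BY p95_latency_ms DESC
-  LIMIT 20
-"""
-```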
-
-<div class="zigzag-feature">
-<span class="fa fa-thermometer-three-quarters fa"></span>
-
-{{"
-## IoT and device metrics
-
-Druid can be used as a time series solution for server and device metrics.
-Ingest machine-generated data in real time, and perform rapid ad-hoc analytics to measure performance, optimize hardware resources, or identify issues.
-
-Unlike many traditional timeseries databases, Druid is an analytics engine at heart.
-Druid combines ideas of timeseries databases, column-oriented analytic databases, and search systems.
-Druid supports time-based partitioning, column-oriented storage, and search indexes in a single system.
-This means time-based queries, numerical aggregations, and search and filter queries are all extremely fast.
-
-You can include millions of unique dimension values with your metrics, and arbitrarily group and filter on any combination of dimensions (dimensions in Druid are similar to tags in TSDBs).
-You can group and rank on tags, and compute a variety of complex metrics.
-Furthermore, you can search and filter on tag values orders of magnitude faster than in traditional timeseries databases.
-" | markdownify }}
-
-</div>
-
-<div class="zigzag-feature">
-<span class="fa fa-chart-pie fa"></span>
-
-{{"
-## OLAP and business intelligence
-
-Druid is commonly used for BI use cases.
-Organizations have deployed Druid to accelerate queries and power applications.
-Unlike SQL-on-Hadoop engines such as Presto or Hive, Druid is designed for high concurrency and sub-second queries, powering interactive data exploration through a UI.
-In general, this makes Druid a better fit for truly interactive visual analytics.
-
-Druid is a great fit if you need a user-facing application and you want your users to be able to run their own self-service drill-down queries.
-You can either develop your own application using Druid's API or use one of the [many off the shelf applications](/libraries) that work with Druid.
-" | markdownify }}
-
-</div>
-</div>