|author||Jack Li(Analytics Engineering) <firstname.lastname@example.org>||Sat Aug 08 20:47:16 2020 -0700|
|committer||Jack Li(Analytics Engineering) <email@example.com>||Mon Aug 10 14:16:38 2020 -0700|
Migrate publication from Travis CI to GitHub Actions
Apache Pinot (incubating) is a real-time distributed OLAP datastore, built to deliver scalable real-time analytics with low latency. It can ingest from batch data sources (such as Hadoop HDFS, Amazon S3, Azure ADLS, Google Cloud Storage) as well as stream data sources (such as Apache Kafka).
Pinot was built by engineers at LinkedIn and Uber and is designed to scale up and out with no upper bound. Performance remains consistent when the cluster is sized for your data volume and expected queries-per-second (QPS) load.
For getting started guides, deployment recipes, tutorials, and more, please visit our project documentation at https://docs.pinot.apache.org.
Pinot was originally built at LinkedIn to power rich interactive real-time analytics applications such as Who Viewed Profile, Company Analytics, Talent Insights, and many more. UberEats Restaurant Manager is another example of a customer-facing analytics app. At LinkedIn, Pinot powers 50+ user-facing products, ingesting millions of events per second and serving 100k+ queries per second at millisecond latency.
Column-oriented: a column-oriented database with various compression schemes such as Run Length and Fixed Bit Length encoding.
Pluggable indexing: pluggable indexing technologies such as Sorted Index, Bitmap Index, and Inverted Index.
Query optimization: ability to optimize the query/execution plan based on query and segment metadata.
Stream and batch ingest: near-real-time ingestion from streams and batch ingestion from Hadoop.
Query with SQL: an SQL-like language that supports selection, aggregation, filtering, group-by, order-by, and distinct queries on data.
Multi-valued fields: support for multi-valued fields, allowing you to query fields as comma-separated values.
Cloud-native on Kubernetes: a Helm chart provides a horizontally scalable and fault-tolerant clustered deployment that is easy to manage with Kubernetes.
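As an illustration of the SQL-style querying and multi-valued fields listed above, here is a hedged sketch; the table and column names (mySongTable, genres, songName) are hypothetical and not from this README:

```sql
-- Hypothetical table with a multi-valued 'genres' column.
-- A predicate on a multi-valued column matches a row if ANY of its values match,
-- so this returns songs tagged with 'rock' among possibly several genres.
SELECT songName, genres
FROM mySongTable
WHERE genres = 'rock'
LIMIT 10
```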
Pinot is designed to execute real-time OLAP queries with low latency on massive amounts of data and events. In addition to real-time stream ingestion, Pinot also supports batch use cases with the same low latency guarantees. It is well suited to contexts where fast analytics, such as aggregations, are needed on immutable data, possibly with real-time data ingestion. Pinot works very well for querying time series data with lots of dimensions and metrics.
SELECT sum(clicks), sum(impressions) FROM AdAnalyticsTable WHERE ((daysSinceEpoch >= 17849 AND daysSinceEpoch <= 17856)) AND accountId IN (123456789) GROUP BY daysSinceEpoch TOP 100
Pinot is not a replacement for a database, i.e., it cannot be used as a source-of-truth store and cannot mutate data. While Pinot supports text search, it is not a replacement for a search engine. Also, Pinot queries cannot span multiple tables by default. You can use the Presto-Pinot connector to achieve table joins and other features.
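As a sketch of working around the single-table limitation, a join like the following would be issued through Presto using the Presto-Pinot connector rather than against Pinot directly; the catalog, schema, and table names here are illustrative assumptions:

```sql
-- Run in Presto, not Pinot: the 'pinot' catalog is exposed by the
-- Presto-Pinot connector; the AccountMetadata table is hypothetical.
SELECT a.accountId, m.accountName, sum(a.clicks) AS total_clicks
FROM pinot.default.AdAnalyticsTable a
JOIN pinot.default.AccountMetadata m
  ON a.accountId = m.accountId
GROUP BY a.accountId, m.accountName
```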
More detailed instructions can be found in the Quick Demo section of the documentation.
# Clone the repo
$ git clone https://github.com/apache/incubator-pinot.git
$ cd incubator-pinot

# Build Pinot
$ mvn clean install -DskipTests -Pbin-dist

# Run the Quick Demo
$ cd pinot-distribution/target/apache-pinot-incubating-<version>-SNAPSHOT-bin
$ bin/quick-start-batch.sh
Please refer to Running Pinot on Kubernetes in our project documentation. Pinot also provides Kubernetes integrations with the interactive query engine, Presto, and the data visualization tool, Apache Superset.
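As a rough sketch of a Helm-based deployment: the chart path below assumes the kubernetes/helm directory of the incubator-pinot repo, and the release and namespace names are illustrative, so treat Running Pinot on Kubernetes in the documentation as the authoritative steps:

```shell
# Clone the repo and install the bundled Helm chart into its own namespace.
# Chart location, release name, and namespace are illustrative assumptions.
git clone https://github.com/apache/incubator-pinot.git
cd incubator-pinot/kubernetes/helm
kubectl create namespace pinot-quickstart
helm install pinot . --namespace pinot-quickstart
```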
Check out Pinot documentation for a complete description of Pinot's features.
Apache Pinot is released under the Apache License, Version 2.0.