fix(linkis-dist): fix the install-linkis-to-kubernetes.sh error (#3658)

* fix(linkis-dist): fix the install-linkis-to-kubernetes.sh exception

* fix(linkis-dist): fixed an error getting pods

* fix(linkis-dist): add reset command

* fix(linkis-dist): fix error
README.md

English | 中文

Introduction

Linkis builds a layer of computation middleware between upper applications and underlying engines. By using standard interfaces such as REST/WS/JDBC provided by Linkis, the upper applications can easily access the underlying engines such as MySQL/Spark/Hive/Presto/Flink, etc., and achieve the intercommunication of user resources like unified variables, scripts, UDFs, functions and resource files at the same time.
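As an illustration of those standard interfaces, below is a minimal sketch of submitting a SQL task through the REST entrance API. The gateway address and port, the token headers and the label values are assumptions for a typical deployment and should be adapted to your environment (a session cookie from the login API can be used instead of token headers).

# Submit a SparkSQL statement through the Linkis gateway (illustrative values).
# Assumes the gateway listens on 127.0.0.1:9001 and token authentication is enabled.
curl -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
  -H "Content-Type: application/json" \
  -H "Token-Code: <your-token>" \
  -H "Token-User: hadoop" \
  -d '{
        "executionContent": {"code": "select 1", "runType": "sql"},
        "params": {"variable": {}, "configuration": {}},
        "labels": {"engineType": "spark-2.4.3", "userCreator": "hadoop-IDE"}
      }'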

As a computation middleware, Linkis provides powerful connectivity, reuse, orchestration, expansion, and governance capabilities. By decoupling the application layer and the engine layer, it simplifies the complex network call relationship, and thus reduces the overall complexity and saves the development and maintenance costs as well.

Since the first release of Linkis in 2019, it has accumulated more than 700 trial companies and 1000+ sandbox trial users, covering diverse industries from finance, banking and telecommunications to manufacturing and internet companies. Many companies already use Linkis as the unified entrance to the underlying computation and storage engines of their big data platforms.

[Figures: linkis-intro-01, linkis-intro-03]

Features

  • Support for diverse underlying computation storage engines

    • Currently supported computation/storage engines: Spark, Hive, Flink, Python, Pipeline, Sqoop, openLooKeng, Presto, ElasticSearch, JDBC, Shell, etc.
    • Computation/storage engines to be supported: Trino (planned 1.3.1), SeaTunnel (planned 1.3.1), etc.
    • Supported scripting languages: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala and JDBC, etc.
  • Powerful task/request governance capabilities. With services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis can provide multi-level label-based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control, and orchestration strategies such as dual-active and active-standby

  • Full-stack computation/storage engine support. As a computation middleware, Linkis receives, executes and manages tasks and requests for various computation and storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks

  • Resource management capabilities. ResourceManager not only manages resources for Yarn and the Linkis EngineManager, but also provides label-based multi-level resource allocation and recycling, giving it powerful resource management capabilities across multiple Yarn clusters and multiple computation resource types

  • Unified Context Service. Linkis generates a Context ID for each task/request, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc., across users, systems and computing engines. Set in one place, referenced automatically everywhere

  • Unified materials. System and user-level unified material management that can be shared and transferred across users and systems

Supported Engine Types

| Engine Name | Support Component Version (Default Dependent Version) | Linkis Version Requirements | Included in Release Package By Default | Description |
| --- | --- | --- | --- | --- |
| Spark | Apache 2.0.0~2.4.7, CDH >= 5.4.0 (default Apache Spark 2.4.3) | >=1.0.3 | Yes | Spark EngineConn, supports SQL, Scala, Pyspark and R code |
| Hive | Apache >= 1.0.0, CDH >= 5.4.0 (default Apache Hive 2.3.3) | >=1.0.3 | Yes | Hive EngineConn, supports HiveQL code |
| Python | Python >= 2.6 (default Python2*) | >=1.0.3 | Yes | Python EngineConn, supports Python code |
| Shell | Bash >= 2.0 | >=1.0.3 | Yes | Shell EngineConn, supports Bash shell code |
| JDBC | MySQL >= 5.0, Hive >= 1.2.1 (default Hive-jdbc 2.3.4) | >=1.0.3 | No | JDBC EngineConn, already supports MySQL and HiveQL; can be quickly extended to support other engines with a JDBC driver package, such as Oracle |
| Flink | Flink >= 1.12.2 (default Apache Flink 1.12.2) | >=1.0.3 | No | Flink EngineConn, supports FlinkSQL code, and also supports starting a new Yarn application in the form of Flink Jar |
| Pipeline | - | >=1.0.3 | No | Pipeline EngineConn, supports file import and export |
| openLooKeng | openLooKeng >= 1.5.0 (default openLooKeng 1.5.0) | >=1.1.1 | No | openLooKeng EngineConn, supports querying the data virtualization engine openLooKeng with SQL |
| Sqoop | Sqoop >= 1.4.6 (default Apache Sqoop 1.4.6) | >=1.1.2 | No | Sqoop EngineConn, supports the data migration tool Sqoop engine |
| Presto | Presto >= 0.180 (default Presto 0.234) | >=1.2.0 | - | Presto EngineConn, supports Presto SQL code |
| ElasticSearch | ElasticSearch >= 6.0 (default ElasticSearch 7.6.2) | >=1.2.0 | - | ElasticSearch EngineConn, supports SQL and DSL code |
| Impala | Impala >= 3.2.0, CDH >= 6.3.0 | ongoing | - | Impala EngineConn, supports Impala SQL code |
| MLSQL | MLSQL >= 1.1.0 | ongoing | - | MLSQL EngineConn, supports MLSQL code |
| Hadoop | Apache >= 2.6.0, CDH >= 5.4.0 | ongoing | - | Hadoop EngineConn, supports Hadoop MR/YARN application |
| TiSpark | 1.1 | ongoing | - | TiSpark EngineConn, supports querying TiDB with SparkSQL |
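The engine name and default component version in the table above correspond to the engineType label that is set when a task is submitted. As a hedged example, assuming a standard deployment with the bundled linkis-cli client on the gateway node, running a HiveQL statement against the Hive EngineConn listed above could look like this (user names and the engine version are illustrative):

# Submit a HiveQL statement to the Hive EngineConn (illustrative values).
# -engineType must match an installed engine plugin and its version, e.g. hive-2.3.3.
sh ./bin/linkis-cli -engineType hive-2.3.3 \
    -codeType hql \
    -code "show databases;" \
    -submitUser hadoop -proxyUser hadoop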

Download

Please go to the Linkis Releases Page to download a compiled distribution or a source code package of Linkis.

Compile and Deploy

For more detailed guidance, see:


Note: If you want to use `-Dlinkis.build.web=true` to build the linkis-web image, you need to compile linkis-web first.

Compile backend (Mac OS/Linux):

# 1. When compiling for the first time, execute the following command first
./mvnw -N install

# 2. Make the linkis distribution package
# - Option 1: make the linkis distribution package only
./mvnw clean install -Dmaven.javadoc.skip=true -Dmaven.test.skip=true

# - Option 2: make the linkis distribution package and docker image
#   - Option 2.1: image without mysql jdbc jars
./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true
#   - Option 2.2: image with mysql jdbc jars
./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.with.jdbc=true

# - Option 3: linkis distribution package and docker image (including web)
./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true

# - Option 4: linkis distribution package and docker image (including web and ldh (hadoop all-in-one for test))
./mvnw clean install -Pdocker -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -Dlinkis.build.web=true -Dlinkis.build.ldh=true -Dlinkis.build.with.jdbc=true

Compile backend (Windows):

mvnw.cmd -N install
mvnw.cmd clean install -Dmaven.javadoc.skip=true -Dmaven.test.skip=true

Compile web:

cd incubator-linkis/linkis-web
npm install
npm run build

Bundled with MySQL JDBC Driver

Due to MySQL licensing restrictions, the MySQL Java Database Connectivity (JDBC) driver is not bundled with the officially released Linkis image by default. However, at the current stage, Linkis still relies on this library to work properly. To solve this problem, we provide a script that helps you create a custom image with the MySQL JDBC driver from the official Linkis image; the image created by this tool is tagged as linkis:with-jdbc by default.

$> LINKIS_IMAGE=linkis:1.3.0 
$> ./linkis-dist/docker/scripts/make-linikis-image-with-mysql-jdbc.sh
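To confirm the result, a quick check (assuming a local Docker daemon) is to list the images and look for the with-jdbc tag:

$> docker images | grep linkis
# Expect both the base image (e.g. linkis:1.3.0) and the newly created linkis:with-jdbc tag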

Please refer to Quick Deployment for deployment instructions.
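Since this change touches the install-linkis-to-kubernetes.sh helper in linkis-dist, a hedged sketch of trying the freshly built images on a local Kubernetes cluster with that script might look like the following; the script location and invocation are assumptions, so check the script's own usage output in your checkout:

# Run the helper script from the directory in linkis-dist that contains it (arguments are illustrative)
./install-linkis-to-kubernetes.sh
# Tear down a previous installation with the reset command added by this fix
./install-linkis-to-kubernetes.sh reset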

Examples and Guidance

Documentation & Video

Architecture

Linkis services can be divided into three categories: computation governance services, public enhancement services and microservice governance services

  • The computation governance services support the three major stages of processing a task/request: submission -> preparation -> execution
  • The public enhancement services include the material library service, context service, and data source service
  • The microservice governance services include Spring Cloud Gateway, Eureka and OpenFeign

Below is the Linkis architecture diagram. You can find more detailed architecture docs in Linkis-Doc/Architecture.

[Figure: architecture]

Contributing

Contributions are always welcome; we need more contributors to build Linkis together, whether through code, documentation, or other support that helps the community.
For code and documentation contributions, please follow the contribution guide.

Contact Us

  • For any questions or suggestions, please kindly submit an issue.
  • By mail: dev@linkis.apache.org
  • You can scan the QR code below to join our WeChat group for a faster response

[Figure: WeChat group QR code]

Who is Using Linkis

We opened an issue [Who is Using Linkis] for users to give feedback and record who is using Linkis.
Since the first release of Linkis in 2019, it has accumulated more than 700 trial companies and 1000+ sandbox trial users, covering diverse industries from finance, banking and telecommunications to manufacturing and internet companies.