commit    6dcaae0e403a8d7322d5c63e82b01ed24340d984
author    Andrey Zagrebin <azagrebin@apache.org>    Fri Jan 24 15:52:25 2020 +0100
committer Till Rohrmann <trohrmann@apache.org>     Fri Jan 24 18:30:09 2020 +0100
tree      3b99cf9adf722635311e2eca0625bda5c4c45dcd
parent    90f638c4b83a322623cc40ac1d8009bd2c055ac1
[FLINK-14894][core][mem] Do not explicitly release unsafe memory when managed segment is freed

The conclusion at the moment is that releasing unsafe memory while Java code may still hold a reference to it is dangerous. We revert this and rely only on GC to release the memory once no references to it remain in Java code. The problem can happen, e.g., if a task thread exits without joining its IO threads (e.g. spilling in a batch job): the unsafe memory is released, but the IO thread can still write to it without a segfault. At the same time, another task can allocate an interleaving memory region, which is then corrupted by that IO thread.

We still keep the memory unsafe so that it is allocated outside of the JVM direct memory limit and does not interfere with direct allocations; it also does not make sense for RocksDB native memory (also accounted for in the MemoryManager) to count against the direct memory limit. The potential downside is that over-allocating unsafe memory does not hit the direct limit and therefore does not trigger a GC immediately, even though GC is the only way to release it. In this case, out-of-memory failures can occur without a GC having been triggered to release potentially already unused memory. If we see the delayed release as a problem, we can investigate further optimisations, such as:

- directly monitoring the phantom reference queue of the cleaner (if the JVM detects quickly that there are no more references to the memory) and explicitly releasing memory that is ready for GC as soon as possible, e.g. after Task exit
- monitoring the allocated memory amount and blocking allocation until GC releases occupied memory, instead of failing with out-of-memory immediately

This closes #10940.
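To make the pattern concrete, here is a minimal, hypothetical Java sketch of the approach the commit describes: allocating a segment with sun.misc.Unsafe (so it stays outside -XX:MaxDirectMemorySize) and registering a GC-driven cleaner instead of freeing eagerly. The OffHeapSegment class is made up for illustration, and java.lang.ref.Cleaner (Java 9+) stands in for the reflective cleaner access Flink uses to support both Java 8 and 11; this is not Flink's actual MemorySegment/MemoryUtils code.

```java
import java.lang.ref.Cleaner;
import java.lang.reflect.Field;
import sun.misc.Unsafe;

/** Hypothetical sketch of GC-driven release of unsafe memory; not Flink's actual implementation. */
final class OffHeapSegment {
    private static final Unsafe UNSAFE = loadUnsafe();
    private static final Cleaner CLEANER = Cleaner.create(); // Java 9+ stand-in for the JVM cleaner

    private final long address;
    private final Cleaner.Cleanable cleanable;

    OffHeapSegment(long size) {
        // Unsafe allocation is NOT counted against -XX:MaxDirectMemorySize,
        // so it does not interfere with direct (e.g. network buffer) allocations.
        this.address = UNSAFE.allocateMemory(size);
        final long addr = this.address; // the cleanup action must not capture 'this'
        this.cleanable = CLEANER.register(this, () -> UNSAFE.freeMemory(addr));
    }

    /** Called when the managed segment is logically freed. */
    void free() {
        // After FLINK-14894: no explicit UNSAFE.freeMemory(address) here.
        // A detached IO thread (e.g. one still spilling) may hold a reference
        // and write to this address without a segfault; if the memory were
        // freed and re-allocated by another task, that write would corrupt the
        // other task's data. The GC cleaner registered above releases the
        // memory only once no Java references to this segment remain.
    }

    private static Unsafe loadUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new Error("Unsafe is unavailable", e);
        }
    }
}
```

The trade-off stated in the message follows directly from this sketch: since nothing in this path touches the direct memory limit, over-allocation can fail with an out-of-memory error before any GC runs the cleaners, which is exactly what the two proposed optimisations would mitigate.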
Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities.
Learn more about Flink at https://flink.apache.org/
Features:

- A streaming-first runtime that supports both batch processing and data streaming programs
- Elegant and fluent APIs in Java and Scala
- A runtime that supports very high throughput and low event latency at the same time
- Support for event time and out-of-order processing in the DataStream API, based on the Dataflow Model (see the sketch after this list)
- Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time)
- Fault-tolerance with exactly-once processing guarantees
- Natural back-pressure in streaming programs
- Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming)
- Built-in support for iterative programs (BSP) in the DataSet (batch) API
- Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
- Compatibility layers for Apache Hadoop MapReduce
- Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem
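As a brief illustration of the event-time and out-of-order processing mentioned above, here is a hedged Java sketch using the DataStream API as it existed around the time of this commit (Flink 1.10). The class name, sample records, and job name are made up for the example:

```java
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventTimeWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Window by the timestamp carried in each record, not by wall-clock time.
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // Made-up records: (word, event timestamp in ms, count). The third
        // record is out of order: its timestamp is older than the previous one.
        env.fromElements(
                Tuple3.of("flink", 1000L, 1),
                Tuple3.of("stream", 2000L, 1),
                Tuple3.of("flink", 1500L, 1))
            // Generate watermarks that tolerate up to 10 seconds of out-of-orderness.
            .assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Integer>>(Time.seconds(10)) {
                    @Override
                    public long extractTimestamp(Tuple3<String, Long, Integer> record) {
                        return record.f1;
                    }
                })
            .keyBy(0)                      // key by the word (tuple position 0)
            .timeWindow(Time.seconds(5))   // 5-second event-time windows
            .sum(2)                        // sum the counts (tuple position 2)
            .print();

        env.execute("Event-time word count (sketch)");
    }
}
```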
Streaming example:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

case class WordWithCount(word: String, count: Long)

// assumes: val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream(host, port, '\n')

val windowCounts = text
  .flatMap { w => w.split("\\s") }
  .map { w => WordWithCount(w, 1) }
  .keyBy("word")
  .timeWindow(Time.seconds(5))
  .sum("count")

windowCounts.print()
```
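A quick way to try the streaming example locally is to feed the socket with netcat, e.g. `nc -lk 9000`, and pass the same host and port to `socketTextStream`; words typed into the netcat session then appear as windowed counts.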
Batch example:

```scala
import org.apache.flink.api.scala._

case class WordWithCount(word: String, count: Long)

// assumes: val env = ExecutionEnvironment.getExecutionEnvironment
val text = env.readTextFile(path)

val counts = text
  .flatMap { w => w.split("\\s") }
  .map { w => WordWithCount(w, 1) }
  .groupBy("word")
  .sum("count")

counts.writeAsCsv(outputPath)
```
Prerequisites for building Flink:

- A Unix-like environment (we use Linux, Mac OS X, Cygwin)
- Git
- Maven (see the note below for version requirements)
- Java 8
```
git clone https://github.com/apache/flink.git
cd flink
mvn clean package -DskipTests # this will take up to 10 minutes
```
Flink is now installed in build-target.
NOTE: Maven 3.3.x can build Flink, but will not properly shade away certain dependencies. Maven 3.1.1 creates the libraries properly. To build unit tests with Java 8, use Java 8u51 or above to prevent failures in unit tests that use the PowerMock runner.
The Flink committers use IntelliJ IDEA to develop the Flink codebase. We recommend IntelliJ IDEA for developing projects that involve Scala code.
Minimal requirements for an IDE are:

- Support for Java and Scala (also mixed projects)
- Support for Maven with Java and Scala
The IntelliJ IDE supports Maven out of the box and offers a plugin for Scala development.
Check out our Setting up IntelliJ guide for details.
NOTE: From our experience, the Eclipse Scala IDE setup does not work with Flink, due to deficiencies of the old Eclipse version bundled with Scala IDE 3.0.3 and to version incompatibilities with the bundled Scala version in Scala IDE 4.4.1. We recommend using IntelliJ instead (see above).
Don’t hesitate to ask!
Contact the developers and community on the mailing lists if you need any help.
Open an issue if you find a bug in Flink.
The documentation of Apache Flink is located on the website https://flink.apache.org or in the docs/ directory of the source code.
This is an active open-source project. We are always open to people who want to use the system or contribute to it. Contact us if you are looking for implementation tasks that fit your skills. The contribution guide describes how to contribute to Apache Flink.
Apache Flink is an open source project of The Apache Software Foundation (ASF). The Apache Flink project originated from the Stratosphere research project.