These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 1.20 and Flink 2.0. Please read these notes carefully if you are planning to upgrade your Flink version to 2.0.
The past decade has witnessed a dramatic shift in Flink's deployment modes, workload patterns, and the underlying hardware. We've moved from the map-reduce era, where workers were nodes with tightly coupled computation and storage, to a cloud-native world in which containerized deployments on Kubernetes have become the standard. To enable Flink's cloud-native future, Flink 2.0 introduces Disaggregated State Storage and Management, which uses remote storage as the primary state storage.
This new architecture addresses the challenges that the cloud-native era poses for Flink.
While extending the state store to interact with a remote DFS may seem like a straightforward solution, it is insufficient due to Flink's existing blocking execution model. To overcome this limitation, Flink 2.0 introduces an asynchronous execution model alongside the disaggregated state backend, with newly designed SQL operators performing asynchronous state access in parallel.
Users can now configure Flink to use s5cmd to speed up downloading files from S3 during recovery when using RocksDB, by at least a factor of 2.
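A minimal configuration sketch for this feature (the option name below follows FLIP-444's native file copy support; verify it against the documentation of your exact Flink version):

```yaml
# Path to a locally installed s5cmd binary; when set, Flink uses it
# to download state files from S3 during recovery instead of the
# built-in S3 file system client.
s3.s5cmd.path: /usr/local/bin/s5cmd
```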
This enables users to synchronize checkpointing and rescaling in the AdaptiveScheduler, so as to minimize reprocessing time.
Flink possesses adaptive batch execution capabilities that optimize execution plans based on runtime information to enhance performance. Key features include dynamic partition pruning, Runtime Filter, and automatic parallelism adjustment based on data volume. In Flink 2.0, we have further strengthened these capabilities with two new optimizations:
Adaptive Broadcast Join - Compared to Shuffled Hash Join and Sort Merge Join, Broadcast Join eliminates the need for large-scale data shuffling and sorting, delivering superior execution efficiency. However, its applicability depends on one side of the input being sufficiently small; otherwise, performance or stability issues may arise. During the static SQL optimization phase, accurately estimating the input data volume of a Join operator is challenging, making it difficult to determine whether Broadcast Join is suitable. By enabling adaptive execution optimization, Flink dynamically captures the actual input conditions of Join operators at runtime and automatically switches to Broadcast Join when criteria are met, significantly improving execution efficiency.
Automatic Join Skew Optimization - In Join operations, frequent occurrences of specific keys may lead to significant disparities in data volumes processed by downstream Join tasks. Tasks handling larger data volumes can become long-tail bottlenecks, severely delaying overall job execution. Through the Adaptive Skewed Join optimization, Flink leverages runtime statistical information from Join operator inputs to dynamically split skewed data partitions while ensuring the integrity of Join results. This effectively mitigates long-tail latency caused by data skew.
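Both optimizations are controlled via configuration. A sketch of how enabling them might look in config.yaml (the option names and values below are assumptions based on the adaptive batch execution feature set; check them against the documentation of your Flink version):

```yaml
# Let the planner switch to broadcast join at runtime
# when one join input turns out to be small enough.
table.optimizer.adaptive-broadcast-join.strategy: auto
# Let the planner split skewed join partitions based on
# runtime statistics while preserving join correctness.
table.optimizer.skewed-join-optimization.strategy: auto
```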
See the Adaptive Batch Execution documentation for more details about these capabilities and their usage.
The AdaptiveScheduler now respects the execution.state-recovery.from-local flag, which defaults to false. As a result, you now need to opt in to make local recovery work.
A new configuration option, jobmanager.adaptive-scheduler.executing.resource-stabilization-timeout, was introduced for the AdaptiveScheduler. It defines how long the JobManager delays a scaling operation after a resource change, to wait for resources to stabilize.
The existing configuration jobmanager.adaptive-scheduler.min-parallelism-increase was deprecated in Flink 2.0.
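Taken together, the AdaptiveScheduler-related settings above can be expressed in config.yaml as follows (the timeout value is an arbitrary example):

```yaml
# Opt in to local recovery (defaults to false since 2.0).
execution.state-recovery.from-local: true
# Wait up to 1 minute after a resource change before rescaling.
jobmanager.adaptive-scheduler.executing.resource-stabilization-timeout: 1 min
```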
The way the idleness timeout is calculated for idleness detection has changed. Previously, time during which a source or a source's split was backpressured or blocked due to watermark alignment was incorrectly counted toward the idleness timeout. This could lead to sources or splits incorrectly switching to idle while they were unable to make progress and still had records to emit, which in turn could result in incorrectly calculated watermarks and erroneous late data. This has been fixed in 2.0.
This change introduces a new public API, org.apache.flink.api.common.eventtime.WatermarkGeneratorSupplier.Context#getInputActivityClock. However, it does not create compatibility problems for users upgrading from prior Flink versions.
Materialized Tables represent a cornerstone of our vision to unify stream and batch processing paradigms. These tables enable users to declaratively manage both real-time and historical data through a single pipeline, eliminating the need for separate codebases or workflows.
In this release, with a focus on production-grade operability, we have made critical enhancements that simplify lifecycle management and execution in real-world environments:
Query Modifications - Materialized Tables now support schema and query updates, enabling seamless iteration of business logic without reprocessing historical data. This is vital for production scenarios requiring rapid schema evolution and computational adjustments.
Kubernetes/Yarn Submission - Beyond standalone clusters, Flink 2.0 extends native support for submitting Materialized Table refresh jobs to YARN and Kubernetes clusters. This allows users to seamlessly integrate refresh workflows into their production-grade infrastructure, leveraging standardized resource management, fault tolerance, and scalability.
Ecosystem Integration - Collaborating with the Apache Paimon community, Materialized Tables now integrate natively with Paimon’s lake storage format, combining Flink’s stream-batch compute with Paimon’s high-performance ACID transactions for unified data serving.
By streamlining modifications and execution on production infrastructure, Materialized Tables empower teams to unify streaming and batch pipelines with higher reliability. Future releases will deepen production support, including integration with production-ready schedulers to enable policy-driven refresh automation.
The SQL Gateway now supports executing SQL jobs in application mode, as a replacement for the removed per-job deployment mode.
Flink SQL now supports C-style escape strings. See the documentation for more details.
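For illustration, a C-style escape string embeds control characters directly in a literal. The query below is a hypothetical example; the E'…' prefix follows the common C-style escape syntax, so check the documentation for the exact form Flink supports:

```sql
-- '\n' is interpreted as a newline rather than as two characters.
SELECT E'first line\nsecond line' AS two_lines;
```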
A new QUALIFY clause has been added as a more concise syntax for filtering outputs of window functions. See the Top-N and Deduplication examples.
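As a sketch, a Top-N query that previously required a nested subquery over ROW_NUMBER() can be written with QUALIFY directly (table and column names here are hypothetical):

```sql
-- Keep the 3 most recent orders per product.
SELECT *
FROM Orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY product ORDER BY order_time DESC) <= 3;
```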
Lookup Join is an important feature in Flink. It is typically used to enrich a table with data queried from an external system. Interacting with the external system for each incoming record incurs significant network I/O and RPC overhead. Therefore, most connectors that support lookup, such as HBase and JDBC, introduce caching to reduce the per-record query overhead. However, because the data distribution of a Lookup Join's input stream is arbitrary, the cache hit rate is sometimes unsatisfactory. External systems may have specific requirements for the data distribution on the input side, and Flink previously had no knowledge of them. Flink 2.0 introduces a mechanism for a connector to tell the Flink planner its desired input stream data distribution or partitioning strategy. This can significantly reduce the amount of cached data and improve the performance of Lookup Joins.
In Flink 2.0, we disable unaligned checkpoints for all connections between operators within the sink expansion (the committer and any pre/post-commit topology). This is necessary because committables need to have arrived at the respective operators by notifyCheckpointComplete; otherwise we cannot commit all side effects, which would violate the contract of notifyCheckpointComplete.
We have deprecated all setXxx and getXxx methods except getString(String key, String defaultValue) and setString(String key, String value), such as setInteger, setLong, getInteger, and getLong. We strongly recommend that users use the get(ConfigOption<T> option) and set(ConfigOption<T> option, T value) methods directly.
FLIP-366 introduces support for parsing standard YAML files for Flink configuration. A new configuration file named config.yaml, which adheres to the standard YAML format, has been introduced.
In Flink 2.0, we have removed the old configuration file from flink-dist, along with support for the old configuration parser. Users are now required to use the new config.yaml file when starting clusters and submitting jobs. They can use the migration tool to assist in migrating legacy configuration files to the new format.
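For example, a config.yaml in standard YAML format can use nested keys (the values below are illustrative, not recommendations):

```yaml
jobmanager:
  rpc:
    address: localhost
    port: 6123
taskmanager:
  numberOfTaskSlots: 4
```

Flat dotted keys such as taskmanager.numberOfTaskSlots: 4 remain valid YAML as well.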
The DataStream API is one of the two main APIs that Flink provides for writing data processing programs. Introduced practically on day 1 of the project and evolved for nearly a decade, it has accumulated more and more problems. Addressing them would break the existing DataStream API, making an in-place refactor impractical. Therefore, we are introducing a new set of APIs, the DataStream API V2, to gradually replace the original DataStream API.
In Flink 2.0, we provide the experimental version of the new DataStream V2 API. It contains the low-level building blocks (DataStream, ProcessFunction, Partitioning), context, and primitives such as state, the time service, and watermark processing. At the same time, we also provide some high-level extensions, such as window and join. The high-level extensions make the APIs simpler to work with in many cases, while the low-level APIs remain available when you need more flexibility and control.
See the DataStream API (V2) documentation for more details.
NOTICE: The new DataStream API is currently in the experimental stage and is not yet stable, thus not recommended for production usage at the moment.
Starting from version 2.0, Flink officially supports Java 21.
The default and recommended Java version has changed to Java 17 (previously Java 11). This change mainly affects the Docker images and building Flink from source.
Meanwhile, Java 8 is no longer supported.
Flink 2.0 introduces much more efficient built-in serializers for collection types (i.e., Map / List / Set), which are enabled by default.
We have also upgraded Kryo to version 5.6, which is faster, more memory-efficient, and has better support for newer Java versions.
The following sets of APIs have been completely removed.
Some deprecated methods have been removed from DataStream API. See also the list of breaking programming APIs.
Some deprecated fields have been removed from the REST API. See also the list of breaking REST APIs.
NOTICE: You may find that some of the removed APIs still exist in the code base, usually in a different package. They are for internal usage only and can be changed or removed at any time without notice. Please DO NOT USE them.
With SourceFunction, SinkFunction and SinkV1 removed, existing connectors depending on these APIs will not work with the Flink 2.x series. Here's the plan for adapting the connectors officially maintained by the community.
Configuration options that meet the following criteria are removed in Flink 2.0. See also the list of removed configuration options.
- @Public and deprecated for at least 2 minor releases.
- @PublicEvolving and deprecated for at least 1 minor release.

The legacy configuration file flink-conf.yaml is no longer supported. Please use config.yaml, which uses the standard YAML format, instead. A migration tool is provided to convert a legacy flink-conf.yaml into a new config.yaml. See "Migrate from flink-conf.yaml to config.yaml" for more details.
Configuration APIs that take Java objects as arguments have been removed from StreamExecutionEnvironment and ExecutionConfig. These settings should now be made via Configuration and ConfigOption. See also the list of breaking programming APIs.
To avoid exposing internal interfaces, User-Defined Functions no longer have full access to ExecutionConfig. Instead, necessary functions such as createSerializer(), getGlobalJobParameters() and isObjectReuseEnabled() can now be accessed from the RuntimeContext directly.
The following classes have been removed:

- org.apache.flink.api.common.ExecutionConfig$SerializableSerializer
- org.apache.flink.api.common.ExecutionMode
- org.apache.flink.api.common.InputDependencyConstraint
- org.apache.flink.api.common.restartstrategy.RestartStrategies$ExponentialDelayRestartStrategyConfiguration
- org.apache.flink.api.common.restartstrategy.RestartStrategies$FailureRateRestartStrategyConfiguration
- org.apache.flink.api.common.restartstrategy.RestartStrategies$FallbackRestartStrategyConfiguration
- org.apache.flink.api.common.restartstrategy.RestartStrategies$FixedDelayRestartStrategyConfiguration
- org.apache.flink.api.common.restartstrategy.RestartStrategies$NoRestartStrategyConfiguration
- org.apache.flink.api.common.restartstrategy.RestartStrategies$RestartStrategyConfiguration
- org.apache.flink.api.common.restartstrategy.RestartStrategies
- org.apache.flink.api.common.time.Time
- org.apache.flink.api.connector.sink.Committer
- org.apache.flink.api.connector.sink.GlobalCommitter
- org.apache.flink.api.connector.sink.Sink$InitContext
- org.apache.flink.api.connector.sink.Sink$ProcessingTimeService$ProcessingTimeCallback
- org.apache.flink.api.connector.sink.Sink$ProcessingTimeService
- org.apache.flink.api.connector.sink.SinkWriter$Context
- org.apache.flink.api.connector.sink.SinkWriter
- org.apache.flink.api.connector.sink.Sink
- org.apache.flink.api.connector.sink2.Sink$InitContextWrapper
- org.apache.flink.api.connector.sink2.Sink$InitContext
- org.apache.flink.api.connector.sink2.StatefulSink$StatefulSinkWriter
- org.apache.flink.api.connector.sink2.StatefulSink$WithCompatibleState
- org.apache.flink.api.connector.sink2.StatefulSink
- org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink$PrecommittingSinkWriter
- org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink
- org.apache.flink.api.java.CollectionEnvironment
- org.apache.flink.api.java.DataSet
- org.apache.flink.api.java.ExecutionEnvironmentFactory
- org.apache.flink.api.java.ExecutionEnvironment
- org.apache.flink.api.java.LocalEnvironment
- org.apache.flink.api.java.RemoteEnvironment
- org.apache.flink.api.java.aggregation.Aggregations
- org.apache.flink.api.java.aggregation.UnsupportedAggregationTypeException
- org.apache.flink.api.java.functions.FlatMapIterator
- org.apache.flink.api.java.functions.FunctionAnnotation$ForwardedFieldsFirst
- org.apache.flink.api.java.functions.FunctionAnnotation$ForwardedFieldsSecond
- org.apache.flink.api.java.functions.FunctionAnnotation$ForwardedFields
- org.apache.flink.api.java.functions.FunctionAnnotation$NonForwardedFieldsFirst
- org.apache.flink.api.java.functions.FunctionAnnotation$NonForwardedFieldsSecond
- org.apache.flink.api.java.functions.FunctionAnnotation$NonForwardedFields
- org.apache.flink.api.java.functions.FunctionAnnotation$ReadFieldsFirst
- org.apache.flink.api.java.functions.FunctionAnnotation$ReadFieldsSecond
- org.apache.flink.api.java.functions.FunctionAnnotation$ReadFields
- org.apache.flink.api.java.functions.FunctionAnnotation
- org.apache.flink.api.java.functions.GroupReduceIterator
- org.apache.flink.api.java.io.CollectionInputFormat
- org.apache.flink.api.java.io.CsvOutputFormat
- org.apache.flink.api.java.io.CsvReader
- org.apache.flink.api.java.io.DiscardingOutputFormat
- org.apache.flink.api.java.io.IteratorInputFormat
- org.apache.flink.api.java.io.LocalCollectionOutputFormat
- org.apache.flink.api.java.io.ParallelIteratorInputFormat
- org.apache.flink.api.java.io.PrimitiveInputFormat
- org.apache.flink.api.java.io.PrintingOutputFormat
- org.apache.flink.api.java.io.RowCsvInputFormat
- org.apache.flink.api.java.io.SplitDataProperties$SourcePartitionerMarker
- org.apache.flink.api.java.io.SplitDataProperties
- org.apache.flink.api.java.io.TextInputFormat
- org.apache.flink.api.java.io.TextOutputFormat$TextFormatter
- org.apache.flink.api.java.io.TextOutputFormat
- org.apache.flink.api.java.io.TextValueInputFormat
- org.apache.flink.api.java.io.TypeSerializerInputFormat
- org.apache.flink.api.java.io.TypeSerializerOutputFormat
- org.apache.flink.api.java.operators.AggregateOperator
- org.apache.flink.api.java.operators.CoGroupOperator$CoGroupOperatorSets
- org.apache.flink.api.java.operators.CoGroupOperator
- org.apache.flink.api.java.operators.CrossOperator$DefaultCross
- org.apache.flink.api.java.operators.CrossOperator$ProjectCross
- org.apache.flink.api.java.operators.CrossOperator
- org.apache.flink.api.java.operators.CustomUnaryOperation
- org.apache.flink.api.java.operators.DataSink
- org.apache.flink.api.java.operators.DataSource
- org.apache.flink.api.java.operators.DeltaIteration$SolutionSetPlaceHolder
- org.apache.flink.api.java.operators.DeltaIteration$WorksetPlaceHolder
- org.apache.flink.api.java.operators.DeltaIterationResultSet
- org.apache.flink.api.java.operators.DeltaIteration
- org.apache.flink.api.java.operators.DistinctOperator
- org.apache.flink.api.java.operators.FilterOperator
- org.apache.flink.api.java.operators.FlatMapOperator
- org.apache.flink.api.java.operators.GroupCombineOperator
- org.apache.flink.api.java.operators.GroupReduceOperator
- org.apache.flink.api.java.operators.Grouping
- org.apache.flink.api.java.operators.IterativeDataSet
- org.apache.flink.api.java.operators.JoinOperator$DefaultJoin
- org.apache.flink.api.java.operators.JoinOperator$EquiJoin
- org.apache.flink.api.java.operators.JoinOperator$JoinOperatorSets$JoinOperatorSetsPredicate
- org.apache.flink.api.java.operators.JoinOperator$JoinOperatorSets
- org.apache.flink.api.java.operators.JoinOperator$ProjectJoin
- org.apache.flink.api.java.operators.JoinOperator
- org.apache.flink.api.java.operators.MapOperator
- org.apache.flink.api.java.operators.MapPartitionOperator
- org.apache.flink.api.java.operators.Operator
- org.apache.flink.api.java.operators.PartitionOperator
- org.apache.flink.api.java.operators.ProjectOperator
- org.apache.flink.api.java.operators.ReduceOperator
- org.apache.flink.api.java.operators.SingleInputOperator
- org.apache.flink.api.java.operators.SingleInputUdfOperator
- org.apache.flink.api.java.operators.SortPartitionOperator
- org.apache.flink.api.java.operators.SortedGrouping
- org.apache.flink.api.java.operators.TwoInputOperator
- org.apache.flink.api.java.operators.TwoInputUdfOperator
- org.apache.flink.api.java.operators.UdfOperator
- org.apache.flink.api.java.operators.UnionOperator
- org.apache.flink.api.java.operators.UnsortedGrouping
- org.apache.flink.api.java.operators.join.JoinFunctionAssigner
- org.apache.flink.api.java.operators.join.JoinOperatorSetsBase$JoinOperatorSetsPredicateBase
- org.apache.flink.api.java.operators.join.JoinOperatorSetsBase
- org.apache.flink.api.java.operators.join.JoinType
- org.apache.flink.api.java.summarize.BooleanColumnSummary
- org.apache.flink.api.java.summarize.ColumnSummary
- org.apache.flink.api.java.summarize.NumericColumnSummary
- org.apache.flink.api.java.summarize.ObjectColumnSummary
- org.apache.flink.api.java.summarize.StringColumnSummary
- org.apache.flink.api.java.utils.AbstractParameterTool
- org.apache.flink.api.java.utils.DataSetUtils
- org.apache.flink.api.java.utils.MultipleParameterTool
- org.apache.flink.api.java.utils.ParameterTool
- org.apache.flink.configuration.AkkaOptions
- org.apache.flink.connector.file.src.reader.FileRecordFormat$Reader
- org.apache.flink.connector.file.src.reader.FileRecordFormat
- org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext
- org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend$PriorityQueueStateType
- org.apache.flink.core.execution.RestoreMode
- org.apache.flink.datastream.api.stream.KeyedPartitionStream$TwoKeyedPartitionStreams
- org.apache.flink.datastream.api.stream.NonKeyedPartitionStream$TwoNonKeyedPartitionStreams
- org.apache.flink.formats.avro.AvroRowDeserializationSchema
- org.apache.flink.formats.csv.CsvRowDeserializationSchema$Builder
- org.apache.flink.formats.csv.CsvRowDeserializationSchema
- org.apache.flink.formats.csv.CsvRowSerializationSchema$Builder
- org.apache.flink.formats.csv.CsvRowSerializationSchema
- org.apache.flink.formats.json.JsonRowDeserializationSchema$Builder
- org.apache.flink.formats.json.JsonRowDeserializationSchema
- org.apache.flink.formats.json.JsonRowSerializationSchema$Builder
- org.apache.flink.formats.json.JsonRowSerializationSchema
- org.apache.flink.metrics.reporter.InstantiateViaFactory
- org.apache.flink.metrics.reporter.InterceptInstantiationViaReflection
- org.apache.flink.runtime.jobgraph.SavepointConfigOptions
- org.apache.flink.runtime.state.CheckpointListener
- org.apache.flink.runtime.state.filesystem.FsStateBackendFactory
- org.apache.flink.runtime.state.filesystem.FsStateBackend
- org.apache.flink.runtime.state.memory.MemoryStateBackendFactory
- org.apache.flink.runtime.state.memory.MemoryStateBackend
- org.apache.flink.state.api.BootstrapTransformation
- org.apache.flink.state.api.EvictingWindowReader
- org.apache.flink.state.api.ExistingSavepoint
- org.apache.flink.state.api.KeyedOperatorTransformation
- org.apache.flink.state.api.NewSavepoint
- org.apache.flink.state.api.OneInputOperatorTransformation
- org.apache.flink.state.api.Savepoint
- org.apache.flink.state.api.WindowReader
- org.apache.flink.state.api.WindowedOperatorTransformation
- org.apache.flink.state.api.WritableSavepoint
- org.apache.flink.state.forst.fs.ByteBufferReadableFSDataInputStream
- org.apache.flink.state.forst.fs.ByteBufferWritableFSDataOutputStream
- org.apache.flink.state.forst.fs.ForStFlinkFileSystem
- org.apache.flink.streaming.api.TimeCharacteristic
- org.apache.flink.streaming.api.checkpoint.ExternallyInducedSource$CheckpointTrigger
- org.apache.flink.streaming.api.checkpoint.ExternallyInducedSource
- org.apache.flink.streaming.api.connector.sink2.WithPostCommitTopology
- org.apache.flink.streaming.api.connector.sink2.WithPreCommitTopology
- org.apache.flink.streaming.api.connector.sink2.WithPreWriteTopology
- org.apache.flink.streaming.api.datastream.IterativeStream$ConnectedIterativeStreams
- org.apache.flink.streaming.api.environment.CheckpointConfig$ExternalizedCheckpointCleanup
- org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions
- org.apache.flink.streaming.api.environment.StreamPipelineOptions
- org.apache.flink.streaming.api.functions.AscendingTimestampExtractor
- org.apache.flink.streaming.api.functions.sink.DiscardingSink
- org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction
- org.apache.flink.streaming.api.functions.sink.PrintSinkFunction
- org.apache.flink.streaming.api.functions.sink.RichSinkFunction
- org.apache.flink.streaming.api.functions.sink.SinkFunction$Context
- org.apache.flink.streaming.api.functions.sink.SinkFunction
- org.apache.flink.streaming.api.functions.sink.SocketClientSink
- org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction
- org.apache.flink.streaming.api.functions.sink.WriteFormatAsCsv
- org.apache.flink.streaming.api.functions.sink.WriteFormatAsText
- org.apache.flink.streaming.api.functions.sink.WriteFormat
- org.apache.flink.streaming.api.functions.sink.WriteSinkFunctionByMillis
- org.apache.flink.streaming.api.functions.sink.WriteSinkFunction
- org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$BulkFormatBuilder
- org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$DefaultBulkFormatBuilder
- org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$DefaultRowFormatBuilder
- org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder
- org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
- org.apache.flink.streaming.api.functions.source.FromElementsFunction
- org.apache.flink.streaming.api.functions.source.FromIteratorFunction
- org.apache.flink.streaming.api.functions.source.FromSplittableIteratorFunction
- org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase
- org.apache.flink.streaming.api.functions.source.MultipleIdsMessageAcknowledgingSourceBase
- org.apache.flink.streaming.api.functions.source.ParallelSourceFunction
- org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction
- org.apache.flink.streaming.api.functions.source.RichSourceFunction
- org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction
- org.apache.flink.streaming.api.functions.source.SourceFunction$SourceContext
- org.apache.flink.streaming.api.functions.source.SourceFunction
- org.apache.flink.streaming.api.functions.source.StatefulSequenceSource
- org.apache.flink.streaming.api.functions.source.datagen.DataGeneratorSource
- org.apache.flink.streaming.api.functions.windowing.RichProcessAllWindowFunction
- org.apache.flink.streaming.api.functions.windowing.RichProcessWindowFunction
- org.apache.flink.streaming.api.operators.SetupableStreamOperator
- org.apache.flink.streaming.api.operators.YieldingOperatorFactory
- org.apache.flink.streaming.api.windowing.time.Time
- org.apache.flink.streaming.util.serialization.AbstractDeserializationSchema
- org.apache.flink.streaming.util.serialization.DeserializationSchema
- org.apache.flink.streaming.util.serialization.SerializationSchema
- org.apache.flink.streaming.util.serialization.SimpleStringSchema
- org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema
- org.apache.flink.table.api.TableColumn$ComputedColumn
- org.apache.flink.table.api.TableColumn$MetadataColumn
- org.apache.flink.table.api.TableColumn$PhysicalColumn
- org.apache.flink.table.api.TableColumn
- org.apache.flink.table.api.TableSchema$Builder
- org.apache.flink.table.api.TableSchema
- org.apache.flink.table.api.constraints.Constraint$ConstraintType
- org.apache.flink.table.api.constraints.Constraint
- org.apache.flink.table.api.constraints.UniqueConstraint
- org.apache.flink.table.connector.sink.SinkFunctionProvider
- org.apache.flink.table.connector.sink.SinkProvider
- org.apache.flink.table.connector.source.AsyncTableFunctionProvider
- org.apache.flink.table.connector.source.SourceFunctionProvider
- org.apache.flink.table.connector.source.TableFunctionProvider
- org.apache.flink.table.descriptors.Descriptor
- org.apache.flink.table.descriptors.RowtimeValidator
- org.apache.flink.table.descriptors.Rowtime
- org.apache.flink.table.descriptors.SchemaValidator
- org.apache.flink.table.descriptors.Schema
- org.apache.flink.table.factories.StreamTableSinkFactory
- org.apache.flink.table.factories.StreamTableSourceFactory
- org.apache.flink.table.factories.TableFactory
- org.apache.flink.table.factories.TableSinkFactory$Context
- org.apache.flink.table.factories.TableSinkFactory
- org.apache.flink.table.factories.TableSourceFactory$Context
- org.apache.flink.table.factories.TableSourceFactory
- org.apache.flink.table.planner.codegen.agg.batch.HashAggCodeGenerator$
- org.apache.flink.table.planner.plan.metadata.FlinkRelMdRowCount$
- org.apache.flink.table.planner.plan.optimize.RelNodeBlockPlanBuilder$
- org.apache.flink.table.planner.plan.rules.logical.JoinDeriveNullFilterRule$
- org.apache.flink.table.planner.plan.rules.physical.batch.BatchPhysicalJoinRuleBase$
- org.apache.flink.table.planner.plan.rules.physical.batch.BatchPhysicalSortMergeJoinRule$
- org.apache.flink.table.planner.plan.rules.physical.batch.BatchPhysicalSortRule$
- org.apache.flink.table.planner.plan.rules.physical.stream.IncrementalAggregateRule$
- org.apache.flink.table.planner.plan.utils.FlinkRexUtil$
- org.apache.flink.table.sinks.AppendStreamTableSink
- org.apache.flink.table.sinks.OutputFormatTableSink
- org.apache.flink.table.sinks.OverwritableTableSink
- org.apache.flink.table.sinks.PartitionableTableSink
- org.apache.flink.table.sinks.RetractStreamTableSink
- org.apache.flink.table.sinks.TableSink
- org.apache.flink.table.sinks.UpsertStreamTableSink
- org.apache.flink.table.sources.DefinedFieldMapping
- org.apache.flink.table.sources.DefinedProctimeAttribute
- org.apache.flink.table.sources.DefinedRowtimeAttributes
- org.apache.flink.table.sources.FieldComputer
- org.apache.flink.table.sources.InputFormatTableSource
- org.apache.flink.table.sources.LimitableTableSource
- org.apache.flink.table.sources.LookupableTableSource
- org.apache.flink.table.sources.NestedFieldsProjectableTableSource
- org.apache.flink.table.sources.PartitionableTableSource
- org.apache.flink.table.sources.ProjectableTableSource
- org.apache.flink.table.sources.TableSource
- org.apache.flink.table.sources.tsextractors.ExistingField
- org.apache.flink.table.sources.tsextractors.StreamRecordTimestamp
- org.apache.flink.table.sources.tsextractors.TimestampExtractor
- org.apache.flink.table.types.logical.TypeInformationRawType
- org.apache.flink.table.utils.TypeStringUtils
- org.apache.flink.walkthrough.common.sink.AlertSink
- org.apache.flink.walkthrough.common.source.TransactionSource

The following classes have members removed or changed (the previous form is shown after `<-` where applicable):

- org.apache.flink.table.api.bridge.java.StreamTableEnvironment
  - void registerDataStream(java.lang.String, org.apache.flink.streaming.api.datastream.DataStream<T>)
  - void registerFunction(java.lang.String, org.apache.flink.table.functions.TableFunction<T>)
  - void registerFunction(java.lang.String, org.apache.flink.table.functions.AggregateFunction<T,ACC>)
  - void registerFunction(java.lang.String, org.apache.flink.table.functions.TableAggregateFunction<T,ACC>)
- org.apache.flink.table.api.config.ExecutionConfigOptions
  - org.apache.flink.configuration.ConfigOption<java.lang.Boolean> TABLE_EXEC_LEGACY_TRANSFORMATION_UIDS
  - org.apache.flink.configuration.ConfigOption<java.lang.String> TABLE_EXEC_SHUFFLE_MODE
- org.apache.flink.table.api.config.LookupJoinHintOptions
  - org.apache.flink.shaded.guava32.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption> (<- org.apache.flink.shaded.guava31.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption>) getRequiredOptions()
  - org.apache.flink.shaded.guava32.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption> (<- org.apache.flink.shaded.guava31.com.google.common.collect.ImmutableSet<org.apache.flink.configuration.ConfigOption>) getSupportedOptions()
- org.apache.flink.table.api.config.OptimizerConfigOptions
  - org.apache.flink.configuration.ConfigOption<java.lang.Boolean> TABLE_OPTIMIZER_SOURCE_PREDICATE_PUSHDOWN_ENABLED
  - org.apache.flink.configuration.ConfigOption<java.lang.Boolean> TABLE_OPTIMIZER_SOURCE_AGGREGATE_PUSHDOWN_ENABLED
- org.apache.flink.table.api.dataview.ListView
  - TRANSIENT(-) org.apache.flink.api.common.typeinfo.TypeInformation<?> elementType
  - ListView(org.apache.flink.api.common.typeinfo.TypeInformation<?>)
- org.apache.flink.table.api.dataview.MapView
  - TRANSIENT(-) org.apache.flink.api.common.typeinfo.TypeInformation<?> valueType
  - TRANSIENT(-) org.apache.flink.api.common.typeinfo.TypeInformation<?> keyType
  - MapView(org.apache.flink.api.common.typeinfo.TypeInformation<?>, org.apache.flink.api.common.typeinfo.TypeInformation<?>)
- org.apache.flink.table.api.EnvironmentSettings
  - org.apache.flink.table.api.EnvironmentSettings fromConfiguration(org.apache.flink.configuration.ReadableConfig)
  - org.apache.flink.configuration.Configuration toConfiguration()
- org.apache.flink.table.api.internal.BaseExpressions
  - java.lang.Object cast(org.apache.flink.api.common.typeinfo.TypeInformation<?>)
- org.apache.flink.table.api.OverWindow
  - java.util.Optional<org.apache.flink.table.expressions.Expression> (<- org.apache.flink.table.expressions.Expression) getPreceding()
- org.apache.flink.table.api.Table
  - org.apache.flink.table.legacy.api.TableSchema (<- org.apache.flink.table.api.TableSchema) getSchema()
- org.apache.flink.table.api.TableConfig
  - PRIVATE (<- PUBLIC) TableConfig()
  - long getMaxIdleStateRetentionTime()
  - long getMinIdleStateRetentionTime()
  - void setIdleStateRetentionTime(org.apache.flink.api.common.time.Time, org.apache.flink.api.common.time.Time)
- org.apache.flink.table.api.TableDescriptor
  - org.apache.flink.table.api.TableDescriptor$Builder forManaged()
- org.apache.flink.table.api.TableResult
  - org.apache.flink.table.api.TableSchema getTableSchema()
- org.apache.flink.table.catalog.Catalog
  - java.util.Optional<org.apache.flink.table.factories.TableFactory> getTableFactory()
  - boolean supportsManagedTable()
- org.apache.flink.table.catalog.CatalogBaseTable
  - org.apache.flink.table.legacy.api.TableSchema (<- org.apache.flink.table.api.TableSchema) getSchema()
- org.apache.flink.table.catalog.CatalogFunction
  - boolean isGeneric()
- org.apache.flink.table.catalog.CatalogTable
  - org.apache.flink.table.catalog.CatalogTable of(org.apache.flink.table.api.Schema, java.lang.String, java.util.List<java.lang.String>, java.util.Map<java.lang.String,java.lang.String>)
  - org.apache.flink.table.catalog.CatalogTable of(org.apache.flink.table.api.Schema, java.lang.String, java.util.List<java.lang.String>, java.util.Map<java.lang.String,java.lang.String>, java.lang.Long)
  - java.util.Map<java.lang.String,java.lang.String> toProperties()
- org.apache.flink.table.catalog.ResolvedCatalogBaseTable
  - org.apache.flink.table.legacy.api.TableSchema (<- org.apache.flink.table.api.TableSchema) getSchema()
- org.apache.flink.table.connector.sink.DataStreamSinkProvider
  - (<- NON_ABSTRACT) org.apache.flink.streaming.api.datastream.DataStreamSink<?> consumeDataStream(org.apache.flink.table.connector.ProviderContext, org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.table.data.RowData>)
  - org.apache.flink.streaming.api.datastream.DataStreamSink<?> consumeDataStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.table.data.RowData>)
- org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown
  - void applyProjection(int[][])
  - (<- NON_ABSTRACT) void applyProjection(int[][], org.apache.flink.table.types.DataType)
- org.apache.flink.table.connector.source.DataStreamScanProvider
  - (<- NON_ABSTRACT) org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.table.data.RowData> produceDataStream(org.apache.flink.table.connector.ProviderContext, org.apache.flink.streaming.api.environment.StreamExecutionEnvironment)
  - org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.table.data.RowData> produceDataStream(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment)
- org.apache.flink.table.expressions.CallExpression
  - CallExpression(org.apache.flink.table.functions.FunctionIdentifier, org.apache.flink.table.functions.FunctionDefinition, java.util.List<org.apache.flink.table.expressions.ResolvedExpression>, org.apache.flink.table.types.DataType)
  - CallExpression(org.apache.flink.table.functions.FunctionDefinition, java.util.List<org.apache.flink.table.expressions.ResolvedExpression>, org.apache.flink.table.types.DataType)
- org.apache.flink.table.factories.FactoryUtil
  - org.apache.flink.table.connector.sink.DynamicTableSink createDynamicTableSink(org.apache.flink.table.factories.DynamicTableSinkFactory, org.apache.flink.table.catalog.ObjectIdentifier, org.apache.flink.table.catalog.ResolvedCatalogTable, org.apache.flink.configuration.ReadableConfig, java.lang.ClassLoader, boolean)
  - org.apache.flink.table.connector.source.DynamicTableSource createDynamicTableSource(org.apache.flink.table.factories.DynamicTableSourceFactory, org.apache.flink.table.catalog.ObjectIdentifier, org.apache.flink.table.catalog.ResolvedCatalogTable, org.apache.flink.configuration.ReadableConfig, java.lang.ClassLoader, boolean)
  - org.apache.flink.table.connector.sink.DynamicTableSink createTableSink(org.apache.flink.table.catalog.Catalog, org.apache.flink.table.catalog.ObjectIdentifier, org.apache.flink.table.catalog.ResolvedCatalogTable, org.apache.flink.configuration.ReadableConfig, java.lang.ClassLoader, boolean)
  - org.apache.flink.table.connector.source.DynamicTableSource createTableSource(org.apache.flink.table.catalog.Catalog, org.apache.flink.table.catalog.ObjectIdentifier, org.apache.flink.table.catalog.ResolvedCatalogTable, org.apache.flink.configuration.ReadableConfig, java.lang.ClassLoader, boolean)
- org.apache.flink.table.factories.FunctionDefinitionFactory
  - org.apache.flink.table.functions.FunctionDefinition createFunctionDefinition(java.lang.String, org.apache.flink.table.catalog.CatalogFunction)
  - (<- NON_ABSTRACT) org.apache.flink.table.functions.FunctionDefinition createFunctionDefinition(java.lang.String, org.apache.flink.table.catalog.CatalogFunction,
org.apache.flink.table.factories.FunctionDefinitionFactory$Context)org.apache.flink.table.functions.FunctionContextFunctionContext(org.apache.flink.api.common.functions.RuntimeContext, java.lang.ClassLoader, org.apache.flink.configuration.Configuration)org.apache.flink.table.plan.stats.ColumnStatsColumnStats(java.lang.Long, java.lang.Long, java.lang.Double, java.lang.Integer, java.lang.Number, java.lang.Number)java.lang.Number getMaxValue()java.lang.Number getMinValue()org.apache.flink.table.types.logical.SymbolTypeSymbolType(boolean, java.lang.Class<T>)SymbolType(java.lang.Class<T>)org.apache.flink.table.types.logical.utils.LogicalTypeParserorg.apache.flink.table.types.logical.LogicalType parse(java.lang.String)org.apache.flink.api.common.state.v2.StateIteratororg.apache.flink.api.common.state.v2.StateFuture<java.util.Collection<U>> onNext(java.util.function.Function<T,org.apache.flink.api.common.state.v2.StateFuture<U>>)org.apache.flink.api.common.state.v2.StateFuture<java.lang.Void> onNext(java.util.function.Consumer<T>)org.apache.flink.table.api.ImplicitExpressionConversionsorg.apache.flink.table.expressions.Expression toTimestampLtz(org.apache.flink.table.expressions.Expression, org.apache.flink.table.expressions.Expression)SYNTHETIC(-) org.apache.flink.table.expressions.Expression toTimestampLtz$(org.apache.flink.table.api.ImplicitExpressionConversions, org.apache.flink.table.expressions.Expression, org.apache.flink.table.expressions.Expression)org.apache.flink.api.common.eventtime.WatermarksWithIdlenessWatermarksWithIdleness(org.apache.flink.api.common.eventtime.WatermarkGenerator<T>, java.time.Duration)org.apache.flink.api.common.ExecutionConfigint PARALLELISM_AUTO_MAXvoid addDefaultKryoSerializer(java.lang.Class<?>, com.esotericsoftware.kryo.Serializer<?>)void addDefaultKryoSerializer(java.lang.Class<?>, java.lang.Class<? extends com.esotericsoftware.kryo.Serializer<? 
extends ?>>)boolean canEqual(java.lang.Object)void disableAutoTypeRegistration()void disableForceAvro()void disableForceKryo()void disableGenericTypes()void enableForceAvro()void enableForceKryo()void enableGenericTypes()int getAsyncInflightRecordsLimit()int getAsyncStateBufferSize()long getAsyncStateBufferTimeout()org.apache.flink.api.common.InputDependencyConstraint getDefaultInputDependencyConstraint()java.util.LinkedHashMap<java.lang.Class<?>,java.lang.Class<com.esotericsoftware.kryo.Serializer<? extends ?>>> getDefaultKryoSerializerClasses()java.util.LinkedHashMap<java.lang.Class<?>,org.apache.flink.api.common.ExecutionConfig$SerializableSerializer<?>> getDefaultKryoSerializers()org.apache.flink.api.common.ExecutionMode getExecutionMode()long getExecutionRetryDelay()int getNumberOfExecutionRetries()java.util.LinkedHashSet<java.lang.Class<?>> getRegisteredKryoTypes()java.util.LinkedHashSet<java.lang.Class<?>> getRegisteredPojoTypes()java.util.LinkedHashMap<java.lang.Class<?>,java.lang.Class<com.esotericsoftware.kryo.Serializer<? extends ?>>> getRegisteredTypesWithKryoSerializerClasses()java.util.LinkedHashMap<java.lang.Class<?>,org.apache.flink.api.common.ExecutionConfig$SerializableSerializer<?>> getRegisteredTypesWithKryoSerializers()org.apache.flink.api.common.restartstrategy.RestartStrategies$RestartStrategyConfiguration getRestartStrategy()boolean hasGenericTypesDisabled()boolean isAutoTypeRegistrationDisabled()boolean isForceAvroEnabled()boolean isForceKryoEnabled()void registerKryoType(java.lang.Class<?>)void registerPojoType(java.lang.Class<?>)void registerTypeWithKryoSerializer(java.lang.Class<?>, com.esotericsoftware.kryo.Serializer<?>)void registerTypeWithKryoSerializer(java.lang.Class<?>, java.lang.Class<? 
extends com.esotericsoftware.kryo.Serializer>)org.apache.flink.api.common.ExecutionConfig setAsyncInflightRecordsLimit(int)org.apache.flink.api.common.ExecutionConfig setAsyncStateBufferSize(int)org.apache.flink.api.common.ExecutionConfig setAsyncStateBufferTimeout(long)void setDefaultInputDependencyConstraint(org.apache.flink.api.common.InputDependencyConstraint)void setExecutionMode(org.apache.flink.api.common.ExecutionMode)org.apache.flink.api.common.ExecutionConfig setExecutionRetryDelay(long)org.apache.flink.api.common.ExecutionConfig setNumberOfExecutionRetries(int)void setRestartStrategy(org.apache.flink.api.common.restartstrategy.RestartStrategies$RestartStrategyConfiguration)org.apache.flink.api.common.functions.RichFunctionvoid open(org.apache.flink.configuration.Configuration)(<- NON_ABSTRACT) void open(org.apache.flink.api.common.functions.OpenContext)org.apache.flink.api.common.functions.RuntimeContextint getAttemptNumber()org.apache.flink.api.common.ExecutionConfig getExecutionConfig()int getIndexOfThisSubtask()org.apache.flink.api.common.JobID getJobId()int getMaxNumberOfParallelSubtasks()int getNumberOfParallelSubtasks()java.lang.String getTaskName()java.lang.String getTaskNameWithSubtasks()org.apache.flink.api.common.io.BinaryInputFormatjava.lang.String BLOCK_SIZE_PARAMETER_KEYorg.apache.flink.api.common.io.BinaryOutputFormatjava.lang.String BLOCK_SIZE_PARAMETER_KEYorg.apache.flink.api.common.io.FileInputFormatjava.lang.String ENUMERATE_NESTED_FILES_FLAGorg.apache.flink.core.fs.Path getFilePath()boolean supportsMultiPaths()org.apache.flink.api.common.io.FileOutputFormatjava.lang.String FILE_PARAMETER_KEYorg.apache.flink.api.common.io.FinalizeOnMastervoid finalizeGlobal(int)(<- NON_ABSTRACT) void finalizeGlobal(org.apache.flink.api.common.io.FinalizeOnMaster$FinalizationContext)org.apache.flink.api.common.io.OutputFormatvoid open(int, int)(<- NON_ABSTRACT) void 
open(org.apache.flink.api.common.io.OutputFormat$InitializationContext)org.apache.flink.api.common.JobExecutionResultorg.apache.flink.api.common.JobExecutionResult fromJobSubmissionResult(org.apache.flink.api.common.JobSubmissionResult)java.lang.Integer getIntCounterResult(java.lang.String)org.apache.flink.api.common.serialization.SerializerConfigjava.util.LinkedHashMap<java.lang.Class<?>,org.apache.flink.api.common.ExecutionConfig$SerializableSerializer<?>> getDefaultKryoSerializers()java.util.LinkedHashMap<java.lang.Class<?>,org.apache.flink.api.common.ExecutionConfig$SerializableSerializer<?>> getRegisteredTypesWithKryoSerializers()org.apache.flink.api.common.state.StateTtlConfigorg.apache.flink.api.common.time.Time getTtl()org.apache.flink.api.common.state.StateTtlConfig$Builder newBuilder(org.apache.flink.api.common.time.Time)org.apache.flink.api.common.state.StateTtlConfig$BuilderStateTtlConfig$Builder(org.apache.flink.api.common.time.Time)org.apache.flink.api.common.state.StateTtlConfig$Builder setTtl(org.apache.flink.api.common.time.Time)org.apache.flink.api.common.typeinfo.TypeInformation(<- NON_ABSTRACT) org.apache.flink.api.common.typeutils.TypeSerializer<T><T> createSerializer(org.apache.flink.api.common.serialization.SerializerConfig)org.apache.flink.api.common.typeutils.TypeSerializer<T> createSerializer(org.apache.flink.api.common.ExecutionConfig)org.apache.flink.api.common.typeutils.TypeSerializerSnapshotorg.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility<T> resolveSchemaCompatibility(org.apache.flink.api.common.typeutils.TypeSerializer<T>)(<- NON_ABSTRACT) org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility<T><T> resolveSchemaCompatibility(org.apache.flink.api.common.typeutils.TypeSerializerSnapshot<T><T>)org.apache.flink.api.connector.sink2.Sinkorg.apache.flink.api.connector.sink2.SinkWriter<InputT> createWriter(org.apache.flink.api.connector.sink2.Sink$InitContext)(<- NON_ABSTRACT) 
org.apache.flink.api.connector.sink2.SinkWriter<InputT><InputT> createWriter(org.apache.flink.api.connector.sink2.WriterInitContext)org.apache.flink.api.java.typeutils.PojoTypeInfoorg.apache.flink.api.java.typeutils.runtime.PojoSerializer<T> createPojoSerializer(org.apache.flink.api.common.ExecutionConfig)org.apache.flink.api.java.typeutils.RowTypeInfoorg.apache.flink.api.common.typeutils.TypeSerializer<org.apache.flink.types.Row> createLegacySerializer(org.apache.flink.api.common.serialization.SerializerConfig)org.apache.flink.configuration.CheckpointingOptionsorg.apache.flink.configuration.ConfigOption<java.lang.Boolean> LOCAL_RECOVERYorg.apache.flink.configuration.ConfigOption<java.lang.String> STATE_BACKENDorg.apache.flink.configuration.ConfigOption<java.lang.Boolean> ASYNC_SNAPSHOTSorg.apache.flink.configuration.ClusterOptionsorg.apache.flink.configuration.ConfigOption<java.lang.Boolean> FINE_GRAINED_SHUFFLE_MODE_ALL_BLOCKINGorg.apache.flink.configuration.ConfigOption<java.lang.Boolean> EVENLY_SPREAD_OUT_SLOTS_STRATEGYorg.apache.flink.configuration.ConfigConstantsjava.lang.String HA_ZOOKEEPER_LEADER_PATHdouble DEFAULT_AKKA_WATCH_THRESHOLDint DEFAULT_JOB_MANAGER_IPC_PORTjava.lang.String JOB_MANAGER_WEB_TMPDIR_KEYint DEFAULT_TASK_MANAGER_MEMORY_SEGMENT_SIZEjava.lang.String METRICS_SCOPE_NAMING_TASKjava.lang.String ZOOKEEPER_NAMESPACE_KEYint DEFAULT_AKKA_DISPATCHER_THROUGHPUTjava.lang.String RESTART_STRATEGY_FIXED_DELAY_ATTEMPTSjava.lang.String MESOS_MASTER_URLjava.lang.String FLINK_BASE_DIR_PATH_KEYjava.lang.String JOB_MANAGER_WEB_SSL_ENABLEDjava.lang.String YARN_APPLICATION_TAGSjava.lang.String HDFS_SITE_CONFIGjava.lang.String EXECUTION_RETRY_DELAY_KEYint DEFAULT_MESOS_ARTIFACT_SERVER_PORTboolean DEFAULT_SECURITY_SSL_VERIFY_HOSTNAMEjava.lang.String CONTAINERIZED_HEAP_CUTOFF_MINjava.lang.String YARN_HEARTBEAT_DELAY_SECONDSjava.lang.String AKKA_SSL_ENABLEDjava.lang.String HA_MODEjava.lang.String ZOOKEEPER_MESOS_WORKERS_PATHboolean 
DEFAULT_ZOOKEEPER_SASL_DISABLEjava.lang.String METRICS_SCOPE_DELIMITERjava.lang.String LOCAL_NUMBER_RESOURCE_MANAGERjava.lang.String AKKA_TCP_TIMEOUTjava.lang.String METRICS_SCOPE_NAMING_OPERATORjava.lang.String ZOOKEEPER_RECOVERY_PATHint DEFAULT_ZOOKEEPER_LEADER_PORTjava.lang.String DEFAULT_ZOOKEEPER_LATCH_PATHint DEFAULT_ZOOKEEPER_PEER_PORTjava.lang.String METRICS_SCOPE_NAMING_TM_JOBint DEFAULT_JOB_MANAGER_WEB_BACK_PRESSURE_NUM_SAMPLESjava.lang.String HA_ZOOKEEPER_SESSION_TIMEOUTjava.lang.String FLINK_JVM_OPTIONSjava.lang.String HA_ZOOKEEPER_CHECKPOINT_COUNTER_PATHjava.lang.String METRICS_SCOPE_NAMING_JMjava.lang.String DEFAULT_YARN_JOB_MANAGER_PORTboolean DEFAULT_JOB_MANAGER_WEB_CHECKPOINTS_DISABLEjava.lang.String HA_ZOOKEEPER_QUORUM_KEYboolean DEFAULT_JOB_MANAGER_WEB_SUBMIT_ENABLEDjava.lang.String JOB_MANAGER_WEB_CHECKPOINTS_HISTORY_SIZEjava.lang.String ZOOKEEPER_JOBGRAPHS_PATHjava.lang.String ZOOKEEPER_SASL_SERVICE_NAMEjava.lang.String DEFAULT_AKKA_LOOKUP_TIMEOUTjava.lang.String RESTART_STRATEGY_FAILURE_RATE_MAX_FAILURES_PER_INTERVALjava.lang.String JOB_MANAGER_WEB_PORT_KEYjava.lang.String METRICS_LATENCY_HISTORY_SIZEint DEFAULT_BLOB_FETCH_BACKLOGjava.lang.String JOB_MANAGER_WEB_BACK_PRESSURE_REFRESH_INTERVALfloat DEFAULT_SORT_SPILLING_THRESHOLDjava.lang.String DEFAULT_AKKA_TRANSPORT_HEARTBEAT_INTERVALjava.lang.String CONTAINERIZED_MASTER_ENV_PREFIXint DEFAULT_JOB_MANAGER_WEB_ARCHIVE_COUNTjava.lang.String TASK_MANAGER_HOSTNAME_KEYjava.lang.String AKKA_WATCH_HEARTBEAT_INTERVALjava.lang.String DEFAULT_TASK_MANAGER_TMP_PATHint DEFAULT_EXECUTION_RETRIESint DEFAULT_JOB_MANAGER_WEB_FRONTEND_PORTjava.lang.String JOB_MANAGER_WEB_LOG_PATH_KEYjava.lang.String TASK_MANAGER_MEMORY_SIZE_KEYjava.lang.String DEFAULT_MESOS_RESOURCEMANAGER_FRAMEWORK_NAMEjava.lang.String TASK_MANAGER_DATA_PORT_KEYjava.lang.String ZOOKEEPER_CHECKPOINTS_PATHjava.lang.String HA_JOB_MANAGER_PORTjava.lang.String TASK_MANAGER_REFUSED_REGISTRATION_PAUSEjava.lang.String 
CONTAINERIZED_HEAP_CUTOFF_RATIOjava.lang.String DEFAULT_SORT_SPILLING_THRESHOLD_KEYjava.lang.String YARN_CONTAINER_START_COMMAND_TEMPLATEboolean DEFAULT_JOB_MANAGER_WEB_SSL_ENABLEDjava.lang.String LIBRARY_CACHE_MANAGER_CLEANUP_INTERVALjava.lang.String JOB_MANAGER_WEB_CHECKPOINTS_DISABLEjava.lang.String DEFAULT_ZOOKEEPER_LEADER_PATHint DEFAULT_JOB_MANAGER_WEB_BACK_PRESSURE_DELAYjava.lang.String DEFAULT_TASK_MANAGER_MAX_REGISTRATION_PAUSEjava.lang.String METRICS_REPORTERS_LISTjava.lang.String DEFAULT_RECOVERY_MODEint DEFAULT_METRICS_LATENCY_HISTORY_SIZEjava.lang.String TASK_MANAGER_INITIAL_REGISTRATION_PAUSEjava.lang.String DEFAULT_MESOS_RESOURCEMANAGER_FRAMEWORK_ROLEint DEFAULT_JOB_MANAGER_WEB_CHECKPOINTS_HISTORY_SIZEjava.lang.String YARN_PROPERTIES_FILE_LOCATIONjava.lang.String RECOVERY_JOB_MANAGER_PORTboolean DEFAULT_SECURITY_SSL_ENABLEDjava.lang.String MESOS_FAILOVER_TIMEOUT_SECONDSjava.lang.String RUNTIME_HASH_JOIN_BLOOM_FILTERS_KEYjava.lang.String ZOOKEEPER_LEADER_PATHjava.lang.String ZOOKEEPER_MAX_RETRY_ATTEMPTSjava.lang.String HA_ZOOKEEPER_CHECKPOINTS_PATHjava.lang.String MESOS_RESOURCEMANAGER_FRAMEWORK_ROLEint DEFAULT_JOB_MANAGER_WEB_BACK_PRESSURE_REFRESH_INTERVALjava.lang.String DEFAULT_ZOOKEEPER_MESOS_WORKERS_PATHjava.lang.String JOB_MANAGER_IPC_PORT_KEYjava.lang.String AKKA_WATCH_HEARTBEAT_PAUSEjava.lang.String MESOS_RESOURCEMANAGER_FRAMEWORK_NAMEjava.lang.String DELIMITED_FORMAT_MAX_SAMPLE_LENGTH_KEYjava.lang.String STATE_BACKENDjava.lang.String MESOS_RESOURCEMANAGER_FRAMEWORK_PRINCIPALlong DEFAULT_TASK_MANAGER_DEBUG_MEMORY_USAGE_LOG_INTERVAL_MSjava.lang.String DEFAULT_AKKA_CLIENT_TIMEOUTint DEFAULT_SPILLING_MAX_FANjava.lang.String TASK_MANAGER_IPC_PORT_KEYjava.lang.String TASK_MANAGER_MEMORY_OFF_HEAP_KEYboolean DEFAULT_FILESYSTEM_OVERWRITEboolean DEFAULT_USE_LARGE_RECORD_HANDLERjava.lang.String HA_ZOOKEEPER_JOBGRAPHS_PATHboolean DEFAULT_BLOB_SERVICE_SSL_ENABLEDjava.lang.String ZOOKEEPER_SESSION_TIMEOUTjava.lang.String 
TASK_MANAGER_NETWORK_DEFAULT_IO_MODEjava.lang.String SECURITY_SSL_TRUSTSTORE_PASSWORDint DEFAULT_ZOOKEEPER_MAX_RETRY_ATTEMPTSjava.lang.String AKKA_STARTUP_TIMEOUTjava.lang.String TASK_MANAGER_TMP_DIR_KEYjava.lang.String USE_LARGE_RECORD_HANDLER_KEYjava.lang.String DEFAULT_ZOOKEEPER_DIR_KEYint DEFAULT_YARN_MIN_HEAP_CUTOFFjava.lang.String TASK_MANAGER_DATA_SSL_ENABLEDjava.lang.String HDFS_DEFAULT_CONFIGboolean DEFAULT_TASK_MANAGER_DATA_SSL_ENABLEDjava.lang.String DEFAULT_ZOOKEEPER_JOBGRAPHS_PATHjava.lang.String HA_ZOOKEEPER_MESOS_WORKERS_PATHjava.lang.String BLOB_STORAGE_DIRECTORY_KEYjava.lang.String DEFAULT_STATE_BACKENDjava.lang.String HA_ZOOKEEPER_RETRY_WAITjava.lang.String AKKA_ASK_TIMEOUTjava.lang.String JOB_MANAGER_WEB_SUBMIT_ENABLED_KEYjava.lang.String DEFAULT_ZOOKEEPER_NAMESPACE_KEYjava.lang.String DEFAULT_ZOOKEEPER_CHECKPOINTS_PATHint DEFAULT_LOCAL_NUMBER_JOB_MANAGERjava.lang.String AKKA_TRANSPORT_HEARTBEAT_INTERVALjava.lang.String DEFAULT_ZOOKEEPER_CHECKPOINT_COUNTER_PATHjava.lang.String FS_STREAM_OPENING_TIMEOUT_KEYjava.lang.String SECURITY_SSL_TRUSTSTOREjava.lang.String METRICS_SCOPE_NAMING_JM_JOBjava.lang.String MESOS_INITIAL_TASKSjava.lang.String AKKA_FRAMESIZEint DEFAULT_ZOOKEEPER_INIT_LIMITjava.lang.String JOB_MANAGER_WEB_BACK_PRESSURE_CLEAN_UP_INTERVALjava.lang.String SECURITY_SSL_KEYSTOREboolean DEFAULT_MESOS_ARTIFACT_SERVER_SSL_ENABLEDjava.lang.String HA_ZOOKEEPER_MAX_RETRY_ATTEMPTSint DEFAULT_PARALLELISMjava.lang.String RECOVERY_MODEjava.lang.String EXECUTION_RETRIES_KEYjava.lang.String METRICS_REPORTER_SCOPE_DELIMITERjava.lang.String LOCAL_START_WEBSERVERjava.lang.String LOCAL_NUMBER_JOB_MANAGERjava.lang.String RESTART_STRATEGYjava.lang.String ZOOKEEPER_QUORUM_KEYint DEFAULT_MESOS_FAILOVER_TIMEOUT_SECSboolean DEFAULT_TASK_MANAGER_MEMORY_PRE_ALLOCATEint DEFAULT_LOCAL_NUMBER_RESOURCE_MANAGERjava.lang.String HA_ZOOKEEPER_CLIENT_ACLjava.lang.String METRICS_REPORTER_FACTORY_CLASS_SUFFIXboolean DEFAULT_FILESYSTEM_ALWAYS_CREATE_DIRECTORYjava.lang.String 
BLOB_FETCH_CONCURRENT_KEYjava.lang.String FILESYSTEM_DEFAULT_OVERWRITE_KEYjava.lang.String RESOURCE_MANAGER_IPC_PORT_KEYjava.lang.String DEFAULT_AKKA_ASK_TIMEOUTint DEFAULT_ZOOKEEPER_CLIENT_PORTdouble DEFAULT_AKKA_TRANSPORT_THRESHOLDjava.lang.String DEFAULT_AKKA_FRAMESIZEjava.lang.String TASK_MANAGER_NUM_TASK_SLOTSjava.lang.String YARN_APPLICATION_MASTER_ENV_PREFIXjava.lang.String JOB_MANAGER_WEB_BACK_PRESSURE_DELAYlong DEFAULT_TASK_CANCELLATION_INTERVAL_MILLISjava.lang.String TASK_MANAGER_MEMORY_PRE_ALLOCATE_KEYjava.lang.String FILESYSTEM_SCHEMEjava.lang.String TASK_MANAGER_MAX_REGISTRATION_DURATIONjava.lang.String HA_ZOOKEEPER_DIR_KEYjava.lang.String DEFAULT_MESOS_RESOURCEMANAGER_FRAMEWORK_USERjava.lang.String DEFAULT_FILESYSTEM_SCHEMEjava.lang.String MESOS_RESOURCEMANAGER_FRAMEWORK_SECRETint DEFAULT_DELIMITED_FORMAT_MAX_SAMPLE_LENjava.lang.String ENV_FLINK_BIN_DIRfloat DEFAULT_YARN_HEAP_CUTOFF_RATIOjava.lang.String SAVEPOINT_FS_DIRECTORY_KEYjava.lang.String AKKA_JVM_EXIT_ON_FATAL_ERRORjava.lang.String ZOOKEEPER_RETRY_WAITjava.lang.String HA_ZOOKEEPER_NAMESPACE_KEYjava.lang.String ZOOKEEPER_CONNECTION_TIMEOUTjava.lang.String TASK_MANAGER_NETWORK_NUM_BUFFERS_KEYjava.lang.String JOB_MANAGER_WEB_ARCHIVE_COUNTint DEFAULT_RESOURCE_MANAGER_IPC_PORTint DEFAULT_JOB_MANAGER_WEB_BACK_PRESSURE_CLEAN_UP_INTERVALjava.lang.String YARN_REALLOCATE_FAILED_CONTAINERSjava.lang.String SECURITY_SSL_KEYSTORE_PASSWORDjava.lang.String DEFAULT_HA_JOB_MANAGER_PORTjava.lang.String BLOB_FETCH_RETRIES_KEYjava.lang.String METRICS_REPORTER_EXCLUDED_VARIABLESjava.lang.String DEFAULT_SECURITY_SSL_PROTOCOLjava.lang.String RECOVERY_JOB_DELAYjava.lang.String TASK_CANCELLATION_INTERVAL_MILLISjava.lang.String YARN_APPLICATION_MASTER_PORTint DEFAULT_TASK_MANAGER_DATA_PORTjava.lang.String RESTART_STRATEGY_FAILURE_RATE_FAILURE_RATE_INTERVALjava.lang.String YARN_TASK_MANAGER_ENV_PREFIXint DEFAULT_DELIMITED_FORMAT_MIN_LINE_SAMPLESjava.lang.String AKKA_LOG_LIFECYCLE_EVENTSboolean 
DEFAULT_AKKA_LOG_LIFECYCLE_EVENTSjava.lang.String SECURITY_SSL_ENABLEDint DEFAULT_DELIMITED_FORMAT_MAX_LINE_SAMPLESjava.lang.String LOCAL_NUMBER_TASK_MANAGERjava.lang.String DEFAULT_TASK_MANAGER_REFUSED_REGISTRATION_PAUSEjava.lang.String DEFAULT_SECURITY_SSL_ALGORITHMSjava.lang.String MESOS_MAX_FAILED_TASKSint DEFAULT_TASK_MANAGER_IPC_PORTorg.apache.flink.configuration.ConfigOption<java.lang.String> DEFAULT_JOB_MANAGER_WEB_FRONTEND_ADDRESSjava.lang.String SECURITY_SSL_ALGORITHMSint DEFAULT_ZOOKEEPER_CONNECTION_TIMEOUTjava.lang.String YARN_HEAP_CUTOFF_RATIOjava.lang.String HA_ZOOKEEPER_LATCH_PATHint DEFAULT_ZOOKEEPER_SESSION_TIMEOUTjava.lang.String DEFAULT_SPILLING_MAX_FAN_KEYjava.lang.String AKKA_WATCH_THRESHOLDjava.lang.String TASK_MANAGER_DEBUG_MEMORY_USAGE_LOG_INTERVAL_MSjava.lang.String HA_ZOOKEEPER_STORAGE_PATHjava.lang.String DEFAULT_BLOB_SERVER_PORTjava.lang.String AKKA_TRANSPORT_THRESHOLDjava.lang.String ZOOKEEPER_CHECKPOINT_COUNTER_PATHboolean DEFAULT_RUNTIME_HASH_JOIN_BLOOM_FILTERSint DEFAULT_BLOB_FETCH_CONCURRENTjava.lang.String BLOB_SERVER_PORTorg.apache.flink.configuration.ConfigOption<java.lang.String> RESTART_STRATEGY_FIXED_DELAY_DELAYjava.lang.String METRICS_REPORTER_CLASS_SUFFIXjava.lang.String ZOOKEEPER_DIR_KEYjava.lang.String JOB_MANAGER_IPC_ADDRESS_KEYint DEFAULT_TASK_MANAGER_NETWORK_NUM_BUFFERSjava.lang.String DEFAULT_AKKA_TRANSPORT_HEARTBEAT_PAUSEjava.lang.String MESOS_ARTIFACT_SERVER_SSL_ENABLEDjava.lang.String RESTART_STRATEGY_FAILURE_RATE_DELAYjava.lang.String DELIMITED_FORMAT_MIN_LINE_SAMPLES_KEYjava.lang.String BLOB_FETCH_BACKLOG_KEYjava.lang.String FILESYSTEM_OUTPUT_ALWAYS_CREATE_DIRECTORY_KEYjava.lang.String DEFAULT_TASK_MANAGER_MAX_REGISTRATION_DURATIONjava.lang.String TASK_MANAGER_LOG_PATH_KEYjava.lang.String DEFAULT_TASK_MANAGER_NETWORK_DEFAULT_IO_MODEint DEFAULT_YARN_HEAP_CUTOFFjava.lang.String SECURITY_SSL_PROTOCOLjava.lang.String JOB_MANAGER_WEB_BACK_PRESSURE_NUM_SAMPLESjava.lang.String CHECKPOINTS_DIRECTORY_KEYjava.lang.String 
DELIMITED_FORMAT_MAX_LINE_SAMPLES_KEYjava.lang.String PATH_HADOOP_CONFIGjava.lang.String ZOOKEEPER_SASL_DISABLEjava.lang.String AKKA_LOOKUP_TIMEOUTjava.lang.String YARN_HEAP_CUTOFF_MINjava.lang.String AKKA_CLIENT_TIMEOUTint DEFAULT_ZOOKEEPER_SYNC_LIMITjava.lang.String DEFAULT_HA_MODEjava.lang.String CONTAINERIZED_TASK_MANAGER_ENV_PREFIXjava.lang.String HA_ZOOKEEPER_CONNECTION_TIMEOUTjava.lang.String METRICS_REPORTER_ADDITIONAL_VARIABLESjava.lang.String MESOS_ARTIFACT_SERVER_PORT_KEYjava.lang.String TASK_MANAGER_DEBUG_MEMORY_USAGE_START_LOG_THREADjava.lang.String TASK_MANAGER_MEMORY_SEGMENT_SIZE_KEYjava.lang.String YARN_APPLICATION_ATTEMPTSjava.lang.String AKKA_TRANSPORT_HEARTBEAT_PAUSEjava.lang.String DEFAULT_TASK_MANAGER_INITIAL_REGISTRATION_PAUSEjava.lang.String SECURITY_SSL_VERIFY_HOSTNAMEjava.lang.String DEFAULT_PARALLELISM_KEYjava.lang.String AKKA_DISPATCHER_THROUGHPUTjava.lang.String TASK_MANAGER_MEMORY_FRACTION_KEYjava.lang.String JOB_MANAGER_WEB_UPLOAD_DIR_KEYjava.lang.String SECURITY_SSL_KEY_PASSWORDint DEFAULT_BLOB_FETCH_RETRIESjava.lang.String MESOS_RESOURCEMANAGER_FRAMEWORK_USERjava.lang.String BLOB_SERVICE_SSL_ENABLEDjava.lang.String DEFAULT_YARN_APPLICATION_MASTER_PORTjava.lang.String METRICS_SCOPE_NAMING_TMjava.lang.String TASK_MANAGER_MAX_REGISTARTION_PAUSElong DEFAULT_LIBRARY_CACHE_MANAGER_CLEANUP_INTERVALint DEFAULT_FS_STREAM_OPENING_TIMEOUTjava.lang.String YARN_VCORESjava.lang.String YARN_MAX_FAILED_CONTAINERSjava.lang.String METRICS_REPORTER_INTERVAL_SUFFIXjava.lang.String DEFAULT_HA_ZOOKEEPER_CLIENT_ACLfloat DEFAULT_MEMORY_MANAGER_MEMORY_FRACTIONjava.lang.String SAVEPOINT_DIRECTORY_KEYint DEFAULT_ZOOKEEPER_RETRY_WAITjava.lang.String ZOOKEEPER_LATCH_PATHjava.lang.String DEFAULT_RECOVERY_JOB_MANAGER_PORTboolean DEFAULT_TASK_MANAGER_DEBUG_MEMORY_USAGE_START_LOG_THREADboolean DEFAULT_AKKA_SSL_ENABLEDorg.apache.flink.configuration.ConfigOptionjava.lang.Iterable<java.lang.String> deprecatedKeys()boolean 
hasDeprecatedKeys()org.apache.flink.configuration.ConfigOptions$OptionBuilderorg.apache.flink.configuration.ConfigOption<T> defaultValue(java.lang.Object)org.apache.flink.configuration.ConfigOption<java.lang.String> noDefaultValue()org.apache.flink.configuration.Configurationboolean getBoolean(java.lang.String, boolean)boolean getBoolean(org.apache.flink.configuration.ConfigOption<java.lang.Boolean>)boolean getBoolean(org.apache.flink.configuration.ConfigOption<java.lang.Boolean>, boolean)byte[] getBytes(java.lang.String, byte[])java.lang.Class<T> getClass(java.lang.String, java.lang.Class<? extends T>, java.lang.ClassLoader)double getDouble(java.lang.String, double)double getDouble(org.apache.flink.configuration.ConfigOption<java.lang.Double>)double getDouble(org.apache.flink.configuration.ConfigOption<java.lang.Double>, double)float getFloat(java.lang.String, float)float getFloat(org.apache.flink.configuration.ConfigOption<java.lang.Float>)float getFloat(org.apache.flink.configuration.ConfigOption<java.lang.Float>, float)int getInteger(java.lang.String, int)int getInteger(org.apache.flink.configuration.ConfigOption<java.lang.Integer>)int getInteger(org.apache.flink.configuration.ConfigOption<java.lang.Integer>, int)long getLong(java.lang.String, long)long getLong(org.apache.flink.configuration.ConfigOption<java.lang.Long>)long getLong(org.apache.flink.configuration.ConfigOption<java.lang.Long>, long)java.lang.String getString(org.apache.flink.configuration.ConfigOption<java.lang.String>)java.lang.String getString(org.apache.flink.configuration.ConfigOption<java.lang.String>, java.lang.String)void setBoolean(java.lang.String, boolean)void setBoolean(org.apache.flink.configuration.ConfigOption<java.lang.Boolean>, boolean)void setBytes(java.lang.String, byte[])void setClass(java.lang.String, java.lang.Class<?>)void setDouble(java.lang.String, double)void setDouble(org.apache.flink.configuration.ConfigOption<java.lang.Double>, double)void setFloat(java.lang.String, 
float)void setFloat(org.apache.flink.configuration.ConfigOption<java.lang.Float>, float)void setInteger(java.lang.String, int)void setInteger(org.apache.flink.configuration.ConfigOption<java.lang.Integer>, int)void setLong(java.lang.String, long)void setLong(org.apache.flink.configuration.ConfigOption<java.lang.Long>, long)void setString(org.apache.flink.configuration.ConfigOption<java.lang.String>, java.lang.String)org.apache.flink.configuration.ExecutionOptionsorg.apache.flink.configuration.ConfigOption<java.lang.Long> ASYNC_STATE_BUFFER_TIMEOUTorg.apache.flink.configuration.ConfigOption<java.lang.Integer> ASYNC_INFLIGHT_RECORDS_LIMITorg.apache.flink.configuration.ConfigOption<java.lang.Integer> ASYNC_STATE_BUFFER_SIZEorg.apache.flink.configuration.HighAvailabilityOptionsorg.apache.flink.configuration.ConfigOption<java.lang.String> HA_ZOOKEEPER_JOBGRAPHS_PATHorg.apache.flink.configuration.ConfigOption<java.lang.String> ZOOKEEPER_RUNNING_JOB_REGISTRY_PATHorg.apache.flink.configuration.ConfigOption<java.lang.String> HA_JOB_DELAYorg.apache.flink.configuration.JobManagerOptionsorg.apache.flink.configuration.ConfigOption<java.lang.Integer> JOB_MANAGER_HEAP_MEMORY_MBorg.apache.flink.configuration.ConfigOption<java.time.Duration> BLOCK_SLOW_NODE_DURATIONorg.apache.flink.configuration.ConfigOption<org.apache.flink.configuration.MemorySize> ADAPTIVE_BATCH_SCHEDULER_AVG_DATA_VOLUME_PER_TASKorg.apache.flink.configuration.ConfigOption<java.lang.Integer> SPECULATIVE_MAX_CONCURRENT_EXECUTIONSorg.apache.flink.configuration.ConfigOption<java.lang.Integer> ADAPTIVE_BATCH_SCHEDULER_DEFAULT_SOURCE_PARALLELISMorg.apache.flink.configuration.ConfigOption<java.lang.Integer> ADAPTIVE_BATCH_SCHEDULER_MAX_PARALLELISMorg.apache.flink.configuration.ConfigOption<org.apache.flink.configuration.MemorySize> JOB_MANAGER_HEAP_MEMORYorg.apache.flink.configuration.ConfigOption<java.lang.Integer> ADAPTIVE_BATCH_SCHEDULER_MIN_PARALLELISMorg.apache.flink.configuration.ConfigOption<java.lang.Boolean> 
  - SPECULATIVE_ENABLED
- org.apache.flink.configuration.JobManagerOptions$SchedulerType
  - Ng
- org.apache.flink.configuration.MetricOptions
  - ConfigOption<String> REPORTER_CLASS
- org.apache.flink.configuration.NettyShuffleEnvironmentOptions
  - ConfigOption<Integer> NUM_THREADS_CLIENT
  - ConfigOption<Integer> NETWORK_BUFFERS_PER_CHANNEL
  - ConfigOption<Integer> HYBRID_SHUFFLE_SPILLED_INDEX_REGION_GROUP_SIZE
  - ConfigOption<Integer> CONNECT_BACKLOG
  - ConfigOption<Boolean> NETWORK_HYBRID_SHUFFLE_ENABLE_NEW_MODE
  - ConfigOption<Float> NETWORK_BUFFERS_MEMORY_FRACTION
  - ConfigOption<Integer> NUM_ARENAS
  - ConfigOption<Integer> NUM_THREADS_SERVER
  - ConfigOption<Long> HYBRID_SHUFFLE_NUM_RETAINED_IN_MEMORY_REGIONS_MAX
  - ConfigOption<String> NETWORK_BUFFERS_MEMORY_MIN
  - ConfigOption<String> TRANSPORT_TYPE
  - ConfigOption<String> NETWORK_BLOCKING_SHUFFLE_TYPE
  - ConfigOption<Integer> NETWORK_MAX_OVERDRAFT_BUFFERS_PER_GATE
  - ConfigOption<Long> NETWORK_EXCLUSIVE_BUFFERS_REQUEST_TIMEOUT_MILLISECONDS
  - ConfigOption<Boolean> BATCH_SHUFFLE_COMPRESSION_ENABLED
  - ConfigOption<Integer> SEND_RECEIVE_BUFFER_SIZE
  - ConfigOption<String> NETWORK_BUFFERS_MEMORY_MAX
  - ConfigOption<Integer> NETWORK_EXTRA_BUFFERS_PER_GATE
  - ConfigOption<Integer> NETWORK_SORT_SHUFFLE_MIN_PARALLELISM
  - ConfigOption<Integer> NETWORK_NUM_BUFFERS
  - ConfigOption<Integer> NETWORK_MAX_BUFFERS_PER_CHANNEL
  - ConfigOption<Integer> MAX_NUM_TCP_CONNECTIONS
- org.apache.flink.configuration.PipelineOptions
  - ConfigOption<List<String>> KRYO_DEFAULT_SERIALIZERS
  - ConfigOption<List<String>> POJO_REGISTERED_CLASSES
  - ConfigOption<Boolean> AUTO_TYPE_REGISTRATION
  - ConfigOption<List<String>> KRYO_REGISTERED_CLASSES
- org.apache.flink.configuration.ResourceManagerOptions
  - ConfigOption<Integer> LOCAL_NUMBER_RESOURCE_MANAGER
  - ConfigOption<Boolean> TASK_MANAGER_RELEASE_WHEN_RESULT_CONSUMED
  - ConfigOption<Long> SLOT_MANAGER_TASK_MANAGER_TIMEOUT
- org.apache.flink.configuration.SecurityOptions
  - ConfigOption<Double> KERBEROS_TOKENS_RENEWAL_TIME_RATIO
  - ConfigOption<Duration> KERBEROS_TOKENS_RENEWAL_RETRY_BACKOFF
  - ConfigOption<Boolean> KERBEROS_FETCH_DELEGATION_TOKEN
  - ConfigOption<Boolean> SSL_ENABLED
- org.apache.flink.configuration.StateBackendOptions
  - ConfigOption<Integer> LATENCY_TRACK_HISTORY_SIZE
  - ConfigOption<Boolean> LATENCY_TRACK_STATE_NAME_AS_VARIABLE
  - ConfigOption<Boolean> LATENCY_TRACK_ENABLED
  - ConfigOption<Integer> LATENCY_TRACK_SAMPLE_INTERVAL
- org.apache.flink.configuration.TaskManagerOptions
  - ConfigOption<Duration> REGISTRATION_MAX_BACKOFF
  - ConfigOption<Duration> INITIAL_REGISTRATION_BACKOFF
  - ConfigOption<Boolean> EXIT_ON_FATAL_AKKA_ERROR
  - ConfigOption<Integer> TASK_MANAGER_HEAP_MEMORY_MB
  - String MANAGED_MEMORY_CONSUMER_NAME_DATAPROC
  - ConfigOption<MemorySize> TASK_MANAGER_HEAP_MEMORY
  - ConfigOption<Duration> REFUSED_REGISTRATION_BACKOFF
- org.apache.flink.configuration.TaskManagerOptions$TaskManagerLoadBalanceMode
  - TaskManagerLoadBalanceMode loadFromConfiguration(Configuration)
- org.apache.flink.configuration.WebOptions
  - ConfigOption<Integer> BACKPRESSURE_DELAY
  - ConfigOption<Integer> PORT
  - ConfigOption<Integer> BACKPRESSURE_REFRESH_INTERVAL
  - ConfigOption<Integer> BACKPRESSURE_NUM_SAMPLES
  - ConfigOption<String> ADDRESS
  - ConfigOption<Integer> BACKPRESSURE_CLEANUP_INTERVAL
  - ConfigOption<Boolean> SSL_ENABLED
- org.apache.flink.connector.base.sink.AsyncSinkBase
  - org.apache.flink.api.connector.sink2.StatefulSink
- org.apache.flink.connector.base.sink.writer.AsyncSinkWriter
  - AsyncSinkWriter(ElementConverter<InputT,RequestEntryT>, Sink$InitContext, int, int, int, long, long, long)
  - AsyncSinkWriter(ElementConverter<InputT,RequestEntryT>, Sink$InitContext, int, int, int, long, long, long, Collection<BufferedRequestState<RequestEntryT>>)
  - AsyncSinkWriter(ElementConverter<InputT,RequestEntryT>, Sink$InitContext, AsyncSinkWriterConfiguration, Collection<BufferedRequestState<RequestEntryT>>)
- org.apache.flink.connector.base.sink.writer.ElementConverter
  - void open(Sink$InitContext)
- org.apache.flink.connector.base.source.reader.fetcher.SingleThreadFetcherManager
  - SingleThreadFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, Supplier<SplitReader<E,SplitT>>)
  - SingleThreadFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, Supplier<SplitReader<E,SplitT>>, Configuration)
  - SingleThreadFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, Supplier<SplitReader<E,SplitT>>, Configuration, Consumer<Collection<String>>)
- org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager
  - SplitFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, Supplier<SplitReader<E,SplitT>>, Configuration, Consumer<Collection<String>>)
  - SplitFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, Supplier<SplitReader<E,SplitT>>, Configuration)
- org.apache.flink.connector.base.source.reader.SingleThreadMultiplexSourceReaderBase
  - SingleThreadMultiplexSourceReaderBase(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, Supplier<SplitReader<E,SplitT>>, RecordEmitter<E,T,SplitStateT>, Configuration, SourceReaderContext)
  - SingleThreadMultiplexSourceReaderBase(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, SingleThreadFetcherManager<E,SplitT>, RecordEmitter<E,T,SplitStateT>, Configuration, SourceReaderContext)
  - SingleThreadMultiplexSourceReaderBase(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, SingleThreadFetcherManager<E,SplitT>, RecordEmitter<E,T,SplitStateT>, RecordEvaluator<T>, Configuration, SourceReaderContext)
- org.apache.flink.connector.base.source.reader.SourceReaderBase
  - SourceReaderBase(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, SplitFetcherManager<E,SplitT>, RecordEmitter<E,T,SplitStateT>, Configuration, SourceReaderContext)
  - SourceReaderBase(FutureCompletingBlockingQueue<RecordsWithSplitIds<E>>, SplitFetcherManager<E,SplitT>, RecordEmitter<E,T,SplitStateT>, RecordEvaluator<T>, Configuration, SourceReaderContext)
- org.apache.flink.contrib.streaming.state.ConfigurableRocksDBOptionsFactory
  - org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory
  - RocksDBOptionsFactory configure(ReadableConfig)
- org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend
  - RocksDBMemoryConfiguration getMemoryConfiguration()
  - PredefinedOptions getPredefinedOptions()
  - EmbeddedRocksDBStateBackend$PriorityQueueStateType getPriorityQueueStateType()
  - RocksDBOptionsFactory getRocksDBOptions()
  - void setPredefinedOptions(PredefinedOptions)
  - void setPriorityQueueStateType(EmbeddedRocksDBStateBackend$PriorityQueueStateType)
  - void setRocksDBMemoryFactory(RocksDBMemoryControllerUtils$RocksDBMemoryFactory)
  - void setRocksDBOptions(RocksDBOptionsFactory)
- org.apache.flink.contrib.streaming.state.RocksDBNativeMetricOptions
  - RocksDBNativeMetricOptions fromConfig(ReadableConfig)
- org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory
  - RocksDBNativeMetricOptions createNativeMetricsOptions(RocksDBNativeMetricOptions)
- org.apache.flink.core.execution.JobClient
  - CompletableFuture<String> stopWithSavepoint(boolean, String)
  - CompletableFuture<String> triggerSavepoint(String)
- org.apache.flink.core.failure.FailureEnricher$Context
  - JobID getJobId()
  - String getJobName()
- org.apache.flink.core.fs.FileSystem
  - FileSystemKind getKind()
- org.apache.flink.core.fs.Path
  - org.apache.flink.core.io.IOReadableWritable
- org.apache.flink.datastream.api.context.StateManager
  - org.apache.flink.api.common.state.v2.ListState<T> (<- Optional<ListState<T>>) getState(ListStateDeclaration<T>)
  - org.apache.flink.api.common.state.v2.ValueState<T> (<- Optional<ValueState<T>>) getState(ValueStateDeclaration<T>)
  - org.apache.flink.api.common.state.v2.MapState<K,V> (<- Optional<MapState<K,V>>) getState(MapStateDeclaration<K,V>)
  - org.apache.flink.api.common.state.v2.ReducingState<T> (<- Optional<ReducingState<T>>) getState(ReducingStateDeclaration<T>)
  - org.apache.flink.api.common.state.v2.AggregatingState<IN,OUT> (<- Optional<AggregatingState<IN,OUT>>) getState(AggregatingStateDeclaration<IN,ACC,OUT>)
  - org.apache.flink.api.common.state.BroadcastState<K,V> (<- Optional<BroadcastState<K,V>>) getState(BroadcastStateDeclaration<K,V>)
- org.apache.flink.datastream.api.ExecutionEnvironment
  - NonKeyedPartitionStream$ProcessConfigurableAndNonKeyedPartitionStream<OUT> (<- NonKeyedPartitionStream<OUT>) fromSource(org.apache.flink.api.connector.dsv2.Source<OUT>, String)
- org.apache.flink.datastream.api.function.ProcessFunction
  - void open()
- org.apache.flink.datastream.api.function.TwoOutputApplyPartitionFunction
  - void apply(Collector<OUT1>, Collector<OUT2>, PartitionedContext)
- org.apache.flink.datastream.api.function.TwoOutputStreamProcessFunction
  - void onProcessingTimer(long, Collector<OUT1>, Collector<OUT2>, PartitionedContext)
  - void processRecord(Object, Collector<OUT1>, Collector<OUT2>, PartitionedContext)
- org.apache.flink.datastream.api.stream.KeyedPartitionStream
  - KeyedPartitionStream$ProcessConfigurableAndTwoKeyedPartitionStreams<K,OUT1,OUT2> (<- KeyedPartitionStream$TwoKeyedPartitionStreams<K,OUT1,OUT2>) process(TwoOutputStreamProcessFunction<T,OUT1,OUT2>, KeySelector<OUT1,K>, KeySelector<OUT2,K>)
  - NonKeyedPartitionStream$ProcessConfigurableAndTwoNonKeyedPartitionStream<OUT1,OUT2> (<- NonKeyedPartitionStream$TwoNonKeyedPartitionStreams<OUT1,OUT2>) process(TwoOutputStreamProcessFunction<T,OUT1,OUT2>)
- org.apache.flink.datastream.api.stream.NonKeyedPartitionStream
  - NonKeyedPartitionStream$ProcessConfigurableAndTwoNonKeyedPartitionStream<OUT1,OUT2> (<- NonKeyedPartitionStream$TwoNonKeyedPartitionStreams<OUT1,OUT2>) process(TwoOutputStreamProcessFunction<T,OUT1,OUT2>)
- org.apache.flink.streaming.api.connector.sink2.CommittableMessage
  - OptionalLong getCheckpointId()
- org.apache.flink.streaming.api.connector.sink2.CommittableSummary
  - CommittableSummary(int, int, Long, int, int, int)
- org.apache.flink.streaming.api.connector.sink2.CommittableWithLineage
  - CommittableWithLineage(Object, Long, int)
- org.apache.flink.streaming.api.datastream.AllWindowedStream
  - AllWindowedStream<T,W> allowedLateness(Time)
  - SingleOutputStreamOperator<R> apply(ReduceFunction<T>, AllWindowFunction<T,R,W>)
  - SingleOutputStreamOperator<R> apply(ReduceFunction<T>, AllWindowFunction<T,R,W>, TypeInformation<R>)
- org.apache.flink.streaming.api.datastream.CoGroupedStreams$WithWindow
  - CoGroupedStreams$WithWindow<T1,T2,KEY,W> allowedLateness(Time)
  - SingleOutputStreamOperator<T> with(CoGroupFunction<T1,T2,T>)
  - SingleOutputStreamOperator<T> with(CoGroupFunction<T1,T2,T>, TypeInformation<T>)
  - SingleOutputStreamOperator<T> (<- DataStream<T>) apply(CoGroupFunction<T1,T2,T>)
  - SingleOutputStreamOperator<T> (<- DataStream<T>) apply(CoGroupFunction<T1,T2,T>, TypeInformation<T>)
- org.apache.flink.streaming.api.datastream.DataStream
  - DataStreamSink<T> addSink(SinkFunction<T>)
  - SingleOutputStreamOperator<T> assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T>)
  - SingleOutputStreamOperator<T> assignTimestampsAndWatermarks(AssignerWithPunctuatedWatermarks<T>)
  - IterativeStream<T> iterate()
  - IterativeStream<T> iterate(long)
  - KeyedStream<T,Tuple> keyBy(int[])
  - KeyedStream<T,Tuple> keyBy(String[])
  - DataStream<T> partitionCustom(Partitioner<K>, int)
  - DataStream<T> partitionCustom(Partitioner<K>, String)
  - DataStreamSink<T> sinkTo(org.apache.flink.api.connector.sink.Sink<T,?,?,?>)
  - DataStreamSink<T> sinkTo(org.apache.flink.api.connector.sink.Sink<T,?,?,?>, CustomSinkOperatorUidHashes)
  - AllWindowedStream<T,TimeWindow> timeWindowAll(Time)
  - AllWindowedStream<T,TimeWindow> timeWindowAll(Time, Time)
  - DataStreamSink<T> writeAsCsv(String)
  - DataStreamSink<T> writeAsCsv(String, FileSystem$WriteMode)
  - DataStreamSink<T> writeAsCsv(String, FileSystem$WriteMode, String, String)
  - DataStreamSink<T> writeAsText(String)
  - DataStreamSink<T> writeAsText(String, FileSystem$WriteMode)
- org.apache.flink.streaming.api.datastream.DataStreamUtils
  - Iterator<OUT> collect(DataStream<OUT>)
  - Iterator<OUT> collect(DataStream<OUT>, String)
  - List<E> collectBoundedStream(DataStream<E>, String)
  - List<E> collectRecordsFromUnboundedStream(ClientAndIterator<E>, int)
  - List<E> collectUnboundedStream(DataStream<E>, int, String)
  - ClientAndIterator<OUT> collectWithClient(DataStream<OUT>, String)
- org.apache.flink.streaming.api.datastream.JoinedStreams$WithWindow
  - JoinedStreams$WithWindow<T1,T2,KEY,W> allowedLateness(Time)
  - SingleOutputStreamOperator<T> with(JoinFunction<T1,T2,T>)
  - SingleOutputStreamOperator<T> with(FlatJoinFunction<T1,T2,T>, TypeInformation<T>)
  - SingleOutputStreamOperator<T> with(FlatJoinFunction<T1,T2,T>)
  - SingleOutputStreamOperator<T> with(JoinFunction<T1,T2,T>, TypeInformation<T>)
  - SingleOutputStreamOperator<T> (<- DataStream<T>) apply(JoinFunction<T1,T2,T>)
  - SingleOutputStreamOperator<T> (<- DataStream<T>) apply(FlatJoinFunction<T1,T2,T>, TypeInformation<T>)
  - SingleOutputStreamOperator<T> (<- DataStream<T>) apply(FlatJoinFunction<T1,T2,T>)
  - SingleOutputStreamOperator<T> (<- DataStream<T>) apply(JoinFunction<T1,T2,T>, TypeInformation<T>)
- org.apache.flink.streaming.api.datastream.KeyedStream
  - WindowedStream<T,KEY,TimeWindow> timeWindow(Time)
  - WindowedStream<T,KEY,TimeWindow> timeWindow(Time, Time)
- org.apache.flink.streaming.api.datastream.KeyedStream$IntervalJoin
  - KeyedStream$IntervalJoined<T1,T2,KEY> between(Time, Time)
- org.apache.flink.streaming.api.datastream.WindowedStream
  - WindowedStream<T,K,W> allowedLateness(Time)
  - SingleOutputStreamOperator<R> apply(ReduceFunction<T>, WindowFunction<T,R,K,W>)
  - SingleOutputStreamOperator<R> apply(ReduceFunction<T>, WindowFunction<T,R,K,W>, TypeInformation<R>)
- org.apache.flink.streaming.api.environment.CheckpointConfig
  - int UNDEFINED_TOLERABLE_CHECKPOINT_NUMBER
  - long DEFAULT_TIMEOUT
  - long DEFAULT_MIN_PAUSE_BETWEEN_CHECKPOINTS
  - CheckpointingMode DEFAULT_MODE
  - int DEFAULT_MAX_CONCURRENT_CHECKPOINTS
  - int DEFAULT_CHECKPOINT_ID_OF_IGNORED_IN_FLIGHT_DATA
  - void enableExternalizedCheckpoints(CheckpointConfig$ExternalizedCheckpointCleanup)
  - Duration getAlignmentTimeout()
  - CheckpointStorage getCheckpointStorage()
  - CheckpointConfig$ExternalizedCheckpointCleanup getExternalizedCheckpointCleanup()
  - boolean isFailOnCheckpointingErrors()
  - boolean isForceCheckpointing()
  - void setAlignmentTimeout(Duration)
  - void setCheckpointStorage(CheckpointStorage)
  - void setCheckpointStorage(String)
  - void setCheckpointStorage(URI)
  - void setCheckpointStorage(Path)
  - void setExternalizedCheckpointCleanup(CheckpointConfig$ExternalizedCheckpointCleanup)
  - void setFailOnCheckpointingErrors(boolean)
  - void setForceCheckpointing(boolean)
- org.apache.flink.streaming.api.environment.RemoteStreamEnvironment
  - Configuration getClientConfiguration()
- org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
  - String DEFAULT_JOB_NAME
  - void addDefaultKryoSerializer(Class<?>, com.esotericsoftware.kryo.Serializer<?>)
  - void addDefaultKryoSerializer(Class<?>, Class<? extends com.esotericsoftware.kryo.Serializer<?>>)
  - DataStreamSource<OUT> addSource(SourceFunction<OUT>)
  - DataStreamSource<OUT> addSource(SourceFunction<OUT>, String)
  - DataStreamSource<OUT> addSource(SourceFunction<OUT>, TypeInformation<OUT>)
  - DataStreamSource<OUT> addSource(SourceFunction<OUT>, String, TypeInformation<OUT>)
  - StreamExecutionEnvironment enableCheckpointing(long, CheckpointingMode, boolean)
  - StreamExecutionEnvironment enableCheckpointing()
  - int getNumberOfExecutionRetries()
  - RestartStrategies$RestartStrategyConfiguration getRestartStrategy()
  - StateBackend getStateBackend()
  - TimeCharacteristic getStreamTimeCharacteristic()
  - boolean isForceCheckpointing()
  - DataStream<String> readFileStream(String, long, FileMonitoringFunction$WatchType)
  - DataStreamSource<String> readTextFile(String)
  - DataStreamSource<String> readTextFile(String, String)
  - void registerType(Class<?>)
  - void registerTypeWithKryoSerializer(Class<?>, com.esotericsoftware.kryo.Serializer<?>)
  - void registerTypeWithKryoSerializer(Class<?>, Class<? extends com.esotericsoftware.kryo.Serializer>)
  - void setNumberOfExecutionRetries(int)
  - void setRestartStrategy(RestartStrategies$RestartStrategyConfiguration)
  - StreamExecutionEnvironment setStateBackend(StateBackend)
  - void setStreamTimeCharacteristic(TimeCharacteristic)
- org.apache.flink.streaming.api.operators.AbstractStreamOperator
  - org.apache.flink.streaming.api.operators.SetupableStreamOperator
  - (<- NON_FINAL) void initializeState(StreamTaskStateInitializer)
  - PROTECTED (<- PUBLIC) void setProcessingTimeService(ProcessingTimeService)
  - PROTECTED (<- PUBLIC) void setup(StreamTask<?,?>, StreamConfig, Output<StreamRecord<OUT>>)
- org.apache.flink.streaming.api.operators.AbstractStreamOperatorV2
  - (<- NON_FINAL) void initializeState(StreamTaskStateInitializer)
- org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator
  - PROTECTED (<- PUBLIC) void setup(StreamTask<?,?>, StreamConfig, Output<StreamRecord<OUT>>)
- org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows
  - EventTimeSessionWindows withGap(Time)
- org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows
  - ProcessingTimeSessionWindows withGap(Time)
- org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows
  - SlidingEventTimeWindows of(Time, Time)
  - SlidingEventTimeWindows of(Time, Time, Time)
- org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows
  - SlidingProcessingTimeWindows of(Time, Time)
  - SlidingProcessingTimeWindows of(Time, Time, Time)
- org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
  - TumblingEventTimeWindows of(Time)
  - TumblingEventTimeWindows of(Time, Time)
  - TumblingEventTimeWindows of(Time, Time, WindowStagger)
- org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
  - TumblingProcessingTimeWindows of(Time)
  - TumblingProcessingTimeWindows of(Time, Time)
  - TumblingProcessingTimeWindows of(Time, Time, WindowStagger)
- org.apache.flink.streaming.api.windowing.assigners.WindowAssigner
  - (<- NON_ABSTRACT) Trigger<T,W> getDefaultTrigger()
  - Trigger<T,W> getDefaultTrigger(StreamExecutionEnvironment)
- org.apache.flink.streaming.api.windowing.evictors.TimeEvictor
  - TimeEvictor<W> of(Time)
  - TimeEvictor<W> of(Time, boolean)
- org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger
  - ContinuousEventTimeTrigger<W> of(Time)
- org.apache.flink.streaming.api.windowing.triggers.ContinuousProcessingTimeTrigger
  - ContinuousProcessingTimeTrigger<W> of(Time)
- org.apache.flink.streaming.api.windowing.triggers.Trigger$TriggerContext
  - ValueState<S> getKeyValueState(String, Class<S>, Serializable)
  - ValueState<S> getKeyValueState(String, TypeInformation<S>, Serializable)
- org.apache.flink.streaming.experimental.CollectSink
  - org.apache.flink.streaming.api.functions.sink.SinkFunction
  - org.apache.flink.streaming.api.functions.sink.legacy.RichSinkFunction (<- org.apache.flink.streaming.api.functions.sink.RichSinkFunction)
- org.apache.flink.types.DoubleValue
  - org.apache.flink.types.Key
- org.apache.flink.types.FloatValue
  - org.apache.flink.types.Key
- org.apache.flink.types.NormalizableKey
  - org.apache.flink.core.io.IOReadableWritable
  - org.apache.flink.types.Value
  - org.apache.flink.types.Key
  - java.io.Serializable
- org.apache.flink.test.junit5.MiniClusterExtension
  - TestEnvironment getTestEnvironment()
- org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporterOptions
  - ConfigOption<Integer> PORT
  - ConfigOption<String> HOST
- org.apache.flink.formats.csv.CsvReaderFormat
  - CsvReaderFormat<T> forSchema(CsvMapper, CsvSchema, TypeInformation<T>)
- org.apache.flink.state.forst.ForStOptions
  - ConfigOption<String> REMOTE_DIRECTORY
- org.apache.flink.state.forst.ForStOptionsFactory
  - org.rocksdb.ColumnFamilyOptions createColumnOptions(org.rocksdb.ColumnFamilyOptions, Collection<AutoCloseable>)
  - org.rocksdb.DBOptions createDBOptions(org.rocksdb.DBOptions, Collection<AutoCloseable>)
  - org.rocksdb.ReadOptions createReadOptions(org.rocksdb.ReadOptions, Collection<AutoCloseable>)
  - org.rocksdb.WriteOptions createWriteOptions(org.rocksdb.WriteOptions, Collection<AutoCloseable>)
- org.apache.flink.table.client.config.SqlClientOptions
  - ConfigOption<Integer> DISPLAY_MAX_COLUMN_WIDTH
- org.apache.flink.table.runtime.typeutils.SortedMapTypeInfo
  - TypeSerializer<SortedMap<K,V>> createSerializer(ExecutionConfig)
- org.apache.flink.connector.file.sink.FileSink
  - SinkWriter<IN> createWriter(Sink$InitContext)
- org.apache.flink.connector.file.src.FileSource
  - FileSource$FileSourceBuilder<T> forRecordFileFormat(FileRecordFormat<T>, Path[])
- org.apache.flink.connector.file.src.FileSourceSplit
  - FileSourceSplit(String, Path, long, long)
  - FileSourceSplit(String, Path, long, long, String[])
  - FileSourceSplit(String, Path, long, long, String[], CheckpointedPosition)
- org.apache.flink.state.api.functions.KeyedStateReaderFunction
  - void open(Configuration)
- org.apache.flink.state.api.OperatorTransformation
  - OneInputOperatorTransformation<T> bootstrapWith(DataSet<T>)
- org.apache.flink.state.api.SavepointReader
  - DataStream<Tuple2<K,V>> readBroadcastState(String, String, TypeInformation<K>, TypeInformation<V>)
  - DataStream<Tuple2<K,V>> readBroadcastState(String, String, TypeInformation<K>, TypeInformation<V>, TypeSerializer<K>, TypeSerializer<V>)
  - DataStream<OUT> readKeyedState(String, KeyedStateReaderFunction<K,OUT>)
  - DataStream<OUT> readKeyedState(String, KeyedStateReaderFunction<K,OUT>, TypeInformation<K>, TypeInformation<OUT>)
  - DataStream<T> readListState(String, String, TypeInformation<T>)
  - DataStream<T> readListState(String, String, TypeInformation<T>, TypeSerializer<T>)
  - DataStream<T> readUnionState(String, String, TypeInformation<T>)
  - DataStream<T> readUnionState(String, String, TypeInformation<T>, TypeSerializer<T>)
- org.apache.flink.state.api.SavepointWriter
  - SavepointWriter fromExistingSavepoint(String)
  - SavepointWriter fromExistingSavepoint(String,
org.apache.flink.runtime.state.StateBackend)org.apache.flink.state.api.SavepointWriter newSavepoint(int)org.apache.flink.state.api.SavepointWriter newSavepoint(org.apache.flink.runtime.state.StateBackend, int)org.apache.flink.state.api.SavepointWriter removeOperator(java.lang.String)org.apache.flink.state.api.SavepointWriter withOperator(java.lang.String, org.apache.flink.state.api.StateBootstrapTransformation<T>)org.apache.flink.state.api.SavepointWriterOperatorFactoryorg.apache.flink.streaming.api.operators.StreamOperatorFactory<org.apache.flink.state.api.output.TaggedOperatorSubtaskState><org.apache.flink.state.api.output.TaggedOperatorSubtaskState> (<-org.apache.flink.streaming.api.operators.StreamOperator<org.apache.flink.state.api.output.TaggedOperatorSubtaskState><org.apache.flink.state.api.output.TaggedOperatorSubtaskState>) createOperator(long, org.apache.flink.core.fs.Path)| REST API | Changes |
|---|---|
| /taskmanagers/:taskmanagerid | In its response, “metrics.memorySegmentsAvailable” and “metrics.memorySegmentsTotal” are removed. |
| /jobs/:jobid/config | In its response, the “execution-mode” property is removed. |
| /jars/:jarid/run | In its request, the internal type of “claimMode” and “restoreMode” is changed from RestoreMode to RecoveryClaimMode, but their JSON structure is not affected. |
| /jobs/:jobid/vertices/:vertexid<br>/jobs/:jobid/vertices/:vertexid/subtasks/accumulators<br>/jobs/:jobid/vertices/:vertexid/subtasks/:subtaskindex<br>/jobs/:jobid/vertices/:vertexid/subtasks/:subtaskindex/attempts/:attempt<br>/jobs/:jobid/vertices/:vertexid/subtasktimes<br>/jobs/:jobid/vertices/:vertexid/taskmanagers<br>/jobs/:jobid/taskmanagers/:taskmanagerid/log-url | In their responses, the “host”, “subtasks.host” or “taskmanagers.host” property is removed. |
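As the table notes, the `/jars/:jarid/run` change is server-side only: the JSON body a client sends is unaffected. A minimal sketch of such a request body, with a hypothetical entry class and savepoint path (the field names follow the REST endpoint above; `NO_CLAIM` is one of the claim-mode values):

```java
public class JarRunRequest {
    // Builds the JSON body for POST /jars/:jarid/run. Only the server-side
    // Java type behind "claimMode" changed in 2.0 (RestoreMode ->
    // RecoveryClaimMode); the JSON shape itself did not.
    static String runRequestBody(String entryClass, String savepointPath, String claimMode) {
        return "{"
                + "\"entryClass\":\"" + entryClass + "\","
                + "\"savepointPath\":\"" + savepointPath + "\","
                + "\"claimMode\":\"" + claimMode + "\""
                + "}";
    }

    public static void main(String[] args) {
        // Hypothetical job class and savepoint location, for illustration only.
        System.out.println(runRequestBody("com.example.MyJob", "s3://bucket/savepoints/sp-1", "NO_CLAIM"));
        // → {"entryClass":"com.example.MyJob","savepointPath":"s3://bucket/savepoints/sp-1","claimMode":"NO_CLAIM"}
    }
}
```

A body written this way against Flink 1.20 keeps working against 2.0, which is why no client migration is needed for this endpoint.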
- `-u,--update <SQL update statement>` is removed: please use the `-f` option to submit an update statement.
- The `run-application` action is removed: please use `run -t kubernetes-application` to run a job in Kubernetes Application mode.
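For the windowing entries in the API list above, migration generally means passing `java.time.Duration` where the removed `Time`-based `of(size, offset)` factories were used; the meaning of the two parameters is unchanged. A self-contained sketch of the tumbling-window alignment those parameters control (re-implemented here for illustration, not Flink's own class):

```java
import java.time.Duration;

public class WindowStartDemo {
    // Mirrors the alignment used by tumbling window assigners: the window
    // containing `timestamp` starts at the largest multiple of `size`
    // (shifted by `offset`) that is <= timestamp.
    static long windowStart(long timestamp, Duration size, Duration offset) {
        long sizeMs = size.toMillis();
        long offsetMs = offset.toMillis();
        // floorMod handles timestamps before the offset correctly.
        return timestamp - Math.floorMod(timestamp - offsetMs, sizeMs);
    }

    public static void main(String[] args) {
        // 5s tumbling windows, no offset: timestamp 7000 falls in [5000, 10000)
        System.out.println(windowStart(7000L, Duration.ofSeconds(5), Duration.ZERO));
        // → 5000
    }
}
```

The same `Duration` arguments plug directly into the 2.0 assigner factories (e.g. a size of `Duration.ofSeconds(5)` in place of the removed `Time`-based overloads).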