<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><title>QueryRecord</title><link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css"><script type="text/javascript">window.onload = function(){if(self==top) { document.getElementById('nameHeader').style.display = "inherit"; } }</script></head><body><h1 id="nameHeader" style="display: none;">QueryRecord</h1><h2>Description: </h2><p>Evaluates one or more SQL queries against the contents of a FlowFile. The result of each SQL query becomes the content of an output FlowFile. This can be used, for example, to select specific fields, transform values, or filter rows. Columns can be renamed, and simple calculations and aggregations performed. The Processor is configured with a Record Reader Controller Service and a Record Writer Controller Service to allow flexibility in incoming and outgoing data formats. The Processor must be configured with at least one user-defined property. The name of the Property is the Relationship to route data to, and the value of the Property is a SQL SELECT statement that specifies how input data should be transformed/filtered. The SQL statement must be valid ANSI SQL and is powered by Apache Calcite. If the transformation fails, the original FlowFile is routed to the 'failure' relationship. Otherwise, the selected data is routed to the associated relationship. If the Record Writer chooses to inherit the schema from the Record, it is important to note that the schema that is inherited will be that of the ResultSet rather than that of the input Record. This allows a single instance of the QueryRecord processor to have multiple queries, each of which returns a different set of columns and aggregations. As a result, though, the derived schema will have no schema name, so it is important that the configured Record Writer not attempt to write the Schema Name as an attribute if inheriting the Schema from the Record. See the Processor Usage documentation for more information.</p><p><a href="additionalDetails.html">Additional Details...</a></p>
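<p>For example, adding a user-defined property named <code>over.1000</code> with the value below creates a relationship named <code>over.1000</code> and routes to it only those records whose <code>total</code> field exceeds 1000. The field name here is hypothetical; the incoming FlowFile's records are queried as a table named <code>FLOWFILE</code>. A minimal sketch:</p>
<pre>
SELECT *
FROM FLOWFILE
WHERE total > 1000
</pre>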
<h3>Tags: </h3><p>sql, query, calcite, route, record, transform, select, update, modify, etl, filter, csv, json, logs, text, avro, aggregate</p><h3>Properties: </h3><p>In the list below, the names of required properties appear in <strong>bold</strong>. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the <a href="../../../../../html/expression-language-guide.html">NiFi Expression Language</a>.</p><table id="properties"><tr><th>Display Name</th><th>API Name</th><th>Default Value</th><th>Allowable Values</th><th>Description</th></tr><tr><td id="name"><strong>Record Reader</strong></td><td>record-reader</td><td></td><td id="allowable-values"><strong>Controller Service API: </strong><br/>RecordReaderFactory<br/><strong>Implementations: </strong><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.grok.GrokReader/index.html">GrokReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.xml.XMLReader/index.html">XMLReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.avro.AvroReader/index.html">AvroReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.syslog.Syslog5424Reader/index.html">Syslog5424Reader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.cef.CEFReader/index.html">CEFReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.syslog.SyslogReader/index.html">SyslogReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.json.JsonTreeReader/index.html">JsonTreeReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.csv.CSVReader/index.html">CSVReader</a><br/><a href="../../../nifi-scripting-nar/1.19.1/org.apache.nifi.record.script.ScriptedReader/index.html">ScriptedReader</a><br/><a href="../../../nifi-parquet-nar/1.19.1/org.apache.nifi.parquet.ParquetReader/index.html">ParquetReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.json.JsonPathReader/index.html">JsonPathReader</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.lookup.ReaderLookup/index.html">ReaderLookup</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.windowsevent.WindowsEventLogReader/index.html">WindowsEventLogReader</a></td><td id="description">Specifies the Controller Service to use for parsing incoming data and determining the data's schema</td></tr><tr><td id="name"><strong>Record Writer</strong></td><td>record-writer</td><td></td><td id="allowable-values"><strong>Controller Service API: </strong><br/>RecordSetWriterFactory<br/><strong>Implementations: </strong><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.text.FreeFormTextRecordSetWriter/index.html">FreeFormTextRecordSetWriter</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.avro.AvroRecordSetWriter/index.html">AvroRecordSetWriter</a><br/><a href="../../../nifi-scripting-nar/1.19.1/org.apache.nifi.record.script.ScriptedRecordSetWriter/index.html">ScriptedRecordSetWriter</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.csv.CSVRecordSetWriter/index.html">CSVRecordSetWriter</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.xml.XMLRecordSetWriter/index.html">XMLRecordSetWriter</a><br/><a href="../../../nifi-parquet-nar/1.19.1/org.apache.nifi.parquet.ParquetRecordSetWriter/index.html">ParquetRecordSetWriter</a><br/>
<a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.json.JsonRecordSetWriter/index.html">JsonRecordSetWriter</a><br/><a href="../../../nifi-record-serialization-services-nar/1.19.1/org.apache.nifi.lookup.RecordSetWriterLookup/index.html">RecordSetWriterLookup</a></td><td id="description">Specifies the Controller Service to use for writing results to a FlowFile</td></tr><tr><td id="name"><strong>Include Zero Record FlowFiles</strong></td><td>include-zero-record-flowfiles</td><td id="default-value">true</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td id="description">When running the SQL statement against an incoming FlowFile, if the result has no data, this property specifies whether or not a FlowFile will be sent to the corresponding relationship</td></tr><tr><td id="name"><strong>Cache Schema</strong></td><td>cache-schema</td><td id="default-value">true</td><td id="allowable-values"><ul><li>true</li><li>false</li></ul></td><td id="description">This property is no longer used. It remains solely for backward compatibility in order to avoid making existing Processors invalid upon upgrade. It will be removed in future versions. Instead of requiring the user to understand the semantics of schema caching, the Processor now caches up to 25 schemas and automatically rolls off the oldest. This provides the same performance as when caching was previously enabled, and in some cases significant performance improvements when caching was previously disabled.</td></tr><tr><td id="name"><strong>Default Decimal Precision</strong></td><td>dbf-default-precision</td><td id="default-value">10</td><td id="allowable-values"></td><td id="description">When a DECIMAL/NUMBER value is written as a 'decimal' Avro logical type, a specific 'precision' denoting the number of available digits is required. Generally, precision is defined by the column data type definition or the database engine's default. However, undefined precision (0) can be returned from some database engines. 'Default Decimal Precision' is used when writing those undefined-precision numbers.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr><tr><td id="name"><strong>Default Decimal Scale</strong></td><td>dbf-default-scale</td><td id="default-value">0</td><td id="allowable-values"></td><td id="description">When a DECIMAL/NUMBER value is written as a 'decimal' Avro logical type, a specific 'scale' denoting the number of available decimal digits is required. Generally, scale is defined by the column data type definition or the database engine's default. However, when undefined precision (0) is returned, scale can also be uncertain with some database engines. 'Default Decimal Scale' is used when writing those undefined numbers. If a value has more decimal digits than the specified scale, the value will be rounded, e.g. 1.53 becomes 2 with scale 0, and 1.5 with scale 1.<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr></table><h3>Dynamic Properties: </h3><p>Supports Sensitive Dynamic Properties: <strong>No</strong></p><p>Dynamic Properties allow the user to specify both the name and value of a property.</p><table id="dynamic-properties"><tr><th>Name</th><th>Value</th><th>Description</th></tr><tr><td id="name">The name of the relationship to route data to</td><td id="value">A SQL SELECT statement that is used to determine what data should be routed to this relationship.</td><td>Each user-defined property specifies a SQL SELECT statement to run over the data, with the data that is selected being routed to the relationship whose name is the property name<br/><strong>Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)</strong></td></tr></table>
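<p>To illustrate the schema-inheritance note above: a query such as the following (the <code>city</code> and <code>amount</code> field names are hypothetical) yields a ResultSet whose columns are <code>city</code> and <code>total_amount</code>, so a Record Writer that inherits its schema writes those two columns rather than the fields of the input Records:</p>
<pre>
SELECT city, SUM(amount) AS total_amount
FROM FLOWFILE
GROUP BY city
</pre>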
<h3>Relationships: </h3><table id="relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>failure</td><td>If a FlowFile fails processing for any reason (for example, the SQL statement contains columns not present in the input data), the original FlowFile will be routed to this relationship</td></tr><tr><td>original</td><td>The original FlowFile is routed to this relationship</td></tr></table><h3>Dynamic Relationships: </h3><p>A Dynamic Relationship may be created based on how the user configures the Processor.</p><table id="dynamic-relationships"><tr><th>Name</th><th>Description</th></tr><tr><td>&lt;Property Name&gt;</td><td>Each user-defined property defines a new Relationship for this Processor.</td></tr></table><h3>Reads Attributes: </h3>None specified.<h3>Writes Attributes: </h3><table id="writes-attributes"><tr><th>Name</th><th>Description</th></tr><tr><td>mime.type</td><td>Sets the mime.type attribute to the MIME Type specified by the Record Writer</td></tr><tr><td>record.count</td><td>The number of records selected by the query</td></tr><tr><td>QueryRecord.Route</td><td>The relationship to which the FlowFile was routed</td></tr></table><h3>State management: </h3>This component does not store state.<h3>Restricted: </h3>This component is not restricted.<h3>Input requirement: </h3>This component requires an incoming relationship.<h3>System Resource Considerations:</h3>None specified.</body></html>