[[jdbc-component]]
= JDBC Component
*Since Camel 1.2*
// HEADER START
*Only producer is supported*
// HEADER END
The JDBC component enables you to access databases through JDBC, where
SQL queries (SELECT) and operations (INSERT, UPDATE, etc.) are sent in
the message body. This component uses the standard JDBC API, unlike the
xref:sql-component.adoc[SQL Component], which uses
spring-jdbc.
Maven users will need to add the following dependency to their `pom.xml`
for this component:
[source,xml]
----
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jdbc</artifactId>
    <version>x.x.x</version>
    <!-- use the same version as your Camel core version -->
</dependency>
----
This component can only be used to define producer endpoints, which
means that you cannot use the JDBC component in a `from()` statement.
== URI format
[source,text]
----
jdbc:dataSourceName[?options]
----
This component only supports producer endpoints.
You can append query options to the URI in the following format,
`?option=value&option=value&...`
== Options
// component options: START
The JDBC component supports 4 options, which are listed below.
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *dataSource* (producer) | To use the DataSource instance instead of looking up the data source by name from the registry. | | DataSource
| *basicPropertyBinding* (advanced) | Whether the component should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
| *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
| *bridgeErrorHandler* (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean
|===
// component options: END
// endpoint options: START
The JDBC endpoint is configured using URI syntax:
----
jdbc:dataSourceName
----
with the following path and query parameters:
=== Path Parameters (1 parameter):
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *dataSourceName* | *Required* Name of DataSource to lookup in the Registry. If the name is dataSource or default, then Camel will attempt to lookup a default DataSource from the registry, meaning if there is only one instance of DataSource found, then this DataSource will be used. | | String
|===
=== Query Parameters (15 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *allowNamedParameters* (producer) | Whether to allow using named parameters in the queries. | true | boolean
| *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean
| *outputClass* (producer) | Specify the full package and class name to use as conversion when outputType=SelectOne or SelectList. | | String
| *outputType* (producer) | Determines the output the producer should use. | SelectList | JdbcOutputType
| *parameters* (producer) | Optional parameters to the java.sql.Statement. For example to set maxRows, fetchSize etc. | | Map
| *readSize* (producer) | The default maximum number of rows that can be read by a polling query. The default value is 0. | | int
| *resetAutoCommit* (producer) | If resetAutoCommit is true, Camel will set autoCommit on the JDBC connection to false, commit the change after the statement has been executed, and reset the autoCommit flag of the connection at the end. If the JDBC connection doesn't support resetting the autoCommit flag, you can set resetAutoCommit to false, and Camel will not try to reset the autoCommit flag. When used with XA transactions you most likely need to set it to false so that the transaction manager is in charge of committing this tx. | true | boolean
| *transacted* (producer) | Whether transactions are in use. | false | boolean
| *useGetBytesForBlob* (producer) | To read BLOB columns as bytes instead of string data. This may be needed for certain databases such as Oracle where you must read BLOB columns as bytes. | false | boolean
| *useHeadersAsParameters* (producer) | Set this option to true to use the prepareStatementStrategy with named parameters. This allows defining queries with named placeholders, and using headers to supply the dynamic values for the query placeholders. | false | boolean
| *useJDBC4ColumnNameAndLabelSemantics* (producer) | Sets whether to use JDBC 4 or JDBC 3.0 or older semantics when retrieving column names. JDBC 4.0 uses columnLabel to get the column name whereas JDBC 3.0 uses both columnName and columnLabel. Unfortunately JDBC drivers behave differently, so you can use this option to work around issues with your JDBC driver if you get problems using this component. This option is true by default. | true | boolean
| *basicPropertyBinding* (advanced) | Whether the endpoint should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
| *beanRowMapper* (advanced) | To use a custom org.apache.camel.component.jdbc.BeanRowMapper when using outputClass. The default implementation will lower case the row names and skip underscores and dashes. For example CUST_ID is mapped as custId. | | BeanRowMapper
| *prepareStatementStrategy* (advanced) | Allows plugging in a custom org.apache.camel.component.jdbc.JdbcPrepareStatementStrategy to control preparation of the query and prepared statement. | | JdbcPrepareStatementStrategy
| *synchronous* (advanced) | Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). | false | boolean
|===
// endpoint options: END
// spring-boot-auto-configure options: START
== Spring Boot Auto-Configuration
When using Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
[source,xml]
----
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-jdbc-starter</artifactId>
    <version>x.x.x</version>
    <!-- use the same version as your Camel core version -->
</dependency>
----
The component supports 5 options, which are listed below.
[width="100%",cols="2,5,^1,2",options="header"]
|===
| Name | Description | Default | Type
| *camel.component.jdbc.basic-property-binding* | Whether the component should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | Boolean
| *camel.component.jdbc.bridge-error-handler* | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | Boolean
| *camel.component.jdbc.data-source* | To use the DataSource instance instead of looking up the data source by name from the registry. The option is a javax.sql.DataSource type. | | String
| *camel.component.jdbc.enabled* | Enable jdbc component | true | Boolean
| *camel.component.jdbc.lazy-start-producer* | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean
|===
// spring-boot-auto-configure options: END
== Result
By default the result is returned in the OUT body as an
`ArrayList<HashMap<String, Object>>`. The `List` object contains the
list of rows and the `Map` objects contain each row with the `String`
key as the column name. You can use the option `outputType` to control
the result.
*Note:* This component fetches `ResultSetMetaData` to be able to return
the column name as the key in the `Map`.
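For illustration, here is a sketch of mapping each row to a bean via `outputClass`; the `com.example.Customer` class, the column names, and the `testdb` DataSource are assumptions made only for this example:

[source,java]
----
from("direct:listCustomers")
    .setBody(constant("select cust_id, cust_name from customer"))
    // with outputClass each row is converted to a Customer instance,
    // mapping column names such as CUST_ID to the custId property
    .to("jdbc:testdb?outputType=SelectList&outputClass=com.example.Customer");
----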
=== Message Headers
[width="100%",cols="10%,90%",options="header",]
|===
|Header |Description
|`CamelJdbcRowCount` |If the query is a `SELECT` query, the row count is returned in this OUT
header.
|`CamelJdbcUpdateCount` |If the query is an `UPDATE` query, the update count is returned in this
OUT header.
|`CamelGeneratedKeysRows` |Rows that contain the generated keys.
|`CamelGeneratedKeysRowCount` |The number of rows in the header that contains generated
keys.
|`CamelJdbcColumnNames` |The column names from the ResultSet as a `java.util.Set`
type.
|`CamelJdbcParameters` |A `java.util.Map` which has the headers to be used if
`useHeadersAsParameters` has been enabled.
|===
== Generated keys
*Since Camel 2.10*
If you insert data using SQL INSERT, then the RDBMS may support
auto-generated keys. You can instruct the xref:jdbc-component.adoc[JDBC] producer to
return the generated keys in headers. +
To do that, set the header `CamelRetrieveGeneratedKeys=true`. The
generated keys will then be provided as headers with the keys listed in the
table above.
Using generated keys does not work together with named parameters.
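For example, a minimal sketch; the `customer` table and the `testdb` DataSource are placeholders used only for illustration:

[source,java]
----
from("direct:insert")
    // ask the JDBC producer to return the generated keys
    .setHeader("CamelRetrieveGeneratedKeys", constant(true))
    .setBody(constant("insert into customer(name) values ('Donald Duck')"))
    .to("jdbc:testdb")
    // the generated keys are now available as rows in this header
    .log("generated keys: ${header.CamelGeneratedKeysRows}");
----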
== Using named parameters
*Since Camel 2.12*
In the route below, we want to get all the projects from the
projects table. Notice the SQL query has two named parameters, `:?lic` and
`:?min`. +
Camel will then look up these parameters in the message headers.
In this example we set two headers with constant values
for the named parameters:
[source,java]
----
from("direct:projects")
.setHeader("lic", constant("ASF"))
.setHeader("min", constant(123))
.setBody("select * from projects where license = :?lic and id > :?min order by id")
.to("jdbc:myDataSource?useHeadersAsParameters=true")
----
You can also put the parameter values in a `java.util.Map` and set the
map as a message header with the key `CamelJdbcParameters`.
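A sketch of that approach, reusing the query from above; the `direct:projectsByMap` endpoint name is just an example:

[source,java]
----
from("direct:projectsByMap")
    .process(exchange -> {
        // collect the named parameter values in a Map and pass them as a single header
        Map<String, Object> params = new HashMap<>();
        params.put("lic", "ASF");
        params.put("min", 123);
        exchange.getIn().setHeader("CamelJdbcParameters", params);
    })
    .setBody(constant("select * from projects where license = :?lic and id > :?min order by id"))
    .to("jdbc:myDataSource?useHeadersAsParameters=true");
----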
== Samples
In the following example, we fetch the rows from the customer table.
First we register our datasource in the Camel registry as `testdb`:
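The snippet below is a minimal sketch, assuming a `CamelContext` named `camelContext` is in scope; the commons-dbcp2 `BasicDataSource` is only one option, any `javax.sql.DataSource` implementation will do.

[source,java]
----
// build a DataSource (org.apache.commons.dbcp2.BasicDataSource is used here as an example)
BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName("org.hsqldb.jdbcDriver"); // placeholder driver
dataSource.setUrl("jdbc:hsqldb:mem:testdb");            // placeholder URL
dataSource.setUsername("sa");
dataSource.setPassword("");

// bind it into the Camel registry under the name used in the endpoint URI
camelContext.getRegistry().bind("testdb", dataSource);
----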
Then we configure a route that routes to the JDBC component, so the SQL
will be executed. Note how we refer to the `testdb` datasource that was
bound in the previous step:
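A sketch of such a route; the `direct:hello` endpoint name is just an example:

[source,java]
----
from("direct:hello")
    // the SQL query arrives in the message body and is executed against testdb
    .to("jdbc:testdb");
----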
Or you can create a `DataSource` in Spring like this:
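A Java-config sketch, assuming Spring's `DriverManagerDataSource`; the driver class, URL, and credentials are placeholders:

[source,java]
----
@Configuration
public class DataSourceConfig {

    // the bean name "testdb" is what the jdbc:testdb endpoint URI looks up
    @Bean(name = "testdb")
    public DataSource testdb() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.hsqldb.jdbcDriver");
        ds.setUrl("jdbc:hsqldb:mem:testdb");
        ds.setUsername("sa");
        ds.setPassword("");
        return ds;
    }
}
----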
We create an endpoint, add the SQL query to the body of the IN message,
and then send the exchange. The result of the query is returned in the
OUT body:
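A sketch using a `ProducerTemplate`, assuming the `direct:hello` route shown above and a `CamelContext` named `camelContext` in scope:

[source,java]
----
Endpoint endpoint = camelContext.getEndpoint("direct:hello");
Exchange exchange = endpoint.createExchange();
// the SQL query goes into the IN body
exchange.getIn().setBody("select * from customer order by ID");

// send the exchange; the rows come back as a List of Maps keyed by column name
Exchange out = camelContext.createProducerTemplate().send(endpoint, exchange);
List<Map<String, Object>> rows = out.getMessage().getBody(List.class);
----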
If you want to work on the rows one by one instead of the entire
ResultSet at once, you need to use the Splitter EIP,
such as:
[source,java]
----
from("direct:hello")
// here we split the data from the testdb into new messages one by one
// so the mock endpoint will receive a message per row in the table
// the StreamList option allows to stream the result of the query without creating a List of rows
// and notice we also enable streaming mode on the splitter
.to("jdbc:testdb?outputType=StreamList")
.split(body()).streaming()
.to("mock:result");
----
== Sample - Polling the database every minute
If we want to poll a database using the JDBC component, we need to
combine it with a polling scheduler such as the xref:timer-component.adoc[Timer]
or xref:quartz-component.adoc[Quartz] component. In the following example, we retrieve
data from the database every 60 seconds:
[source,java]
----
from("timer://foo?period=60000")
.setBody(constant("select * from customer"))
.to("jdbc:testdb")
.to("activemq:queue:customers");
----
== Sample - Move Data Between Data Sources
A common use case is to query for data, process it and move it to
another data source (ETL operations). In the following example, we
retrieve new customer records from the source table every hour,
filter/transform them and move them to a destination table:
[source,java]
----
from("timer://MoveNewCustomersEveryHour?period=3600000")
.setBody(constant("select * from customer where create_time > (sysdate-1/24)"))
.to("jdbc:testdb")
.split(body())
.process(new MyCustomerProcessor()) //filter/transform results as needed
.setBody(simple("insert into processed_customer values('${body[ID]}','${body[NAME]}')"))
.to("jdbc:testdb");
----