<div class="wiki-content maincontent"><p>The NIO transport is very similar to the regular <link><page ri:content-title="TCP Transport Reference"></page><plain-text-link-body>TCP transport</plain-text-link-body></link>. The difference is that it is implemented using the NIO API, which can help with performance and scalability. NIO is a server-side transport option only. Trying to use it on the client side will instantiate the regular TCP transport.</p><h4>Configuration Syntax</h4><p><code><strong>nio://hostname:port?key=value</strong></code></p><p>Configuration options are the same as for the <link><page ri:content-title="TCP Transport Reference"></page><plain-text-link-body>TCP transport</plain-text-link-body></link>.</p><p>Note that the original NIO transport is the NIO-based counterpart of the TCP transport and uses the OpenWire protocol. Other network protocols, such as AMQP, MQTT, and STOMP, also have their own NIO transport implementations. These are usually configured by adding the "+nio" suffix to the protocol prefix, for example:</p><structured-macro ac:macro-id="3509a8ac-d55b-40b8-81b3-cd22c109132a" ac:name="code" ac:schema-version="1"><plain-text-body>mqtt+nio://localhost:1883</plain-text-body></structured-macro><p>All protocol-specific configuration options should apply to the NIO version of the transport as well.</p>
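<p>As an illustration, the following is a minimal sketch of how NIO connectors might be declared in the broker's <code>activemq.xml</code>; the connector names, bind addresses, and ports shown here are assumptions and should be adapted to your environment:</p><structured-macro ac:name="code" ac:schema-version="1"><plain-text-body><![CDATA[<transportConnectors>
  <!-- OpenWire over NIO -->
  <transportConnector name="nio" uri="nio://0.0.0.0:61616"/>
  <!-- MQTT over NIO -->
  <transportConnector name="mqtt+nio" uri="mqtt+nio://0.0.0.0:1883"/>
</transportConnectors>]]></plain-text-body></structured-macro>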
<h3>Tuning NIO transport thread usage</h3><p>One of the main advantages of using NIO instead of the regular versions of the transport is that it can scale better and support a larger number of connections. The main limiting factor in this scenario is the number of threads the system is using. In blocking implementations of the transports, one thread is used per connection. In the NIO implementation there is a shared pool of threads that takes the load, so the number of connections is not directly tied to the number of threads used in the system.</p><p>You can tune the number of threads used by the transport with the following system properties (available since <strong>5.15.0</strong>):</p><table><tbody><tr><th colspan="1" rowspan="1">Property</th><th colspan="1" rowspan="1">Default value</th><th colspan="1" rowspan="1">Description</th></tr><tr><td colspan="1" rowspan="1">org.apache.activemq.transport.nio.SelectorManager.corePoolSize</td><td colspan="1" rowspan="1">10</td><td colspan="1" rowspan="1"><p>The number of threads to keep in the pool, even if they are idle</p></td></tr><tr><td colspan="1" rowspan="1">org.apache.activemq.transport.nio.SelectorManager.maximumPoolSize</td><td colspan="1" rowspan="1">1024</td><td colspan="1" rowspan="1"><p>The maximum number of threads to allow in the pool</p></td></tr><tr><td colspan="1" rowspan="1"><p>org.apache.activemq.transport.nio.SelectorManager.workQueueCapacity</p></td><td colspan="1" rowspan="1">0</td><td colspan="1" rowspan="1">The maximum work queue depth before growing the pool</td></tr><tr><td colspan="1" rowspan="1">org.apache.activemq.transport.nio.SelectorManager.rejectWork</td><td colspan="1" rowspan="1">false</td><td colspan="1" rowspan="1">Allow work to be rejected with an IOException when capacity is reached, so that existing QoS can be preserved</td></tr></tbody></table><p>If you want to scale your broker to support thousands of connections, you first need to find the limit on the number of threads the JVM process is allowed to create (see the example at the bottom of this page). You can then set these properties to a value below that limit (the broker needs additional threads to operate normally). For more information on thread usage by destinations and how to limit it, take a look at <link><page ri:content-title="Scaling Queues"></page></link> or <a shape="rect" href="http://svn.apache.org/repos/asf/activemq/trunk/assembly/src/sample-conf/activemq-scalability.xml">this configuration file</a>. For example, you can add the following</p><structured-macro ac:macro-id="90590c40-d24b-413e-bf65-bb4b0099cdeb" ac:name="code" ac:schema-version="1"><plain-text-body>ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.transport.nio.SelectorManager.corePoolSize=2000 -Dorg.apache.activemq.transport.nio.SelectorManager.maximumPoolSize=2000 -Dorg.apache.activemq.transport.nio.SelectorManager.workQueueCapacity=1024"</plain-text-body></structured-macro><p>to the startup script (<code>${ACTIVEMQ_HOME}/bin/env</code>, for example) to have a constant pool of 2000 threads handling connections. With a setting like this, the broker should be able to accept connections up to the system limits. Of course, accepting connections is just one part of the story; there are other limits to vertically scaling the broker.</p>
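<p>As an illustration of finding the thread limits mentioned above, on Linux you can inspect the per-user process/thread limit and the system-wide thread limit with commands such as the following (the exact limits, and how to raise them, are system specific):</p><structured-macro ac:name="code" ac:schema-version="1"><plain-text-body># per-user limit on processes; on Linux, threads count against this limit
ulimit -u

# system-wide maximum number of threads
cat /proc/sys/kernel/threads-max</plain-text-body></structured-macro></div>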