Running topologies on a production cluster is similar to running in Local mode. Here are the steps:
1. Define the topology (use TopologyBuilder if defining it in Java); an end-to-end sketch is shown after these steps.
2. Use StormSubmitter to submit the topology to the cluster. StormSubmitter takes as input the name of the topology, a configuration for the topology, and the topology itself. For example:
```java
Config conf = new Config();
conf.setNumWorkers(20);
conf.setMaxSpoutPending(5000);
StormSubmitter.submitTopology("mytopology", conf, topology);
```
3. Create a jar containing your code and all of its dependencies. If you're using Maven, the Maven Assembly Plugin can do the packaging for you. Just add this to your pom.xml:
```xml
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.path.to.main.Class</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
```
Then run `mvn assembly:assembly` to get an appropriately packaged jar. Make sure you exclude the Storm jars since the cluster already has Storm on the classpath.
4. Submit the topology to the cluster using the `storm` client, specifying the path to your jar, the classname to run, and any arguments it will use:

```
storm jar path/to/allmycode.jar org.me.MyTopology arg1 arg2 arg3
```

`storm jar` will submit the jar to the cluster and configure the StormSubmitter class to talk to the right cluster. In this example, after uploading the jar, `storm jar` calls the main function on org.me.MyTopology with the arguments "arg1", "arg2", and "arg3".

You can find out how to configure your `storm` client to talk to a Storm cluster in Setting up a development environment.
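For context, the `topology` object passed to StormSubmitter in step 2 is typically built with TopologyBuilder. The sketch below is a minimal end-to-end example under that assumption; MySpout and MyBolt are hypothetical placeholders for your own spout and bolt classes, and the package names are those used by recent Storm releases (older releases used backtype.storm):

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.StormTopology;
import org.apache.storm.topology.TopologyBuilder;

public class MyTopology {
    public static void main(String[] args) throws Exception {
        // Wire spouts and bolts together; MySpout and MyBolt are hypothetical
        // placeholders for your own components.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new MySpout(), 4);
        builder.setBolt("bolt", new MyBolt(), 8).shuffleGrouping("spout");
        StormTopology topology = builder.createTopology();

        // Per-topology configuration, as in the example above.
        Config conf = new Config();
        conf.setNumWorkers(20);
        conf.setMaxSpoutPending(5000);

        // The topology name; it could also be taken from the storm jar arguments (args[0]).
        StormSubmitter.submitTopology("mytopology", conf, topology);
    }
}
```

When packaged as described in step 3, a class like this is the one you would name as `<mainClass>` in the pom and on the `storm jar` command line.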
There are a variety of configurations you can set per topology. A list of all the configurations you can set can be found here. The ones prefixed with "TOPOLOGY" can be overridden on a topology-specific basis (the other ones are cluster configurations and cannot be overridden). Common ones to set for a topology include Config.TOPOLOGY_WORKERS, Config.TOPOLOGY_ACKER_EXECUTORS, Config.TOPOLOGY_MAX_SPOUT_PENDING, and Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS.
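For illustration, here is how those common topology-level settings map onto the Config helper methods used in step 2; the numeric values are only examples, not recommendations from this page:

```java
Config conf = new Config();

// Config.TOPOLOGY_WORKERS: number of worker processes to run the topology on.
conf.setNumWorkers(20);

// Config.TOPOLOGY_ACKER_EXECUTORS: executors dedicated to tracking tuple trees.
conf.setNumAckers(1);

// Config.TOPOLOGY_MAX_SPOUT_PENDING: max un-acked tuples in flight per spout task.
conf.setMaxSpoutPending(5000);

// Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS: seconds before an un-acked tuple is failed and replayed.
conf.setMessageTimeoutSecs(30);
```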
To kill a topology, simply run:
```
storm kill {stormname}
```

Give the same name to `storm kill` as you used when submitting the topology.
Storm won't kill the topology immediately. Instead, it deactivates all the spouts so that they don't emit any more tuples, and then Storm waits Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS seconds before destroying all the workers. This gives the topology enough time to complete any tuples it was processing when it got killed.
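The `storm kill` command is the usual way to do this, but the same operation is also available programmatically through the Nimbus Thrift client if you need to script it from Java. The sketch below is one way to do that, assuming a recent Storm release (org.apache.storm packages; older releases used backtype.storm); the wait_secs value overrides the message-timeout wait described above:

```java
import java.util.Map;

import org.apache.storm.generated.KillOptions;
import org.apache.storm.generated.Nimbus;
import org.apache.storm.utils.NimbusClient;
import org.apache.storm.utils.Utils;

public class KillMyTopology {
    public static void main(String[] args) throws Exception {
        // Read the same cluster configuration the storm client uses (storm.yaml plus defaults).
        Map conf = Utils.readStormConfig();

        // Connect to Nimbus and kill the topology, waiting 10 seconds (instead of the
        // topology's message timeout) before the workers are destroyed.
        Nimbus.Client nimbus = NimbusClient.getConfiguredClient(conf).getClient();
        KillOptions opts = new KillOptions();
        opts.set_wait_secs(10);
        nimbus.killTopologyWithOpts("mytopology", opts);
    }
}
```

The command-line client accepts a -w flag for the same purpose, e.g. `storm kill mytopology -w 10`.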
To update a running topology, the only option currently is to kill the current topology and resubmit a new one. A planned feature is to implement a `storm swap` command that swaps a running topology with a new one, ensuring minimal downtime and no chance of both topologies processing tuples at the same time.
The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.
You can also look at the worker logs on the cluster machines.