A scalable, mature and versatile web crawler based on Apache Storm


StormCrawler


Apache StormCrawler is an open source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.

Quickstart

NOTE: These instructions assume that you have Apache Maven installed. You will need to install Apache Storm 2.8.4 to run the crawler.

StormCrawler requires Java 17 or above. To execute tests, it requires you to have a locally installed and working Docker environment.

Once Storm is installed, the easiest way to get started is to generate a new StormCrawler project following the instructions below:

mvn archetype:generate -DarchetypeGroupId=org.apache.stormcrawler -DarchetypeArtifactId=stormcrawler-archetype -DarchetypeVersion=3.4.0
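If you prefer a non-interactive setup, the same archetype can be invoked in Maven's batch mode by passing the project coordinates on the command line. The groupId, artifactId, version and package values below are placeholders to substitute with your own; the archetype also prompts for user-agent details, which in batch mode must likewise be supplied as -D properties (see the archetype's metadata for the exact property names):

```shell
mvn archetype:generate -B \
  -DarchetypeGroupId=org.apache.stormcrawler \
  -DarchetypeArtifactId=stormcrawler-archetype \
  -DarchetypeVersion=3.4.0 \
  -DgroupId=com.mycompany.crawler \
  -DartifactId=stormcrawler \
  -Dversion=1.0-SNAPSHOT \
  -Dpackage=com.mycompany.crawler
```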

You'll be asked to enter a groupId (e.g. com.mycompany.crawler), an artifactId (e.g. stormcrawler), a version, a package name and details about the user agent to use.

This will not only create a fully formed project containing a POM with the dependency above but also the default resource files, a default CrawlTopology class and a configuration file. Enter the directory you just created (it should be named after the artifactId you specified earlier) and follow the instructions in the README file.
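As an illustration, the user-agent details you enter end up as configuration keys in the generated crawler-conf.yaml. A minimal fragment might look like the sketch below; the values are placeholders, and the exact set of keys is defined by the generated file:

```yaml
config:
  # Identifies your crawler to the sites it fetches
  http.agent.name: "mycrawler"
  http.agent.version: "1.0"
  http.agent.description: "built with Apache StormCrawler"
  http.agent.url: "https://www.mycompany.com/"
  http.agent.email: "crawler@mycompany.com"
```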

Alternatively, if you can't or don't want to use the Maven archetype above, you can simply copy the files from archetype-resources.

Have a look at crawler.flux, the crawler-conf.yaml file and the files in src/main/resources/; they are all that is needed to run a crawl topology: all the other components come from the core module.
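For orientation, a Flux file describes the topology declaratively: it wires spouts and bolts together and points at the configuration file. A heavily trimmed sketch of what crawler.flux contains might look like this; the class names follow the org.apache.stormcrawler packages, but consult the generated file for the real wiring:

```yaml
name: "crawler"

includes:
  - resource: false
    file: "crawler-conf.yaml"
    override: true

spouts:
  # Simple in-memory spout seeded with start URLs, useful for testing
  - id: "spout"
    className: "org.apache.stormcrawler.spout.MemorySpout"
    parallelism: 1
    constructorArgs:
      - ["https://stormcrawler.apache.org/"]

bolts:
  # Fetches the content of the URLs emitted by the spout
  - id: "fetcher"
    className: "org.apache.stormcrawler.bolt.FetcherBolt"
    parallelism: 1

streams:
  - from: "spout"
    to: "fetcher"
    grouping:
      type: SHUFFLE
```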

Getting help

The documentation is a good place to start your investigations, but if you are stuck, please use the stormcrawler tag on Stack Overflow or ask a question in the discussions section.

The project website has a page listing companies providing commercial support for Apache StormCrawler.

Note for developers

Please format your code before submitting a PR with

mvn git-code-format:format-code -Dgcf.globPattern="**/*" -Dskip.format.code=false

You can enable pre-commit format hooks by running:

mvn clean install -Dskip.format.code=false

Building from source

The requirements for building from source are as follows:

  • JDK 17+
  • Apache Maven 3
  • Docker (if you want to run tests)

The build itself is straightforward:

mvn clean install

Note: We use some binary files for testing advanced crawler functionality. These files are located exclusively in the src/test directories of the respective modules.

Thanks

alt tag

YourKit supports open source projects with its full-featured Java Profiler. YourKit, LLC is the creator of YourKit Java Profiler and YourKit .NET Profiler, innovative and intelligent tools for profiling Java and .NET applications.