<div style={{float: 'left', paddingRight: '40px'}}>5 minutes to read
Usually, we try to publish a new StreamPipes release every three months. But after attending a very exciting ApacheCon last week, where we worked with some Apache communities on a few really cool new features, we decided to release these features as soon as possible. So here's StreamPipes 0.64.0!
<img className="blog-image" style={{maxWidth: '90%'}} src="/img/blog/2019-09-19/spconnect.png" alt="PLC4X adapter for StreamPipes Connect"/>
StreamPipes relies on a microservice-based architecture and therefore requires quite a few services (> 15 for the full version) to be up and running. This has an impact on the memory consumption of the server running StreamPipes. On the other hand, we want to make it as easy as possible to try StreamPipes, even on laptops with less powerful hardware.
However, the lite version still required > 8 GB of memory and the full version even more. Additionally, after the last release, we received feedback from the community indicating that memory consumption had significantly increased. So we looked deeper into the issue and discovered that the Docker base images we were using to deliver the StreamPipes services caused high memory consumption.
Before StreamPipes 0.63.0, we used the Alpine Oracle JDK image for most services. In 0.63.0, we switched to an OpenJDK/Alpine distribution. This had an enormous effect on memory consumption, with individual services reserving more than 1.5 GB of memory.
So in this version, we switched to AdoptOpenJDK along with OpenJ9. The results are fantastic: The full version, including all pipeline elements, now needs only 6 GB of memory (compared to > 16 GB in the last version).
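As a rough sketch, such a base-image switch in a service's Dockerfile could look like the following. Note that the image tag, JAR name, and JVM options below are illustrative assumptions, not the actual StreamPipes build files:

```dockerfile
# Illustrative sketch only - not the actual StreamPipes Dockerfile.
# AdoptOpenJDK publishes OpenJ9-based images on Docker Hub, e.g.:
FROM adoptopenjdk/openjdk8-openj9:alpine-slim

# OpenJ9 supports class data sharing, which lowers footprint and startup time.
# Heap limit and cache directory here are example values.
ENV JAVA_OPTS="-Xshareclasses:cacheDir=/tmp/shareclasses -Xmx512m"

# Hypothetical service artifact name
COPY target/service.jar /opt/service.jar
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar /opt/service.jar"]
```

Compared to a HotSpot-based image, the OpenJ9 JVM is tuned for a smaller memory footprint, which is what drives the savings described above.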
The screenshot below shows that StreamPipes now is much less resource hungry:
<img className="blog-image" style={{maxWidth: '90%'}} src="/img/blog/2019-09-19/memory.png" alt="Memory consumption of StreamPipes services"/>
In future versions, we will continue our efforts to decrease the memory consumption of StreamPipes.
We are absolutely open to your suggestions for further improvements! Let us know (by mail, Slack, or Twitter) and we'll consider your feature request for the next release!