WIP.
diff --git a/short-term-memory.html b/short-term-memory.html
index 5c35a30..e349c31 100644
--- a/short-term-memory.html
+++ b/short-term-memory.html
@@ -112,7 +112,7 @@
             period of time, i.e. it “forgets” the conversational context after a few minutes of inactivity.
             Note also that the conversational context can be cleared explicitly
             via <a href="https://app.swaggerhub.com/apis-docs/Apache-NLPCraft/apache-nlpcraft/{{site.latest_version}}" target="swaggerhub">REST API</a>
-            or from the {% scaladoc NCConversation code %}.
+            or from the <a class="not-code" target="javadoc" href="/apis/latest/org/apache/nlpcraft/NCConversation.html">code</a>.
         </p>
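+        <p>
+            As an illustration, an explicit reset from the model code could look like the sketch below.
+            The method names used here (<code>getConversation()</code>, <code>clearStm(...)</code>,
+            <code>clearDialog(...)</code>) are assumptions - see the <code>NCConversation</code> javadoc
+            linked above for the exact signatures:
+        </p>
+        <pre class="brush: scala">
+// Sketch only: the accessor and method names below are assumptions,
+// check the NCConversation javadoc for the exact API.
+// 'ctx' stands for the context object passed into an intent callback.
+val conv = ctx.getConversation
+
+// Drop every remembered entity, i.e. fully forget the collected context...
+conv.clearStm(_ => true)
+
+// ...and, if needed, the dialog flow history as well.
+conv.clearDialog(_ => true)
+        </pre>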
         <p>
             To understand the algorithm behind STM management, let's back up a few steps...
@@ -264,11 +264,11 @@
             can have a "soft" context switch where you don't change the topic of the conversation completely but shift it
             enough to forget at least some parts of the previously collected context. NLPCraft has a built-in algorithm
             to detect the hard switch in the conversation. It also exposes
-            {% scaladoc NCConversation API %} to perform a selective reset on the
+            <a class="not-code" target="javadoc" href="/apis/latest/org/apache/nlpcraft/NCConversation.html">API</a> to perform a selective reset on the
             conversation in case of "soft" switch.
         </p>
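+        <p>
+            A selective reset for such a "soft" switch could, for example, drop only the entities that no
+            longer fit the new topic while keeping the rest of the collected context. The sketch below again
+            assumes the <code>clearStm(...)</code> method and a <code>getId()</code> accessor on the remembered
+            entity - check the <code>NCConversation</code> javadoc for the exact API:
+        </p>
+        <pre class="brush: scala">
+// Sketch only: assumed method names, see the NCConversation javadoc.
+val conv = ctx.getConversation
+
+// Forget only the entities of a hypothetical 'city' element,
+// keeping the rest of the short-term memory intact.
+conv.clearStm(ent => ent.getId == "city")
+        </pre>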
         <p>
-            See {% scaladoc NCConversation NCConversion %} interface
+            See the <a class="javadoc" target="javadoc" href="/apis/latest/org/apache/nlpcraft/NCConversation.html">NCConversation</a> interface
             for API details on STM management.
         </p>
     </section>
@@ -452,7 +452,7 @@
         </p>
         <p>
             In NLPCraft you can also explicitly reset the conversation context through
-            {% scaladoc NCConversation API %}
+            <a class="not-code" target="javadoc" href="/apis/latest/org/apache/nlpcraft/NCConversation.html">API</a>
             or by switching the model on the request.
         </p>
     </section>