Note modules as "retired" (#43)

diff --git a/.gitignore b/.gitignore
index 374f427..60b1869 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,6 +1,6 @@
 # Generated
 /resources
-target
+target/
 
 .vscode
 
diff --git a/source/documentation/__index.md b/source/documentation/__index.md
index 4c82a87..6c071fc 100644
--- a/source/documentation/__index.md
+++ b/source/documentation/__index.md
@@ -22,9 +22,8 @@
 * [RDF Connection](./rdfconnection/) - a SPARQL API for local datasets and remote services
 * [SHACL](./shacl) - SHACL processor for Jena
 * [Javadoc](./javadoc/) - JavaDoc generated from the Jena source
-* [Text Search](./query/text-query.html) - enhanced indexes using Lucene or Solr for more efficient searching of text literals in Jena models and datasets.
+* [Text Search](./query/text-query.html) - enhanced indexes using Lucene for more efficient searching of text literals in Jena models and datasets.
 * [GeoSPARQL](./geosparql/) - support for GeoSPARQL
-* [Elephas](./hadoop) - working with RDF data on Apache Hadoop.
 * [How-To's](./notes/) - various topic-specific how-to documents
 * [Permissions](./permissions/) - a permissions wrapper around Jena RDF implementation
 * [JDBC](./jdbc/) - a SPARQL over JDBC driver framework
diff --git a/source/documentation/csv/__index.md b/source/documentation/csv/__index.md
index 2cde4fa..d4076a1 100644
--- a/source/documentation/csv/__index.md
+++ b/source/documentation/csv/__index.md
@@ -4,8 +4,8 @@
 ---
 
 ----
-> This page covers the jena-csv module which has been retired.
-> The last release of Jena with this module is Jena 3.9.0.
-> See [jena-csv/README.md](https://github.com/apache/jena/tree/master/jena-csv).
-> The [original documentation](csv).
+> This page covers the jena-csv module which has been retired.<br/>
+> The last release of Jena with this module is Jena 3.9.0.<br/>
+> See [jena-csv/README.md](https://github.com/apache/jena/tree/master/jena-csv).<br/>
+> The [original documentation](csv_index.html).
 ----
diff --git a/source/documentation/csv/csv.md b/source/documentation/csv/csv_index.md
similarity index 100%
rename from source/documentation/csv/csv.md
rename to source/documentation/csv/csv_index.md
diff --git a/source/documentation/hadoop/__index.md b/source/documentation/hadoop/__index.md
index 81f5375..db027cb 100644
--- a/source/documentation/hadoop/__index.md
+++ b/source/documentation/hadoop/__index.md
@@ -3,224 +3,9 @@
 slug: index
 ---
 
-Apache Jena Elephas is a set of libraries which provide various basic building blocks which enable you to start writing Apache Hadoop based applications which work with RDF data.
-
-Historically there has been no serious support for RDF within the Hadoop ecosystem and what support has existed has
-often been limited and task specific.  These libraries aim to be as generic as possible and provide the necessary
-infrastructure that enables developers to create their application specific logic without worrying about the
-underlying plumbing.
-
-## Beta
-
-These modules are currently considered to be in a **Beta** state, they have been under active development for about a year but have not yet been widely deployed and may contain as yet undiscovered bugs.
-
-Please see the [How to Report a Bug](../../help_and_support/bugs_and_suggestions.html) page for how to report any bugs you may encounter.
-
-## Documentation
-
-- [Overview](#overview)
-- [Getting Started](#getting-started)
-- APIs
-    - [Common](common.html)
-    - [IO](io.html)
-    - [Map/Reduce](mapred.html)
-    - [Javadoc](../javadoc/elephas/)
-- Examples
-    - [RDF Stats Demo](demo.html)
-- [Maven Artifacts](artifacts.html)
-
-## Overview
-
-Apache Jena Elephas is published as a set of Maven module via its [maven artifacts](artifacts.html).  The source for these libraries
-may be [downloaded](/download/index.cgi) as part of the source distribution.  These modules are built against the Hadoop 2.x. APIs and no
-backwards compatibility for 1.x is provided.
-
-The core aim of these libraries it to provide the basic building blocks that allow users to start writing Hadoop applications that
-work with RDF.  They are mostly fairly low level components but they are designed to be used as building blocks to help users and developers
-focus on actual application logic rather than on the low level plumbing.
-
-Firstly at the lowest level they provide `Writable` implementations that allow the basic RDF primitives - nodes, triples and quads -
-to be represented and exchanged within Hadoop applications, this support is provided by the [Common](common.html) library.
-
-Secondly they provide support for all the RDF serialisations which Jena supports as both input and output formats subject to the specific 
-limitations of those serialisations.  This support is provided by the [IO](io.html) library in the form of standard `InputFormat` and
-`OutputFormat` implementations.
-
-There are also a set of basic `Mapper` and `Reducer` implementations provided by the [Map/Reduce](mapred.html) library which contains code
-that enables various common Hadoop tasks such as counting, filtering, splitting and grouping to be carried out on RDF data.  Typically these
-will be used as a starting point to build more complex RDF processing applications.
-
-Finally there is a [RDF Stats Demo](demo.html) which is a runnable Hadoop job JAR file that demonstrates using these libraries to calculate
-a number of basic statistics over arbitrary RDF data.
-
-## Getting Started
-
-To get started you will need to add the relevant dependencies to your project, the exact dependencies necessary will depend 
-on what you are trying to do.  Typically you will likely need at least the IO library and possibly the Map/Reduce library:
-
-    <dependency>
-      <groupId>org.apache.jena</groupId>
-      <artifactId>jena-elephas-io</artifactId>
-      <version>x.y.z</version>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.jena</groupId>
-      <artifactId>jena-elephas-mapreduce</artifactId>
-      <version>x.y.z</version>
-    </dependency>
-
-Our libraries depend on the relevant Hadoop libraries but since these libraries are typically provided by the Hadoop cluster those dependencies are marked as `provided` and thus are not transitive.  This means that you will typically also need to add the following additional dependencies:
-
-    <!-- Hadoop Dependencies -->
-    <!-- 
-        Note these will be provided on the Hadoop cluster hence the provided 
-        scope 
-    -->
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-common</artifactId>
-      <version>2.6.0</version>
-      <scope>provided</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-mapreduce-client-common</artifactId>
-      <version>2.6.0</version>
-      <scope>provided</scope>
-    </dependency>
-
-You can then write code to launch a Map/Reduce job that works with RDF.  For example let us consider a RDF variation of the classic Hadoop
-word count example.  In this example which we call node count we do the following:
-
-- Take in some RDF triples
-- Split them up into their constituent nodes i.e. the URIs, Blank Nodes & Literals
-- Assign an initial count of one to each node
-- Group by node and sum up the counts
-- Output the nodes and their usage counts
-
-We will start with our `Mapper` implementation, as you can see this simply takes in a triple and splits it into its constituent nodes.  It
-then outputs each node with an initial count of 1:
-
-    package org.apache.jena.hadoop.rdf.mapreduce.count;
-    
-    import org.apache.jena.hadoop.rdf.types.NodeWritable;
-    import org.apache.jena.hadoop.rdf.types.TripleWritable;
-    import org.apache.jena.graph.Triple;
-    
-    /**
-     * A mapper for counting node usages within triples designed primarily for use
-     * in conjunction with {@link NodeCountReducer}
-     *
-     * @param <TKey> Key type
-     */
-    public class TripleNodeCountMapper<TKey> extends AbstractNodeTupleNodeCountMapper<TKey, Triple, TripleWritable> {
-
-        @Override
-        protected NodeWritable[] getNodes(TripleWritable tuple) {
-            Triple t = tuple.get();
-            return new NodeWritable[] { new NodeWritable(t.getSubject()), 
-                                        new NodeWritable(t.getPredicate()),
-                                        new NodeWritable(t.getObject()) };
-        }
-    }
-
-And then our `Reducer` implementation, this takes in the data grouped by node and sums up the counts outputting the node and the final count:
-
-    package org.apache.jena.hadoop.rdf.mapreduce.count;
-
-    import java.io.IOException;
-    import java.util.Iterator;
-    import org.apache.hadoop.io.LongWritable;
-    import org.apache.hadoop.mapreduce.Reducer;
-    import org.apache.jena.hadoop.rdf.types.NodeWritable;
-
-    /**
-     * A reducer which takes node keys with a sequence of longs representing counts
-     * as the values and sums the counts together into pairs consisting of a node
-     * key and a count value.
-     */
-    public class NodeCountReducer extends Reducer<NodeWritable, LongWritable, NodeWritable, LongWritable> {
-
-        @Override
-        protected void reduce(NodeWritable key, Iterable<LongWritable> values, Context context) throws IOException,
-                InterruptedException {
-            long count = 0;
-            Iterator<LongWritable> iter = values.iterator();
-            while (iter.hasNext()) {
-                count += iter.next().get();
-            }
-            context.write(key, new LongWritable(count));
-        }
-    }
-
-Finally we then need to define an actual Hadoop job we can submit to run this.  Here we take advantage of the [IO](io.html) library to provide
-us with support for our desired RDF input format:
-
-    package org.apache.jena.hadoop.rdf.stats;
-
-    import org.apache.hadoop.conf.Configuration;
-    import org.apache.hadoop.fs.Path;
-    import org.apache.hadoop.io.LongWritable;
-    import org.apache.hadoop.mapreduce.Job;
-    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
-    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
-    import org.apache.jena.hadoop.rdf.io.input.TriplesInputFormat;
-    import org.apache.jena.hadoop.rdf.io.output.ntriples.NTriplesNodeOutputFormat;
-    import org.apache.jena.hadoop.rdf.mapreduce.count.NodeCountReducer;
-    import org.apache.jena.hadoop.rdf.mapreduce.count.TripleNodeCountMapper;
-    import org.apache.jena.hadoop.rdf.types.NodeWritable;
-    
-    public class RdfMapReduceExample {
-
-        public static void main(String[] args) {
-            try {
-                // Get Hadoop configuration
-                Configuration config = new Configuration(true);
-
-                // Create job
-                Job job = Job.getInstance(config);
-                job.setJarByClass(RdfMapReduceExample.class);
-                job.setJobName("RDF Triples Node Usage Count");
- 
-                // Map/Reduce classes
-                job.setMapperClass(TripleNodeCountMapper.class);
-                job.setMapOutputKeyClass(NodeWritable.class);
-                job.setMapOutputValueClass(LongWritable.class);
-                job.setReducerClass(NodeCountReducer.class);
-
-                // Input and Output
-                job.setInputFormatClass(TriplesInputFormat.class);
-                job.setOutputFormatClass(NTriplesNodeOutputFormat.class);
-                FileInputFormat.setInputPaths(job, new Path("/example/input/"));
-                FileOutputFormat.setOutputPath(job, new Path("/example/output/"));
-
-                // Launch the job and await completion
-                job.submit();
-                if (job.monitorAndPrintJob()) {
-                    // OK
-                    System.out.println("Completed");
-                } else {
-                    // Failed
-                    System.err.println("Failed");
-                }
-            } catch (Throwable e) {
-                e.printStackTrace();
-            }
-        }
-    }
-
-So this really is no different from configuring any other Hadoop job, we simply have to point to the relevant input and output formats and provide our mapper and reducer.  Note that here we use the `TriplesInputFormat` which can handle RDF in any Jena supported format, if you know your RDF is in a specific format it is usually more efficient to use a more specific input format.  Please see the [IO](io.html) page for more detail on the available input formats and the differences between them.
-
-We recommend that you next take a look at our [RDF Stats Demo](demo.html) which shows how to do some more complex computations by chaining multiple jobs together.
-
-## APIs
-
-There are three main libraries each with their own API:
-
-- [Common](common.html) - this provides the basic data model for representing RDF data within Hadoop
-- [IO](io.html) - this provides support for reading and writing RDF
-- [Map/Reduce](mapred.html) - this provides support for writing Map/Reduce jobs that work with RDF
-
-
-
- 
\ No newline at end of file
+----
+> The Jena Elephas module has been retired.<br/>
+> The last release of Jena with Elephas is Jena 3.17.0.<br/>
+> See [jena-elephas/README.md](https://github.com/apache/jena/tree/main/jena-elephas).<br/>
+> The [original documentation](elephas_index.html).
+----
diff --git a/source/documentation/hadoop/elephas_index.md b/source/documentation/hadoop/elephas_index.md
new file mode 100644
index 0000000..7066bfd
--- /dev/null
+++ b/source/documentation/hadoop/elephas_index.md
@@ -0,0 +1,248 @@
+---
+title: Apache Jena Elephas
+---
+
+Apache Jena Elephas is a set of libraries providing basic building blocks that enable you to start writing Apache Hadoop based applications which work with RDF data.
+
+Historically there has been no serious support for RDF within the Hadoop ecosystem, and what support has existed has
+often been limited and task-specific.  These libraries aim to be as generic as possible and provide the necessary
+infrastructure that enables developers to create their application-specific logic without worrying about the
+underlying plumbing.
+
+## Beta
+
+These modules are currently considered to be in a **Beta** state: they have been under active development for about a year but have not yet been widely deployed and may contain as yet undiscovered bugs.
+
+Please see the [How to Report a Bug](../../help_and_support/bugs_and_suggestions.html) page for how to report any bugs you may encounter.
+
+## Documentation
+
+- [Overview](#overview)
+- [Getting Started](#getting-started)
+- APIs
+    - [Common](common.html)
+    - [IO](io.html)
+    - [Map/Reduce](mapred.html)
+    - [Javadoc](../javadoc/elephas/)
+- Examples
+    - [RDF Stats Demo](demo.html)
+- [Maven Artifacts](artifacts.html)
+
+## Overview
+
+Apache Jena Elephas is published as a set of Maven modules via its [Maven artifacts](artifacts.html).  The source for these libraries
+may be [downloaded](/download/index.cgi) as part of the source distribution.  These modules are built against the Hadoop 2.x APIs and no
+backwards compatibility for 1.x is provided.
+
+The core aim of these libraries is to provide the basic building blocks that allow users to start writing Hadoop applications that
+work with RDF.  They are mostly fairly low-level components, but they are designed to be used as building blocks to help users and developers
+focus on actual application logic rather than on the low-level plumbing.
+
+Firstly, at the lowest level, they provide `Writable` implementations that allow the basic RDF primitives - nodes, triples and quads -
+to be represented and exchanged within Hadoop applications; this support is provided by the [Common](common.html) library.
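+
+For example, a triple can be wrapped for exchange between tasks like so (a minimal sketch; the single-argument `NodeWritable` constructor appears in the mapper below, while the corresponding `TripleWritable(Triple)` constructor is assumed):
+
+    import org.apache.jena.graph.NodeFactory;
+    import org.apache.jena.graph.Triple;
+    import org.apache.jena.hadoop.rdf.types.NodeWritable;
+    import org.apache.jena.hadoop.rdf.types.TripleWritable;
+
+    // Wrap a whole triple so it can be used as a Hadoop key or value
+    Triple t = Triple.create(NodeFactory.createURI("http://example.org/s"),
+                             NodeFactory.createURI("http://example.org/p"),
+                             NodeFactory.createLiteral("o"));
+    TripleWritable tripleValue = new TripleWritable(t);
+
+    // Individual nodes can likewise be wrapped, e.g. for use as map output keys
+    NodeWritable nodeKey = new NodeWritable(t.getSubject());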
+
+Secondly they provide support for all the RDF serialisations which Jena supports, as both input and output formats, subject to the specific
+limitations of those serialisations.  This support is provided by the [IO](io.html) library in the form of standard `InputFormat` and
+`OutputFormat` implementations.
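+
+In practice this means a job can simply name the serialisation it expects; for instance (a sketch, with class names assumed from the IO library's input and output packages):
+
+    // Read Turtle input and write N-Triples output
+    job.setInputFormatClass(TurtleInputFormat.class);
+    job.setOutputFormatClass(NTriplesOutputFormat.class);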
+
+There is also a set of basic `Mapper` and `Reducer` implementations provided by the [Map/Reduce](mapred.html) library, which contains code
+that enables various common Hadoop tasks such as counting, filtering, splitting and grouping to be carried out on RDF data.  Typically these
+will be used as a starting point to build more complex RDF processing applications.
+
+Finally there is an [RDF Stats Demo](demo.html), which is a runnable Hadoop job JAR file that demonstrates using these libraries to calculate
+a number of basic statistics over arbitrary RDF data.
+
+## Getting Started
+
+To get started you will need to add the relevant dependencies to your project; the exact dependencies necessary will depend
+on what you are trying to do.  Typically you will need at least the IO library and possibly the Map/Reduce library:
+
+    <dependency>
+      <groupId>org.apache.jena</groupId>
+      <artifactId>jena-elephas-io</artifactId>
+      <version>x.y.z</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.jena</groupId>
+      <artifactId>jena-elephas-mapreduce</artifactId>
+      <version>x.y.z</version>
+    </dependency>
+
+Our libraries depend on the relevant Hadoop libraries, but since these are typically provided by the Hadoop cluster, those dependencies are marked as `provided` and thus are not transitive.  This means that you will typically also need to add the following additional dependencies:
+
+    <!-- Hadoop Dependencies -->
+    <!-- 
+        Note these will be provided on the Hadoop cluster hence the provided 
+        scope 
+    -->
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <version>2.6.0</version>
+      <scope>provided</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-mapreduce-client-common</artifactId>
+      <version>2.6.0</version>
+      <scope>provided</scope>
+    </dependency>
+
+You can then write code to launch a Map/Reduce job that works with RDF.  For example, let us consider an RDF variation of the classic Hadoop
+word count example.  In this example, which we call node count, we do the following:
+
+- Take in some RDF triples
+- Split them up into their constituent nodes, i.e. the URIs, blank nodes and literals
+- Assign an initial count of one to each node
+- Group by node and sum up the counts
+- Output the nodes and their usage counts
+
+We will start with our `Mapper` implementation; as you can see, this simply takes in a triple and splits it into its constituent nodes.  It
+then outputs each node with an initial count of 1:
+
+    package org.apache.jena.hadoop.rdf.mapreduce.count;
+    
+    import org.apache.jena.hadoop.rdf.types.NodeWritable;
+    import org.apache.jena.hadoop.rdf.types.TripleWritable;
+    import org.apache.jena.graph.Triple;
+    
+    /**
+     * A mapper for counting node usages within triples designed primarily for use
+     * in conjunction with {@link NodeCountReducer}
+     *
+     * @param <TKey> Key type
+     */
+    public class TripleNodeCountMapper<TKey> extends AbstractNodeTupleNodeCountMapper<TKey, Triple, TripleWritable> {
+
+        @Override
+        protected NodeWritable[] getNodes(TripleWritable tuple) {
+            Triple t = tuple.get();
+            return new NodeWritable[] { new NodeWritable(t.getSubject()), 
+                                        new NodeWritable(t.getPredicate()),
+                                        new NodeWritable(t.getObject()) };
+        }
+    }
+
+And then our `Reducer` implementation; this takes in the data grouped by node and sums up the counts, outputting the node and the final count:
+
+    package org.apache.jena.hadoop.rdf.mapreduce.count;
+
+    import java.io.IOException;
+    import java.util.Iterator;
+    import org.apache.hadoop.io.LongWritable;
+    import org.apache.hadoop.mapreduce.Reducer;
+    import org.apache.jena.hadoop.rdf.types.NodeWritable;
+
+    /**
+     * A reducer which takes node keys with a sequence of longs representing counts
+     * as the values and sums the counts together into pairs consisting of a node
+     * key and a count value.
+     */
+    public class NodeCountReducer extends Reducer<NodeWritable, LongWritable, NodeWritable, LongWritable> {
+
+        @Override
+        protected void reduce(NodeWritable key, Iterable<LongWritable> values, Context context) throws IOException,
+                InterruptedException {
+            long count = 0;
+            Iterator<LongWritable> iter = values.iterator();
+            while (iter.hasNext()) {
+                count += iter.next().get();
+            }
+            context.write(key, new LongWritable(count));
+        }
+    }
+
+Finally we need to define an actual Hadoop job that we can submit to run this.  Here we take advantage of the [IO](io.html) library to provide
+us with support for our desired RDF input format:
+
+    package org.apache.jena.hadoop.rdf.stats;
+
+    import org.apache.hadoop.conf.Configuration;
+    import org.apache.hadoop.fs.Path;
+    import org.apache.hadoop.io.LongWritable;
+    import org.apache.hadoop.mapreduce.Job;
+    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+    import org.apache.jena.hadoop.rdf.io.input.TriplesInputFormat;
+    import org.apache.jena.hadoop.rdf.io.output.ntriples.NTriplesNodeOutputFormat;
+    import org.apache.jena.hadoop.rdf.mapreduce.count.NodeCountReducer;
+    import org.apache.jena.hadoop.rdf.mapreduce.count.TripleNodeCountMapper;
+    import org.apache.jena.hadoop.rdf.types.NodeWritable;
+    
+    public class RdfMapReduceExample {
+
+        public static void main(String[] args) {
+            try {
+                // Get Hadoop configuration
+                Configuration config = new Configuration(true);
+
+                // Create job
+                Job job = Job.getInstance(config);
+                job.setJarByClass(RdfMapReduceExample.class);
+                job.setJobName("RDF Triples Node Usage Count");
+ 
+                // Map/Reduce classes
+                job.setMapperClass(TripleNodeCountMapper.class);
+                job.setMapOutputKeyClass(NodeWritable.class);
+                job.setMapOutputValueClass(LongWritable.class);
+                job.setReducerClass(NodeCountReducer.class);
+
+                // Input and Output
+                job.setInputFormatClass(TriplesInputFormat.class);
+                job.setOutputFormatClass(NTriplesNodeOutputFormat.class);
+                FileInputFormat.setInputPaths(job, new Path("/example/input/"));
+                FileOutputFormat.setOutputPath(job, new Path("/example/output/"));
+
+                // Launch the job and await completion
+                job.submit();
+                if (job.monitorAndPrintJob()) {
+                    // OK
+                    System.out.println("Completed");
+                } else {
+                    // Failed
+                    System.err.println("Failed");
+                }
+            } catch (Throwable e) {
+                e.printStackTrace();
+            }
+        }
+    }
+
+So this really is no different from configuring any other Hadoop job: we simply have to point to the relevant input and output formats and provide our mapper and reducer.  Note that here we use the `TriplesInputFormat`, which can handle RDF in any Jena-supported format; if you know your RDF is in a specific format, it is usually more efficient to use a more specific input format.  Please see the [IO](io.html) page for more detail on the available input formats and the differences between them.
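+
+For instance, if the input is known to be N-Triples, the job above could swap in a format-specific class (a sketch; `NTriplesInputFormat` is assumed from the IO library's input packages):
+
+    // Use the N-Triples specific input format when the data is known to be N-Triples
+    job.setInputFormatClass(NTriplesInputFormat.class);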
+
+We recommend that you next take a look at our [RDF Stats Demo](demo.html) which shows how to do some more complex computations by chaining multiple jobs together.
+
+## APIs
+
+There are three main libraries each with their own API:
+
+- [Common](common.html) - this provides the basic data model for representing RDF data within Hadoop
+- [IO](io.html) - this provides support for reading and writing RDF
+- [Map/Reduce](mapred.html) - this provides support for writing Map/Reduce jobs that work with RDF
diff --git a/source/documentation/javadoc.md b/source/documentation/javadoc.md
index 4a98947..1e68acf 100644
--- a/source/documentation/javadoc.md
+++ b/source/documentation/javadoc.md
@@ -14,4 +14,3 @@
 - [GeoSPARQL](javadoc/geosparql/index.html)
 - [Security Permissions JavaDoc](javadoc/permissions/index.html)
 - [JDBC JavaDoc](javadoc/jdbc/index.html)
-- [Elephas](javadoc_elephas.html)
diff --git a/source/documentation/query/text-query.md b/source/documentation/query/text-query.md
index 88b7459..0e3f37f 100644
--- a/source/documentation/query/text-query.md
+++ b/source/documentation/query/text-query.md
@@ -3,9 +3,8 @@
 ---
 
 This extension to ARQ combines SPARQL and full text search via
-[Lucene](https://lucene.apache.org) or
-[ElasticSearch](https://www.elastic.co) (built on
-Lucene). It gives applications the ability to perform indexed full text
+[Lucene](https://lucene.apache.org).
+It gives applications the ability to perform indexed full text
 searches within SPARQL queries. Here is a version compatibility table:
 
 | &nbsp;Jena&nbsp; | &nbsp;Lucene&nbsp; |  &nbsp;Solr&nbsp; | &nbsp;ElasticSearch&nbsp; |
@@ -13,7 +13,7 @@
 | upto 3.2.0       | 5.x or 6.x         | 5.x or 6.x        | not supported  |
 | 3.3.0 - 3.9.0    | 6.4.x              | not supported     | 5.2.2 - 5.2.13 |
 | 3.10.0           | 7.4.0              | not supported     | 6.4.2          |
-| 3.15.0           | 7.7.x              | not supported     | 6.8.6          |
+| 3.15.0 - 3.17.0  | 7.7.x              | not supported     | 6.8.6          |
 | 4.0.0            | 8.8.x              | not supported     | not supported  |
 
 SPARQL allows the use of 
diff --git a/source/documentation/sdb/__index.md b/source/documentation/sdb/__index.md
index 73f6c5e..36661f6 100644
--- a/source/documentation/sdb/__index.md
+++ b/source/documentation/sdb/__index.md
@@ -1,80 +1,11 @@
 ---
-title: SDB - persistent triple stores using relational databases
+title: Apache Jena SDB - persistent triple stores using relational databases
 slug: index
 ---
 
-SDB uses an SQL database for the storage and query of RDF data.
-Many databases are supported, both Open Source and proprietary.
-
-An SDB store can be accessed and managed with the provided command
-line scripts and via the Jena API.
-
-<blockquote>
-<i>
-Use of SDB for new applications is not recommended.
-</i>
-</blockquote>
-
-<i>This component is "maintenance only".</i>
-
-<i>[TDB](../tdb/index.html) is faster, more scalable and better supported
-than SDB.</i>
-
-## Status 
-
-As of June 2013 the Jena developers agreed to treat SDB as 
-being only maintained where possible. 
-See [Future of SDB](http://mail-archives.apache.org/mod_mbox/jena-users/201306.mbox/%3c51B1A7FB.4070601@apache.org%3e) thread on the mailing list.
-
-The developers intend to continue releasing SDB alongside other Jena
-components but it is not actively developed.  None of the developers
-use it within their organizations.
-
-SDB may be revived as a fully supported component if members of the
-community come forward to develop it.  The Jena team strongly recommends
-the use of [TDB](../tdb/) instead of SDB for all new development due to
-TDBs substantially better performance and scalability.
-
-## Documentation
-
--   [SDB Installation](installation.html)
--   [Quickstart](quickstart.html)
--   [Command line utilities](commands.html)
--   [Store Description format](store_description.html)
--   [Dataset And Model Descriptions](dataset_description.html)
--   [Use from Java](javaapi.html)
--   [Specialized configuration](configuration.html)
--   [Database Layouts](database_layouts.html)
--   [FAQ](faq.html)
--   [Fuseki Integration](fuseki_integration.html)
--   [Databases supported](databases_supported.html)
-
-## Downloads
-
-SDB is distributed from the Apache Jena project. See the
-[downloads page](/download/index.cgi) for details.
-
-## Support
-
-[Support and questions](/help_and_support)
-
-## Details
-
--   [Loading data](loading_data.html)
--   [Loading performance](loading_performance.html)
--   [Query performance](query_performance.html)
-
-## Database Notes
-
-List of [databases supported](databases_supported.html)
-
-Notes:
-
--   [PostgreSQL notes](db_notes.html#postgresql)
--   [MySQL notes](db_notes.html#mysql)
--   Oracle notes
--   [Microsoft SQL Server notes](db_notes.html#ms_sql)
--   [DB2 notes](db_notes.html#db2)
--   [Derby notes](db_notes.html#derby)
--   HSQLDB notes
--   H2 notes
+----
+> The Jena SDB module has been retired.<br/>
+> The last release of Jena with this module is Jena 3.17.0.<br/>
+> See [jena-sdb/README.md](https://github.com/apache/jena/tree/main/jena-sdb).<br/>
+> The [original documentation](sdb_index.html).
+----
diff --git a/source/documentation/sdb/sdb_index.md b/source/documentation/sdb/sdb_index.md
new file mode 100644
index 0000000..1dfeefd
--- /dev/null
+++ b/source/documentation/sdb/sdb_index.md
@@ -0,0 +1,89 @@
+---
+title: SDB - persistent triple stores using relational databases
+---
+
+SDB uses an SQL database for the storage and query of RDF data.
+Many databases are supported, both Open Source and proprietary.
+
+An SDB store can be accessed and managed with the provided command
+line scripts and via the Jena API.
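+
+For example, from Java a store described in a configuration file can be opened and used as an ordinary Jena model (a minimal sketch, assuming a store description file named `sdb.ttl`):
+
+    import org.apache.jena.rdf.model.Model;
+    import org.apache.jena.sdb.SDBFactory;
+    import org.apache.jena.sdb.Store;
+
+    // Connect to the store described by sdb.ttl and expose it as a Model
+    Store store = SDBFactory.connectStore("sdb.ttl");
+    Model model = SDBFactory.connectDefaultModel(store);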
+
+<blockquote>
+<i>
+Use of SDB for new applications is not recommended.
+</i>
+</blockquote>
+
+<i>This component is "maintenance only".</i>
+
+<i>[TDB](../tdb/index.html) is faster, more scalable and better supported
+than SDB.</i>
+
+## Status 
+
+As of June 2013 the Jena developers agreed to treat SDB as
+maintained only where possible.
+See the [Future of SDB](http://mail-archives.apache.org/mod_mbox/jena-users/201306.mbox/%3c51B1A7FB.4070601@apache.org%3e) thread on the mailing list.
+
+The developers intend to continue releasing SDB alongside other Jena
+components but it is not actively developed.  None of the developers
+use it within their organizations.
+
+SDB may be revived as a fully supported component if members of the
+community come forward to develop it.  The Jena team strongly recommends
+the use of [TDB](../tdb/) instead of SDB for all new development due to
+TDB's substantially better performance and scalability.
+
+## Documentation
+
+-   [SDB Installation](installation.html)
+-   [Quickstart](quickstart.html)
+-   [Command line utilities](commands.html)
+-   [Store Description format](store_description.html)
+-   [Dataset And Model Descriptions](dataset_description.html)
+-   [Use from Java](javaapi.html)
+-   [Specialized configuration](configuration.html)
+-   [Database Layouts](database_layouts.html)
+-   [FAQ](faq.html)
+-   [Fuseki Integration](fuseki_integration.html)
+-   [Databases supported](databases_supported.html)
+
+## Downloads
+
+SDB is distributed from the Apache Jena project. See the
+[downloads page](/download/index.cgi) for details.
+
+## Support
+
+[Support and questions](/help_and_support)
+
+## Details
+
+-   [Loading data](loading_data.html)
+-   [Loading performance](loading_performance.html)
+-   [Query performance](query_performance.html)
+
+## Database Notes
+
+List of [databases supported](databases_supported.html)
+
+Notes:
+
+-   [PostgreSQL notes](db_notes.html#postgresql)
+-   [MySQL notes](db_notes.html#mysql)
+-   Oracle notes
+-   [Microsoft SQL Server notes](db_notes.html#ms_sql)
+-   [DB2 notes](db_notes.html#db2)
+-   [Derby notes](db_notes.html#derby)
+-   HSQLDB notes
+-   H2 notes
diff --git a/source/download/maven.md b/source/download/maven.md
index e1e99e7..a134386 100644
--- a/source/download/maven.md
+++ b/source/download/maven.md
@@ -90,12 +90,6 @@
     <td>Jena as an OSGi bundle</td>
   </tr>
   <tr>
-    <td><code>jena-sdb</code></td>
-    <td><code>jar</code></td>
-    <td>SDB (SQL based triple store). SDB should only be used when there is an absolute requirement on
-      using SQL. TDB is to be preferred.</td>
-  </tr>
-  <tr>
     <td><code>jena</code></td>
     <td></td>
     <td>The formal released source-released for each Jena release. This is not a maven-runnable set of binary files</td>
@@ -112,11 +106,6 @@
     </td>
   </tr>
   <tr>
-    <td><code>jena-elephas</code></td>
-    <td><code>pom</code></td>
-    <td>A collection of tools for working with RDF on the Hadoop platform</td>
-  </tr>
-  <tr>
     <td><code>jena-fuseki-main</code></td>
     <td><code>war</code></td>
     <td>Fuseki packaged for embedding in an application.</td>