<!DOCTYPE html>
<!--[if IE]><![endif]-->
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>Namespace Lucene.Net.Analysis.Standard
| Apache Lucene.NET 4.8.0-beta00010 Documentation </title>
<meta name="viewport" content="width=device-width">
<meta name="title" content="Namespace Lucene.Net.Analysis.Standard
| Apache Lucene.NET 4.8.0-beta00010 Documentation ">
<meta name="generator" content="docfx 2.56.0.0">
<link rel="shortcut icon" href="https://lucenenet.apache.org/docs/4.8.0-beta00009/logo/favicon.ico">
<link rel="stylesheet" href="https://lucenenet.apache.org/docs/4.8.0-beta00009/styles/docfx.vendor.css">
<link rel="stylesheet" href="https://lucenenet.apache.org/docs/4.8.0-beta00009/styles/docfx.css">
<link rel="stylesheet" href="https://lucenenet.apache.org/docs/4.8.0-beta00009/styles/main.css">
<meta property="docfx:navrel" content="toc.html">
<meta property="docfx:tocrel" content="analysis-common/toc.html">
<meta property="docfx:rel" content="https://lucenenet.apache.org/docs/4.8.0-beta00009/">
</head>
<body data-spy="scroll" data-target="#affix" data-offset="120">
<div id="wrapper">
<header>
<nav id="autocollapse" class="navbar ng-scope" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navbar">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="/">
<img id="logo" class="svg" src="https://lucenenet.apache.org/docs/4.8.0-beta00009/logo/lucene-net-color.png" alt="">
</a>
</div>
<div class="collapse navbar-collapse" id="navbar">
<form class="navbar-form navbar-right" role="search" id="search">
<div class="form-group">
<input type="text" class="form-control" id="search-query" placeholder="Search" autocomplete="off">
</div>
</form>
</div>
</div>
</nav>
<div class="subnav navbar navbar-default">
<div class="container hide-when-search">
<ul class="level0 breadcrumb">
<li>
<a href="https://lucenenet.apache.org/docs/4.8.0-beta00009/">API</a>
<span id="breadcrumb">
<ul class="breadcrumb">
<li></li>
</ul>
</span>
</li>
</ul>
</div>
</div>
</header>
<div class="container body-content">
<div id="search-results">
<div class="search-list"></div>
<div class="sr-items">
<p><i class="glyphicon glyphicon-refresh index-loading"></i></p>
</div>
<ul id="pagination"></ul>
</div>
</div>
<div role="main" class="container body-content hide-when-search">
<div class="sidenav hide-when-search">
<a class="btn toc-toggle collapse" data-toggle="collapse" href="#sidetoggle" aria-expanded="false" aria-controls="sidetoggle">Show / Hide Table of Contents</a>
<div class="sidetoggle collapse" id="sidetoggle">
<div id="sidetoc"></div>
</div>
</div>
<div class="article row grid-right">
<div class="col-md-10">
<article class="content wrap" id="_content" data-uid="Lucene.Net.Analysis.Standard">
<h1 id="Lucene_Net_Analysis_Standard" data-uid="Lucene.Net.Analysis.Standard" class="text-break">Namespace Lucene.Net.Analysis.Standard
</h1>
<div class="markdown level0 summary"><!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p> Fast, general-purpose grammar-based tokenizers. </p>
<p>The <code>Lucene.Net.Analysis.Standard</code> namespace contains three fast grammar-based tokenizers constructed with JFlex:</p>
<ul>
<li><p><a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a>:
as of Lucene 3.1, implements the Word Break rules from the Unicode Text
Segmentation algorithm, as specified in
<a href="http://unicode.org/reports/tr29/">Unicode Standard Annex #29</a>.
Unlike <code>UAX29URLEmailTokenizer</code>, this tokenizer does <strong>not</strong>
keep URLs and email addresses as single tokens; they are instead split into
tokens according to the UAX#29 word break rules.</p>
<p><a class="xref" href="Lucene.Net.Analysis.Standard.StandardAnalyzer.html">StandardAnalyzer</a> includes
<a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a>,
<a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilter.html">StandardFilter</a>,
<a class="xref" href="Lucene.Net.Analysis.Core.LowerCaseFilter.html">LowerCaseFilter</a>
and <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a>.
When the <code>Version</code> specified in the constructor is lower than
3.1, the <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a>
implementation is invoked.</p>
</li>
<li><p><a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a>:
this class was formerly (prior to Lucene 3.1) named
<code>StandardTokenizer</code>. (Its tokenization rules are not
based on the Unicode Text Segmentation algorithm.)
<a class="xref" href="Lucene.Net.Analysis.Standard.ClassicAnalyzer.html">ClassicAnalyzer</a> includes
<a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a>,
<a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilter.html">StandardFilter</a>,
<a class="xref" href="Lucene.Net.Analysis.Core.LowerCaseFilter.html">LowerCaseFilter</a>
and <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a>.</p>
</li>
<li><p><a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer.html">UAX29URLEmailTokenizer</a>:
implements the Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
<a href="http://unicode.org/reports/tr29/">Unicode Standard Annex #29</a>.
URLs and email addresses are also tokenized according to the relevant RFCs.</p>
<p><a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer.html">UAX29URLEmailAnalyzer</a> includes
<a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer.html">UAX29URLEmailTokenizer</a>,
<a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilter.html">StandardFilter</a>,
<a class="xref" href="Lucene.Net.Analysis.Core.LowerCaseFilter.html">LowerCaseFilter</a>
and <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a>.</p></li>
</ul>
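<p>The contrast above can be sketched in C#. A minimal, illustrative example (the field name <code>"f"</code> and the sample text are placeholders; assumes the <code>Lucene.Net</code> and <code>Lucene.Net.Analysis.Common</code> NuGet packages are referenced):</p>
<pre><code>using System;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

// StandardAnalyzer splits a URL into word tokens per UAX#29;
// UAX29URLEmailAnalyzer would instead keep it as a single &lt;URL&gt; token.
Analyzer analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
using (TokenStream stream = analyzer.GetTokenStream("f", "See http://lucene.apache.org"))
{
    ICharTermAttribute term = stream.AddAttribute&lt;ICharTermAttribute&gt;();
    stream.Reset();
    while (stream.IncrementToken())
    {
        Console.WriteLine(term.ToString()); // lower-cased word tokens, stop words removed
    }
    stream.End();
}</code></pre>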
</div>
<div class="markdown level0 conceptual"></div>
<div class="markdown level0 remarks"></div>
<h3 id="classes">Classes
</h3>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.ClassicAnalyzer.html">ClassicAnalyzer</a></h4>
<section><p>Filters <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a> with <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicFilter.html">ClassicFilter</a>,
<a class="xref" href="Lucene.Net.Analysis.Core.LowerCaseFilter.html">LowerCaseFilter</a> and <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a>, using a list of
English stop words.</p>
<p>You must specify the required <span class="xref">Lucene.Net.Util.LuceneVersion</span>
compatibility when creating <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicAnalyzer.html">ClassicAnalyzer</a>:
<ul><li> As of 3.1, <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a> correctly handles Unicode 4.0
supplementary characters in stopwords</li><li> As of 2.9, <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a> preserves position
increments</li><li> As of 2.4, <span class="xref">Lucene.Net.Analysis.Token</span>s incorrectly identified as acronyms
are corrected (see <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>)</li></ul>
<a class="xref" href="Lucene.Net.Analysis.Standard.ClassicAnalyzer.html">ClassicAnalyzer</a> was named <a class="xref" href="Lucene.Net.Analysis.Standard.StandardAnalyzer.html">StandardAnalyzer</a> in Lucene versions prior to 3.1.
As of 3.1, <a class="xref" href="Lucene.Net.Analysis.Standard.StandardAnalyzer.html">StandardAnalyzer</a> implements Unicode text segmentation,
as specified by UAX#29.
</p>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.ClassicFilter.html">ClassicFilter</a></h4>
<section><p>Normalizes tokens extracted with <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a>. </p>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.ClassicFilterFactory.html">ClassicFilterFactory</a></h4>
<section><p>Factory for <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicFilter.html">ClassicFilter</a>.</p>
<pre><code>&lt;fieldType name=&quot;text_clssc&quot; class=&quot;solr.TextField&quot; positionIncrementGap=&quot;100&quot;>
&lt;analyzer>
&lt;tokenizer class=&quot;solr.ClassicTokenizerFactory&quot;/>
&lt;filter class=&quot;solr.ClassicFilterFactory&quot;/>
&lt;/analyzer>
&lt;/fieldType></code></pre>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a></h4>
<section><p>A grammar-based tokenizer constructed with JFlex (and then ported to .NET).</p>
<p> This should be a good tokenizer for most European-language documents:
<ul><li>Splits words at punctuation characters, removing punctuation. However, a
dot that&apos;s not followed by whitespace is considered part of a token.</li><li>Splits words at hyphens, unless there&apos;s a number in the token, in which case
the whole token is interpreted as a product number and is not split.</li><li>Recognizes email addresses and internet hostnames as one token.</li></ul>
</p>
<p>Many applications have specific tokenizer needs. If this tokenizer does
not suit your application, please consider copying this source code
directory to your project and maintaining your own grammar-based tokenizer.
<a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a> was named <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a> in Lucene versions prior to 3.1.
As of 3.1, <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a> implements Unicode text segmentation,
as specified by UAX#29.
</p>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizerFactory.html">ClassicTokenizerFactory</a></h4>
<section><p>Factory for <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a>.</p>
<pre><code>&lt;fieldType name=&quot;text_clssc&quot; class=&quot;solr.TextField&quot; positionIncrementGap=&quot;100&quot;>
&lt;analyzer>
&lt;tokenizer class=&quot;solr.ClassicTokenizerFactory&quot; maxTokenLength=&quot;120&quot;/>
&lt;/analyzer>
&lt;/fieldType></code></pre>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.StandardAnalyzer.html">StandardAnalyzer</a></h4>
<section><p>Filters <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a> with <a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilter.html">StandardFilter</a>,
<a class="xref" href="Lucene.Net.Analysis.Core.LowerCaseFilter.html">LowerCaseFilter</a> and <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a>, using a list of
English stop words.</p>
<p>You must specify the required <span class="xref">Lucene.Net.Util.LuceneVersion</span>
compatibility when creating <a class="xref" href="Lucene.Net.Analysis.Standard.StandardAnalyzer.html">StandardAnalyzer</a>:
<ul><li> As of 3.4, Hiragana and Han characters are no longer wrongly split
from their combining characters. If you use a previous version number,
you get the exact broken behavior for backwards compatibility.</li><li> As of 3.1, <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a> implements Unicode text segmentation,
and <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a> correctly handles Unicode 4.0 supplementary characters
in stopwords. <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a> and <a class="xref" href="Lucene.Net.Analysis.Standard.ClassicAnalyzer.html">ClassicAnalyzer</a>
are the pre-3.1 implementations of <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a> and
<a class="xref" href="Lucene.Net.Analysis.Standard.StandardAnalyzer.html">StandardAnalyzer</a>.</li><li> As of 2.9, <a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a> preserves position increments</li><li> As of 2.4, <span class="xref">Lucene.Net.Analysis.Token</span>s incorrectly identified as acronyms
are corrected (see <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>)</li></ul>
</p>
</section>
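<p>A short C# sketch of the version-compatibility contract described above (the version constants shown are illustrative; assumes the Lucene.Net packages are referenced):</p>
<pre><code>using Lucene.Net.Analysis.Standard;
using Lucene.Net.Util;

// Current behavior: UAX#29 segmentation plus the 3.4+ Hiragana/Han fix.
var current = new StandardAnalyzer(LuceneVersion.LUCENE_48);

// Requesting a pre-3.1 version reproduces the old ClassicTokenizer
// behavior, for backwards compatibility with existing indexes.
var legacy = new StandardAnalyzer(LuceneVersion.LUCENE_30);</code></pre>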
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilter.html">StandardFilter</a></h4>
<section><p>Normalizes tokens extracted with <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a>.</p>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilterFactory.html">StandardFilterFactory</a></h4>
<section><p>Factory for <a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilter.html">StandardFilter</a>.</p>
<pre><code>&lt;fieldType name=&quot;text_stndrd&quot; class=&quot;solr.TextField&quot; positionIncrementGap=&quot;100&quot;>
&lt;analyzer>
&lt;tokenizer class=&quot;solr.StandardTokenizerFactory&quot;/>
&lt;filter class=&quot;solr.StandardFilterFactory&quot;/>
&lt;/analyzer>
&lt;/fieldType></code></pre>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a></h4>
<section><p>A grammar-based tokenizer constructed with JFlex (and then ported to .NET).</p>
<p>As of Lucene version 3.1, this class implements the Word Break rules from the
Unicode Text Segmentation algorithm, as specified in
<a href="http://unicode.org/reports/tr29/">Unicode Standard Annex #29</a>.</p>
<p>Many applications have specific tokenizer needs. If this tokenizer does
not suit your application, please consider copying this source code
directory to your project and maintaining your own grammar-based tokenizer.</p>
<p>You must specify the required <span class="xref">Lucene.Net.Util.LuceneVersion</span>
compatibility when creating <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a>:
<ul><li> As of 3.4, Hiragana and Han characters are no longer wrongly split
from their combining characters. If you use a previous version number,
you get the exact broken behavior for backwards compatibility.</li><li> As of 3.1, StandardTokenizer implements Unicode text segmentation.
If you use a previous version number, you get the exact behavior of
<a class="xref" href="Lucene.Net.Analysis.Standard.ClassicTokenizer.html">ClassicTokenizer</a> for backwards compatibility.</li></ul>
</p>
</section>
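<p>For illustration, the tokenizer can also be driven directly rather than through an analyzer (the sample text is a placeholder; assumes the Lucene.Net.Analysis.Common package is referenced):</p>
<pre><code>using System;
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

var tokenizer = new StandardTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("Wi-Fi routers cost $49.99 at example.com"));
var term = tokenizer.AddAttribute&lt;ICharTermAttribute&gt;();
tokenizer.Reset();
while (tokenizer.IncrementToken())
{
    // Raw tokens only: StandardTokenizer does not lower-case or remove
    // stop words; later filters in an analyzer chain do that.
    Console.WriteLine(term.ToString());
}
tokenizer.End();
tokenizer.Dispose();</code></pre>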
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizerFactory.html">StandardTokenizerFactory</a></h4>
<section><p>Factory for <a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizer.html">StandardTokenizer</a>. </p>
<pre><code>&lt;fieldType name=&quot;text_stndrd&quot; class=&quot;solr.TextField&quot; positionIncrementGap=&quot;100&quot;>
&lt;analyzer>
&lt;tokenizer class=&quot;solr.StandardTokenizerFactory&quot; maxTokenLength=&quot;255&quot;/>
&lt;/analyzer>
&lt;/fieldType></code></pre>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizerImpl.html">StandardTokenizerImpl</a></h4>
<section><p>This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
<a href="http://unicode.org/reports/tr29/">Unicode Standard Annex #29</a>.</p>
<p>
Tokens produced are of the following types:
<ul><li>&lt;ALPHANUM&gt;: A sequence of alphabetic and numeric characters</li><li>&lt;NUM&gt;: A number</li><li>&lt;SOUTHEAST_ASIAN&gt;: A sequence of characters from South and Southeast
Asian languages, including Thai, Lao, Myanmar, and Khmer</li><li>&lt;IDEOGRAPHIC&gt;: A single CJKV ideographic character</li><li>&lt;HIRAGANA&gt;: A single hiragana character</li><li>&lt;KATAKANA&gt;: A sequence of katakana characters</li><li>&lt;HANGUL&gt;: A sequence of Hangul characters</li></ul></p>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.StandardTokenizerInterface.html">StandardTokenizerInterface</a></h4>
<section></section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer.html">UAX29URLEmailAnalyzer</a></h4>
<section><p>Filters <a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer.html">UAX29URLEmailTokenizer</a>
with <a class="xref" href="Lucene.Net.Analysis.Standard.StandardFilter.html">StandardFilter</a>,
<a class="xref" href="Lucene.Net.Analysis.Core.LowerCaseFilter.html">LowerCaseFilter</a> and
<a class="xref" href="Lucene.Net.Analysis.Core.StopFilter.html">StopFilter</a>, using a list of
English stop words.</p>
<p>
You must specify the required <span class="xref">Lucene.Net.Util.LuceneVersion</span>
compatibility when creating <a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer.html">UAX29URLEmailAnalyzer</a>
</p>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer.html">UAX29URLEmailTokenizer</a></h4>
<section><p>This class implements the Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
<a href="http://unicode.org/reports/tr29/">Unicode Standard Annex #29</a>.
URLs and email addresses are also tokenized according to the relevant RFCs.</p>
<p>
Tokens produced are of the following types:
<ul><li>&lt;ALPHANUM&gt;: A sequence of alphabetic and numeric characters</li><li>&lt;NUM&gt;: A number</li><li>&lt;URL&gt;: A URL</li><li>&lt;EMAIL&gt;: An email address</li><li>&lt;SOUTHEAST_ASIAN&gt;: A sequence of characters from South and Southeast
Asian languages, including Thai, Lao, Myanmar, and Khmer</li><li>&lt;IDEOGRAPHIC&gt;: A single CJKV ideographic character</li><li>&lt;HIRAGANA&gt;: A single hiragana character</li></ul>
<p>You must specify the required <span class="xref">Lucene.Net.Util.LuceneVersion</span>
compatibility when creating <a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer.html">UAX29URLEmailTokenizer</a>:
<ul><li> As of 3.4, Hiragana and Han characters are no longer wrongly split
from their combining characters. If you use a previous version number,
you get the exact broken behavior for backwards compatibility.</li></ul>
</p></p>
</section>
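<p>A minimal C# sketch showing these token types in action (the sample text is a placeholder; assumes the Lucene.Net.Analysis.Common package is referenced):</p>
<pre><code>using System;
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

var tokenizer = new UAX29URLEmailTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("mail admin@example.com or visit http://example.com"));
var term = tokenizer.AddAttribute&lt;ICharTermAttribute&gt;();
var type = tokenizer.AddAttribute&lt;ITypeAttribute&gt;();
tokenizer.Reset();
while (tokenizer.IncrementToken())
{
    // URLs and email addresses come back whole, typed &lt;URL&gt; / &lt;EMAIL&gt;.
    Console.WriteLine($"{term} [{type.Type}]");
}
tokenizer.End();
tokenizer.Dispose();</code></pre>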
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizerFactory.html">UAX29URLEmailTokenizerFactory</a></h4>
<section><p>Factory for <a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer.html">UAX29URLEmailTokenizer</a>. </p>
<pre><code>&lt;fieldType name=&quot;text_urlemail&quot; class=&quot;solr.TextField&quot; positionIncrementGap=&quot;100&quot;>
&lt;analyzer>
&lt;tokenizer class=&quot;solr.UAX29URLEmailTokenizerFactory&quot; maxTokenLength=&quot;255&quot;/>
&lt;/analyzer>
&lt;/fieldType></code></pre>
</section>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizerImpl.html">UAX29URLEmailTokenizerImpl</a></h4>
<section><p>This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
<a href="http://unicode.org/reports/tr29/">Unicode Standard Annex #29</a>.
URLs and email addresses are also tokenized according to the relevant RFCs.</p>
<p>
Tokens produced are of the following types:
<ul><li>&lt;ALPHANUM&gt;: A sequence of alphabetic and numeric characters</li><li>&lt;NUM&gt;: A number</li><li>&lt;URL&gt;: A URL</li><li>&lt;EMAIL&gt;: An email address</li><li>&lt;SOUTHEAST_ASIAN&gt;: A sequence of characters from South and Southeast
Asian languages, including Thai, Lao, Myanmar, and Khmer</li><li>&lt;IDEOGRAPHIC&gt;: A single CJKV ideographic character</li><li>&lt;HIRAGANA&gt;: A single hiragana character</li><li>&lt;KATAKANA&gt;: A sequence of katakana characters</li><li>&lt;HANGUL&gt;: A sequence of Hangul characters</li></ul></p>
</section>
<h3 id="interfaces">Interfaces
</h3>
<h4><a class="xref" href="Lucene.Net.Analysis.Standard.IStandardTokenizerInterface.html">IStandardTokenizerInterface</a></h4>
<section><p>Internal interface for supporting versioned grammars.</p>
<div class="lucene-block lucene-internal">This is a Lucene.NET INTERNAL API, use at your own risk</div>
</section>
</article>
</div>
<div class="hidden-sm col-md-2" role="complementary">
<div class="sideaffix">
<div class="contribution">
<ul class="nav">
<li>
<a href="https://github.com/apache/lucenenet/blob/docs/4.8.0-beta00010/src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md/#L2" class="contribution-link">Improve this Doc</a>
</li>
</ul>
</div>
<nav class="bs-docs-sidebar hidden-print hidden-xs hidden-sm affix" id="affix">
<!-- <p><a class="back-to-top" href="#top">Back to top</a><p> -->
</nav>
</div>
</div>
</div>
</div>
<footer>
<div class="grad-bottom"></div>
<div class="footer">
<div class="container">
<span class="pull-right">
<a href="#top">Back to top</a>
</span>
Copyright © 2020 Licensed to the Apache Software Foundation (ASF)
</div>
</div>
</footer>
</div>
<script type="text/javascript" src="https://lucenenet.apache.org/docs/4.8.0-beta00009/styles/docfx.vendor.js"></script>
<script type="text/javascript" src="https://lucenenet.apache.org/docs/4.8.0-beta00009/styles/docfx.js"></script>
<script type="text/javascript" src="https://lucenenet.apache.org/docs/4.8.0-beta00009/styles/main.js"></script>
</body>
</html>