= Language Analysis
:example-source-dir: {solr-root-path}core/src/test-files/solr/collection1/conf/
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
This section contains information about tokenizers and filters related to character set conversion or for use with specific languages.
For the European languages, tokenization is fairly straightforward. Tokens are delimited by white space and/or a relatively small set of punctuation characters.
In other languages the tokenization rules are often not so simple. Some European languages may also require special tokenization rules, such as rules for decompounding German words.
For information about language detection at index time, see <<detecting-languages-during-indexing.adoc#,Detecting Languages During Indexing>>.
== KeywordMarkerFilterFactory
Protects words from being modified by stemmers. A customized protected word list may be specified with the `protected` attribute in the schema. Any words in the protected word list will not be modified by any stemmer in Solr.
A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` <<config-sets.adoc#,configset>> directory:
[source,xml]
----
<fieldtype name="myfieldtype" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt" />
    <filter class="solr.PorterStemFilterFactory" />
  </analyzer>
</fieldtype>
----
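The protected word file itself is plain text with one word per line; lines beginning with `#` are comments. A minimal sketch (the entries below are illustrative, not taken from the shipped `protwords.txt`):

[source,text]
----
# These words are passed through the stemmer unchanged
dogs
skiing
----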
== KeywordRepeatFilterFactory
Emits each token twice: once with the `KEYWORD` attribute and once without.
If placed before a stemmer, the unstemmed token is preserved at the same position as the stemmed one. Queries matching the original exact term will get a better score while still maintaining the recall benefit of stemming. Another advantage of keeping the original token is that wildcard truncation will work as expected.
To configure, add the `KeywordRepeatFilterFactory` early in the analysis chain. It is recommended to also include `RemoveDuplicatesTokenFilterFactory` to avoid duplicates when tokens are not stemmed.
A sample fieldType configuration could look like this:
[source,xml]
----
<fieldtype name="english_stem_preserve_original" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.KeywordRepeatFilterFactory" />
    <filter class="solr.PorterStemFilterFactory" />
    <filter class="solr.RemoveDuplicatesTokenFilterFactory" />
  </analyzer>
</fieldtype>
----
IMPORTANT: When the same token is added twice, it will also be scored twice (double), so you may have to re-tune your ranking rules.
== StemmerOverrideFilterFactory
Overrides stemming algorithms by applying a custom mapping, then protecting these terms from being modified by stemmers.
A customized mapping of words to stems, in a tab-separated file, can be specified to the `dictionary` attribute in the schema. Words in this mapping will be stemmed to the stems from the file, and will not be further changed by any stemmer.
[source,xml]
----
<fieldtype name="myfieldtype" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StemmerOverrideFilterFactory" dictionary="stemdict.txt" />
    <filter class="solr.PorterStemFilterFactory" />
  </analyzer>
</fieldtype>
----
A sample `stemdict.txt` file is shown below:
[source,text]
----
include::{example-source-dir}stemdict.txt[lines=18..22]
----
If you have a checkout of Solr's source code locally, you can also find this example in Solr's test resources at `solr/core/src/test-files/solr/collection1/conf/stemdict.txt`.
== Dictionary Compound Word Token Filter
This filter splits, or _decompounds_, compound words into individual words using a dictionary of the component words. Each input token is passed through unchanged. If it can also be decompounded into subwords, each subword is also added to the stream at the same logical position.
Compound words are most commonly found in Germanic languages.
*Factory class:* `solr.DictionaryCompoundWordTokenFilterFactory`
*Arguments:*
`dictionary`:: (required) The path of a file that contains a list of simple words, one per line. Blank lines and lines that begin with "`#`" are ignored. See <<resource-loading.adoc#,Resource Loading>> for more information.
`minWordSize`:: (integer, default 5) Any token shorter than this is not decompounded.
`minSubwordSize`:: (integer, default 2) Subwords shorter than this are not emitted as tokens.
`maxSubwordSize`:: (integer, default 15) Subwords longer than this are not emitted as tokens.
`onlyLongestMatch`:: (true/false) If true (the default), only the longest matching subwords will generate new tokens.
*Example:*
Assume that `germanwords.txt` contains at least the following words: `dumm kopf donau dampf schiff`
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.DictionaryCompoundWordTokenFilterFactory" dictionary="germanwords.txt"/>
</analyzer>
----
*In:* "Donaudampfschiff dummkopf"
*Tokenizer to Filter:* "Donaudampfschiff"(1), "dummkopf"(2)
*Out:* "Donaudampfschiff"(1), "Donau"(1), "dampf"(1), "schiff"(1), "dummkopf"(2), "dumm"(2), "kopf"(2)
== Unicode Collation
Unicode Collation is a language-sensitive method of sorting text that can also be used for advanced search purposes.
Unicode Collation in Solr is fast, because all the work is done at index time.
Rather than specifying an analyzer within `<fieldtype ... class="solr.TextField">`, the `solr.CollationField` and `solr.ICUCollationField` field type classes provide this functionality. `solr.ICUCollationField`, which is backed by http://site.icu-project.org[the ICU4J library], provides more flexible configuration, has more locales, is significantly faster, and requires less memory and less index space, since its keys are smaller than those produced by the JDK implementation that backs `solr.CollationField`.
To use `solr.ICUCollationField`, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add.
`solr.ICUCollationField` and `solr.CollationField` fields can be created in two ways:
* Based upon a system collator associated with a Locale.
* Based upon a tailored `RuleBasedCollator` ruleset.
*Arguments for `solr.ICUCollationField`, specified as attributes within the `<fieldtype>` element:*
Using a System collator:
`locale`:: (required) http://www.rfc-editor.org/rfc/rfc3066.txt[RFC 3066] locale ID. See http://demo.icu-project.org/icu-bin/locexp[the ICU locale explorer] for a list of supported locales.
`strength`:: Valid values are `primary`, `secondary`, `tertiary`, `quaternary`, or `identical`. See http://userguide.icu-project.org/collation/concepts#TOC-Comparison-Levels[Comparison Levels in ICU Collation Concepts] for more information.
`decomposition`:: Valid values are `no` or `canonical`. See http://userguide.icu-project.org/collation/concepts#TOC-Normalization[Normalization in ICU Collation Concepts] for more information.
Using a Tailored ruleset:
`custom`:: (required) Path to a UTF-8 text file containing rules supported by the ICU http://icu-project.org/apiref/icu4j/com/ibm/icu/text/RuleBasedCollator.html[`RuleBasedCollator`]
`strength`:: Valid values are `primary`, `secondary`, `tertiary`, `quaternary`, or `identical`. See http://userguide.icu-project.org/collation/concepts#TOC-Comparison-Levels[Comparison Levels in ICU Collation Concepts] for more information.
`decomposition`:: Valid values are `no` or `canonical`. See http://userguide.icu-project.org/collation/concepts#TOC-Normalization[Normalization in ICU Collation Concepts] for more information.
Expert options:
`alternate`:: Valid values are `shifted` or `non-ignorable`. Can be used to ignore punctuation/whitespace.
`caseLevel`:: (true/false) If true, in combination with `strength="primary"`, accents are ignored but case is taken into account. The default is false. See http://userguide.icu-project.org/collation/concepts#TOC-CaseLevel[CaseLevel in ICU Collation Concepts] for more information.
`caseFirst`:: Valid values are `lower` or `upper`. Useful to control which is sorted first when case is not ignored.
`numeric`:: (true/false) If true, digits are sorted according to numeric value, e.g., foobar-9 sorts before foobar-10. The default is false.
`variableTop`:: Single character or contraction. Controls what is variable for `alternate`.
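For example, to sort codes containing embedded digit runs by numeric value (so `foobar-9` sorts before `foobar-10`), a field type might be defined as follows; the field type name and locale here are illustrative:

[source,xml]
----
<fieldType name="collatedNUMERIC" class="solr.ICUCollationField"
           locale="en"
           strength="primary"
           numeric="true" />
----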
=== Sorting Text for a Specific Language
In this example, text is sorted according to the default German rules provided by ICU4J.
Locales are typically defined as a combination of language and country, but you can specify just the language if you want. For example, if you specify "de" as the language, you will get sorting that works well for the German language. If you specify "de" as the language and "CH" as the country, you will get German sorting specifically tailored for Switzerland.
[source,xml]
----
<!-- Define a field type for German collation -->
<fieldType name="collatedGERMAN" class="solr.ICUCollationField"
           locale="de"
           strength="primary" />
...
<!-- Define a field to store the German collated manufacturer names. -->
<field name="manuGERMAN" type="collatedGERMAN" indexed="false" stored="false" docValues="true"/>
...
<!-- Copy the text to this field. We could create French, English, Spanish versions too,
and sort differently for different users! -->
<copyField source="manu" dest="manuGERMAN"/>
----
In the example above, we defined the strength as "primary". The strength of the collation determines how strict the sort order will be, but it also depends upon the language. For example, in English, "primary" strength ignores differences in case and accents.
Another example:
[source,xml]
----
<fieldType name="polishCaseInsensitive" class="solr.ICUCollationField"
           locale="pl_PL"
           strength="secondary" />
...
<field name="city" type="text_general" indexed="true" stored="true"/>
...
<field name="city_sort" type="polishCaseInsensitive" indexed="true" stored="false"/>
...
<copyField source="city" dest="city_sort"/>
----
The type will be used for the fields where the data contains Polish text. The "secondary" strength will ignore case differences, but, unlike "primary" strength, a letter with diacritic(s) will be sorted differently from the same base letter without diacritics.
An example using the "city_sort" field to sort:
[source,plain]
----
q=*:*&fl=city&sort=city_sort+asc
----
=== Sorting Text for Multiple Languages
There are two approaches to supporting multiple languages: if there is a small list of languages you wish to support, consider defining collated fields for each language and using `copyField`. However, adding a large number of sort fields can increase disk and indexing costs. An alternative approach is to use the Unicode `default` collator.
The Unicode `default` or `ROOT` locale has rules that are designed to work well for most languages. To use the `default` locale, simply define the locale as the empty string. This Unicode default sort is still significantly more advanced than the standard Solr sort.
[source,xml]
----
<fieldType name="collatedROOT" class="solr.ICUCollationField"
           locale=""
           strength="primary" />
----
=== Sorting Text with Custom Rules
You can define your own set of sorting rules. It's easiest to take existing rules that are close to what you want and customize them.
In the example below, we create a custom rule set for German called DIN 5007-2. This rule set treats umlauts in German differently: it treats ö as equivalent to oe, ä as equivalent to ae, and ü as equivalent to ue. For more information, see the http://icu-project.org/apiref/icu4j/com/ibm/icu/text/RuleBasedCollator.html[ICU RuleBasedCollator javadocs].
This example shows how to create a custom rule set for `solr.ICUCollationField` and dump it to a file:
[source,java]
----
// needs: com.ibm.icu.text.Collator, com.ibm.icu.text.RuleBasedCollator,
// com.ibm.icu.util.ULocale, java.io.*, org.apache.commons.io.IOUtils

// get the default rules for Germany
// these are called DIN 5007-1 sorting
RuleBasedCollator baseCollator = (RuleBasedCollator) Collator.getInstance(new ULocale("de", "DE"));

// define some tailorings, to make it DIN 5007-2 sorting.
// For example, this makes ö equivalent to oe
String DIN5007_2_tailorings =
    "& ae , a\u0308 & AE , A\u0308" +
    "& oe , o\u0308 & OE , O\u0308" +
    "& ue , u\u0308 & UE , U\u0308";

// concatenate the default rules to the tailorings, and dump it to a String
RuleBasedCollator tailoredCollator = new RuleBasedCollator(baseCollator.getRules() + DIN5007_2_tailorings);
String tailoredRules = tailoredCollator.getRules();

// write these to a file, be sure to use UTF-8 encoding!!!
FileOutputStream os = new FileOutputStream(new File("/solr_home/conf/customRules.dat"));
IOUtils.write(tailoredRules, os, "UTF-8");
os.close();
----
This rule set can now be used for custom collation in Solr:
[source,xml]
----
<fieldType name="collatedCUSTOM" class="solr.ICUCollationField"
           custom="customRules.dat"
           strength="primary" />
----
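As with the locale-based examples above, a sketch of wiring this type to a sortable copy of an existing field (the field names here are illustrative):

[source,xml]
----
<field name="manuCUSTOM" type="collatedCUSTOM" indexed="false" stored="false" docValues="true"/>
...
<copyField source="manu" dest="manuCUSTOM"/>
----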
=== JDK Collation
As mentioned above, ICU Unicode Collation is better in several ways than JDK Collation, but if you cannot use ICU4J for some reason, you can use `solr.CollationField`.
The principles of JDK Collation are the same as those of ICU Collation; you just specify `language`, `country` and `variant` arguments instead of the combined `locale` argument.
*Arguments for `solr.CollationField`, specified as attributes within the `<fieldtype>` element:*
Using a System collator (see http://www.oracle.com/technetwork/java/javase/java8locales-2095355.html[Oracle's list of locales supported in Java]):
`language`:: (required) http://www.loc.gov/standards/iso639-2/php/code_list.php[ISO-639] language code
`country`:: http://www.iso.org/iso/country_codes/iso_3166_code_lists/country_names_and_code_elements.htm[ISO-3166] country code
`variant`:: Vendor or browser-specific code
`strength`:: Valid values are `primary`, `secondary`, `tertiary` or `identical`. See {java-javadocs}java/text/Collator.html[Java Collator javadocs] for more information.
`decomposition`:: Valid values are `no`, `canonical`, or `full`. See {java-javadocs}java/text/Collator.html[Java Collator javadocs] for more information.
Using a Tailored ruleset:
`custom`:: (required) Path to a UTF-8 text file containing rules supported by the {java-javadocs}java/text/RuleBasedCollator.html[`JDK RuleBasedCollator`]
`strength`:: Valid values are `primary`, `secondary`, `tertiary` or `identical`. See {java-javadocs}java/text/Collator.html[Java Collator javadocs] for more information.
`decomposition`:: Valid values are `no`, `canonical`, or `full`. See {java-javadocs}java/text/Collator.html[Java Collator javadocs] for more information.
.A `solr.CollationField` example:
[source,xml]
----
<fieldType name="collatedGERMAN" class="solr.CollationField"
           language="de"
           country="DE"
           strength="primary" /> <!-- ignore Umlauts and letter case when sorting -->
...
<field name="manuGERMAN" type="collatedGERMAN" indexed="false" stored="false" docValues="true" />
...
<copyField source="manu" dest="manuGERMAN"/>
----
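A query sorting on this collated field then looks like the earlier `city_sort` example, for instance:

[source,plain]
----
q=*:*&fl=manu&sort=manuGERMAN+asc
----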
== ASCII & Decimal Folding Filters
=== ASCII Folding
This filter converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists. Only those characters with reasonable ASCII alternatives are converted.
This can increase recall by causing more matches. On the other hand, it can reduce precision because language-specific character differences may be lost.
*Factory class:* `solr.ASCIIFoldingFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.ASCIIFoldingFilterFactory"/>
</analyzer>
----
*In:* "Björn Ångström"
*Tokenizer to Filter:* "Björn", "Ångström"
*Out:* "Bjorn", "Angstrom"
=== Decimal Digit Folding
This filter converts any character in the Unicode "Decimal Number" general category (`Nd`) into its equivalent Basic Latin digit (0-9).
This can increase recall by causing more matches. On the other hand, it can reduce precision because language-specific character differences may be lost.
*Factory class:* `solr.DecimalDigitFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.DecimalDigitFilterFactory"/>
</analyzer>
----
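To illustrate the effect, here are a few sample inputs of our own (Persian, Arabic-Indic, and Devanagari digits) and their folded output:

[source,text]
----
In:  "۱۲۳" "١٢٣" "१२३"
Out: "123" "123" "123"
----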
== OpenNLP Integration
The `lucene/analysis/opennlp` module provides OpenNLP integration via several analysis components: a tokenizer, a part-of-speech tagging filter, a phrase chunking filter, and a lemmatization filter. In addition to these analysis components, Solr also provides an update request processor to extract named entities - see <<update-request-processors.adoc#update-processor-factories-that-can-be-loaded-as-plugins,Update Processor Factories That Can Be Loaded as Plugins>>.
NOTE: The <<OpenNLP Tokenizer>> must be used with all other OpenNLP analysis components, for two reasons: first, the OpenNLP Tokenizer detects and marks the sentence boundaries required by all the OpenNLP filters; and second, since the pre-trained OpenNLP models used by these filters were trained using the corresponding language-specific sentence-detection/tokenization models, the same tokenization, using the same models, must be used at runtime for optimal performance.
To use the OpenNLP components, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add.
=== OpenNLP Tokenizer
The OpenNLP Tokenizer takes two language-specific binary model files as parameters: a sentence detector model and a tokenizer model. The last token in each sentence is flagged, so that following OpenNLP-based filters can use this information to apply operations to tokens one sentence at a time. See the http://opennlp.apache.org/models.html[OpenNLP website] for information on downloading pre-trained models.
*Factory class:* `solr.OpenNLPTokenizerFactory`
*Arguments:*
`sentenceModel`:: (required) The path of a language-specific OpenNLP sentence detection model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
`tokenizerModel`:: (required) The path of a language-specific OpenNLP tokenization model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
*Example:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
</analyzer>
----
=== OpenNLP Part-Of-Speech Filter
This filter sets each token's type attribute to the part of speech (POS) assigned by the configured model. See the http://opennlp.apache.org/models.html[OpenNLP website] for information on downloading pre-trained models.
NOTE: Lucene currently does not index token types, so if you want to keep this information, you have to preserve it either in a payload or as a synonym; see the examples below.
*Factory class:* `solr.OpenNLPPOSFilterFactory`
*Arguments:*
`posTaggerModel`:: (required) The path of a language-specific OpenNLP POS tagger model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
*Examples:*
The OpenNLP tokenizer will tokenize punctuation, which is useful for following token filters, but ordinarily you don't want to include punctuation in your index, so the `TypeTokenFilter` (<<filter-descriptions.adoc#type-token-filter,described here>>) is included in the examples below, with `stop.pos.txt` containing the following:
.stop.pos.txt
[source,text]
----
#
$
''
``
,
-LRB-
-RRB-
:
.
----
Index the POS for each token as a payload:
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.TypeAsPayloadFilterFactory"/>
  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
</analyzer>
----
Index the POS for each token as a synonym, after prefixing the POS with "@" (see the <<filter-descriptions.adoc#type-as-synonym-filter,TypeAsSynonymFilter description>>):
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.TypeAsSynonymFilterFactory" prefix="@"/>
  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
</analyzer>
----
Only index nouns - the `keep.pos.txt` file contains lines `NN`, `NNS`, `NNP` and `NNPS`:
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.TypeTokenFilterFactory" types="keep.pos.txt" useWhitelist="true"/>
</analyzer>
----
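The corresponding `keep.pos.txt` file would then contain:

.keep.pos.txt
[source,text]
----
NN
NNS
NNP
NNPS
----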
=== OpenNLP Phrase Chunking Filter
This filter sets each token's type attribute based on the output of an OpenNLP phrase chunking model. The chunk labels replace the POS tags that previously were in each token's type attribute. See the http://opennlp.apache.org/models.html[OpenNLP website] for information on downloading pre-trained models.
Prerequisite: the <<OpenNLP Tokenizer>> and the <<OpenNLP Part-Of-Speech Filter>> must precede this filter.
NOTE: Lucene currently does not index token types, so if you want to keep this information, you have to preserve it either in a payload or as a synonym; see the examples below.
*Factory class:* `solr.OpenNLPChunkerFilterFactory`
*Arguments:*
`chunkerModel`:: (required) The path of a language-specific OpenNLP phrase chunker model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
*Examples*:
Index the phrase chunk label for each token as a payload:
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.OpenNLPChunkerFilterFactory" chunkerModel="en-chunker.bin"/>
  <filter class="solr.TypeAsPayloadFilterFactory"/>
</analyzer>
----
Index the phrase chunk label for each token as a synonym, after prefixing it with "#" (see the <<filter-descriptions.adoc#type-as-synonym-filter,TypeAsSynonymFilter description>>):
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.OpenNLPChunkerFilterFactory" chunkerModel="en-chunker.bin"/>
  <filter class="solr.TypeAsSynonymFilterFactory" prefix="#"/>
</analyzer>
----
=== OpenNLP Lemmatizer Filter
This filter replaces the text of each token with its lemma. Both a dictionary-based lemmatizer and a model-based lemmatizer are supported. If both are configured, the dictionary-based lemmatizer is tried first, and then the model-based lemmatizer is consulted for out-of-vocabulary tokens. See the http://opennlp.apache.org/models.html[OpenNLP website] for information on downloading pre-trained models.
*Factory class:* `solr.OpenNLPLemmatizerFilterFactory`
*Arguments:*
Either `dictionary` or `lemmatizerModel` must be provided, and both may be provided - see the examples below:
`dictionary`:: (optional) The path of a lemmatization dictionary file. See <<resource-loading.adoc#,Resource Loading>> for more information. The dictionary file must be encoded as UTF-8, with one entry per line, in the form `word[tab]lemma[tab]part-of-speech`, e.g., `wrote[tab]write[tab]VBD`.
`lemmatizerModel`:: (optional) The path of a language-specific OpenNLP lemmatizer model file. See <<resource-loading.adoc#,Resource Loading>> for more information.
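For reference, a dictionary file in this format might begin like this (the second entry is illustrative):

[source,text]
----
wrote	write	VBD
mice	mouse	NNS
----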
*Examples:*
Perform dictionary-based lemmatization, and fall back to model-based lemmatization for out-of-vocabulary tokens (see the <<OpenNLP Part-Of-Speech Filter>> section above for information about using `TypeTokenFilter` to avoid indexing punctuation):
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.OpenNLPLemmatizerFilterFactory"
          dictionary="lemmas.txt"
          lemmatizerModel="en-lemmatizer.bin"/>
  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
</analyzer>
----
Perform dictionary-based lemmatization only:
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.OpenNLPLemmatizerFilterFactory" dictionary="lemmas.txt"/>
  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
</analyzer>
----
Perform model-based lemmatization only, preserving the original token and emitting the lemma as a synonym (see the <<KeywordRepeatFilterFactory,KeywordRepeatFilterFactory description>>):
[source,xml]
----
<analyzer>
  <tokenizer class="solr.OpenNLPTokenizerFactory"
             sentenceModel="en-sent.bin"
             tokenizerModel="en-tokenizer.bin"/>
  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
  <filter class="solr.KeywordRepeatFilterFactory"/>
  <filter class="solr.OpenNLPLemmatizerFilterFactory" lemmatizerModel="en-lemmatizer.bin"/>
  <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
</analyzer>
----
== Language-Specific Factories
These factories are each designed to work with specific languages. The languages covered here are:
* <<Arabic>>
* <<Bengali>>
* <<Brazilian Portuguese>>
* <<Bulgarian>>
* <<Catalan>>
* <<Traditional Chinese>>
* <<Simplified Chinese>>
* <<Czech>>
* <<Danish>>
* <<Dutch>>
* <<Estonian>>
* <<Finnish>>
* <<French>>
* <<Galician>>
* <<German>>
* <<Greek>>
* <<hebrew-lao-myanmar-khmer,Hebrew, Lao, Myanmar, Khmer>>
* <<Hindi>>
* <<Indonesian>>
* <<Italian>>
* <<Irish>>
* <<Japanese>>
* <<Korean>>
* <<Latvian>>
* <<Norwegian>>
* <<Persian>>
* <<Polish>>
* <<Portuguese>>
* <<Romanian>>
* <<Russian>>
* <<Scandinavian>>
* <<Serbian>>
* <<Spanish>>
* <<Swedish>>
* <<Thai>>
* <<Turkish>>
* <<Ukrainian>>
=== Arabic
Solr provides support for the http://www.mtholyoke.edu/~lballest/Pubs/arab_stem05.pdf[Light-10] (PDF) stemming algorithm, and Lucene includes an example stopword list.
This algorithm defines both character normalization and stemming, so these are split into two filters to provide more flexibility.
*Factory classes:* `solr.ArabicStemFilterFactory`, `solr.ArabicNormalizationFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.ArabicNormalizationFilterFactory"/>
  <filter class="solr.ArabicStemFilterFactory"/>
</analyzer>
----
=== Bengali
There are two filters written specifically for dealing with the Bengali language. They use the Lucene classes `org.apache.lucene.analysis.bn.BengaliNormalizationFilter` and `org.apache.lucene.analysis.bn.BengaliStemFilter`.
*Factory classes:* `solr.BengaliStemFilterFactory`, `solr.BengaliNormalizationFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.BengaliNormalizationFilterFactory"/>
  <filter class="solr.BengaliStemFilterFactory"/>
</analyzer>
----
*Normalization* - `মানুষ` \-> `মানুস`
*Stemming* - `সমস্ত` \-> `সমস্`
=== Brazilian Portuguese
This is a Java filter written specifically for stemming the Brazilian dialect of the Portuguese language. It uses the Lucene class `org.apache.lucene.analysis.br.BrazilianStemmer`. Although that stemmer can be configured to use a list of protected words (which should not be stemmed), this factory does not accept any arguments to specify such a list.
*Factory class:* `solr.BrazilianStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer type="index">
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.BrazilianStemFilterFactory"/>
</analyzer>
----
*In:* "praia praias"
*Tokenizer to Filter:* "praia", "praias"
*Out:* "pra", "pra"
=== Bulgarian
Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/jacques.savoy/Papers/BUIR.pdf[this algorithm] (PDF), and Lucene includes an example stopword list.
*Factory class:* `solr.BulgarianStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.BulgarianStemFilterFactory"/>
</analyzer>
----
=== Catalan
Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `language="Catalan"`. Solr includes a set of contractions for Catalan, which can be stripped using `solr.ElisionFilterFactory`.
*Factory class:* `solr.SnowballPorterFilterFactory`
*Arguments:*
`language`:: (required) stemmer language, "Catalan" in this case
*Example:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.ElisionFilterFactory"
          articles="lang/contractions_ca.txt"/>
  <filter class="solr.SnowballPorterFilterFactory" language="Catalan" />
</analyzer>
----
*In:* "llengües llengua"
*Tokenizer to Filter:* "llengües"(1), "llengua"(2)
*Out:* "llengu"(1), "llengu"(2)
=== Traditional Chinese
The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is suitable for Traditional Chinese text. It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add.
<<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> can also be used to tokenize Traditional Chinese text. Following the Word Break rules from the Unicode Text Segmentation algorithm, it produces one token per Chinese character. When combined with <<CJK Bigram Filter>>, overlapping bigrams of Chinese characters are formed.
<<CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms.
*Examples:*
[source,xml]
----
<analyzer>
  <tokenizer class="solr.ICUTokenizerFactory"/>
  <filter class="solr.CJKWidthFilterFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.CJKBigramFilterFactory"/>
  <filter class="solr.CJKWidthFilterFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----
=== CJK Bigram Filter
Forms bigrams (overlapping 2-character sequences) of CJK characters that are generated from <<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> or <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>>.
By default, all CJK characters produce bigrams, but finer grained control is available by specifying orthographic type arguments `han`, `hiragana`, `katakana`, and `hangul`. When set to `false`, characters of the corresponding type will be passed through as unigrams, and will not be included in any bigrams.
When a CJK character has no adjacent characters to form a bigram, it is output in unigram form. If you want to always output both unigrams and bigrams, set the `outputUnigrams` argument to `true`.
In all cases, all non-CJK input is passed through unmodified.
*Arguments:*
`han`:: (true/false) If false, Han (Chinese) characters will not form bigrams. Default is true.
`hiragana`:: (true/false) If false, Hiragana (Japanese) characters will not form bigrams. Default is true.
`katakana`:: (true/false) If false, Katakana (Japanese) characters will not form bigrams. Default is true.
`hangul`:: (true/false) If false, Hangul (Korean) characters will not form bigrams. Default is true.
`outputUnigrams`:: (true/false) If true, in addition to forming bigrams, all characters are also passed through as unigrams. Default is false.
See the example under <<Traditional Chinese>>.
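The arguments above can also be combined in a single analyzer. As a sketch (the settings shown are for illustration, not a recommended default), a configuration that forms bigrams only for Han characters while also emitting unigrams might look like:

[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <!-- Bigram only Han characters; Hiragana, Katakana and Hangul pass
       through as unigrams. outputUnigrams="true" additionally emits
       each Han character as a unigram alongside its bigrams. -->
  <filter class="solr.CJKBigramFilterFactory"
          han="true" hiragana="false" katakana="false" hangul="false"
          outputUnigrams="true"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----

Indexing both unigrams and bigrams increases index size, but can improve recall for single-character queries.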
=== Simplified Chinese
For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<HMM Chinese Tokenizer>>. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add.
The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is also suitable for Simplified Chinese text. It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add.
Also useful for Chinese analysis:
<<CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
*Examples:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.HMMChineseTokenizerFactory"/>
<filter class="solr.CJKWidthFilterFactory"/>
<filter class="solr.StopFilterFactory"
words="org/apache/lucene/analysis/cn/smart/stopwords.txt"/>
<filter class="solr.PorterStemFilterFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer>
<tokenizer class="solr.ICUTokenizerFactory"/>
<filter class="solr.CJKWidthFilterFactory"/>
<filter class="solr.StopFilterFactory"
words="org/apache/lucene/analysis/cn/smart/stopwords.txt"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----
=== HMM Chinese Tokenizer
For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the `solr.HMMChineseTokenizerFactory` in the `analysis-extras` contrib module. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add.
*Factory class:* `solr.HMMChineseTokenizerFactory`
*Arguments:* None
*Examples:*
To use the default setup with fallback to English Porter stemmer for English words, use:
`<analyzer class="org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer"/>`
Or to configure your own analysis setup, use the `solr.HMMChineseTokenizerFactory` along with your custom filter setup. See an example of this in the <<Simplified Chinese>> section.
=== Czech
Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.cfm?id=1598600[this algorithm], and Lucene includes an example stopword list.
*Factory class:* `solr.CzechStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.CzechStemFilterFactory"/>
</analyzer>
----
*In:* "prezidenští, prezidenta, prezidentského"
*Tokenizer to Filter:* "prezidenští", "prezidenta", "prezidentského"
*Out:* "preziden", "preziden", "preziden"
=== Danish
Solr can stem Danish using the Snowball Porter Stemmer with an argument of `language="Danish"`.
Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
*Factory class:* `solr.SnowballPorterFilterFactory`
*Arguments:*
`language`:: (required) stemmer language, "Danish" in this case
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="Danish" />
</analyzer>
----
*In:* "undersøg undersøgelse"
*Tokenizer to Filter:* "undersøg"(1), "undersøgelse"(2)
*Out:* "undersøg"(1), "undersøg"(2)
=== Dutch
Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `language="Dutch"`.
*Factory class:* `solr.SnowballPorterFilterFactory`
*Arguments:*
`language`:: (required) stemmer language, "Dutch" in this case
*Example:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="Dutch"/>
</analyzer>
----
*In:* "kanaal kanalen"
*Tokenizer to Filter:* "kanaal", "kanalen"
*Out:* "kanal", "kanal"
=== Estonian
Solr can stem Estonian using the Snowball Porter Stemmer with an argument of `language="Estonian"`.
*Factory class:* `solr.SnowballPorterFilterFactory`
*Arguments:*
`language`:: (required) stemmer language, "Estonian" in this case
*Example:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="Estonian"/>
</analyzer>
----
*In:* "Taevani tõustes"
*Tokenizer to Filter:* "Taevani", "tõustes"
*Out:* "taevani", "tõus"
=== Finnish
Solr includes support for stemming Finnish, and Lucene includes an example stopword list.
*Factory class:* `solr.FinnishLightStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.FinnishLightStemFilterFactory"/>
</analyzer>
----
*In:* "kala kalat"
*Tokenizer to Filter:* "kala", "kalat"
*Out:* "kala", "kala"
=== French
==== Elision Filter
Removes article elisions from a token stream. This filter can be useful for languages such as French, Catalan, Italian, and Irish.
*Factory class:* `solr.ElisionFilterFactory`
*Arguments:*
`articles`:: The pathname of a file that contains a list of articles, one per line, to be stripped. Articles are words such as "le", which are commonly abbreviated, such as in _l'avion_ (the plane). This file should include the abbreviated form, which precedes the apostrophe. In this case, simply "_l_". If no `articles` attribute is specified, a default set of French articles is used.
`ignoreCase`:: (boolean) If true, the filter ignores the case of words when comparing them to the common word file. Defaults to `false`.
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.ElisionFilterFactory"
ignoreCase="true"
articles="lang/contractions_fr.txt"/>
</analyzer>
----
*In:* "L'histoire d'art"
*Tokenizer to Filter:* "L'histoire", "d'art"
*Out:* "histoire", "art"
==== French Light Stem Filter
Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFactory`, a lighter stemmer called `solr.FrenchLightStemFilterFactory`, and an even less aggressive stemmer called `solr.FrenchMinimalStemFilterFactory`. Lucene includes an example stopword list.
*Factory classes:* `solr.FrenchLightStemFilterFactory`, `solr.FrenchMinimalStemFilterFactory`
*Arguments:* None
*Examples:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.ElisionFilterFactory"
articles="lang/contractions_fr.txt"/>
<filter class="solr.FrenchLightStemFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.ElisionFilterFactory"
articles="lang/contractions_fr.txt"/>
<filter class="solr.FrenchMinimalStemFilterFactory"/>
</analyzer>
----
*In:* "le chat, les chats"
*Tokenizer to Filter:* "le", "chat", "les", "chats"
*Out:* "le", "chat", "le", "chat"
=== Galician
Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua/stemming.jsp[this algorithm], and Lucene includes an example stopword list.
*Factory class:* `solr.GalicianStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.GalicianStemFilterFactory"/>
</analyzer>
----
*In:* "felizmente Luzes"
*Tokenizer to Filter:* "felizmente", "luzes"
*Out:* "feliz", "luz"
=== German
Solr includes four stemmers for German: one in the `solr.SnowballPorterFilterFactory language="German"`, a stemmer called `solr.GermanStemFilterFactory`, a lighter stemmer called `solr.GermanLightStemFilterFactory`, and an even less aggressive stemmer called `solr.GermanMinimalStemFilterFactory`. Lucene includes an example stopword list.
*Factory classes:* `solr.GermanStemFilterFactory`, `solr.GermanLightStemFilterFactory`, `solr.GermanMinimalStemFilterFactory`
*Arguments:* None
*Examples:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.GermanStemFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.GermanLightStemFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.GermanMinimalStemFilterFactory"/>
</analyzer>
----
*In:* "haus häuser"
*Tokenizer to Filter:* "haus", "häuser"
*Out:* "haus", "haus"
=== Greek
This filter converts uppercase letters in the Greek character set to the equivalent lowercase character.
*Factory class:* `solr.GreekLowerCaseFilterFactory`
*Arguments:* None
[IMPORTANT]
====
Use of custom charsets is no longer supported as of Solr 3.1. If you need to index text in these encodings, please use Java's character set conversion facilities (InputStreamReader, etc.) during I/O, so that Lucene can analyze this text as Unicode instead.
====
*Example:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.GreekLowerCaseFilterFactory"/>
</analyzer>
----
=== Hindi
Solr includes support for stemming Hindi following http://computing.open.ac.uk/Sites/EACLSouthAsia/Papers/p6-Ramanathan.pdf[this algorithm] (PDF), support for common spelling differences through the `solr.HindiNormalizationFilterFactory`, support for encoding differences through the `solr.IndicNormalizationFilterFactory` following http://ldc.upenn.edu/myl/IndianScriptsUnicode.html[this algorithm], and Lucene includes an example stopword list.
*Factory classes:* `solr.IndicNormalizationFilterFactory`, `solr.HindiNormalizationFilterFactory`, `solr.HindiStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.IndicNormalizationFilterFactory"/>
<filter class="solr.HindiNormalizationFilterFactory"/>
<filter class="solr.HindiStemFilterFactory"/>
</analyzer>
----
=== Indonesian
Solr includes support for stemming Indonesian (Bahasa Indonesia) following http://www.illc.uva.nl/Publications/ResearchReports/MoL-2003-02.text.pdf[this algorithm] (PDF), and Lucene includes an example stopword list.
*Factory class:* `solr.IndonesianStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.IndonesianStemFilterFactory" stemDerivational="true" />
</analyzer>
----
*In:* "sebagai sebagainya"
*Tokenizer to Filter:* "sebagai", "sebagainya"
*Out:* "bagai", "bagai"
=== Italian
Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFactory language="Italian"`, and a lighter stemmer called `solr.ItalianLightStemFilterFactory`. Lucene includes an example stopword list.
*Factory class:* `solr.ItalianLightStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.ElisionFilterFactory"
articles="lang/contractions_it.txt"/>
<filter class="solr.ItalianLightStemFilterFactory"/>
</analyzer>
----
*In:* "propaga propagare propagamento"
*Tokenizer to Filter:* "propaga", "propagare", "propagamento"
*Out:* "propag", "propag", "propag"
=== Irish
Solr can stem Irish using the Snowball Porter Stemmer with an argument of `language="Irish"`. Solr includes `solr.IrishLowerCaseFilterFactory`, which can handle Irish-specific constructs. Solr also includes a set of contractions for Irish which can be stripped using `solr.ElisionFilterFactory`.
*Factory class:* `solr.SnowballPorterFilterFactory`
*Arguments:*
`language`:: (required) stemmer language, "Irish" in this case
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.ElisionFilterFactory"
articles="lang/contractions_ga.txt"/>
<filter class="solr.IrishLowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="Irish" />
</analyzer>
----
*In:* "siopadóireacht síceapatacha b'fhearr m'athair"
*Tokenizer to Filter:* "siopadóireacht", "síceapatacha", "b'fhearr", "m'athair"
*Out:* "siopadóir", "síceapaite", "fearr", "athair"
=== Japanese
Solr includes support for analyzing Japanese, via the Lucene Kuromoji morphological analyzer, which includes several analysis components, described in more detail below:
* <<Japanese Iteration Mark CharFilter,`JapaneseIterationMarkCharFilter`>> normalizes Japanese horizontal iteration marks (odoriji) to their expanded form.
* <<Japanese Tokenizer,`JapaneseTokenizer`>> tokenizes Japanese using morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
* <<Japanese Base Form Filter,`JapaneseBaseFormFilter`>> replaces original terms with their base forms (a.k.a. lemmas).
* <<Japanese Part Of Speech Stop Filter,`JapanesePartOfSpeechStopFilter`>> removes terms that have one of the configured parts-of-speech.
* <<Japanese Katakana Stem Filter,`JapaneseKatakanaStemFilter`>> normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
Also useful for Japanese analysis, from lucene-analyzers-common:
* <<CJK Width Filter,`CJKWidthFilter`>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
==== Japanese Iteration Mark CharFilter
Normalizes horizontal Japanese iteration marks (odoriji) to their expanded form. Vertical iteration marks are not supported.
*Factory class:* `JapaneseIterationMarkCharFilterFactory`
*Arguments:*
`normalizeKanji`:: set to `false` to not normalize kanji iteration marks (default is `true`)
`normalizeKana`:: set to `false` to not normalize kana iteration marks (default is `true`)
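A minimal analyzer applying this char filter ahead of the tokenizer might be configured as in the following sketch (the arguments shown are simply the defaults made explicit):

[source,xml]
----
<analyzer>
  <!-- Expand horizontal iteration marks (odoriji) before tokenization -->
  <charFilter class="solr.JapaneseIterationMarkCharFilterFactory"
              normalizeKanji="true" normalizeKana="true"/>
  <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
</analyzer>
----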
==== Japanese Tokenizer
Tokenizer for Japanese that uses morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
`JapaneseTokenizer` has a `search` mode (the default) that does segmentation useful for search: a heuristic is used to segment compound terms into their constituent parts while also keeping the original compound terms as synonyms.
*Factory class:* `solr.JapaneseTokenizerFactory`
*Arguments:*
`mode`:: Use `search` mode to get a noun-decompounding effect useful for search. `search` mode improves segmentation for search at the expense of part-of-speech accuracy. Valid values for `mode` are:
+
* `normal`: default segmentation
* `search`: segmentation useful for search (extra compound splitting)
* `extended`: search mode plus unigramming of unknown words (experimental)
+
For some applications it might be good to use `search` mode for indexing and `normal` mode for queries to increase precision and prevent parts of compounds from being matched and highlighted.
`userDictionary`:: filename for a user dictionary, which allows overriding the statistical model with your own entries for segmentation, part-of-speech tags and readings without a need to specify weights. See `lang/userdict_ja.txt` for a sample user dictionary file.
`userDictionaryEncoding`:: user dictionary encoding (default is UTF-8)
`discardPunctuation`:: set to `false` to keep punctuation, `true` to discard (the default)
`discardCompoundToken`:: set to `false` to keep original compound tokens with the `search` mode, `true` to discard.
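The index/query split suggested above (`search` mode at index time, `normal` mode at query time) could be sketched as follows; the field type name and filter selection are illustrative:

[source,xml]
----
<fieldType name="text_ja_split" class="solr.TextField" positionIncrementGap="100">
  <!-- Index with extra compound splitting for better recall... -->
  <analyzer type="index">
    <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
    <filter class="solr.JapaneseBaseFormFilterFactory"/>
  </analyzer>
  <!-- ...but query with default segmentation for higher precision -->
  <analyzer type="query">
    <tokenizer class="solr.JapaneseTokenizerFactory" mode="normal"/>
    <filter class="solr.JapaneseBaseFormFilterFactory"/>
  </analyzer>
</fieldType>
----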
==== Japanese Base Form Filter
Replaces original terms' text with the corresponding base form (lemma). (`JapaneseTokenizer` annotates each term with its base form.)
*Factory class:* `JapaneseBaseFormFilterFactory`
(no arguments)
==== Japanese Part Of Speech Stop Filter
Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` annotates terms with parts-of-speech.
*Factory class:* `JapanesePartOfSpeechStopFilterFactory`
*Arguments:*
`tags`:: filename for a list of parts-of-speech for which to remove terms; see `conf/lang/stoptags_ja.txt` in the `sample_techproducts_configs` <<config-sets.adoc#,configset>> for an example.
`enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens. *This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
==== Japanese Katakana Stem Filter
Normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
<<CJK Width Filter,`solr.CJKWidthFilterFactory`>> should be specified prior to this filter to normalize half-width katakana to full-width.
*Factory class:* `JapaneseKatakanaStemFilterFactory`
*Arguments:*
`minimumLength`:: terms below this length will not be stemmed. Default is 4, value must be 2 or more.
==== CJK Width Filter
Folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
*Factory class:* `CJKWidthFilterFactory`
(no arguments)
Example:
[source,xml]
----
<fieldType name="text_ja" positionIncrementGap="100" autoGeneratePhraseQueries="false">
<analyzer>
<!-- Uncomment if you need to handle iteration marks: -->
<!-- <charFilter class="solr.JapaneseIterationMarkCharFilterFactory" /> -->
<tokenizer class="solr.JapaneseTokenizerFactory" mode="search" userDictionary="lang/userdict_ja.txt"/>
<filter class="solr.JapaneseBaseFormFilterFactory"/>
<filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt"/>
<filter class="solr.CJKWidthFilterFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt"/>
<filter class="solr.JapaneseKatakanaStemFilterFactory" minimumLength="4"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
----
=== Korean
The Korean (nori) analyzer integrates Lucene's nori analysis module into Solr.
It uses the https://bitbucket.org/eunjeon/mecab-ko-dic[mecab-ko-dic] dictionary to perform morphological analysis of Korean texts.
The dictionary was built with http://taku910.github.io/mecab/[MeCab] and defines a format for the features adapted for the Korean language.
Nori also has a user dictionary feature that allows overriding the statistical model with your own entries for segmentation, part-of-speech tags, and readings without a need to specify weights.
*Example*:
[source,xml]
----
<fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.KoreanTokenizerFactory" decompoundMode="discard" outputUnknownUnigrams="false"/>
<filter class="solr.KoreanPartOfSpeechStopFilterFactory" />
<filter class="solr.KoreanReadingFormFilterFactory" />
<filter class="solr.LowerCaseFilterFactory" />
</analyzer>
</fieldType>
----
==== Korean Tokenizer
*Factory class*: `solr.KoreanTokenizerFactory`
*Arguments*:
`userDictionary`::
Path to a user-supplied dictionary to add custom nouns or compound terms to the default dictionary.
`userDictionaryEncoding`::
Character encoding of the user dictionary.
`decompoundMode`::
Defines how to handle compound tokens. The options are:
* `none`: No decomposition for tokens.
* `discard`: (default) Tokens are decomposed and the original form is discarded.
* `mixed`: Tokens are decomposed and the original form is retained.
`outputUnknownUnigrams`::
If `true`, unigrams will be output for unknown words.
The default is `false`.
`discardPunctuation`::
If `true`, the default, punctuation will be discarded.
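As a sketch of these arguments (values chosen for illustration, not as recommendations), an analyzer that keeps each original compound token alongside its decomposed parts could look like:

[source,xml]
----
<analyzer>
  <!-- "mixed" retains the original compound token in addition to its parts;
       outputUnknownUnigrams="true" emits unigrams for out-of-dictionary words -->
  <tokenizer class="solr.KoreanTokenizerFactory"
             decompoundMode="mixed"
             outputUnknownUnigrams="true"
             discardPunctuation="true"/>
  <filter class="solr.KoreanPartOfSpeechStopFilterFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----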
==== Korean Part of Speech Stop Filter
This filter removes tokens that match parts of speech tags.
*Factory class*: `solr.KoreanPartOfSpeechStopFilterFactory`
*Arguments*: None.
==== Korean Reading Form Filter
This filter replaces term text with the Reading Attribute, the Hangul transcription of Hanja characters.
*Factory class*: `solr.KoreanReadingFormFilterFactory`
*Arguments*: None.
[[hebrew-lao-myanmar-khmer]]
=== Hebrew, Lao, Myanmar, Khmer
Lucene provides support, in addition to UAX#29 word break rules, for Hebrew's use of the double and single quote characters, and for segmenting Lao, Myanmar, and Khmer into syllables with the `solr.ICUTokenizerFactory` in the `analysis-extras` contrib module. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add.
See <<tokenizers.adoc#icu-tokenizer,the ICUTokenizer>> for more information.
=== Latvian
Solr includes support for stemming Latvian, and Lucene includes an example stopword list.
*Factory class:* `solr.LatvianStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<fieldType name="text_lvstem" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.LatvianStemFilterFactory"/>
</analyzer>
</fieldType>
----
*In:* "tirgiem tirgus"
*Tokenizer to Filter:* "tirgiem", "tirgus"
*Out:* "tirg", "tirg"
=== Norwegian
Solr includes two classes for stemming Norwegian, `NorwegianLightStemFilterFactory` and `NorwegianMinimalStemFilterFactory`. Lucene includes an example stopword list.
Another option is to use the Snowball Porter Stemmer with an argument of `language="Norwegian"`.
Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
==== Norwegian Light Stemmer
The `NorwegianLightStemFilterFactory` requires a "two-pass" sort for the -dom and -het endings. This means that in the first pass the word "kristendom" is stemmed to "kristen", and then all the general rules apply so it will be further stemmed to "krist". The effect of this is that "kristen," "kristendom," "kristendommen," and "kristendommens" will all be stemmed to "krist."
The second pass is to pick up -dom and -het endings. Consider this example:
[width="100%",options="header",]
|===
2+^|*One pass* 2+^|*Two passes*
|*Before* |*After* |*Before* |*After*
|forlegen |forleg |forlegen |forleg
|forlegenhet |forlegen |forlegenhet |forleg
|forlegenheten |forlegen |forlegenheten |forleg
|forlegenhetens |forlegen |forlegenhetens |forleg
|firkantet |firkant |firkantet |firkant
|firkantethet |firkantet |firkantethet |firkant
|firkantetheten |firkantet |firkantetheten |firkant
|===
*Factory class:* `solr.NorwegianLightStemFilterFactory`
*Arguments:*
`variant`:: Choose the Norwegian language variant to use. Valid values are:
+
* `nb:` Bokmål (default)
* `nn:` Nynorsk
* `no:` both
*Example:*
[source,xml]
----
<fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball"/>
<filter class="solr.NorwegianLightStemFilterFactory"/>
</analyzer>
</fieldType>
----
*In:* "Forelskelsen"
*Tokenizer to Filter:* "forelskelsen"
*Out:* "forelske"
==== Norwegian Minimal Stemmer
The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns only.
*Factory class:* `solr.NorwegianMinimalStemFilterFactory`
*Arguments:*
`variant`:: Choose the Norwegian language variant to use. Valid values are:
+
* `nb:` Bokmål (default)
* `nn:` Nynorsk
* `no:` both
*Example:*
[source,xml]
----
<fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball"/>
<filter class="solr.NorwegianMinimalStemFilterFactory"/>
</analyzer>
</fieldType>
----
*In:* "Bilens"
*Tokenizer to Filter:* "bilens"
*Out:* "bil"
=== Persian
==== Persian Filter Factories
Solr includes support for normalizing Persian, and Lucene includes an example stopword list.
*Factory class:* `solr.PersianNormalizationFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.ArabicNormalizationFilterFactory"/>
<filter class="solr.PersianNormalizationFilterFactory"/>
</analyzer>
----
=== Polish
Solr provides support for Polish stemming with the `solr.StempelPolishStemFilterFactory`, and lemmatization with the `solr.MorfologikFilterFactory`, in the `contrib/analysis-extras` module. The `solr.StempelPolishStemFilterFactory` component includes an algorithmic stemmer with tables for Polish. To use either of these filters, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add.
*Factory classes:* `solr.StempelPolishStemFilterFactory` and `solr.MorfologikFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.StempelPolishStemFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.MorfologikFilterFactory" dictionary="morfologik/stemming/polish/polish.dict"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----
*In:* "studenta studenci"
*Tokenizer to Filter:* "studenta", "studenci"
*Out:* "student", "student"
More information about the Stempel stemmer is available in {lucene-javadocs}/analyzers-stempel/index.html[the Lucene javadocs].
Note the lower case filter is applied _after_ the Morfologik stemmer; this is because the Polish dictionary contains proper names, so correct term case may be important for resolving ambiguities (or even for looking up the correct lemma at all).
The Morfologik dictionary parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
=== Portuguese
Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilterFactory`, an alternative stemmer called `solr.PortugueseStemFilterFactory`, a lighter stemmer called `solr.PortugueseLightStemFilterFactory`, and an even less aggressive stemmer called `solr.PortugueseMinimalStemFilterFactory`. Lucene includes an example stopword list.
*Factory classes:* `solr.PortugueseStemFilterFactory`, `solr.PortugueseLightStemFilterFactory`, `solr.PortugueseMinimalStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.PortugueseStemFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.PortugueseLightStemFilterFactory"/>
</analyzer>
----
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.PortugueseMinimalStemFilterFactory"/>
</analyzer>
----
*In:* "praia praias"
*Tokenizer to Filter:* "praia", "praias"
*Out:* "pra", "pra"
=== Romanian
Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `language="Romanian"`.
*Factory class:* `solr.SnowballPorterFilterFactory`
*Arguments:*
`language`:: (required) stemmer language, "Romanian" in this case
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="Romanian" />
</analyzer>
----
=== Russian
==== Russian Stem Filter
Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFactory language="Russian"`, and a lighter stemmer called `solr.RussianLightStemFilterFactory`. Lucene includes an example stopword list.
*Factory class:* `solr.RussianLightStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.RussianLightStemFilterFactory"/>
</analyzer>
----
=== Scandinavian
Scandinavian is a language group spanning three very similar languages: <<Norwegian>>, <<Swedish>>, and <<Danish>>.
Swedish å, ä, ö are in fact the same letters as Norwegian and Danish å, æ, ø and thus interchangeable when used between these languages. They are however folded differently when people type them on a keyboard lacking these characters.
In that situation almost all Swedish people use a, a, o instead of å, ä, ö. Norwegians and Danes, on the other hand, usually type aa, ae and oe instead of å, æ and ø. Some, however, use a, a, o, oo, ao, and sometimes permutations of everything above.
There are two filters to help with normalization between Scandinavian languages: `solr.ScandinavianNormalizationFilterFactory`, which tries to preserve the special characters (æäöå), and `solr.ScandinavianFoldingFilterFactory`, which folds these to broader forms (ø/ö\->o, etc.).
See also each language section for other relevant filters.
==== Scandinavian Normalization Filter
This filter normalizes use of the interchangeable Scandinavian characters æÆäÄöÖøØ and folded variants (aa, ao, ae, oe and oo) by transforming them to åÅæÆøØ.
It's a semantically less destructive solution than `ScandinavianFoldingFilter`, most useful when a person with a Norwegian or Danish keyboard queries a Swedish index and vice versa. This filter does *not* perform the common Swedish folds of å and ä to a nor ö to o.
*Factory class:* `solr.ScandinavianNormalizationFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.ScandinavianNormalizationFilterFactory"/>
</analyzer>
----
*In:* "blåbærsyltetøj blåbärsyltetöj blaabaarsyltetoej blabarsyltetoj"
*Tokenizer to Filter:* "blåbærsyltetøj", "blåbärsyltetöj", "blaabaarsyltetoej", "blabarsyltetoj"
*Out:* "blåbærsyltetøj", "blåbærsyltetøj", "blåbærsyltetøj", "blabarsyltetoj"
==== Scandinavian Folding Filter
This filter folds Scandinavian characters åÅäæÄÆ\->a and öÖøØ\->o. It also collapses the double vowels aa, ae, ao, oe and oo, leaving just the first one.
It's a semantically more destructive solution than `ScandinavianNormalizationFilter`, but can in addition help with matching raksmorgas with räksmörgås.
*Factory class:* `solr.ScandinavianFoldingFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.ScandinavianFoldingFilterFactory"/>
</analyzer>
----
*In:* "blåbærsyltetøj blåbärsyltetöj blaabaarsyltetoej blabarsyltetoj"
*Tokenizer to Filter:* "blåbærsyltetøj", "blåbärsyltetöj", "blaabaarsyltetoej", "blabarsyltetoj"
*Out:* "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj"
=== Serbian
==== Serbian Normalization Filter
Solr includes a filter that normalizes Serbian Cyrillic and Latin characters. Note that this filter only works with lowercased input.
For user tips & advice on using this filter, see https://cwiki.apache.org/confluence/display/solr/SerbianLanguageSupport[Serbian Language Support] in the Solr Wiki.
*Factory class:* `solr.SerbianNormalizationFilterFactory`
*Arguments:*
`haircut`:: Select the extent of normalization. Valid values are:
+
* `bald`: (Default behavior) Cyrillic characters are first converted to Latin; then, Latin characters have their diacritics removed, with the exception of https://en.wikipedia.org/wiki/D_with_stroke[LATIN SMALL LETTER D WITH STROKE] (U+0111) which is converted to "```dj```"
* `regular`: Only Cyrillic to Latin normalization will be applied, preserving the Latin diacritics
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SerbianNormalizationFilterFactory" haircut="bald"/>
</analyzer>
----
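For comparison, a sketch of the same analyzer using the `regular` haircut, which applies only the Cyrillic-to-Latin conversion and preserves the Latin diacritics:

[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.SerbianNormalizationFilterFactory" haircut="regular"/>
</analyzer>
----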
=== Spanish
Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFactory language="Spanish"`, and a lighter stemmer called `solr.SpanishLightStemFilterFactory`. Lucene includes an example stopword list.
*Factory class:* `solr.SpanishLightStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SpanishLightStemFilterFactory"/>
</analyzer>
----
*In:* "torear toreara torearlo"
*Tokenizer to Filter:* "torear", "toreara", "torearlo"
*Out:* "tor", "tor", "tor"
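Alternatively, the more aggressive Snowball stemmer can be used in place of the light stemmer. A sketch (pick one stemmer per analyzer chain):

[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.SnowballPorterFilterFactory" language="Spanish"/>
</analyzer>
----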
=== Swedish
==== Swedish Stem Filter
Solr includes two stemmers for Swedish: one in the `solr.SnowballPorterFilterFactory language="Swedish"`, and a lighter stemmer called `solr.SwedishLightStemFilterFactory`. Lucene includes an example stopword list.
Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
*Factory class:* `solr.SwedishLightStemFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SwedishLightStemFilterFactory"/>
</analyzer>
----
*In:* "kloke klokhet klokheten"
*Tokenizer to Filter:* "kloke", "klokhet", "klokheten"
*Out:* "klok", "klok", "klok"
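As with Russian and Spanish, the Snowball stemmer may be substituted for the light stemmer when more aggressive stemming is wanted. A sketch:

[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.SnowballPorterFilterFactory" language="Swedish"/>
</analyzer>
----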
=== Thai
This tokenizer converts sequences of Thai characters into individual Thai words. Unlike European languages, Thai does not use whitespace to delimit words.
*Factory class:* `solr.ThaiTokenizerFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer type="index">
<tokenizer class="solr.ThaiTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
----
=== Turkish
Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFactory`; support for case-insensitive search with the `solr.TurkishLowerCaseFilterFactory`; support for stripping apostrophes and following suffixes with `solr.ApostropheFilterFactory` (see http://www.ipcsit.com/vol57/015-ICNI2012-M021.pdf[Role of Apostrophes in Turkish Information Retrieval]); support for a form of stemming that truncates tokens at a configurable maximum length through the `solr.TruncateTokenFilterFactory` (see https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.20750[Information Retrieval on Turkish Texts]); and Lucene includes an example stopword list.
*Factory class:* `solr.TurkishLowerCaseFilterFactory`
*Arguments:* None
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.ApostropheFilterFactory"/>
<filter class="solr.TurkishLowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="Turkish"/>
</analyzer>
----
*Another example, illustrating diacritics-insensitive search:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.ApostropheFilterFactory"/>
<filter class="solr.TurkishLowerCaseFilterFactory"/>
<filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="true"/>
<filter class="solr.KeywordRepeatFilterFactory"/>
<filter class="solr.TruncateTokenFilterFactory" prefixLength="5"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
----
=== Ukrainian
Solr provides support for Ukrainian lemmatization with the `solr.MorfologikFilterFactory`, in the `contrib/analysis-extras` module. To use this filter, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>). See `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add.
Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzers-morfologik` jar.
*Factory class:* `solr.MorfologikFilterFactory`
*Arguments:*
`dictionary`:: (required) lemmatizer dictionary - the `lucene-analyzers-morfologik` jar contains a Ukrainian dictionary at `org/apache/lucene/analysis/uk/ukrainian.dict`.
*Example:*
[source,xml]
----
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.StopFilterFactory" words="org/apache/lucene/analysis/uk/stopwords.txt"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.MorfologikFilterFactory" dictionary="org/apache/lucene/analysis/uk/ukrainian.dict"/>
</analyzer>
----
The Morfologik `dictionary` parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.