Class CategorizeTextAggregation
java.lang.Object
co.elastic.clients.elasticsearch._types.aggregations.AggregationBase
co.elastic.clients.elasticsearch._types.aggregations.CategorizeTextAggregation
- All Implemented Interfaces:
AggregationVariant, JsonpSerializable
@JsonpDeserializable
public class CategorizeTextAggregation
extends AggregationBase
implements AggregationVariant
A multi-bucket aggregation that groups semi-structured text into buckets.
Each text field is re-analyzed using a custom analyzer. The resulting tokens
are then categorized, creating buckets of similarly formatted text values.
This aggregation works best with machine-generated text like system logs.
Only the first 100 analyzed tokens are used to categorize the text.
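As a usage sketch (not part of this reference), the aggregation can be built through the client's fluent DSL; the field name "message" and the chosen parameter values below are illustrative assumptions:

```java
import co.elastic.clients.elasticsearch._types.aggregations.Aggregation;

// Sketch: categorize semi-structured log text in a hypothetical "message" field.
Aggregation agg = Aggregation.of(a -> a
    .categorizeText(ct -> ct
        .field("message")          // required: the semi-structured text field
        .similarityThreshold(70)   // tokens must overlap at least 70% to share a category
        .size(10)                  // return at most 10 category buckets
    )
);
```

The resulting `Aggregation` can then be attached to a search request under an aggregation name, as with any other aggregation variant.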
Nested Class Summary
Nested classes/interfaces inherited from class co.elastic.clients.elasticsearch._types.aggregations.AggregationBase
AggregationBase.AbstractBuilder<BuilderT extends AggregationBase.AbstractBuilder<BuilderT>>
-
Field Summary
static final JsonpDeserializer<CategorizeTextAggregation> _DESERIALIZER
Json deserializer for CategorizeTextAggregation
-
Method Summary
Aggregation.Kind _aggregationKind()
Aggregation variant kind.
final CategorizeTextAnalyzer categorizationAnalyzer()
The categorization analyzer specifies how the text is analyzed and tokenized before being categorized.
final List<String> categorizationFilters()
This property expects an array of regular expressions.
final String field()
Required - The semi-structured text field to categorize.
final Integer maxMatchedTokens()
The maximum number of token positions to match on before attempting to merge categories.
final Integer maxUniqueTokens()
The maximum number of unique tokens at any position up to max_matched_tokens.
final Integer minDocCount()
The minimum number of documents in a bucket to be returned to the results.
static CategorizeTextAggregation of(Function<CategorizeTextAggregation.Builder, ObjectBuilder<CategorizeTextAggregation>> fn)
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
protected static void setupCategorizeTextAggregationDeserializer(ObjectDeserializer<CategorizeTextAggregation.Builder> op)
final Integer shardMinDocCount()
The minimum number of documents in a bucket to be returned from the shard before merging.
final Integer shardSize()
The number of categorization buckets to return from each shard before merging all the results.
final Integer similarityThreshold()
The minimum percentage of tokens that must match for text to be added to the category bucket.
final Integer size()
The number of buckets to return.
Methods inherited from class co.elastic.clients.elasticsearch._types.aggregations.AggregationBase
meta, name, serialize, setupAggregationBaseDeserializer, toString
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface co.elastic.clients.elasticsearch._types.aggregations.AggregationVariant
_toAggregation
-
Field Details
-
_DESERIALIZER
Json deserializer for CategorizeTextAggregation
-
-
Method Details
-
of
public static CategorizeTextAggregation of(Function<CategorizeTextAggregation.Builder, ObjectBuilder<CategorizeTextAggregation>> fn)
-
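A hedged example of the of(...) shorthand, which applies a function to a fresh builder and returns the built object; the field name and count are illustrative:

```java
import co.elastic.clients.elasticsearch._types.aggregations.CategorizeTextAggregation;

// Sketch: build the aggregation directly via the static of(...) factory.
CategorizeTextAggregation agg = CategorizeTextAggregation.of(b -> b
    .field("message")   // required: semi-structured text field
    .minDocCount(5)     // only return categories containing at least 5 documents
);
```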
_aggregationKind
Aggregation variant kind.
- Specified by:
_aggregationKind in interface AggregationVariant
-
field
Required - The semi-structured text field to categorize.
API name: field
-
maxUniqueTokens
The maximum number of unique tokens at any position up to max_matched_tokens. Must be larger than 1. Smaller values use less memory and create fewer categories. Larger values will use more memory and create narrower categories. Max allowed value is 100.
API name: max_unique_tokens
-
maxMatchedTokens
The maximum number of token positions to match on before attempting to merge categories. Larger values will use more memory and create narrower categories. Max allowed value is 100.
API name: max_matched_tokens
-
similarityThreshold
The minimum percentage of tokens that must match for text to be added to the category bucket. Must be between 1 and 100. Larger values will increase memory usage and create narrower categories.
API name: similarity_threshold
-
categorizationFilters
This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine-tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters.
API name: categorization_filters
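To illustrate, filters can be passed through the builder; the regular expression below is a hypothetical example, not a recommended pattern:

```java
import co.elastic.clients.elasticsearch._types.aggregations.CategorizeTextAggregation;

// Sketch: exclude SQL statements from categorization (regex is illustrative).
// Note: categorization_filters cannot be combined with categorization_analyzer.
CategorizeTextAggregation agg = CategorizeTextAggregation.of(b -> b
    .field("message")
    .categorizationFilters("\\bSELECT\\b.*\\bFROM\\b")
);
```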
-
categorizationAnalyzer
The categorization analyzer specifies how the text is analyzed and tokenized before being categorized. The syntax is very similar to that used to define the analyzer in the Analyze endpoint. This property cannot be used at the same time as categorization_filters.
API name: categorization_analyzer
-
shardSize
The number of categorization buckets to return from each shard before merging all the results.
API name: shard_size
-
size
The number of buckets to return.
API name: size
-
minDocCount
The minimum number of documents in a bucket to be returned to the results.
API name: min_doc_count
-
shardMinDocCount
The minimum number of documents in a bucket to be returned from the shard before merging.
API name: shard_min_doc_count
-
serializeInternal
- Overrides:
serializeInternal
in class AggregationBase
-
setupCategorizeTextAggregationDeserializer
protected static void setupCategorizeTextAggregationDeserializer(ObjectDeserializer<CategorizeTextAggregation.Builder> op)
-