Tag cardinality limit
Limit the cardinality of tags on metric events as a safeguard against cardinality explosion
Limits the cardinality of tags on metric events, protecting against accidental high-cardinality usage that can disrupt the stability of metrics storage systems.
The default behavior is to drop the tag from incoming metrics when the configured limit would be exceeded. Note that this is usually only useful when applied to incremental counter metrics and can have unintended effects when applied to other metric types. The default action to take can be modified with the limit_exceeded_action option.
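For example, a minimal sketch (the component IDs are placeholders) that drops the entire event instead of only the offending tag:

transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    mode: exact
    limit_exceeded_action: drop_event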
Configuration
Example configurations
{
  "transforms": {
    "my_transform_id": {
      "type": "tag_cardinality_limit",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "mode": "exact"
    }
  }
}
[transforms.my_transform_id]
type = "tag_cardinality_limit"
inputs = [ "my-source-or-transform-id" ]
mode = "exact"
transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    mode: exact
{
  "transforms": {
    "my_transform_id": {
      "type": "tag_cardinality_limit",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "cache_size_per_key": 5120,
      "limit_exceeded_action": "drop_tag",
      "mode": "exact",
      "value_limit": 500
    }
  }
}
[transforms.my_transform_id]
type = "tag_cardinality_limit"
inputs = [ "my-source-or-transform-id" ]
cache_size_per_key = 5_120
limit_exceeded_action = "drop_tag"
mode = "exact"
value_limit = 500
transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    cache_size_per_key: 5120
    limit_exceeded_action: drop_tag
    mode: exact
    value_limit: 500
cache_size_per_key
optional uint
The size of the cache for detecting duplicate tags, in bytes.
The larger the cache size, the less likely it is to have a false positive, or a case where we allow a new value for a tag even after we have reached the configured limits.
Default: 5120
Relevant when: mode = "probabilistic"
graph
optional object
Extra graph configuration. Configures the output for this component when the graph is generated with the graph command.
graph.node_attributes
optional object
Node attributes to add to this component's node in the resulting graph. They are added to the node as provided.
graph.node_attributes.*
required string literal
inputs
required [string]
A list of upstream source or transform IDs. Wildcards (*) are supported.
See configuration for more info.
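As an illustration, assuming several upstream metric sources whose IDs share a hypothetical metrics- prefix, a wildcard can select all of them:

transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - metrics-*
    mode: exact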
limit_exceeded_action
optional string literal enum

Option | Description
---|---
drop_event | Drop the entire event itself.
drop_tag | Drop the tag(s) that would exceed the configured limit.

Default: drop_tag
mode
required string literal enum

Option | Description
---|---
exact | Tracks cardinality exactly. This mode has higher memory requirements than probabilistic, but never falsely allows a new tag value through after the limit has been reached.
probabilistic | Tracks cardinality probabilistically. This mode has lower memory requirements than exact, but may occasionally allow a new tag value through after the limit has been reached.
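A sketch of a probabilistic configuration (the component IDs are placeholders), pairing this mode with cache_size_per_key to trade memory for accuracy:

transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    mode: probabilistic
    cache_size_per_key: 5120
    value_limit: 500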
Outputs
<component_id>
Default output stream of the component. Use this component's ID as an input to downstream transforms and sinks.
Telemetry
Metrics

component_discarded_events_total
counter
The number of events dropped by this component, whether intentionally (as by a filter transform) or due to an error.

component_errors_total
counter
The total number of errors encountered by this component.

component_received_event_bytes_total
counter
The number of event bytes accepted by this component.

component_received_events_count
histogram
A histogram of the number of events passed in each internal batch in Vector's internal topology. Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector caused by small internal batches.

component_received_events_total
counter
The number of events accepted by this component.

component_sent_event_bytes_total
counter
The number of event bytes emitted by this component.

component_sent_events_total
counter
The number of events emitted by this component.

tag_value_limit_exceeded_total
counter
The total number of events discarded because a tag exceeded the configured value_limit.

utilization
gauge
A ratio from 0 to 1 of the load on the component.

value_limit_reached_total
counter
The total number of times the configured value_limit has been reached for a tag.

Examples
Drop high-cardinality tag
Given these events...

[
  {
    "metric": {
      "counter": { "value": 2 },
      "kind": "incremental",
      "name": "logins",
      "tags": { "user_id": "user_id_1" }
    }
  },
  {
    "metric": {
      "counter": { "value": 2 },
      "kind": "incremental",
      "name": "logins",
      "tags": { "user_id": "user_id_2" }
    }
  }
]
transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    value_limit: 1
    limit_exceeded_action: drop_tag
[transforms.my_transform_id]
type = "tag_cardinality_limit"
inputs = [ "my-source-or-transform-id" ]
value_limit = 1
limit_exceeded_action = "drop_tag"
{
  "transforms": {
    "my_transform_id": {
      "type": "tag_cardinality_limit",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "value_limit": 1,
      "limit_exceeded_action": "drop_tag"
    }
  }
}
[{"metric":{"counter":{"value":2},"kind":"incremental","name":"logins","tags":{"user_id":"user_id_1"}}},{"metric":{"counter":{"value":2},"kind":"incremental","name":"logins","tags":{}}}]
How it works
Intended Usage
This transform is intended to be used as a protection mechanism against upstream mistakes, such as a developer accidentally adding a high-cardinality tag like request_id. When this happens, it is recommended to fix the upstream error as soon as possible. This is because Vector's cardinality cache is held in memory and will be erased when Vector is restarted. This will cause new tag values to pass through until the cardinality limit is reached again. For normal usage this
should not be a common problem, since Vector processes are normally long-lived.
Memory Usage Details
This transform stores in memory a copy of the key for every tag on every metric event seen by this transform. In mode exact, a copy of every distinct value for each key is also kept in memory, until value_limit distinct values have been seen for a given key, at which point new values for that key will be rejected. So to estimate the memory usage of this transform in mode exact you can use the following formula:

(number of distinct field names in the tags for your metrics * average length of
the field names for the tags) + (number of distinct field names in the tags of
your metrics * `value_limit` * average length of the values of tags for your
metrics)
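For a rough, illustrative estimate (all numbers are assumed): with 20 distinct tag keys averaging 10 bytes each, value_limit = 500, and tag values averaging 15 bytes, the formula gives:

(20 * 10) + (20 * 500 * 15) = 200 + 150,000 = 150,200 bytes (about 147 KiB)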
In mode probabilistic, rather than storing all values seen for each key, each distinct key has a bloom filter which can probabilistically determine whether a given value has been seen for that key. The formula for estimating memory usage in mode probabilistic is:

(number of distinct field names in the tags for your metrics * average length of
the field names for the tags) + (number of distinct field names in the tags of
your metrics * `cache_size_per_key`)
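Under the same assumed numbers (20 tag keys averaging 10 bytes each) and the default cache_size_per_key of 5120 bytes, the estimate becomes:

(20 * 10) + (20 * 5120) = 200 + 102,400 = 102,600 bytes (about 100 KiB)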
The cache_size_per_key option controls the size of the bloom filter used for storing the set of acceptable values for any single key. The larger the bloom filter, the lower the false positive rate, which in our case means the less likely we are to allow a new tag value that would otherwise violate a configured limit. If you want to know the exact false positive rate for a given cache_size_per_key and value_limit, there are many free online bloom filter calculators that can answer this. The formula is generally presented in terms of 'n', 'p', 'k', and 'm', where 'n' is the number of items in the filter (value_limit in our case), 'p' is the probability of false positives (what we want to solve for), 'k' is the number of hash functions used internally, and 'm' is the number of bits in the bloom filter. You should be able to provide values for just 'n' and 'm' and get back the value for 'p', with an optimal 'k' selected for you. Remember when converting from cache_size_per_key to the 'm' value to plug into the calculator that cache_size_per_key is in bytes, while 'm' is in bits, so multiply by 8.
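As a hedged worked example using the standard bloom filter approximation p ≈ (1 - e^(-k*n/m))^k, which at the optimal k ≈ (m/n) * ln 2 simplifies to p ≈ 0.6185^(m/n), and assuming the defaults value_limit = 500 and cache_size_per_key = 5120:

m = 5120 bytes * 8 = 40,960 bits
n = 500
m/n = 81.92
p ≈ 0.6185^81.92 ≈ 8e-18

At the defaults the false positive rate is negligible; it grows as cache_size_per_key shrinks or value_limit grows.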