The Vector team is pleased to announce version 0.28.0!
This is a smaller maintenance release primarily including bug fixes and small enhancements, while we do some background work to enable upcoming new features.
With this release we also completed an initiative to generate Vector’s reference documentation from the configuration structures in the code, which will result in fewer inaccuracies in the published documentation.
Be sure to check out the upgrade guide for breaking changes in this release.
Known issues in this release:

- AWS components, such as the aws_s3 sink, are not functional due to issues with request signing. This is fixed in v0.28.1.
- The framing.*.max_length configuration options cannot be used on the socket source, as Vector returns an error about them conflicting with the deprecated top-level max_length configuration option. This is fixed in v0.28.1.
- The http_server source incorrectly defaults to GET rather than POST for method. Fixed in 0.28.2.
- The elasticsearch sink panics when starting if bulk.index is unspecified and the default mode of bulk is used. Fixed in 0.28.2.
- The syslog source incorrectly inserted the source IP of incoming messages as source_id rather than source_ip. Fixed in 0.28.2.
azure_blob sink now allows setting a custom endpoint for use with alternative Azure clouds like USGov and China. Thanks to archoversight for contributing this change!
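A minimal sketch of what this could look like for the US Government cloud; the storage account, container, and input names are placeholders, and the exact option name should be confirmed against the azure_blob sink reference:

```toml
[sinks.azure_usgov]
type            = "azure_blob"
inputs          = ["my_source"]       # placeholder input
storage_account = "examplestorage"    # placeholder account
container_name  = "vector-logs"
# Assumed option name for the new custom endpoint support
endpoint        = "https://examplestorage.blob.core.usgovcloudapi.net"
encoding.codec  = "json"
```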
clickhouse sink now supports a date_time_best_effort config option to have ClickHouse parse a greater variety of timestamps (like RFC3339). Thanks to DarkWanderer for contributing this change!
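Enabling it is a one-line addition to an existing clickhouse sink; a rough sketch, where the endpoint, database, and table values are placeholders:

```toml
[sinks.clickhouse_out]
type                  = "clickhouse"
inputs                = ["my_source"]
endpoint              = "http://clickhouse:8123"
database              = "default"
table                 = "vector_logs"
date_time_best_effort = true   # let ClickHouse parse RFC3339 and other timestamp formats
```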
http sink now supports payload_prefix and payload_suffix options to prepend and append text to the HTTP bodies it is sending. This happens after batches are encoded and so can be used, for example, to wrap the batches in a JSON envelope. Thanks to jdiebold for contributing this change!
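For example, a sketch of wrapping each request body in a JSON envelope; the URI is a placeholder, and framing settings may also need adjusting to make the wrapped batch a valid JSON document:

```toml
[sinks.http_out]
type           = "http"
inputs         = ["my_source"]
uri            = "https://example.com/ingest"
encoding.codec = "json"
# Added around each encoded request body, producing envelopes like {"data": ...}
payload_prefix = '{"data":'
payload_suffix = '}'
```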
aws_kinesis_firehose source now has a store_access_key option, similar to the splunk_hec and datadog_agent sources, to store the token that the incoming request was sent with in the event secrets. This can be read later in VRL to drive behavior. Thanks to dizlv for contributing this change!
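A sketch of how this might be wired up, with a remap transform reading the stored secret via VRL's get_secret; the secret name used here, "aws_kinesis_firehose_access_key", and the key value are assumptions, so check the source reference for the actual secret name:

```toml
[sources.firehose_in]
type             = "aws_kinesis_firehose"
address          = "0.0.0.0:443"
store_access_key = true   # keep the request's access key in the event secrets

[transforms.route_by_key]
type   = "remap"
inputs = ["firehose_in"]
source = '''
  # Assumed secret name; drive behavior based on which key sent the request.
  key = get_secret("aws_kinesis_firehose_access_key")
  if key == "tenant-a-key" {
    .tenant = "a"
  }
'''
```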
reduce transform now has a max_events option that can be used to limit the total number of events in a reduced batch. Thanks to jches for contributing this change!
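For example, to cap reduced batches at 100 events while grouping by host (a sketch; the input and group_by field are placeholders):

```toml
[transforms.reduce_batches]
type       = "reduce"
inputs     = ["my_source"]
group_by   = ["host"]   # placeholder grouping key
max_events = 100        # flush a reduced batch once it contains 100 events
```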
kafka source now tries to commit offsets during shutdown to avoid duplicate processing on start-up. Thanks to aholmberg for contributing this change!
reduce transform performance was improved by only flushing when events are ready to be flushed and by avoiding repeated checks for stale events. Thanks to dbcfd for contributing this change!
seahash function was added to VRL for fast, non-cryptographic hashing. Thanks to psemeniuk for contributing this change!
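A small sketch of using it from a remap transform; it assumes the event has a user_id field:

```toml
[transforms.hash_user]
type   = "remap"
inputs = ["my_source"]
source = '''
  # seahash returns an integer; suited to partitioning and bucketing, not security.
  .user_hash = seahash(string!(.user_id))
'''
```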
pulsar sink now supports batching via the added batch.max_events configuration option. Thanks to zamazan4ik for contributing this change!
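A sketch of enabling batching, with placeholder endpoint and topic values:

```toml
[sinks.pulsar_out]
type             = "pulsar"
inputs           = ["my_source"]
endpoint         = "pulsar://pulsar:6650"
topic            = "vector-events"
encoding.codec   = "json"
batch.max_events = 500   # send up to 500 events per batch
```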
kafka source is now capable of making consumer lag metrics available via the internal_metrics source. This can be enabled by setting metrics.topic_lag_metric to true. Note that this can result in high cardinality metrics given they are tagged with the topic and partition id. Thanks to zamazan4ik for contributing this change!
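A sketch of turning this on and exposing the metrics; the broker, group, and topic values are placeholders:

```toml
[sources.kafka_in]
type                     = "kafka"
bootstrap_servers        = "kafka:9092"
group_id                 = "vector"
topics                   = ["logs"]
metrics.topic_lag_metric = true   # emits lag metrics tagged by topic and partition

[sources.vector_metrics]
type = "internal_metrics"         # scrape point for the lag metrics
```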
redis source now retries failed requests with an exponential back-off rather than immediately (which can cause high resource usage). Thanks to hargut for contributing this change!
encode_gzip and decode_gzip functions were added to VRL to interact with gzip’d data. Thanks to zamazan4ik for contributing this change!
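A sketch of a round trip in a remap transform; the base64 wrapping is only there to keep the compressed bytes printable and is not required:

```toml
[transforms.gzip_roundtrip]
type   = "remap"
inputs = ["my_source"]
source = '''
  # Compress the message, then decompress it again. decode_gzip is fallible,
  # hence the `!` to abort the event on invalid input.
  compressed = encode_gzip(string!(.message))
  .original  = decode_gzip!(compressed)
  .encoded   = encode_base64(compressed)
'''
```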
encode_zlib and decode_zlib functions were added to VRL to interact with zlib compressed data. Thanks to zamazan4ik for contributing this change!
encode_zstd and decode_zstd functions were added to VRL to interact with zstd compressed data. Thanks to zamazan4ik for contributing this change!
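The zlib and zstd helpers follow the same pattern as the gzip ones; a brief sketch using the zstd pair:

```toml
[transforms.zstd_example]
type   = "remap"
inputs = ["my_source"]
source = '''
  # decode_zstd is fallible, so `!` aborts the event on invalid input.
  zstd_bytes = encode_zstd(string!(.message))
  .restored  = decode_zstd!(zstd_bytes)
'''
```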
max_line_bytes on the file source no longer typically invalidates all previous checksums. It will still invalidate a checksum if the value is set lower than the length of the line used to create that checksum, but this should be much less common. Thanks to Ilmarii for contributing this change!
gcp_stackdriver_metrics now correctly encodes metrics to send to GCP. Thanks to jasonahills for contributing this change!
axiom sink now always sets the timestamp-field header, which tells Axiom where to find the timestamp, to @timestamp rather than the configured log_schema.timestamp_key, since Vector was always sending it as @timestamp. See the upgrade guide for details.