The Vector team is pleased to announce version 0.36.0!
There are no breaking changes in this release.
In addition to the usual enhancements and bug fixes, this release also includes:

- a new `prometheus_pushgateway` source to receive Prometheus data
- a new `vrl` decoder that can be used to decode data in sources using a VRL program

A reminder that the `repositories.timber.io`
package repositories will be decommissioned on
February 28th, 2024. Please see the release
highlight for details about this change and
instructions on how to migrate.
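As a taste of the new `vrl` decoder listed above, a source can opt in to VRL-based decoding like this (a sketch: the `decoding.vrl.source` field name follows the decoder options, and the VRL program itself is purely illustrative):

```toml
[sources.app_logs]
type = "stdin"
# Decode each incoming line with a VRL program instead of a fixed codec
decoding.codec = "vrl"
# `.message` holds the raw input; this program parses it as key/value pairs
decoding.vrl.source = '''
. = parse_key_value!(string!(.message))
'''
```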
Known issues:

- AWS components do not support the use of `credentials_process` in AWS configs. Fixed in v0.36.1.
- AWS components do not support the use of `assume_role`. Fixed in v0.36.1.
- The `kafka` sink occasionally panics during rebalance events. Fixed in v0.36.1.

VRL now supports the `% = ...` syntax for assigning event metadata.
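For example (a sketch: in VRL, `%` is the event metadata root, so this assigns the metadata wholesale):

```vrl
% = { "source_type": "demo", "ingested": true }
```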
Thanks to GreyTeardrop for contributing this change!

`@` characters are now allowed in labels when decoding GELF.
Thanks to MartinEmrich for contributing this change!

Added a boolean `graphql`
field to the `api` configuration to allow disabling the GraphQL endpoint.
Note that the `playground` endpoint will now only be enabled if the `graphql`
endpoint is also enabled.
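A sketch of the resulting configuration (TOML form of the fields described above):

```toml
[api]
enabled = true
# New in 0.36: set to false to disable the GraphQL endpoint.
# The playground is only served when the GraphQL endpoint is enabled.
graphql = false
```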
A new option, `--skip-healthchecks`, for `vector validate`
validates the config, including VRL, but skips health checks for sinks.
This is useful for validating configuration before deploying it remotely.
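For example (the config path is illustrative):

```shell
# Validates config files and VRL programs, but does not contact sink endpoints
vector validate --skip-healthchecks /etc/vector/vector.toml
```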
Thanks to MartinEmrich for contributing this change!

Vector can now emulate a Prometheus Pushgateway through the new `prometheus_pushgateway`
source. Counters and histograms can optionally be aggregated across pushes to support use-cases like cron jobs.
There are some caveats, which are listed in the implementation.
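A minimal sketch of the new source (only `type` is taken from the text above; the `address` field is an assumption based on Vector's other listener-style sources, and the aggregation options are described in the source reference):

```toml
[sources.pushgateway]
type = "prometheus_pushgateway"
# Listen on the port clients would normally push to on a real Pushgateway
address = "0.0.0.0:9091"
```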
Thanks to Sinjo for contributing this change!

The `clickhouse` sink now supports a `format` option. This can be used to specify
the data format provided to `INSERT`s. The default is `JSONEachRow`.
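For example (endpoint, inputs, and table are illustrative):

```toml
[sinks.clickhouse]
type = "clickhouse"
inputs = ["app_logs"]
endpoint = "http://localhost:8123"
table = "logs"
# New in 0.36: the data format handed to INSERTs (JSONEachRow is the default)
format = "JSONEachRow"
```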
Thanks to gabriel376 for contributing this change!

The `aws_s3` source now prevents deletion of messages which failed to be delivered to a sink.
Thanks to tanushri-sundar for contributing this change!

Users can now specify `decoding.codec = "vrl"` in their
source configurations and use VRL programs to decode logs.

Fixed a bug where the `aws_s3` sink added a trailing period to the S3 key
when `filename_extension` is empty.

Fixed an issue in the `datadog_agent`
source when the corresponding output is disabled in the source config.

When `max_connection_age` is set, Vector now only sends `Connection: Close` for
HTTP/0.9, HTTP/1.0, and HTTP/1.1 requests. This header is not supported on
HTTP/2 and HTTP/3 requests, though it may be supported on these versions in the future.

The following metrics now correctly have the `component_kind`
, `component_type`, and `component_id` tags:
- component_errors_total
- component_discarded_events_total
For the following sinks:
- splunk_hec
- clickhouse
- loki
- redis
- azure_blob
- azure_monitor_logs
- webhdfs
- appsignal
- amqp
- aws_kinesis
- statsd
- honeycomb
- gcp_stackdriver_metrics
- gcs_chronicle_unstructured
- gcp_stackdriver_logs
- gcp_pubsub
- gcp_cloud_storage
- nats
- http
- kafka
- new_relic
- datadog_metrics
- datadog_traces
- datadog_events
- databend
- prometheus_remote_write
- pulsar
- aws_s3
- aws_sqs
- aws_sns
- elasticsearch
Fixed an issue where the `journald`
source was not correctly emitting metadata when `log_namespace = true`.
Thanks to dalegaard for contributing this change!

Fixed an issue where the `datadog_logs` sink could produce a request larger than the allowed API
limit.