The Vector team is pleased to announce version 0.15.0!
This release includes a number of new components for collecting and sending data using Vector:
- `datadog_events` sink for sending events to Datadog’s event stream.
- `dnstap` source for collecting events from a DNS server via the dnstap protocol.
- `fluent` source for collecting logs forwarded by Fluentd, Fluent Bit, or other services capable of forwarding using the fluent protocol, such as Docker.
- `logstash` source for collecting logs forwarded by Logstash, Elastic Beats, or other services capable of forwarding using the lumberjack protocol, such as Docker.
- `azure_blob` sink for forwarding logs to Azure’s Blob Storage.
- `redis` sink for forwarding logs to Redis.
- `eventstoredb_metrics` source for collecting metrics from EventStoreDB.

It also contains a number of additional enhancements and bug fixes. Check out the highlights and changelog for more details.
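As a rough sketch of how two of the new components could be wired together, here is a hypothetical pipeline pairing the `fluent` source with the `redis` sink. The addresses, key names, and option values below are illustrative assumptions, not taken from the release notes — consult the component reference documentation for the authoritative options.

```toml
# Hypothetical pipeline using two of the new 0.15.0 components.
# Option values below are illustrative assumptions.
[sources.fluent_in]
type = "fluent"            # accepts logs via the fluent forward protocol
address = "0.0.0.0:24224"  # conventional Fluentd forward port

[sinks.redis_out]
type = "redis"
inputs = ["fluent_in"]
url = "redis://127.0.0.1:6379/0"
key = "vector-logs"        # Redis key to write events to (assumed name)
encoding.codec = "json"
```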
- `match_array` function
- `x-forwarded-for` header support
- `warn` level `internal_event`
- `input_format_skip_unknown_fields`
- `binary` encoding support for the `http` source
- `content_md5` when writing objects, to work with S3 object locking
- `parse_key_value`
- `decode_percent` and `encode_percent` functions
- `encode_key_value` function
- `datadog_search` condition type
- `fingerprint.lines` option to use multiple lines to calculate fingerprints
- `parse_logfmt`, plus a minor related fix
- `key_field` configured
- `ip_aton` and `ip_ntoa` functions
- `parse_xml` function
- `format_int` and `parse_int` functions
- `lines` option
- `parse_key_value` fixes
- `host` tag for `internal_metrics`
- `parse_ruby_hash` function
- `dnstap` source
- `fluent` source
- `datadog_events` sink
- `--config-dir` to read configuration from directories
- `azure_blob` sink
- `eventstoredb_metrics` source
- `redis` sink
- `graph` subcommand for generating a graph in DOT format
- `parse_apache_log` to handle lack of thread id
- `init_roots` on build
- only output warnings when `host_metrics` is used
- `parse_syslog` handles non-structured messages
- `referrer` replaced by `referer` in the nginx parser
- `to_int` now truncates floats
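Several of the entries above are new VRL functions. As a hedged sketch, a `remap` transform could exercise a few of them like this — the field names (`.message`, `.status`, `.url`) and the upstream component name `fluent_in` are assumptions for illustration only:

```toml
# Hypothetical remap transform exercising some of the new VRL functions.
[transforms.remap_demo]
type = "remap"
inputs = ["fluent_in"]  # assumed upstream source name
source = '''
# Parse "key=value" pairs out of the raw message and merge them into the event
parsed = parse_key_value!(string!(.message))
. = merge(., parsed)

# to_int now truncates floats rather than erroring on them
.status = to_int!(.status)

# Percent-encode a field using the new encode_percent function
.encoded_url = encode_percent(string!(.url))
'''
```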
We’ve heard from a number of users that they’d like improved delivery guarantees for events flowing through Vector. We are working on a feature that, for components able to support it, acknowledges data flowing into source components only after that data has been sent by all associated sinks. For example, this would avoid acknowledging messages in Kafka until the data in those messages has been sent via all associated sinks.
This release includes support in additional source and sink components that support acknowledgements, but it has not yet been fully documented and tested. We expect to officially release this with 0.16.0.
We are hard at work expanding the ability to run Vector as an aggregator in Kubernetes. This will allow you to build end-to-end observability pipelines in Kubernetes with Vector, distributing processing on the edge, centralizing it with an aggregator, or both. If you are interested in beta testing, please join our chat and let us know.
We do expect this to be released with 0.16.0.