Azure Blob Storage
Store your observability data in Azure Blob Storage
Configuration
Example configurations
Common (JSON):
{
  "sinks": {
    "my_sink_id": {
      "type": "azure_blob",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "container_name": "my-logs"
    }
  }
}
Common (TOML):
[sinks.my_sink_id]
type = "azure_blob"
inputs = [ "my-source-or-transform-id" ]
container_name = "my-logs"
Common (YAML):
sinks:
  my_sink_id:
    type: azure_blob
    inputs:
      - my-source-or-transform-id
    container_name: my-logs
Advanced (JSON):
{
  "sinks": {
    "my_sink_id": {
      "type": "azure_blob",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "blob_prefix": "blob/%F/",
      "compression": "gzip",
      "connection_string": "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net",
      "container_name": "my-logs",
      "endpoint": "https://test.blob.core.usgovcloudapi.net/",
      "storage_account": "mylogstorage"
    }
  }
}
Advanced (TOML):
[sinks.my_sink_id]
type = "azure_blob"
inputs = [ "my-source-or-transform-id" ]
blob_prefix = "blob/%F/"
compression = "gzip"
connection_string = "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net"
container_name = "my-logs"
endpoint = "https://test.blob.core.usgovcloudapi.net/"
storage_account = "mylogstorage"
Advanced (YAML):
sinks:
  my_sink_id:
    type: azure_blob
    inputs:
      - my-source-or-transform-id
    blob_prefix: blob/%F/
    compression: gzip
    connection_string: DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net
    container_name: my-logs
    endpoint: https://test.blob.core.usgovcloudapi.net/
    storage_account: mylogstorage
acknowledgements
optional object
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
acknowledgements.enabled
optional bool
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
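For example, a minimal sketch that enables end-to-end acknowledgements for this sink, reusing the hypothetical IDs from the examples above:
sinks:
  my_sink_id:
    type: azure_blob
    inputs:
      - my-source-or-transform-id
    container_name: my-logs
    acknowledgements:
      enabled: true # wait for Azure Blob Storage to accept events before acking at the source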
batch
optional object
Configures event batching behavior.
batch.max_bytes
optional uint
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: 1e+07 (bytes)
batch.max_events
optional uint
The maximum number of events in a batch before it is flushed.
batch.timeout_secs
optional float
The maximum age of a batch before it is flushed.
default: 300 (seconds)
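As an illustration, the batch options can be tuned together; the values below are hypothetical, not recommendations:
sinks:
  my_sink_id:
    type: azure_blob
    batch:
      max_bytes: 10000000 # flush at ~10 MB of uncompressed events (the default)
      max_events: 1000    # or at 1000 events, whichever comes first (hypothetical)
      timeout_secs: 60    # or after 60 seconds (hypothetical; the default is 300)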
blob_append_uuid
optional bool
Whether or not to append a UUID v4 token to the end of the blob key.
The UUID is appended to the timestamp portion of the object key, such that if the generated blob key is date=2022-07-18/1658176486, setting this field to true results in a blob key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.
This ensures there are no name collisions, and can be useful in high-volume workloads where blob keys must be unique.
blob_prefix
optional string template
A prefix to apply to all blob keys.
Prefixes are useful for partitioning objects, such as by creating a blob key that stores blobs under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added.
Examples:
"date/%F/hour/%H/"
"year=%Y/month=%m/day=%d/"
"kubernetes/{{ metadata.cluster }}/{{ metadata.application_name }}/"
default: blob/%F/
blob_time_format
optional string strftime
The timestamp format for the time component of the blob key.
By default, blob keys are appended with a timestamp that reflects when the blob is sent to Azure Blob Storage, such that the resulting blob key is functionally equivalent to joining the blob prefix with the formatted timestamp, such as date=2022-07-18/1658176486.
This would represent a blob_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with blob_time_format set to %s, which renders timestamps in seconds since the Unix epoch.
Supports the common strftime specifiers found in most languages.
When set to an empty string, no timestamp is appended to the blob prefix.
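Putting these options together, a sketch that reproduces the date=2022-07-18/1658176486 example above (sink ID hypothetical):
sinks:
  my_sink_id:
    type: azure_blob
    blob_prefix: "date=%F/"  # renders as date=2022-07-18/
    blob_time_format: "%s"   # seconds since the Unix epoch, e.g. 1658176486
    blob_append_uuid: true   # appends -<uuidv4> to avoid key collisions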
buffer
optional object
Configures the buffering behavior for this sink.
More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section.
buffer.max_events
optional uint
The maximum number of events allowed in the buffer.
relevant when: type = "memory"
default: 500
buffer.max_size
required uint
The maximum size of the buffer on disk.
Must be at least ~256 megabytes (268435488 bytes).
relevant when: type = "disk"
buffer.type
optional string literal enum
The type of buffer to use.
Option | Description |
---|---|
disk | Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. |
memory | Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. |
default: memory
buffer.when_full
optional string literal enum
Event handling behavior when a buffer is full.
Option | Description |
---|---|
block | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. |
drop_newest | Drops the event instead of waiting for free space in the buffer. The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. |
default: block
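For example, a sketch of a disk buffer sized at the documented minimum, with blocking backpressure; the values are illustrative:
sinks:
  my_sink_id:
    type: azure_blob
    buffer:
      type: disk
      max_size: 268435488 # minimum allowed disk buffer size (~256 MB)
      when_full: block    # apply backpressure rather than drop events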
compression
optional string literal enum
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
default: gzip
connection_string
optional string literal
The Azure Blob Storage Account connection string.
Authentication with an access key is the only supported authentication method.
Either storage_account or this field must be specified.
container_name
required string literal
The Azure Blob Storage Account container name.
encoding
required object
Configures how events are encoded into raw bytes.
encoding.avro
required object
Apache Avro-specific encoder options.
relevant when: codec = "avro"
encoding.avro.schema
required string literal
The Avro schema.
encoding.codec
required string literal enum
The codec to use for encoding events.
Option | Description |
---|---|
avro | Encodes an event as an Apache Avro message. |
csv | Encodes an event as a CSV message. This codec must be configured with fields to encode. |
gelf | Encodes an event as a GELF message. |
json | Encodes an event as JSON. |
logfmt | Encodes an event as a logfmt message. |
native | Encodes an event in the native Protocol Buffers format. This codec is experimental. |
native_json | Encodes an event in the native JSON format. This codec is experimental. |
raw_message | No encoding. This encoding uses the message field of a log event. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. |
text | Plain text encoding. This encoding uses the message field of a log event. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. |
encoding.csv
required object
The CSV Serializer Options.
relevant when: codec = "csv"
encoding.csv.capacity
optional uint
Sets the capacity (in bytes) of the internal buffer used in the CSV writer.
default: 8192
encoding.csv.double_quote
optional bool
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
default: true
encoding.csv.escape
optional uint
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quote needs to be disabled as well; otherwise it is ignored.
default: 34
encoding.csv.fields
required [string]
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
encoding.csv.quote_style
optional string literal enum
The quoting style to use when writing CSV data.
Option | Description |
---|---|
always | Always puts quotes around every field. |
necessary | Puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter, or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field). |
never | Never writes quotes, even if it produces invalid CSV data. |
non_numeric | Puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes are used even if they aren’t strictly necessary. |
default: necessary
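A sketch of a CSV encoding configuration; the field names are hypothetical and depend on the shape of your events:
sinks:
  my_sink_id:
    type: azure_blob
    encoding:
      codec: csv
      csv:
        fields: # encoded in this order; missing fields become empty strings
          - timestamp
          - host
          - message
        quote_style: necessary # quote only when required (the default)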
encoding.except_fields
optional [string]
List of fields that are excluded from the encoded event.
encoding.metric_tag_values
optional string literal enum
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags are displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
relevant when: codec = "json" or codec = "text"
Option | Description |
---|---|
full | All tags are exposed as arrays of either string or null values. |
single | Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored. |
default: single
encoding.only_fields
optional [string]
List of fields that are included in the encoded event.
encoding.timestamp_format
optional string literal enum
Format used for timestamp fields.
Option | Description |
---|---|
rfc3339 | Represent the timestamp as an RFC 3339 timestamp. |
unix | Represent the timestamp as a Unix timestamp. |
endpoint
optional string literal
The Azure Blob Storage Endpoint URL.
This is used to override the default blob storage endpoint URL in cases where you are using credentials read from the environment/managed identities or access tokens without using an explicit connection_string (which already explicitly supports overriding the blob endpoint URL).
This may only be used with storage_account and is ignored when used with connection_string.
framing
optional object
Framing configuration.
framing.character_delimited
required object
Options for the character delimited encoder.
relevant when: method = "character_delimited"
framing.character_delimited.delimiter
required uint
The ASCII (7-bit) character that is used to delimit byte sequences.
framing.method
required string literal enum
The framing method.
Option | Description |
---|---|
bytes | Event data is not delimited at all. |
character_delimited | Event data is delimited by a single ASCII (7-bit) character. |
length_delimited | Event data is prefixed with its length in bytes. The prefix is a 32-bit unsigned integer, little endian. |
newline_delimited | Event data is delimited by a newline (LF) character. |
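For example, a sketch that writes JSON events separated by a comma; the delimiter is given as its ASCII code point (44), and the choice is purely illustrative:
sinks:
  my_sink_id:
    type: azure_blob
    encoding:
      codec: json
    framing:
      method: character_delimited
      character_delimited:
        delimiter: 44 # ASCII code for ","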
healthcheck
optional object
Healthcheck configuration.
healthcheck.enabled
optional bool
Whether or not to check the health of the sink when Vector starts up.
default: true
inputs
required [string]
A list of upstream source or transform IDs.
Wildcards (*) are supported.
See configuration for more info.
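For instance, assuming hypothetical component IDs, a wildcard can fan in several upstream components at once:
sinks:
  my_sink_id:
    type: azure_blob
    inputs:
      - app-logs-* # matches app-logs-web, app-logs-api, etc. (hypothetical IDs)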
request
optional object
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
request.adaptive_concurrency
optional object
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
request.adaptive_concurrency.decrease_ratio
optional float
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
request.adaptive_concurrency.ewma_alpha
optional float
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
request.adaptive_concurrency.initial_concurrency
optional uint
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service’s average limit if you’re seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
request.adaptive_concurrency.rtt_deviation_scale
optional float
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
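A sketch that raises the initial limit for a service whose average concurrency is already known; the values are purely illustrative:
sinks:
  my_sink_id:
    type: azure_blob
    request:
      adaptive_concurrency:
        initial_concurrency: 4 # hypothetical; taken from the adaptive_concurrency_limit metric
        decrease_ratio: 0.9    # the default; scale back by 10% when latency increases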
request.concurrency
optional string literal enum uint
Configuration for outbound request concurrency.
This can be set either to one of the below enum values or to a positive integer, which denotes a fixed concurrency limit.
Option | Description |
---|---|
adaptive | Concurrency will be managed by Vector’s Adaptive Request Concurrency feature. |
none | A fixed concurrency of 1. Only one request can be outstanding at any given time. |
default: adaptive
request.rate_limit_duration_secs
optional uint
The time window used for the rate_limit_num option.
default: 1 (seconds)
request.rate_limit_num
optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9.223372036854776e+18 (requests)
request.retry_attempts
optional uint
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9.223372036854776e+18 (retries)
request.retry_initial_backoff_secs
optional uint
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1 (seconds)
request.retry_max_duration_secs
optional uint
The maximum amount of time to wait between retries.
default: 3600 (seconds)
request.timeout_secs
optional uint
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service’s internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 (seconds)
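For example, a sketch combining the timeout and retry options above; the values shown restate the documented defaults for illustration:
sinks:
  my_sink_id:
    type: azure_blob
    request:
      timeout_secs: 60              # do not lower below the service's internal timeout
      retry_initial_backoff_secs: 1 # first retry delay; later delays follow the Fibonacci sequence
      retry_max_duration_secs: 3600 # cap on the delay between retries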
storage_account
optional string literal
The Azure Blob Storage Account name.
Attempts to load credentials for the account in the following ways, in order:
- read from environment variables (more information)
- looks for a Managed Identity
- uses the az CLI tool to get an access token (more information)
Either connection_string or this field must be specified.
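As a sketch, authenticating with a storage account name (credentials loaded from the environment, a Managed Identity, or the az CLI) instead of a connection string; the account name and endpoint are hypothetical:
sinks:
  my_sink_id:
    type: azure_blob
    inputs:
      - my-source-or-transform-id
    storage_account: mylogstorage # hypothetical account name
    endpoint: https://mylogstorage.blob.core.usgovcloudapi.net/ # optional override, only valid with storage_account
    container_name: my-logs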
Telemetry
Metrics
buffer_byte_size
gauge
buffer_discarded_events_total
counter
buffer_events
gauge
buffer_received_event_bytes_total
counter
buffer_received_events_total
counter
buffer_sent_event_bytes_total
counter
buffer_sent_events_total
counter
component_discarded_events_total
counter
component_errors_total
counter
component_received_event_bytes_total
counter
component_received_events_count
histogram
A histogram of the number of events passed in each internal batch in Vector’s internal topology.
Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector due to small internal batches.
component_received_events_total
counter
component_sent_bytes_total
counter
component_sent_event_bytes_total
counter
component_sent_events_total
counter
http_error_response_total
counter
http_request_errors_total
counter
utilization
gauge
How it works
Buffers and batches
This component buffers & batches data. Vector treats buffering and batching as sink-specific concepts rather than global ones. This isolates sinks, ensuring that service disruptions are contained and delivery guarantees are honored.
Batches are flushed when 1 of 2 conditions are met:
- The batch age meets or exceeds the configured timeout_secs.
- The batch size meets or exceeds the configured max_bytes or max_events.
Buffers are controlled via the buffer.* options.
Health checks
Require health checks
If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:
vector --config /etc/vector/vector.yaml --require-healthy
Disable health checks
If you’d like to disable health checks for this sink, set the healthcheck option to false.
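A minimal sketch, reusing the hypothetical sink ID from the examples above:
sinks:
  my_sink_id:
    type: azure_blob
    healthcheck:
      enabled: false # skip the startup health check for this sink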
Object naming
By default, Vector names your blobs differently based on whether or not the blobs are compressed.
Here is the format without compression:
<key_prefix><timestamp>-<uuidv4>.log
Here’s an example blob name without compression:
blob/2021-06-23/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log
And here is the format with compression:
<key_prefix><timestamp>-<uuidv4>.log.gz
An example blob name with compression:
blob/2021-06-23/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log.gz
Vector appends a UUIDv4 token to ensure there are no name conflicts in the unlikely event that two Vector instances are writing data at the same time.
You can control the resulting name via the blob_prefix, blob_time_format, and blob_append_uuid options.
For example, to store objects at the root Azure storage folder, without a timestamp or UUID, use these configuration options:
blob_prefix = "{{ my_file_name }}"
blob_time_format = ""
blob_append_uuid = false
Rate limits & adaptive concurrency
Adaptive Request Concurrency (ARC)
Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more details.
We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required.
Static concurrency
If Adaptive Request Concurrency is not for you, you can manually set static concurrency limits by specifying an integer for request.concurrency:
sinks:
  my-sink:
    request:
      concurrency: 10
Rate limits
In addition to limiting request concurrency, you can also limit the overall request throughput via the request.rate_limit_duration_secs and request.rate_limit_num options.
sinks:
  my-sink:
    request:
      rate_limit_duration_secs: 1
      rate_limit_num: 10
These limits apply to both adaptive and fixed request.concurrency values.