AWS S3
Store observability events in the AWS S3 object storage system
Configuration
Example configurations
Common (JSON):
{
"sinks": {
"my_sink_id": {
"type": "aws_s3",
"inputs": [
"my-source-or-transform-id"
],
"bucket": "my-bucket",
"key_prefix": "date=%F/",
"acknowledgements": null,
"batch": null,
"compression": "gzip",
"encoding": {
"codec": "json"
},
"healthcheck": null,
"region": "us-east-1"
}
}
}
Common (TOML):
[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
bucket = "my-bucket"
key_prefix = "date=%F/"
compression = "gzip"
region = "us-east-1"
[sinks.my_sink_id.encoding]
codec = "json"
Common (YAML):
---
sinks:
my_sink_id:
type: aws_s3
inputs:
- my-source-or-transform-id
bucket: my-bucket
key_prefix: date=%F/
acknowledgements: null
batch: null
compression: gzip
encoding:
codec: json
healthcheck: null
region: us-east-1
Advanced (JSON):
{
"sinks": {
"my_sink_id": {
"type": "aws_s3",
"inputs": [
"my-source-or-transform-id"
],
"auth": null,
"endpoint": "http://127.0.0.0:5000/path/to/service",
"acl": "private",
"bucket": "my-bucket",
"content_encoding": "gzip",
"content_type": "text/x-log",
"filename_append_uuid": true,
"filename_extension": "log",
"filename_time_format": "%s",
"grant_full_control": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
"grant_read": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
"grant_read_acp": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
"grant_write_acp": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
"key_prefix": "date=%F/",
"server_side_encryption": "AES256",
"ssekms_key_id": "abcd1234",
"storage_class": "STANDARD",
"buffer": null,
"acknowledgements": null,
"batch": null,
"compression": "gzip",
"encoding": {
"codec": "json"
},
"healthcheck": null,
"request": null,
"tls": null,
"proxy": null,
"region": "us-east-1",
"tags": {
"Tag1": "Value1"
}
}
}
}
Advanced (TOML):
[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "my-source-or-transform-id" ]
endpoint = "http://127.0.0.0:5000/path/to/service"
acl = "private"
bucket = "my-bucket"
content_encoding = "gzip"
content_type = "text/x-log"
filename_append_uuid = true
filename_extension = "log"
filename_time_format = "%s"
grant_full_control = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_read = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_read_acp = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
grant_write_acp = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
key_prefix = "date=%F/"
server_side_encryption = "AES256"
ssekms_key_id = "abcd1234"
storage_class = "STANDARD"
compression = "gzip"
region = "us-east-1"
[sinks.my_sink_id.encoding]
codec = "json"
[sinks.my_sink_id.tags]
Tag1 = "Value1"
Advanced (YAML):
---
sinks:
my_sink_id:
type: aws_s3
inputs:
- my-source-or-transform-id
auth: null
endpoint: http://127.0.0.0:5000/path/to/service
acl: private
bucket: my-bucket
content_encoding: gzip
content_type: text/x-log
filename_append_uuid: true
filename_extension: log
filename_time_format: "%s"
grant_full_control: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
grant_read: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
grant_read_acp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
grant_write_acp: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
key_prefix: date=%F/
server_side_encryption: AES256
ssekms_key_id: abcd1234
storage_class: STANDARD
buffer: null
acknowledgements: null
batch: null
compression: gzip
encoding:
codec: json
healthcheck: null
request: null
tls: null
proxy: null
region: us-east-1
tags:
Tag1: Value1
acknowledgements
common optional object
Controls how acknowledgements are handled for this sink. These settings override the global acknowledgement settings.

acknowledgements.enabled
optional bool (default: false)
Controls whether end-to-end acknowledgements are enabled for this sink.
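For example, a minimal sketch (in TOML, matching the examples above) that enables end-to-end acknowledgements for this sink:

[sinks.my_sink_id.acknowledgements]
enabled = true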
acl
optional string literal enum
The canned ACL to apply to the created objects.

Option | Description
---|---
authenticated-read | Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
aws-exec-read | Owner gets FULL_CONTROL. Amazon EC2 gets READ access to GET an Amazon Machine Image (AMI) bundle from Amazon S3.
bucket-owner-full-control | Both the object owner and the bucket owner get FULL_CONTROL over the object.
bucket-owner-read | Object owner gets FULL_CONTROL. Bucket owner gets READ access.
log-delivery-write | The LogDelivery group gets WRITE and READ_ACP permissions on the bucket. For more information about logs, see Amazon S3 Server Access Logging.
private | Owner gets FULL_CONTROL. No one else has access rights (default).
public-read | Owner gets FULL_CONTROL. The AllUsers group gets READ access.
public-read-write | Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. Granting this on a bucket is generally not recommended.
auth
optional object
Options for the AWS authentication strategy.

auth.access_key_id
optional string literal
The AWS access key ID.

auth.assume_role
optional string literal
The ARN of an IAM role to assume.

auth.load_timeout_secs
optional uint (default: 5 seconds)
The timeout for loading credentials; relevant when used with assume_role.

auth.profile
optional string literal (default: default)
The credentials profile to use.

auth.secret_access_key
optional string literal
The AWS secret access key.

batch
common optional object
Configures the sink batching behavior.

batch.max_bytes
optional uint
The maximum size of a batch, in bytes, before it is flushed.

batch.max_events
optional uint
The maximum number of events in a batch before it is flushed.

batch.timeout_secs
optional float (default: 300 seconds)
The maximum age of a batch before it is flushed.

bucket
required string literal
The S3 bucket name. Do not include a leading s3:// or a trailing /.

buffer
optional object
Configures the sink-specific buffer behavior.

buffer.max_events
optional uint (default: 500 events; relevant when type = "memory")
The maximum number of events allowed in the buffer.

buffer.type
optional string literal enum (default: memory)

Option | Description
---|---
disk | Stores the sink’s buffer on disk. This is less performant, but durable. Data will not be lost between restarts. Will also hold data in memory to enhance performance. WARNING: This may stall the sink if disk performance isn’t on par with the throughput. For comparison, AWS gp2 volumes are usually too slow for common cases.
memory | Stores the sink’s buffer in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully.

buffer.when_full
optional string literal enum (default: block)

Option | Description
---|---
block | Applies back pressure when the buffer is full. This prevents data loss, but will cause data to pile up on the edge.
drop_newest | Drops new data as it’s received. This data is lost. This should be used when performance is the highest priority.
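As an illustrative sketch of static credentials (the values below are AWS's documentation placeholders, not real keys):

[sinks.my_sink_id.auth]
access_key_id = "AKIAIOSFODNN7EXAMPLE"                          # placeholder key ID
secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # placeholder secret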
compression
common optional string literal enum (default: gzip)
The compression strategy used to compress the encoded event data before transmission.
Some cloud storage API clients and browsers will handle decompression transparently, so files may not always appear to be compressed depending on how they are accessed.

Option | Description
---|---
gzip | Gzip standard DEFLATE compression.
none | No compression.
content_encoding
optional string literal
Overrides the Content-Encoding applied to the created objects. By default this is determined by the compression value.

content_type
optional string literal (default: text/x-log)
A standard MIME type describing the format of the objects' contents.
encoding
required object
Configures the encoding-specific sink behavior.
Note: When data in encoding is malformed, currently only a very generic error ("data did not match any variant of untagged enum EncodingConfig") is reported. Follow this issue to track progress on improving these error messages.

encoding.codec
optional string literal enum

Option | Description
---|---
ndjson | Newline delimited list of JSON encoded events.
text | Newline delimited list of messages generated from the message key from each event.

encoding.except_fields
optional [string]
Prevents the sink from encoding the specified fields.

encoding.only_fields
optional [string]
Makes the sink encode only the specified fields.

encoding.timestamp_format
optional string literal enum (default: rfc3339)

Option | Description
---|---
rfc3339 | Formats as a RFC3339 string.
unix | Formats as a unix timestamp.
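A brief sketch of the encoding options (the field name in except_fields is hypothetical):

[sinks.my_sink_id.encoding]
codec = "ndjson"
except_fields = ["_temp_metadata"]   # hypothetical field to omit from output
timestamp_format = "rfc3339"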
endpoint
optional string literal
A custom endpoint to use when connecting to AWS-compatible services.

filename_append_uuid
optional bool (default: true)
Whether to append a UUID v4 token to the end of the object key.

filename_extension
optional string literal (default: log)
The filename extension to use in the object key.

filename_time_format
optional string literal (default: %s)
The format of the resulting object file name; strftime specifiers are supported.
grant_full_control
optional string literal
Grants READ, READ_ACP, and WRITE_ACP permissions on the created objects to the named grantee.

grant_read
optional string literal
Grants READ permissions on the created objects to the named grantee.

grant_read_acp
optional string literal
Grants READ_ACP permissions on the created objects to the named grantee.

grant_write_acp
optional string literal
Grants WRITE_ACP permissions on the created objects to the named grantee.

inputs
required [string]
A list of upstream source or transform IDs. Wildcards (*) are supported.
See configuration for more info.
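For instance, a sketch using a wildcard to consume from every component whose ID starts with a hypothetical app- prefix:

[sinks.my_sink_id]
type = "aws_s3"
inputs = [ "app-*" ]   # hypothetical IDs: matches app-logs, app-traces, etc.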
key_prefix
common optional string template (default: date=%F/)
A prefix to apply to all object keys. This is useful for partitioning objects, such as by date. Use / if you want this to be the root S3 "folder". Example values:
"date=%F/"
"date=%F/hour=%H/"
"year=%Y/month=%m/day=%d/"
"application_id={{ application_id }}/date=%F/"
proxy
optional object
Configures an HTTP(S) proxy for outgoing requests.

proxy.http
optional string literal
The URL to proxy HTTP requests through.

proxy.https
optional string literal
The URL to proxy HTTPS requests through.

proxy.no_proxy
optional [string]
A list of hosts to avoid proxying. Allowed patterns here include:

Pattern | Example match
---|---
Domain names | example.com matches requests to example.com
Wildcard domains | .example.com matches requests to example.com and its subdomains
IP addresses | 127.0.0.1 matches requests to 127.0.0.1
CIDR blocks | 192.168.0.0/16 matches requests to any IP address in this range
Splat | * matches all hosts
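A minimal sketch, assuming a hypothetical proxy host proxy.internal:

[sinks.my_sink_id.proxy]
http = "http://proxy.internal:3128"
https = "http://proxy.internal:3128"
no_proxy = [ "127.0.0.1", "169.254.169.254" ]   # keep local and instance-metadata traffic direct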
request
optional object
Configures the sink request behavior.

request.adaptive_concurrency
optional object
Configures the adaptive request concurrency algorithm.

request.adaptive_concurrency.decrease_ratio
optional float (default: 0.9)

request.adaptive_concurrency.ewma_alpha
optional float (default: 0.7)

request.adaptive_concurrency.rtt_deviation_scale
optional float (default: 2)

request.concurrency
optional uint
A fixed concurrency limit for requests.

request.rate_limit_duration_secs
optional uint (default: 1 second)
The time window used by the rate_limit_num option.

request.rate_limit_num
optional uint (default: 9.223372036854776e+18)
The maximum number of requests allowed within the rate_limit_duration_secs time window.

request.retry_attempts
optional uint (default: 1.8446744073709552e+19)
The maximum number of retries to make for failed requests.

request.retry_initial_backoff_secs
optional uint (default: 1 second)
The amount of time to wait before attempting the first retry for a failed request.

request.retry_max_duration_secs
optional uint (default: 3600 seconds)
The maximum amount of time to wait between retries.

request.timeout_secs
optional uint (default: 60 seconds)
The maximum time a request can take before being aborted.

server_side_encryption
optional string literal enum
The server-side encryption algorithm to apply to the created objects.

Option | Description
---|---
AES256 | 256-bit Advanced Encryption Standard
aws:kms | AWS managed key encryption
ssekms_key_id
optional string literal
If server_side_encryption has the value "aws:kms", this specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that will be used for the created objects. If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data.

storage_class
optional string literal enum (default: STANDARD)
The storage class for the created objects.

Option | Description
---|---
DEEP_ARCHIVE | Use for archiving data that rarely needs to be accessed.
GLACIER | Use for archives where portions of the data might need to be retrieved in minutes.
INTELLIGENT_TIERING | Stores objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequently accessed data.
ONEZONE_IA | Amazon S3 stores the object data in only one Availability Zone.
REDUCED_REDUNDANCY | Designed for noncritical, reproducible data that can be stored with less redundancy than the STANDARD storage class. AWS recommends that you not use this storage class. The STANDARD storage class is more cost effective.
STANDARD | The default storage class. If you don’t specify the storage class when you upload an object, Amazon S3 assigns the STANDARD storage class.
STANDARD_IA | Amazon S3 stores the object data redundantly across multiple geographically separated Availability Zones (similar to the STANDARD storage class).
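An illustrative sketch combining KMS encryption with an infrequent-access storage class (the key ID is the placeholder from the advanced example; other required sink options are omitted):

[sinks.my_sink_id]
server_side_encryption = "aws:kms"
ssekms_key_id = "abcd1234"        # placeholder KMS key ID
storage_class = "STANDARD_IA"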
tls
optional object
Configures the TLS options for connections to the service.

tls.ca_file
optional string literal
Absolute path to an additional CA certificate file, or an inline CA certificate in PEM format.

tls.crt_file
optional string literal
Absolute path to a certificate file used to identify this connection. If this is set, key_file must also be set.

tls.key_file
optional string literal
Absolute path to a private key file used to identify this connection. If this is set, crt_file must also be set.

tls.key_pass
optional string literal
Pass phrase used to unlock the encrypted key file. This has no effect unless key_file is set.

tls.verify_certificate
optional bool (default: true)
If true (the default), Vector will validate the TLS certificate of the remote host.

tls.verify_hostname
optional bool (default: true)
If true (the default), Vector will validate the configured remote host name against the remote host’s TLS certificate. Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
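As a sketch, assuming certificate material at hypothetical paths:

[sinks.my_sink_id.tls]
ca_file = "/etc/vector/certs/ca.pem"          # hypothetical path
crt_file = "/etc/vector/certs/client.pem"     # if set, key_file must also be set
key_file = "/etc/vector/certs/client.key"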
Environment variables
AWS_ACCESS_KEY_ID (common optional string literal)
AWS_CONFIG_FILE (common optional string literal; default: ~/.aws/config)
AWS_CREDENTIAL_EXPIRATION (common optional string literal)
AWS_DEFAULT_REGION (common optional string literal)
AWS_PROFILE (common optional string literal; default: default)
AWS_ROLE_SESSION_NAME (common optional string literal)
AWS_SECRET_ACCESS_KEY (common optional string literal)
AWS_SESSION_TOKEN (common optional string literal)
AWS_SHARED_CREDENTIALS_FILE (common optional string literal; default: ~/.aws/credentials)
Telemetry
Metrics
This component emits the following metrics. Each metric carries a component_id tag; the deprecated instance tag is also present and its value is the same as component_id.

buffer_byte_size (gauge)
buffer_discarded_events_total (counter)
buffer_events (gauge)
buffer_received_event_bytes_total (counter)
buffer_received_events_total (counter)
buffer_sent_event_bytes_total (counter)
buffer_sent_events_total (counter)
component_received_event_bytes_total (counter)
component_received_events_count (histogram)
component_received_events_total (counter)
component_sent_bytes_total (counter)
component_sent_event_bytes_total (counter)
component_sent_events_total (counter)
events_discarded_total (counter)
events_in_total (counter; deprecated, use component_received_events_total instead)
processing_errors_total (counter)
utilization (gauge)

Permissions
Policy | Required for | Required when
---|---|---
s3:HeadBucket | Healthcheck |
s3:PutObject | Operation |
How it works
AWS authentication
Vector checks for AWS credentials in the following order:
1. The access_key_id and secret_access_key options.
2. The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
3. The AWS credentials file (usually located at ~/.aws/credentials).
4. The IAM instance profile (only works if running on an EC2 instance with an instance profile/role). Requires IMDSv2 to be enabled. For EKS, you may need to increase the metadata token response hop limit to 2.
Note that use of credentials_process in AWS credentials files is not supported, as the underlying AWS SDK currently lacks support.
If no credentials are found, Vector’s health check fails and an error is logged. If your AWS credentials expire, Vector will automatically search for up-to-date credentials in the places (and order) described above.
Obtaining an access key
You can generate an access key through the AWS IAM console and supply it to Vector via the access_key_id and secret_access_key options.

Assuming roles
Vector can assume an AWS IAM role via the assume_role option. This is an optional setting that is helpful for a variety of use cases, such as cross account access.
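For example, a sketch of cross-account role assumption (the role ARN is hypothetical):

[sinks.my_sink_id.auth]
assume_role = "arn:aws:iam::123456789012:role/vector-s3-writer"   # hypothetical role ARN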
Buffers and batches
This component buffers and batches data. Vector treats buffers and batches as sink-specific concepts rather than global ones; this isolates sinks, ensuring service disruptions are contained and delivery guarantees are honored.
Batches are flushed when one of two conditions is met:
- The batch age meets or exceeds the configured timeout_secs.
- The batch size meets or exceeds the configured max_bytes or max_events.
Buffers are controlled via the buffer.* options.
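For example, a sketch of tuned batching and buffering (the thresholds are illustrative, not recommendations):

[sinks.my_sink_id.batch]
max_bytes = 10000000   # flush once a batch reaches ~10 MB...
timeout_secs = 300     # ...or once it is 5 minutes old, whichever comes first

[sinks.my_sink_id.buffer]
type = "memory"
max_events = 1000      # double the 500-event default
when_full = "block"    # apply back pressure rather than dropping data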
Cross account object writing
Vector supports writing objects across AWS accounts by setting the grant_full_control option to the bucket owner's canonical user ID. AWS provides a full tutorial for this use case. If you don't know the bucket owner's canonical ID you can find it by following this tutorial.
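In TOML, the relevant option looks like this (the canonical user ID is the placeholder value from the advanced example above):

grant_full_control = "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"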
Health checks
Require health checks
If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:
vector --config /etc/vector/vector.toml --require-healthy
Disable health checks
If you’d like to disable health checks for this sink, set the healthcheck option to false.
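For example, in TOML:

[sinks.my_sink_id]
healthcheck = false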
Object Access Control List (ACL)
AWS S3 supports access control lists (ACL) for objects. You can set the object-level ACL with one of the acl, grant_full_control, grant_read, grant_read_acp, or grant_write_acp options.

acl.* vs grant_* options
The grant_* options name a specific entity to grant access to. The acl option is one of a set of specific canned ACLs that can only name the owner or world.
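For example, using a canned ACL to give the bucket owner full control over created objects:

acl = "bucket-owner-full-control"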
Object naming
Vector uses two different naming schemes for S3 objects. If compression is enabled via the compression option (gzip, the default), Vector uses this scheme:
<key_prefix><timestamp>-<uuidv4>.log.gz
If compression isn’t enabled, Vector uses this scheme (only the file extension is different):
<key_prefix><timestamp>-<uuidv4>.log
Some sample S3 object names (with and without compression, respectively):
date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log.gz
date=2019-06-18/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log
Vector appends a UUIDv4 token to ensure there are no naming conflicts in the unlikely event that two Vector instances are writing data at the same time.
You can control the resulting name via the key_prefix, filename_time_format, and filename_append_uuid options.
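For instance, a sketch of hour-partitioned keys with a readable timestamp and the UUID suffix disabled (only safe when a single Vector instance writes to the prefix):

key_prefix = "date=%F/hour=%H/"
filename_time_format = "%Y%m%d-%H%M%S"
filename_append_uuid = false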
Object Tags & metadata
Vector currently only supports AWS S3 object tags and does not support object metadata. If you require metadata support, see issue #1694.
We believe tags are more flexible since they are separate from the actual S3 object. You can freely modify tags without modifying the object. Conversely, object metadata requires a full rewrite of the object to make changes.
Partitioning
Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:
[sinks.my-sink]
dynamic_option = "application={{ application_id }}"
In the above example, the application_id for each event will be used to partition outgoing data.
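For this sink, the templatable option is key_prefix; for example, reusing the template from the option reference above:

key_prefix = "application_id={{ application_id }}/date=%F/"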
Rate limits & adaptive concurrency
Adaptive Request Concurrency (ARC)
Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more details.
We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required.
Static concurrency
If Adaptive Request Concurrency is not for you, you can manually set static concurrency
limits by specifying an integer for request.concurrency
:
[sinks.my-sink]
request.concurrency = 10
Rate limits
In addition to limiting request concurrency, you can also limit the overall request throughput via the request.rate_limit_duration_secs and request.rate_limit_num options.
[sinks.my-sink]
request.rate_limit_duration_secs = 1
request.rate_limit_num = 10
These will apply to both adaptive and fixed request.concurrency values.
Retry policy
Vector retries failed requests; you can control this behavior via the request.retry_attempts and request.retry_initial_backoff_secs options.

Server-Side Encryption (SSE)
AWS S3 can encrypt objects at rest with server-side encryption, enabled via the server_side_encryption option.

Storage class
AWS S3 offers several storage classes for created objects, selected via the storage_class option.
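For example, a sketch directing rarely accessed archives to Glacier with SSE enabled (other required sink options are omitted):

[sinks.my_sink_id]
storage_class = "GLACIER"
server_side_encryption = "AES256"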