File sink throughput bottleneck: HTTP source → remap transform → file sink maxes out at ~12k EPS (target: 50k+) #25324
mrSpencer91 asked this question in Q&A (unanswered)
Question
Setup: Logstash forwards Windows event logs via HTTP (json_batch) to Vector on a dedicated archive server. Vector remaps the events (hostname extraction, timestamp parsing) and writes them to per-hostname log files.
Logstash (32 workers, json_batch) → HTTP → Vector (remap) → file sink (~3000 unique files)
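For context, here is a minimal sketch of the pipeline described above; the port, file paths, and field names are assumptions for illustration, not the actual config:

```yaml
sources:
  logstash_in:
    type: http_server
    address: 0.0.0.0:8080          # assumed port; Logstash's http output points here
    decoding:
      codec: json                  # decode the JSON payloads Logstash sends

transforms:
  normalize:
    type: remap
    inputs: ["logstash_in"]
    source: |
      # Hostname extraction: assumes the event carries a "host" field
      .host = string(.host) ?? "unknown-host"
      # Timestamp parsing: fall back to ingest time if the field is missing or malformed
      .timestamp = parse_timestamp(string(.timestamp) ?? "", format: "%+") ?? now()

sinks:
  archive:
    type: file
    inputs: ["normalize"]
    path: "/var/log/archive/{{ host }}.log"   # one file per hostname (~3000 active files)
    encoding:
      codec: json
```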
Environment: Vector 0.44 in Docker, 20 CPUs, 32GB RAM, Linux. The archive server handles no other workload.
Problem: Logstash delivers ~25k EPS to Vector's HTTP source, but the file sink only sustains ~12k EPS, and component utilization on the file sink sits at 100%. We've been iterating on this for a week across multiple test runs and have eliminated several suspects.
What we've ruled out:
- acknowledgements: false

What we're seeing now (memory buffer, ack disabled):
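(For readers following along: "memory buffer, ack disabled" corresponds to sink-level settings roughly like the fragment below; the max_events value is an assumed placeholder, not taken from the real config.)

```yaml
sinks:
  archive:
    # ... type, inputs, path, encoding as in the sketch above
    acknowledgements:
      enabled: false        # end-to-end acknowledgements disabled for this sink
    buffer:
      type: memory          # in-memory buffer rather than disk
      max_events: 10000     # assumed value, not from the original post
      when_full: block
```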
Current config:
Questions:
Any guidance appreciated — happy to share Prometheus metrics exports or the full vector.yaml if helpful.
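The Prometheus metrics offered here (including per-component utilization) would typically come from an internal_metrics source wired to a prometheus_exporter sink, roughly as follows; the listen address is an assumption:

```yaml
sources:
  vector_metrics:
    type: internal_metrics

sinks:
  prometheus:
    type: prometheus_exporter
    inputs: ["vector_metrics"]
    address: 0.0.0.0:9598    # assumed scrape endpoint for Prometheus
```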
Vector Config
Vector Logs
No response