subsquid/loadtest-k6

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

43 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

loadtest-portal

Automated saturation finder for the portal HTTP streaming service. See docs/specs/2026-05-11-design.md for the design.

What it does

  • Daily CronJob (saturation-daily) walks each query profile's concurrency upward in additive steps.
  • Each step runs as a k6 TestRun (handled by the upstream k6-operator).
  • After each step the controller queries vmselect for throughput and time-weighted error rate.
  • On plateau or error-budget breach, the controller records the last good level and moves to the next profile.
  • Per-profile summary metrics are pushed to vminsert and visualized in Grafana.
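The stepping logic described above can be sketched roughly as follows. This is an illustrative sketch, not the controller's actual code: `measure` stands in for the real vmselect queries, and the step size, error budget, and plateau threshold are assumed values.

```python
# Illustrative sketch of the additive-step saturation search described above.
# The thresholds and step size are assumptions, not the shipped configuration.

def find_saturation(measure, start=10, step=10, max_level=1000,
                    error_budget=0.01, plateau_gain=0.05):
    """Walk concurrency upward; return the last good level."""
    last_good = None
    last_throughput = 0.0
    level = start
    while level <= max_level:
        # In the real controller, this is a vmselect query over the
        # step's measurement window (throughput + time-weighted errors).
        throughput, error_rate = measure(level)
        if error_rate > error_budget:
            break  # error-budget breach: keep the previous level
        if last_good is not None and throughput < last_throughput * (1 + plateau_gain):
            break  # plateau: extra concurrency no longer buys throughput
        last_good, last_throughput = level, throughput
        level += step  # additive step
    return last_good
```

Against a synthetic service whose throughput caps out, the search stops one step past the knee and reports the last level that still improved throughput within the error budget.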

Components

| Path | What |
| --- | --- |
| `controller/` | Python controller image (`saturation`) |
| `k6/stream.js` | Per-user k6 streaming script |
| `chart/` | Helm chart: namespace, RBAC, ConfigMaps, CronJob |
| `grafana/dashboard.json` | Grafana dashboard (datasource UID `victoria`) |

Prerequisites in the target cluster

  1. VictoriaMetrics with vmselect and vminsert reachable in-cluster.
  2. k6-operator installed (upstream chart): helm install k6-operator grafana/k6-operator -n k6-operator-system --create-namespace.
  3. portal deployed in a load environment, exposing the /datasets/... API.

Build the controller image

cd controller
docker build -t <your-registry>/loadtest-portal-controller:<tag> .
docker push <your-registry>/loadtest-portal-controller:<tag>

Install the chart

Author a values.yaml with your profiles:

controller:
  image:
    repository: <your-registry>/loadtest-portal-controller
    tag: <tag>
portal:
  namespace: mainnet-load-portal
  deployment: portal
  baseUrl: http://portal.mainnet-load-portal:8080
profiles:
  - name: evm-topic-scan
    dataset: ethereum-mainnet
    stream_url: /datasets/ethereum-mainnet/stream
    blocks_per_request: 100
    query: { ... }

Install:

helm install loadtest-portal ./chart -f values.yaml

Trigger a manual run (e.g. on a release)

kubectl -n portal-loadtest create job --from=cronjob/saturation-daily \
  run-release-$(date +%s) -- env RUN_TYPE=release VERSION=<release-tag>

(RUN_TYPE and VERSION overrides are picked up by the controller. Without overrides, the controller resolves VERSION from the portal Deployment image tag.)
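The image-tag fallback amounts to parsing the tag out of the Deployment's container image reference. A minimal sketch, assuming the controller fetches the image string via the Kubernetes API first; `image_tag` and its default are hypothetical names, not the controller's API:

```python
def image_tag(image: str, default: str = "unknown") -> str:
    """Extract the tag from a container image reference.

    Only a ':' after the last '/' is a tag separator, so registries
    with ports (e.g. registry:5000/portal:v1) are handled correctly.
    """
    last_segment = image.rsplit("/", 1)[-1]
    if ":" in last_segment:
        return last_segment.rsplit(":", 1)[-1]
    return default
```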

Import the Grafana dashboard

curl -X POST -H "Content-Type: application/json" \
  -u admin:<pass> \
  http://grafana.example.com/api/dashboards/db \
  -d @grafana/dashboard.json

Tuning a profile

A new profile should have blocks_per_request set so a healthy 200-status response lands in ~3–6 seconds (so a 30s measurement window contains multiple completions per user). If you're not sure, start with blocks_per_request: 10 and adjust based on a smoke run.
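The scaling arithmetic behind that advice can be sketched as below. The 3–6 s window comes from the text above; the helper name is made up, and the sketch assumes response time grows roughly linearly with blocks requested, which is only a first approximation for a streaming endpoint:

```python
def suggest_blocks_per_request(smoke_blocks, smoke_seconds,
                               target_seconds=4.5, lo=3.0, hi=6.0):
    """Scale blocks_per_request so a healthy response lands in ~3-6 s."""
    per_block = smoke_seconds / smoke_blocks   # seconds per block, from the smoke run
    suggestion = max(1, round(target_seconds / per_block))
    expected = suggestion * per_block          # predicted response time at suggestion
    return suggestion, lo <= expected <= hi
```

For example, if the `blocks_per_request: 10` smoke run returns in 0.9 s, the linear estimate lands at 50 blocks per request, squarely inside the window.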

If a saturation test plateaus at a suspiciously round number, check container_cpu_usage_seconds_total on the runner pods — the load generator may be the bottleneck. Increase runner.parallelism (more pods, fewer VUs per pod) and re-test.
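A hedged example of that check as a MetricsQL/PromQL query against vmselect; the namespace matches the manual-trigger command above, but the `saturation-.*` pod name pattern is an assumption about how the runner pods are named:

```promql
sum by (pod) (
  rate(container_cpu_usage_seconds_total{namespace="portal-loadtest", pod=~"saturation-.*"}[1m])
)
```

If the per-pod rate sits near the pod's CPU limit during a step, the generator, not the portal, is the bottleneck.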

Known limitations

  • Only gzip compression in v1. Real clients prefer zstd; saturation numbers will differ once zstd is enabled.
  • No locking: before triggering a manual run, the operator must confirm that no run is already active.

See docs/specs/2026-05-11-design.md § "Known limitations and follow-ups".
