Minimal server that converts https://models.dev/api.json into RSS on demand.
Built with Hono so the same app runs on Node and Cloudflare Workers.
Node runtime can optionally emit OpenTelemetry traces and metrics.
The Docker image in this repo is for the Node server only and publishes linux/amd64 and linux/arm64 variants.
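The "same app on Node and Workers" claim rests on Web-standard request handling. A minimal sketch of that idea (illustrative only, not this repo's actual code, and written without the Hono dependency):

```typescript
// Sketch of the portability idea: a single Web-standard fetch handler can
// serve both Cloudflare Workers (which call `fetch` directly) and Node
// (via an adapter such as @hono/node-server). Hono wraps this pattern.
export const handler = async (req: Request): Promise<Response> => {
  const { pathname } = new URL(req.url);
  if (pathname === "/health") {
    // Liveness status, no upstream call.
    return Response.json({ status: "ok" });
  }
  return new Response("usage: GET /rss", { status: 200 });
};

// Workers consume this export shape; on Node an adapter bridges the same
// handler onto an http server.
export default { fetch: handler };
```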
```sh
npm run start

# to test
npm run test
```

Local check URLs (Node server):

- http://localhost:3000/
- http://localhost:3000/health
- http://localhost:3000/rss
Optional environment variables:
- `PORT` (default: `3000`)
- `MAX_ITEMS` (default: `1000`) limits RSS item count
- `FEED_BASE_URL` (optional) overrides the feed links base URL
- `OTEL_ENABLED` set to `true` to enable Node-only OpenTelemetry
- `OTEL_EXPORTER_OTLP_ENDPOINT` shared OTLP backend endpoint for traces and metrics
- `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` optional trace endpoint when not using the shared endpoint
- `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT` optional metrics endpoint when not using the shared endpoint
- `OTEL_SERVICE_NAME` (optional) overrides the default service name `models.dev-rss`
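A hedged sketch of how the documented defaults could be applied when reading the environment (the function and type names here are assumptions, not the repo's actual module):

```typescript
interface ServerConfig {
  port: number;
  maxItems: number;
  feedBaseUrl?: string;
}

// Reads the documented env vars, falling back to the documented defaults.
export function readConfig(env: Record<string, string | undefined>): ServerConfig {
  const port = Number(env.PORT ?? "3000");          // default: 3000
  const maxItems = Number(env.MAX_ITEMS ?? "1000"); // default: 1000, caps RSS items
  return {
    port: Number.isFinite(port) ? port : 3000,
    maxItems: Number.isFinite(maxItems) ? maxItems : 1000,
    feedBaseUrl: env.FEED_BASE_URL, // optional override for feed links
  };
}
```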
Build for your local platform:
```sh
docker build -t models.dev-rss:local .
```

Build a specific target architecture:

```sh
docker build --platform linux/amd64 -t models.dev-rss:amd64 .
docker build --platform linux/arm64 -t models.dev-rss:arm64 .
```

Run it:
```sh
docker run --rm \
  -p 3000:3000 \
  models.dev-rss:local
```

All Node env vars work the same in Docker:
```sh
docker run --rm \
  -p 8080:8080 \
  -e PORT=8080 \
  -e MAX_ITEMS=250 \
  -e FEED_BASE_URL=https://rss.example.com \
  -e OTEL_ENABLED=true \
  -e OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.example/v1 \
  -e OTEL_SERVICE_NAME=models-dev-rss \
  models.dev-rss:local
```

If you change `PORT`, match the published port mapping to the container port.
If your host architecture differs from the target, pass `--platform linux/amd64` or `--platform linux/arm64` explicitly.
GitHub Actions publishes a multi-arch image to ghcr.io/flexdinesh/models.dev-rss.
Workflow:
Publish Docker image (amd64 + arm64)
Tags:
- `sha-<full-commit-sha>` on every `main` push
- `latest` on the default branch
Pull and run the published image:
```sh
docker pull ghcr.io/flexdinesh/models.dev-rss:latest
docker run --rm \
  -p 3000:3000 \
  ghcr.io/flexdinesh/models.dev-rss:latest
```

If you need a specific published target, pull or run with `--platform linux/amd64` or `--platform linux/arm64`.
- Node server only. Worker runtime stays uninstrumented.
- Disabled by default.
- Uses OpenTelemetry auto-instrumentation.
- When `OTEL_ENABLED=true`, you must set either:
  - `OTEL_EXPORTER_OTLP_ENDPOINT`, or
  - both `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` and `OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`
- If OTel config is invalid or SDK init fails, the server logs a warning and keeps serving traffic.
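The endpoint rule above can be sketched as a small validation helper (an assumed name and shape for illustration, not the repo's actual code): either the shared endpoint is set, or both the traces and metrics endpoints are, otherwise the config is treated as incomplete.

```typescript
type OtelEndpoints =
  | { shared: string }
  | { traces: string; metrics: string };

// Returns the resolved OTLP endpoints, or null when OTel is disabled or
// the configuration is incomplete (the server would warn and keep serving).
export function resolveOtelEndpoints(
  env: Record<string, string | undefined>
): OtelEndpoints | null {
  if (env.OTEL_ENABLED !== "true") return null; // disabled by default
  if (env.OTEL_EXPORTER_OTLP_ENDPOINT) {
    return { shared: env.OTEL_EXPORTER_OTLP_ENDPOINT };
  }
  const traces = env.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT;
  const metrics = env.OTEL_EXPORTER_OTLP_METRICS_ENDPOINT;
  if (traces && metrics) return { traces, metrics };
  return null; // invalid: missing shared endpoint and missing a split endpoint
}
```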
Example:
```sh
OTEL_ENABLED=true \
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.example/v1 \
OTEL_SERVICE_NAME=models-dev-rss \
npm run start
```

- `GET /` plain text usage hint
- `GET /health` returns JSON liveness status without calling upstream
- `GET /rss` fetches `api.json`, converts it to RSS 2.0, returns `application/rss+xml`
- `GET /rss?providerId=openai&providerId=openrouter` filters the feed to matching upstream provider ids and appends the matched provider names to the channel title
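The `providerId` filtering described above can be sketched like this (the types and function name are assumptions for illustration, not the repo's actual code): keep only providers whose id matches, and append the matched names to the channel title.

```typescript
interface Provider {
  id: string;   // upstream provider id, e.g. "openai"
  name: string; // display name appended to the channel title
}

// Filters providers by the requested ids; with no filter, everything passes
// through and the title is unchanged.
export function filterProviders(
  providers: Provider[],
  providerIds: string[],
  baseTitle: string
): { providers: Provider[]; title: string } {
  if (providerIds.length === 0) return { providers, title: baseTitle };
  const wanted = new Set(providerIds);
  const matched = providers.filter((p) => wanted.has(p.id));
  const names = matched.map((p) => p.name).join(", ");
  return {
    providers: matched,
    title: names ? `${baseTitle} (${names})` : baseTitle,
  };
}
```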
```sh
npm run dev
npm run deploy
```

Local check URLs (Wrangler dev):

- http://localhost:8787/
- http://localhost:8787/health
- http://localhost:8787/rss
For Worker env vars such as `MAX_ITEMS` and `FEED_BASE_URL`, add them under `[vars]` in `wrangler.toml`:

```toml
[vars]
MAX_ITEMS = "1000"
FEED_BASE_URL = "https://your-domain.example"
```