Conversation
Pull request overview
This PR introduces a firewall management capability to the agent’s command system, wiring a new firewall policy into both the local API server state and the long-polling daemon so firewall-related commands can be executed with config-driven restrictions.
Changes:
- Add a new `configure_firewall` stacker command and implement iptables-based rule management with persistence support.
- Extend agent configuration with an optional `firewall` section and propagate a derived `FirewallPolicy` through `AppState` and the daemon polling context.
- Update test configs and `AppState::new` call sites to include the new firewall-related parameters.
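The plumbing described above (an optional config section distilled into a policy that also exempts the local API port, per the `src/comms/local_api.rs` summary) might look roughly like this. This is an illustrative sketch only; field names and the `from_config` helper are guesses, not the PR's actual definitions.

```rust
// Hypothetical shapes for the new config/policy plumbing. Only the type
// names FirewallConfig and FirewallPolicy come from the PR; everything
// else is assumed for illustration.
#[derive(Debug, Clone, Default)]
struct FirewallConfig {
    enabled: bool,
    allowed_ports: Vec<u16>,
    persist: bool,
}

#[derive(Debug, Clone)]
struct FirewallPolicy {
    enabled: bool,
    allowed_ports: Vec<u16>,
    persist: bool,
    /// The local API port is carried alongside the config-derived fields
    /// so firewall commands can always exempt the agent's own port.
    api_port: u16,
}

impl FirewallPolicy {
    /// Derive the runtime policy from the optional config section plus the
    /// API port, mirroring the "constructs it from config + API port" step.
    fn from_config(cfg: &FirewallConfig, api_port: u16) -> Self {
        Self {
            enabled: cfg.enabled,
            allowed_ports: cfg.allowed_ports.clone(),
            persist: cfg.persist,
            api_port,
        }
    }
}
```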
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| tests/security_integration.rs | Updates test config + AppState::new call to match new state constructor signature. |
| tests/http_routes.rs | Updates test config + router construction to include new firewall config field and AppState::new arg. |
| src/comms/local_api.rs | Adds firewall_policy to AppState, constructs it from config + API port, and passes it into stacker execution. |
| src/commands/stacker.rs | Adds ConfigureFirewall command parsing and plumbs FirewallPolicy into stacker execution path. |
| src/commands/mod.rs | Exposes the new firewall module. |
| src/commands/firewall.rs | Implements firewall command handling, policy checks, rate limiting, iptables rule application, and persistence filtering. |
| src/agent/daemon.rs | Introduces PollingContext (incl. firewall policy) and plumbs it into command execution/reporting. |
| src/agent/config.rs | Adds FirewallConfig + optional firewall field to Config. |
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@copilot open a new pull request to apply changes based on the comments in this thread
PR works?

sounds reasonable

sounds good
@copilot tests do not pass because of the status change. This must be fixed.
```rust
fn determine_status(rules: &[FirewallRuleResult], errors: &[CommandError]) -> String {
    if errors.is_empty() && rules.iter().all(|r| r.applied) {
        // All rules applied successfully, no errors: overall command is a success.
        "success".to_string()
    } else if rules.iter().any(|r| r.applied) {
```
determine_status currently returns only "success"/"failed", but the unit tests below assert "ok"/"partial_success". As written, these tests will fail (and it’s unclear which behavior is intended). Either update the tests to match the "success"/"failed" contract, or change the function to return the detailed status and introduce a separate mapping for CommandResult.status.
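One way to resolve the mismatch is to keep a detailed status for the JSON payload and collapse it for the transport contract. The sketch below is illustrative; the enum and method names are not the PR's actual API.

```rust
/// Detailed outcome for the firewall command's JSON payload (hypothetical).
#[derive(Debug, PartialEq)]
enum DetailedStatus {
    Ok,
    PartialSuccess,
    Failed,
}

impl DetailedStatus {
    /// Collapse the detailed status into the coarse "success" | "failed"
    /// contract expected by CommandResult.status. The payload keeps the
    /// Ok vs PartialSuccess distinction; the transport layer does not.
    fn to_transport_status(&self) -> &'static str {
        match self {
            DetailedStatus::Ok | DetailedStatus::PartialSuccess => "success",
            DetailedStatus::Failed => "failed",
        }
    }
}
```

With a split like this, the unit tests can assert on `DetailedStatus` while consumers of `CommandResult.status` still see only the documented values.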
```rust
if has_docker && !is_public {
    let docker_ok = add_docker_user_rules(rule, &comment, &mut errors).await;
    if !docker_ok {
        warn!(
            port = rule.port,
            "DOCKER-USER rule failed; INPUT rule was applied"
        );
    }
}
```
When applying private-port rules with Docker present, docker_ok can be false (DOCKER-USER rules failed), but the code still records a FirewallRuleResult with applied: true later. This reports success even though container-published traffic may remain unrestricted. Consider marking the rule as not fully applied (or adding a distinct per-rule status/message) when DOCKER-USER rule application fails.
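A minimal sketch of the suggested fix, assuming a `FirewallRuleResult` with `applied` and `message` fields (the helper and its signature are hypothetical, not the PR's code):

```rust
/// Per-rule outcome (field names assumed for illustration).
#[derive(Debug)]
struct FirewallRuleResult {
    port: u16,
    applied: bool,
    message: Option<String>,
}

/// Record a rule result that distinguishes full application from the case
/// where the INPUT rule succeeded but the DOCKER-USER companion failed.
fn record_rule(port: u16, input_ok: bool, docker_ok: Option<bool>) -> FirewallRuleResult {
    match docker_ok {
        // DOCKER-USER rule failed: host traffic is filtered, but
        // container-published traffic may remain unrestricted, so the
        // rule is reported as not fully applied.
        Some(false) => FirewallRuleResult {
            port,
            applied: false,
            message: Some("INPUT rule applied, but DOCKER-USER rule failed".into()),
        },
        _ => FirewallRuleResult {
            port,
            applied: input_ok,
            message: None,
        },
    }
}
```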
```rust
// Docker interface references in POSTROUTING/PREROUTING (docker0, br-*)
if line.contains("docker0") || line.contains("br-") {
    return true;
```
is_docker_rule treats any rule containing the substring "br-" as Docker-managed. This can match non-Docker bridge interfaces (e.g., br-lan) and cause legitimate user rules to be stripped during persistence. Narrow the heuristic (e.g., detect Docker bridge naming patterns or only strip when the interface is known Docker-managed).
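Docker names its user-defined bridge interfaces `br-` followed by the first 12 hex characters of the network ID (e.g. `br-1a2b3c4d5e6f`), so one way to narrow the heuristic is to require that shape. A sketch (the function name is illustrative):

```rust
/// Return true only for interface names Docker itself creates: "docker0"
/// or "br-" plus a 12-character hex network-ID prefix. An interface like
/// "br-lan" no longer matches.
fn is_docker_bridge_iface(iface: &str) -> bool {
    if iface == "docker0" {
        return true;
    }
    match iface.strip_prefix("br-") {
        Some(id) => id.len() == 12 && id.chars().all(|c| c.is_ascii_hexdigit()),
        None => false,
    }
}
```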
```rust
/// Docker-managed chain names that must NOT be persisted.
/// Docker recreates these on every daemon start; saving them leads to stale /
/// duplicate rules and broken container networking after reboot.
const DOCKER_CHAINS: &[&str] = &[
    "DOCKER",
    "DOCKER-ISOLATION-STAGE-1",
    "DOCKER-ISOLATION-STAGE-2",
    "DOCKER-USER",
];
```
persist_rules filters out the DOCKER-USER chain entirely (DOCKER_CHAINS includes it). But this module also adds stacker-managed rules into DOCKER-USER for private ports, so persist: true won’t actually persist the container-traffic restrictions across reboot. Consider persisting stacker-tagged rules in DOCKER-USER (while still stripping Docker’s own chains/rules), or otherwise make it explicit in behavior/response that private-port Docker rules are not persisted.
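A sketch of the "persist stacker-tagged DOCKER-USER rules" option, assuming stacker marks its rules with an iptables comment containing a `stacker:` tag (the marker and function name are illustrative). Note that restoring kept DOCKER-USER rules at boot still depends on the chain existing by the time the saved rules are loaded:

```rust
const DOCKER_CHAINS: &[&str] = &[
    "DOCKER",
    "DOCKER-ISOLATION-STAGE-1",
    "DOCKER-ISOLATION-STAGE-2",
    "DOCKER-USER",
];

/// Decide whether a saved iptables line should be persisted: keep anything
/// outside Docker's chains, and additionally keep stacker-tagged rules in
/// DOCKER-USER so private-port restrictions survive a reboot.
fn should_persist(line: &str) -> bool {
    let in_docker_chain = DOCKER_CHAINS.iter().any(|c| {
        // Chain declaration (":DOCKER-USER - [0:0]") or appended rule.
        line.starts_with(&format!(":{} ", c)) || line.contains(&format!("-A {} ", c))
    });
    if !in_docker_chain {
        return true;
    }
    // Exception: our own rules in DOCKER-USER must survive a reboot.
    line.contains("-A DOCKER-USER ") && line.contains("stacker:")
}
```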
```rust
result.status = status.to_string();
if !errors.is_empty() {
```
handle_flush assigns result.status to values like "ok"/"partial_success", but transport::CommandResult.status is documented as "success" | "failed" | "timeout". This will break dashboard reporting/consumers. Keep the outer CommandResult.status to the supported values (e.g., map ok/partial_success -> "success", and put the detailed status only inside the JSON payload).
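The suggested mapping for the flush path could be as small as this (function name assumed; the detailed string would stay inside the JSON payload):

```rust
/// Map the flush handler's detailed status onto the documented
/// CommandResult.status vocabulary ("success" | "failed" | "timeout").
/// Partial application still reports "success" at the transport level;
/// the JSON payload carries the "partial_success" nuance.
fn to_command_result_status(detailed: &str) -> &'static str {
    match detailed {
        "ok" | "partial_success" => "success",
        "timeout" => "timeout",
        _ => "failed",
    }
}
```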