π€ Hugging FaceΒ Β | Β Β π€ ModelScopeΒ Β | Β Β π Technical ReportΒ Β | Β Β π DemoΒ Β | Β Β π Documentation
Visit our Hugging Face or ModelScope organization (click links above), search checkpoints with names starting with AgentDoG-, and you will find all you need! Enjoy!
AgentDoG is a risk-aware evaluation and guarding framework for autonomous agents. It focuses on trajectory-level risk assessment, aiming to determine whether an agentβs execution trajectory contains safety risks under diverse application scenarios. Unlike single-step content moderation or final-output filtering, AgentDoG analyzes the full execution trace of tool-using agents to detect risks that emerge mid-trajectory.
- π§ Trajectory-Level Monitoring: evaluates multi-step agent executions spanning observations, reasoning, and actions.
- π§© Taxonomy-Guided Diagnosis: provides fine-grained risk labels (risk source, failure mode, and real-world harm) to explain why unsafe behavior occurs. More crucially, AgentDoG diagnoses the root cause of a specific action, tracing it to specific planning steps or tool selections.
- π‘οΈ Flexible Use Cases: can serve as a benchmark, a risk classifier for trajectories, or a guard module in agent systems.
- π₯ State-of-the-Art Performance: Outperforms existing approaches on R-Judge, ASSE-Safety, and ATBench.
| Name | Parameters | BaseModel | Download |
|---|---|---|---|
| AgentDoG-Qwen3-4B | 4B | Qwen3-4B-Instruct-2507 | π€ Hugging Face |
| AgentDoG-Qwen2.5-7B | 7B | Qwen2.5-7B-Instruct | π€ Hugging Face |
| AgentDoG-Llama3.1-8B | 8B | Llama3.1-8B-Instruct | π€ Hugging Face |
| AgentDoG-FG-Qwen3-4B | 4B | Qwen3-4B-Instruct-2507 | π€ Hugging Face |
| AgentDoG-FG-Qwen2.5-7B | 7B | Qwen2.5-7B-Instruct | π€ Hugging Face |
| AgentDoG-FG-Llama3.1-8B | 8B | Llama3.1-8B-Instruct | π€ Hugging Face |
For more details, please refer to Technical Report.
We release ATBench (Agent Trajectory Safety and Security Benchmark) for trajectory-level safety evaluation and fine-grained risk diagnosis.
- Download: π€ Hugging Face Datasets
- Scale: 500 trajectories (250 safe / 250 unsafe), ~8.97 turns per trajectory (~4486 turn interactions)
- Tools: 1575 unique tools appearing in trajectories; an independent unseen-tools library with 2292 tool definitions (no overlap with training tools)
- Labels: binary
safe/unsafe; unsafe trajectories additionally include fine-grained labels (Risk Source, Failure Mode, Real-World Harm)
We adopt a unified, three-dimensional safety taxonomy for agentic systems. It organizes risks along three orthogonal axes, answering: why a risk arises (risk source), how it manifests in behavior (failure mode), and what harm it causes (real-world harm).
- Risk Source: where the threat originates in the agent loop, e.g., user inputs, environmental observations, external tools/APIs, or the agent's internal reasoning.
- Failure Mode: how the unsafe behavior is realized, such as flawed planning, unsafe tool usage, instruction-priority confusion, or unsafe content generation.
- Real-World Harm: the real-world impact, including privacy leakage, financial loss, physical harm, security compromise, or broader societal/psychological harms.
In the current release, the taxonomy includes 8 risk-source categories, 14 failure modes, and 10 real-world harm categories, and is used for fine-grained labeling during training and evaluation.
Figure: Example task instructions for the two AgentDoG classification tasks (trajectory-level evaluation and fine-grained diagnosis).
Prior works (e.g., LlamaGuard, Qwen3Guard) formulate safety moderation as classifying whether the final output in a multi-turn chat is safe. In contrast, AgentDoG defines a different task: diagnosing an entire agent trajectory to determine whether the agent exhibits any unsafe behavior at any point during execution.
Concretely, we consider two tasks:
- Trajectory-level safety evaluation (binary). Given an agent trajectory (a sequence of steps, each step containing an action and an observation), predict
safe/unsafe. A trajectory is labeledunsafeif any step exhibits unsafe behavior; otherwise it issafe. - Fine-grained risk diagnosis. Given an
unsafetrajectory, additionally predict the tuple (Risk Source, Failure Mode, Real-World Harm).
Prompting. Trajectory-level evaluation uses (i) task definition, (ii) agent trajectory, and (iii) output format. Fine-grained diagnosis additionally includes the safety taxonomy for reference and asks the model to output the three labels line by line.
| Task | Prompt Components |
|---|---|
| Trajectory-level safety evaluation | Task Definition + Agent Trajectory + Output Format |
| Fine-grained risk diagnosis | Task Definition + Safety Taxonomy + Agent Trajectory + Output Format |
We use a taxonomy-guided synthesis pipeline to generate realistic, multi-step agent trajectories. Each trajectory is conditioned on a sampled risk tuple (risk source, failure mode, real-world harm), then expanded into a coherent tool-augmented execution and filtered by quality checks.
Figure: Three-stage pipeline for multi-step agent safety trajectory synthesis.
To reflect realistic agent tool use, our tool library is orders of magnitude larger than prior benchmarks. For example, it is about 86x, 55x, and 41x larger than R-Judge, ASSE-Safety, and ASSE-Security, respectively.
Figure: Tool library size compared to existing agent safety benchmarks.
We also track the coverage of the three taxonomy dimensions (risk source, failure mode, and harm type) to ensure balanced and diverse risk distributions in our synthesized data.
Figure: Distribution over risk source, failure mode, and harm type categories.
Our guard models are trained with standard supervised fine-tuning (SFT) on trajectory demonstrations. Given a training set safe/unsafe, and optionally fine-grained labels), we minimize the negative log-likelihood:
We fine-tuned multiple base models: Qwen3-4B-Instruct-2507, Qwen2.5-7B-Instruct, and Llama3.1-8B-Instruct.
-
Evaluated on R-Judge, ASSE-Safety, and ATBench
-
Outperforms step-level baselines in detecting:
- Long-horizon instruction hijacking
- Tool misuse after benign prefixes
-
Strong generalization across:
- Different agent frameworks
- Different LLM backbones
-
Fine-grained label accuracy on ATBench (best of our FG models): Risk Source 82.0%, Failure Mode 32.4%, Harm Type 59.2%
Accuracy comparison (ours + baselines):
| Model | Type | R-Judge | ASSE-Safety | ATBench |
|---|---|---|---|---|
| GPT-5.2 | General | 90.8 | 77.4 | 90.0 |
| Gemini-3-Flash | General | 95.2 | 75.9 | 75.6 |
| Gemini-3-Pro | General | 94.3 | 78.5 | 87.2 |
| QwQ-32B | General | 89.5 | 68.2 | 63.0 |
| Qwen3-235B-A22B-Instruct | General | 85.1 | 77.6 | 84.6 |
| LlamaGuard3-8B | Guard | 61.2 | 54.5 | 53.3 |
| LlamaGuard4-12B | Guard | 63.8 | 56.3 | 58.1 |
| Qwen3-Guard | Guard | 40.6 | 48.2 | 55.3 |
| ShieldAgent | Guard | 81.0 | 79.6 | 76.0 |
| AgentDoG-4B (Ours) | Guard | 91.8 | 80.4 | 92.8 |
| AgentDoG-7B (Ours) | Guard | 91.7 | 79.8 | 87.4 |
| AgentDoG-8B (Ours) | Guard | 78.2 | 81.1 | 87.6 |
Fine-grained label accuracy on ATBench (unsafe trajectories only):
| Model | Risk Source Acc | Failure Mode Acc | Harm Type Acc |
|---|---|---|---|
| Gemini-3-Flash | 38.0 | 22.4 | 34.8 |
| GPT-5.2 | 41.6 | 20.4 | 30.8 |
| Gemini-3-Pro | 36.8 | 17.6 | 32.0 |
| Qwen3-235B-A22B-Instruct-2507 | 19.6 | 17.2 | 38.0 |
| QwQ-32B | 23.2 | 14.4 | 34.8 |
| AgentDoG-FG-4B (Ours) | 82.0 | 32.4 | 58.4 |
| AgentDoG-FG-8B (Ours) | 81.6 | 31.6 | 57.6 |
| AgentDoG-FG-7B (Ours) | 81.2 | 28.8 | 59.2 |
For deployment, you can use sglang>=0.4.6 or vllm>=0.10.0 to create an OpenAI-compatible API endpoint:
SGLang
python -m sglang.launch_server --model-path AI45Research/AgentDoG-Qwen3-4B --port 30000 --context-length 16384
python -m sglang.launch_server --model-path AI45Research/AgentDoG-FG-Qwen3-4B --port 30001 --context-length 16384vLLM
vllm serve AI45Research/AgentDoG-Qwen3-4B --port 8000 --max-model-len 16384
vllm serve AI45Research/AgentDoG-FG-Qwen3-4B --port 8001 --max-model-len 16384Recommended: use prompt templates in prompts/ and run the example script in examples/.
Binary trajectory moderation
python examples/run_openai_moderation.py \
--base-url http://localhost:8000/v1 \
--model AI45Research/AgentDoG-Qwen3-4B \
--trajectory examples/trajectory_sample.json \
--prompt prompts/trajectory_binary.txtFine-grained risk diagnosis
python examples/run_openai_moderation.py \
--base-url http://localhost:8000/v1 \
--model AI45Research/AgentDoG-FG-Qwen3-4B \
--trajectory examples/trajectory_sample.json \
--prompt prompts/trajectory_finegrained.txt \
--taxonomy prompts/taxonomy_finegrained.txtWe also introduce a novel hierarchical framework for Agentic Attribution, designed to unveil the internal drivers behind agent actions beyond simple failure localization. By decomposing interaction trajectories into pivotal components and fine-grained textual evidence, our approach explains why an agent makes specific decisions regardless of the outcome. This framework enhances the transparency and accountability of autonomous systems by identifying key factors such as memory biases and tool outputs.
To evaluate the effectiveness of the proposed agentic attribution framework, we conducted several case studies across diverse scenarios. The figure illustrates how our framework localizes decision drivers across four representative cases. The highlighted regions denote the historical components and fine-grained sentences identified by our framework as the primary decision drivers.
Figure: Illustration of attribution results across two representative scenarios.
Figure: Comparative attribution results between AgentDoG and Basemodel.
You can run the analysis in three steps:
Analyze the contribution of each conversation step.
python component_attri.py \
--model_id "your_model_path" \
--data_dir ./samples \
--output_dir ./resultsPerform fine-grained analysis on the top-K most influential steps.
python sentence_attri.py \
--model_id "your_model_path" \
--traj_file ./samples/xx.json \
--attr_file ./results/xx_attr_trajectory.json \
--output_file ./results/xx_attr_sentence.json \
--top_k 3Create an interactive HTML heatmap.
python case_plot_html.py \
--original_traj_file ./samples/xx.json \
--traj_attr_file ./results/xx_attr_trajectory.json \
--sent_attr_file ./results/xx_attr_sentence.json \
--output_file ./results/xx_visualization.htmlTo run the complete pipeline automatically, configure and run the shell script:
bash run_all_pipeline.shAgentDoG/
βββ README.md
βββ figures/
βββ prompts/
β βββ trajectory_binary.txt
β βββ trajectory_finegrained.txt
β βββ taxonomy_finegrained.txt
βββ examples/
β βββ run_openai_moderation.py
β βββ trajectory_sample.json
βββ AgenticXAI
β βββ case_plot_html.py
β βββ component_attri.py
β βββ README.md
β βββ run_all_pipeline.sh
β βββ samples
β β βββ finance.json
β β βββ resume.json
β β βββ transaction.json
β βββ sentence_attri.py
- Edit prompt templates:
prompts/trajectory_binary.txt,prompts/trajectory_finegrained.txt - Update taxonomy labels:
prompts/taxonomy_finegrained.txt - Change runtime integration:
examples/run_openai_moderation.py
This project is released under the Apache 2.0 License.
If you use AgentDoG in your research, please cite:
@misc{liu2026agentdogdiagnosticguardrailframework,
title={AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security},
author={Dongrui Liu and Qihan Ren and Chen Qian and Shuai Shao and Yuejin Xie and Yu Li and Zhonghao Yang and Haoyu Luo and Peng Wang and Qingyu Liu and Binxin Hu and Ling Tang and Jilin Mei and Dadi Guo and Leitao Yuan and Junyao Yang and Guanxu Chen and Qihao Lin and Yi Yu and Bo Zhang and Jiaxuan Guo and Jie Zhang and Wenqi Shao and Huiqi Deng and Zhiheng Xi and Wenjie Wang and Wenxuan Wang and Wen Shen and Zhikai Chen and Haoyu Xie and Jialing Tao and Juntao Dai and Jiaming Ji and Zhongjie Ba and Linfeng Zhang and Yong Liu and Quanshi Zhang and Lei Zhu and Zhihua Wei and Hui Xue and Chaochao Lu and Jing Shao and Xia Hu},
year={2026},
journal={arXiv preprint arXiv:2601.18491}
}This project builds upon prior work in agent safety, trajectory evaluation, and risk-aware AI systems.









