feat(seer): Convert structured LLMContext JSON to markdown for on_page_context #112181
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
Force-pushed 3180081 to 41ba5d4
When the frontend sends a structured LLMContextSnapshot as JSON in `on_page_context` (instead of an ASCII DOM screenshot), the endpoint now detects it and converts it to a markdown string before forwarding to Seer. Non-JSON `on_page_context` (the existing ASCII screenshot) passes through unchanged.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed 41ba5d4 to af62092
```python
def _render_node(node: dict[str, Any], depth: int) -> str:
    """Recursively render an LLMContextSnapshot node and its children as markdown."""
```
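A minimal sketch of how such a renderer might work, assuming a node shape with `tag`, `text`, and `children` keys (the actual LLMContextSnapshot schema isn't shown in this excerpt, so these field names are illustrative):

```python
from typing import Any


def render_node(node: dict[str, Any], depth: int = 1) -> str:
    """Render an assumed snapshot node as markdown, one heading level per depth.

    Assumes each node has optional "tag", "text", and "children" keys;
    the real schema may differ.
    """
    lines: list[str] = []
    heading = "#" * min(depth, 6)  # markdown caps out at six heading levels
    tag = node.get("tag")
    if tag:
        lines.append(f"{heading} {tag}")
    text = node.get("text")
    if text:
        lines.append(text)
    for child in node.get("children", []):
        lines.append(render_node(child, depth + 1))
    return "\n".join(lines)
```

For example, a page node with one widget child would come out as `# page` followed by `## widget`, which is where the depth-vs-`#`-count concern below comes from.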
I think the answer is probably no, but there's a question as to whether some sort of indentation might make sense here. It tends to break markdown, but AI probably won't care tbh. My concern is that a parent with N children may cause the LLMs to kind of lose context of hierarchy later if they're reliant on matching up # counts. This is probably a non-issue, but flagging
Yeah, with increasing depth, matching up `#` counts is probably not best. I'll try this out a bit on the dashboard pages and see. I'll add indents if there seem to be issues.
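If headings do prove lossy at depth, one shape the indentation variant could take is a nested bullet list instead of deeper `#` levels (node fields here are assumed, as above, not the real schema):

```python
from typing import Any


def render_node_indented(node: dict[str, Any], depth: int = 0) -> str:
    """Render an assumed node shape as a nested markdown bullet list,
    encoding hierarchy via two-space indentation per level."""
    indent = "  " * depth
    line = f"{indent}- {node.get('tag', 'node')}"
    text = node.get("text")
    if text:
        line += f": {text}"
    lines = [line]
    for child in node.get("children", []):
        lines.append(render_node_indented(child, depth + 1))
    return "\n".join(lines)
```

Nested lists keep parent/child relationships explicit at any depth, at the cost of the renderer no longer emitting "real" markdown sections.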

When the frontend sends a structured LLMContextSnapshot as JSON in `on_page_context` (instead of the legacy ASCII DOM screenshot), the explorer chat endpoint now detects it and converts it to a readable markdown string before forwarding to Seer. Non-JSON `on_page_context` (the existing ASCII screenshot) passes through unchanged.
This is the backend half of the structured page context feature. The frontend counterpart (conditionally sending JSON instead of ASCII when the `organizations:context-engine-structured-page-context` flag is on and the user is on a dashboards page) will follow in a separate PR, per the frontend/backend split-deploy requirement.
This won't do anything currently, right? Because this is always false: `snapshot = json.loads(on_page_context)`
Follow-up PRs
- raw timeseries data)