
UN-3403 [FEAT] Agentic table extractor plugin with multi-agent LLM-powered table extraction #1914

Merged
jaseemjaskp merged 108 commits into main from feat/agentic-table-extractor
Apr 28, 2026
Conversation

@harini-venkataraman
Contributor

What

  • Adds a new Agentic Table Extractor plugin that uses multi-agent LLM orchestration to detect, extract, and structure tabular data from documents
  • Introduces AgenticTableSettings CRUD backend (pluggable app) with per-prompt configuration for the extractor (LLM adapter, page range, parallel pages, highlight toggle)
  • Adds frontend components: AgenticTableSettings modal for configuring the extractor and AgenticTableChecklist for real-time prompt readiness validation
  • Integrates the extractor into the existing Prompt Studio IDE execution pipeline via Celery async dispatch

Why

  • Existing table extraction approaches (regex, template-based) struggle with complex, unstructured, or multi-format tables across diverse document types
  • An agentic approach with specialized agents (presence detection, table detection, content extraction, code generation, code execution) enables more accurate and schema-conforming extraction
  • This is a cloud-only feature that extends Prompt Studio's enforce type options with agentic_table
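
The agent roles listed above (presence detection, table detection, content extraction, code generation, code execution) form a pipeline. As a rough illustration only, a chained-agent orchestration might look like the sketch below; the agent functions and state keys here are hypothetical, since the real orchestration lives in the cloud plugin and is not part of this diff:

```python
from typing import Callable


def run_agent_pipeline(document: str, agents: list[Callable[[dict], dict]]) -> dict:
    """Chain specialized agents, each enriching a shared state dict."""
    state = {"document": document}
    for agent in agents:
        state = agent(state)
    return state


def presence_detector(state: dict) -> dict:
    # Toy heuristic: does the document look like it contains a table at all?
    return {**state, "has_table": "|" in state["document"]}


def content_extractor(state: dict) -> dict:
    # Extract rows only when the presence agent found a table.
    rows = (
        [line.split("|") for line in state["document"].splitlines()]
        if state["has_table"]
        else []
    )
    return {**state, "rows": rows}


result = run_agent_pipeline("a|1\nb|2", [presence_detector, content_extractor])
```

Each agent only reads and extends the shared state, which is what lets later agents (e.g. code generation) build on earlier detections.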

How

Backend

  • New agentic_table_settings_v2 pluggable app with model, serializer, views, URL routing, and validation service
  • AgenticTableSettingsViewSet — full CRUD with update_or_create semantics; returns saved instance (with id) so frontend can PATCH
  • PromptValidationView — LLM-powered prompt analysis endpoint that checks whether a prompt contains target table, JSON structure, and instructions; uses get_or_create to avoid 404 chicken-and-egg issues
  • Payload modifier extended to build agentic_table execution payloads with adapter UUIDs from profile
  • Celery dispatch via backend app with dedicated agentic_table queue
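
The PromptValidationView check described above is LLM-powered; purely as an illustrative stand-in (function name and keyword criteria are made up here, not the actual implementation), a heuristic version of the three-part readiness check could look like:

```python
def check_prompt_readiness(prompt: str) -> dict:
    """Heuristic stand-in for the LLM-powered readiness check.

    The real PromptValidationView asks an LLM whether the prompt names a
    target table, describes the expected JSON structure, and gives
    extraction instructions; here each is approximated with keywords.
    """
    text = prompt.lower()
    return {
        "has_target_table": "table" in text,
        "has_json_structure": "json" in text or "{" in prompt,
        "has_instructions": any(w in text for w in ("extract", "return", "list")),
    }


result = check_prompt_readiness(
    'Extract the "Invoice Items" table and return JSON like '
    '{"rows": [{"item": str, "amount": float}]}'
)
all_ready = all(result.values())
```

The frontend checklist renders one checkbox per key, and a prompt is runnable only when all three are satisfied.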

Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)

  • Low risk to existing features. All changes are additive:
    • New pluggable app with its own URL namespace (/prompt-studio/prompt/agentic-table/)
    • New worker plugin registered under the agentic_table enforce type — no existing enforce types are modified
    • Frontend components only render when enforceType === "agentic_table" is selected
    • The create view status code change (201 for new, 200 for update) is internal to this feature
    • Payload modifier changes are gated behind the agentic_table type check
  • Possible concern: The helm chart changes add a new queue — existing deployments will need to pick up the new worker configuration

Relevant Docs

  • UN-3266 Jira ticket

Related Issues or PRs

- UN-3403

Dependencies Versions / Env Variables

  • No new environment variables required
  • Requires a configured LiteLLM adapter instance for the validation and extraction to function

Notes on Testing

Backend Tests

Run the agentic table settings test suite:

cd backend && .venv/bin/python -m pytest pluggable_apps/apps/agentic_table_settings_v2/ -v

Manual Testing

  1. Settings persistence: Open agentic table settings gear -> save -> refresh -> reopen -> verify values persist and "Update" button shows
  2. Checklist persistence: Type prompt with all 3 components -> wait for green checkboxes -> refresh -> verify checkboxes restore immediately
  3. Validation flow: Select agentic_table on a fresh prompt card -> type prompt -> verify no 404 -> configure LLM adapter -> verify checkboxes update
  4. End-to-end extraction: Configure a prompt with agentic_table enforce type, set up LLM adapter, run extraction on a document with tables

Screenshots

Attached in respective cloud PR.

...

Checklist

I have read and understood the Contribution Guidelines.

harini-venkataraman and others added 30 commits February 19, 2026 20:39
Conflicts resolved:
- docker-compose.yaml: Use main's dedicated dashboard_metric_events queue for worker-metrics
- PromptCard.jsx: Keep tool_id matching condition from our async socket feature
- PromptRun.jsx: Merge useEffect import from main with our branch
- ToolIde.jsx: Keep fire-and-forget socket approach (spinner waits for socket event)
- SocketMessages.js: Keep both session-store and socket-custom-tool imports + updateCusToolMessages dep
- SocketContext.js: Keep simpler path-based socket connection approach
- usePromptRun.js: Keep Celery fire-and-forget with socket delivery over polling
- setupProxy.js: Accept main's deletion (migrated to Vite)

greptile-apps Bot commented Apr 15, 2026

Greptile Summary

This PR introduces the Agentic Table Extractor as a cloud-only feature, adding a new agentic_table enforce type backed by a multi-agent LLM pipeline, a dedicated Celery queue/executor, frontend checklist/settings components, and a complete_vision multimodal SDK method. All changes are properly additive and gated behind the new enforce type.

  • P1 – TableExtractionSettingsBtn type guard removed (PromptCardItems.jsx line 397): the enforceType === TABLE guard is gone, so the settings button now renders for every prompt type when the cloud plugin is present; correctness depends entirely on the plugin filtering internally, which cannot be verified here.
  • P1 – AgenticTableChecklist lacks an enforce-type guard (noted in previous review): the checklist mounts for all prompt types and calls onReadinessChange, which can flip isAgenticTableReady to false and silently block run buttons on non-agentic prompts.

Confidence Score: 3/5

Hold until the AgenticTableChecklist enforce-type guard and TableExtractionSettingsBtn guard removal are resolved; both can silently disable run buttons on non-agentic prompts.

Two P1 issues affect the frontend: AgenticTableChecklist calling onReadinessChange for non-agentic prompts can block execution on all prompt types, and the removed enforceType === TABLE guard on TableExtractionSettingsBtn makes the button appear for every enforce type unless the cloud plugin guards internally.

frontend/src/components/custom-tools/prompt-card/PromptCardItems.jsx — both the AgenticTableChecklist guard and the TableExtractionSettingsBtn guard removal need attention before merge.

Important Files Changed

Filename Overview
frontend/src/components/custom-tools/prompt-card/PromptCardItems.jsx Adds AgenticTableChecklist plugin slot and isAgenticTableReady state. Removes the enforceType === TABLE guard from TableExtractionSettingsBtn, making it render for all prompt types — filtering now depends solely on plugin internals.
workers/file_processing/structure_tool_task.py Partitions outputs into agentic/regular, validates agentic settings, dispatches each agentic prompt to a dedicated executor, then optionally runs the legacy pipeline. Missing log_events_id in agentic ExecutionContext may prevent IDE log streaming.
workers/ide_callback/tasks.py Reshapes agentic executor output to map the tables list under prompt_key. Replaces outputs wholesale, discarding any sibling keys the executor might return alongside tables.
backend/prompt_studio/prompt_studio_core_v2/views.py Adds an agentic-table fast path in fetch_response that builds the payload via the cloud plugin, dispatches to a dedicated Celery queue, and returns 202. The is_first_prompt_run query is copied identically from the existing non-agentic path.
unstract/sdk1/src/unstract/sdk1/llm.py Adds complete_vision for multimodal (text + image) completions. Follows the same structure as complete() — error handling, usage recording, and LLMResponseCompat wrapping all match the existing pattern.
workers/executor/executors/legacy_executor.py Adds a defensive agentic_table skip guard in _apply_type_conversion and refactors email handling to use the shared _convert_scalar_answer helper, aligned with updated tests.
backend/prompt_studio/prompt_studio_v2/migrations/0014_alter_toolstudioprompt_enforce_type.py Adds agentic_table to the enforce_type choices in the migration, correctly chaining from 0013.
workers/tests/test_answer_prompt.py Updates NA-sanitization test expectations from preserved to None, consistent with the refactored _sanitize_null_values behavior change.
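
The agentic/regular partition described for workers/file_processing/structure_tool_task.py can be sketched as follows; the key name "type" and the sample output shapes are assumptions for illustration, not the actual schema:

```python
def partition_outputs(outputs: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split tool outputs into agentic-table prompts and everything else,
    mirroring the agentic/regular split done before dispatching executors."""
    agentic = [o for o in outputs if o.get("type") == "agentic_table"]
    regular = [o for o in outputs if o.get("type") != "agentic_table"]
    return agentic, regular


outputs = [
    {"name": "invoice_items", "type": "agentic_table"},
    {"name": "customer_name", "type": "text"},
]
agentic, regular = partition_outputs(outputs)
```

Each agentic entry is then dispatched to the dedicated executor, while the regular list optionally flows through the legacy pipeline.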

Sequence Diagram

sequenceDiagram
    participant UI as Prompt Studio UI
    participant BE as Backend (views.py)
    participant Plugin as Cloud Plugin
    participant Celery as Celery (agentic_table queue)
    participant Executor as AgenticTable Executor
    participant CB as IDE Callback Worker

    UI->>BE: POST /fetch_response (enforce_type=agentic_table)
    BE->>Plugin: build_agentic_table_payload(...)
    Plugin-->>BE: context, cb_kwargs
    BE->>Celery: dispatch_with_callback(context, on_success=ide_prompt_complete)
    BE-->>UI: 202 Accepted {task_id, run_id}
    Celery->>Executor: execute table extraction (page-by-page)
    Executor-->>CB: {tables, page_count, ...}
    CB->>CB: reshape outputs[prompt_key] = tables
    CB->>BE: update_prompt_output(outputs)
    BE-->>UI: WebSocket event (run complete)

Comments Outside Diff (3)

  1. frontend/src/components/custom-tools/prompt-card/PromptCardItems.jsx, line 397-402 (link)

    P1 TableExtractionSettingsBtn renders for all enforce types after guard removal

    The enforceType === TABLE guard was dropped, so TableExtractionSettingsBtn now mounts for every prompt type (text, number, boolean, etc.) whenever the cloud plugin is available. Filtering now depends entirely on the plugin component's internal logic — if the plugin renders the button unconditionally, users will see a "Table Extraction Settings" gear on every prompt card regardless of type.

    The enforceType prop is still forwarded to the component, so the fix can live inside the plugin; but removing the OSS-side guard without a corresponding guard in the plugin (which can't be verified here) is a regression path that silently shows the button where it shouldn't appear.

  2. workers/file_processing/structure_tool_task.py, line 455-463

    log_events_id absent from agentic table ExecutionContext

    The legacy ExecutionContext includes log_events_id=StateStore.get("LOG_EVENTS_ID") or "" so the executor can stream log events back to the IDE. The agentic table context omits this field entirely. During an IDE agentic-table run, the executor worker won't know which log-events channel to write to, so real-time log lines won't appear in the Prompt Studio UI.

    Consider adding log_events_id=StateStore.get("LOG_EVENTS_ID") or "" to the at_ctx constructor to match the legacy pipeline.

  3. workers/ide_callback/tasks.py, line 846-851

    Wholesale outputs replacement drops sibling executor keys

    The reshape replaces outputs entirely with a single-key dict, discarding any other fields the agentic executor might return alongside "tables" (e.g. page counts, partial-failure info). An in-place remap that only moves the tables value under the prompt key would be safer and leave other keys intact for future use.
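The safer remap suggested for workers/ide_callback/tasks.py, nesting the whole payload under the prompt key instead of replacing it with only the tables list, can be sketched like this (the payload shape with "tables" and "page_count" follows the review's description and is an assumption here):

```python
def reshape_agentic_output(outputs: dict, prompt_key: str) -> dict:
    """Nest the full executor payload under the prompt key.

    Unlike `{prompt_key: outputs["tables"]}`, this keeps sibling keys such
    as page_count available to downstream consumers.
    """
    return {prompt_key: outputs}


payload = {"tables": [{"rows": [[1, 2]]}], "page_count": 3}
reshaped = reshape_agentic_output(payload, "invoice_items")
```

Downstream code that only wants the tables can still read reshaped[prompt_key]["tables"], while nothing else is lost.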
@harini-venkataraman harini-venkataraman marked this pull request as ready for review April 15, 2026 09:22
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
frontend/src/components/custom-tools/prompt-card/PromptCardItems.jsx (1)

300-306: ⚠️ Potential issue | 🟡 Minor

Keep the table-settings button behind an enforce-type gate.

This now renders the settings entry for every prompt as soon as the plugin is installed, including text/number/email prompts. That is confusing at best, and it makes it easier to save table-specific config on incompatible prompt types.

🧹 Nitpick comments (5)
unstract/sdk1/src/unstract/sdk1/llm.py (1)

390-390: Avoid per-call global mutation of litellm.drop_params.

Line 390 reassigns a module-global already initialized at Line 33; this contradicts the module-level intent to avoid repeated global mutation per request.

♻️ Proposed fix
-            litellm.drop_params = True
frontend/src/hooks/usePromptRun.js (1)

19-23: Prefer a config-driven timeout instead of a fixed 16-minute constant.

Line 23 can silently drift from server adapter settings across environments. Consider sourcing this value from backend-exposed config (with buffer applied client-side) to avoid premature UI timeout regressions after infra changes.

docker/docker-compose.yaml (1)

532-532: Good queue addition—mirror this default in all deployment targets.

Line 532 is correct for local/dev, but please ensure Helm/chart and runtime env defaults include celery_executor_agentic_table as well, or agentic-table jobs can remain unconsumed in some environments.

backend/prompt_studio/prompt_studio_output_manager_v2/output_manager_helper.py (1)

173-179: Use centralized enforce-type constants here to avoid string drift.

The new agentic_table branch is correct, but this block is still string-literal based. Switching to shared constants will prevent future typo/divergence bugs.

♻️ Suggested refactor
+from prompt_studio.prompt_studio_core_v2.constants import (
+    ToolStudioPromptKeys as TSPKeys,
+)
...
-            if prompt.enforce_type in {
-                "json",
-                "table",
-                "record",
-                "line-item",
-                "agentic_table",
-            }:
+            if prompt.enforce_type in {
+                TSPKeys.JSON,
+                TSPKeys.TABLE,
+                TSPKeys.RECORD,
+                TSPKeys.LINE_ITEM,
+                TSPKeys.AGENTIC_TABLE,
+            }:
                 output = json.dumps(output)
workers/file_processing/structure_tool_task.py (1)

402-442: Consider defensive access for llm and name keys to provide clearer error messages.

Lines 414 and 442 use direct key access (at_output["llm"], at_output[_SK.NAME]) which will raise KeyError with a generic traceback if missing. Since the validation block (lines 302-313) only checks agentic_table_settings, these fields aren't validated beforehand.

If the export process guarantees these keys, this is acceptable. Otherwise, wrapping in explicit checks would produce actionable error messages matching the style at lines 305-313.

🔧 Optional: Add explicit validation for required output keys
     for at_output in agentic_table_outputs:
         at_settings = at_output.get("agentic_table_settings") or {}
+        if not at_output.get(_SK.NAME) or not at_output.get("llm"):
+            return ExecutionResult.failure(
+                error=(
+                    f"Agentic table output is missing required 'name' or 'llm' key. "
+                    f"Re-export the tool from Prompt Studio."
+                )
+            ).to_dict()
         if not at_settings.get("target_table") or not at_settings.get("json_structure"):
Actionable comments (4)
backend/prompt_studio/prompt_studio_core_v2/prompt_studio_helper.py (1)

1574-1590: Exclude agentic-table prompts from the single-pass filter.

The single-pass filter currently excludes only TSPKeys.TABLE and TSPKeys.RECORD, so agentic-table prompts can be bundled into legacy single-pass execution and silently skipped in legacy_executor. Also exclude prompts where enforce_type == TSPKeys.AGENTIC_TABLE (the same symbol used in the single-prompt branch) so they follow the payload_modifier_plugin path.

frontend/src/components/custom-tools/prompt-card/PromptCardItems.jsx (1)

94-95: Scope isAgenticTableReady to agentic_table prompts.

The isAgenticTableReady state is applied globally, so it can block non-agentic prompts. Initialize and update it only when the current prompt's type is agentic_table, reset it when the prompt type changes away from agentic_table, and update the readers of this flag (Header, PromptOutput, and the setIsAgenticTableReady usage in PromptCardItems.jsx) so run buttons are disabled only when the current prompt is agentic and readiness is false.

workers/executor/executors/legacy_executor.py (1)

1873-1885: Raise on misrouted agentic_table prompts instead of silently returning.

The guard silently returns when output_type == "agentic_table", leaving structured_output[prompt_name] unset. Raise an explicit exception (e.g. RuntimeError or ValueError) that includes prompt_name and states the prompt was misrouted and should have been dispatched to the agentic_table executor; the logger.warning call can stay so the error is recorded before raising.

workers/ide_callback/tasks.py (1)

395-403: Preserve the full executor payload when reshaping.

The cb.get("is_agentic_table") branch replaces the payload with only outputs["tables"], discarding fields like page_count and headers. When prompt_key is set, nest the entire payload instead (outputs = {prompt_key: outputs}, wrapping the original value as-is if it isn't a dict) so update_prompt_output() receives the complete agentic-table result.
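The fail-visibly guard requested for workers/executor/executors/legacy_executor.py can be sketched as below; the function and variable names are illustrative, not the actual code:

```python
def apply_type_conversion(prompt_name: str, output_type: str, value):
    """Convert a prompt answer, refusing misrouted agentic_table prompts.

    Instead of silently returning (which leaves the prompt's entry in
    structured_output unset), a misroute raises with an actionable message.
    """
    if output_type == "agentic_table":
        raise ValueError(
            f"Prompt '{prompt_name}' has enforce type 'agentic_table' and "
            "should have been dispatched to the agentic_table executor."
        )
    return value  # other enforce types: conversion elided in this sketch


try:
    apply_type_conversion("invoice_items", "agentic_table", None)
    raised = False
except ValueError:
    raised = True
```

The point of raising rather than returning is that the run fails with the prompt name in the message, instead of producing a result that silently lacks one key.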
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 745f3b34-3732-4f3c-9564-7de5c201cfcd

📥 Commits

Reviewing files that changed from the base of the PR and between 6383b10 and d69a8f0.

📒 Files selected for processing (21)
  • backend/prompt_studio/prompt_studio_core_v2/constants.py
  • backend/prompt_studio/prompt_studio_core_v2/prompt_studio_helper.py
  • backend/prompt_studio/prompt_studio_core_v2/static/select_choices.json
  • backend/prompt_studio/prompt_studio_core_v2/views.py
  • backend/prompt_studio/prompt_studio_output_manager_v2/output_manager_helper.py
  • backend/prompt_studio/prompt_studio_registry_v2/constants.py
  • backend/prompt_studio/prompt_studio_registry_v2/prompt_studio_registry_helper.py
  • backend/prompt_studio/prompt_studio_v2/migrations/0014_alter_toolstudioprompt_enforce_type.py
  • backend/prompt_studio/prompt_studio_v2/models.py
  • docker/docker-compose.yaml
  • frontend/src/components/custom-tools/prompt-card/Header.jsx
  • frontend/src/components/custom-tools/prompt-card/PromptCardItems.jsx
  • frontend/src/components/custom-tools/prompt-card/PromptOutput.jsx
  • frontend/src/hooks/usePromptRun.js
  • unstract/sdk1/src/unstract/sdk1/llm.py
  • workers/executor/executors/legacy_executor.py
  • workers/executor/executors/retrievers/fusion.py
  • workers/executor/executors/retrievers/keyword_table.py
  • workers/file_processing/structure_tool_task.py
  • workers/ide_callback/tasks.py
  • workers/tests/test_answer_prompt.py

@chandrasekharan-zipstack
Contributor

Additional Review Findings

🔵 LOW: _sanitize_null_values behavior change affects all enforce types

File: workers/executor/executors/legacy_executor.py

Top-level "NA" strings are now converted to None. Tests were updated to match, but this is a global behavioral change for all enforce types, not just the new agentic_table type. Downstream consumers (deployed tools, ETL pipelines, connectors) that currently receive the string "NA" will now receive null instead.

Worth confirming this was intentional and won't break existing deployed tool outputs.
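For reference, the new top-level behavior can be sketched as follows — a minimal stand-in assuming the semantics described above (case-insensitive top-level "NA" strings become None); `sanitize_null_values` here is illustrative, not the actual `legacy_executor` code:

```python
def sanitize_null_values(output: dict) -> dict:
    """Convert top-level 'NA' string values (case-insensitive) to None."""
    return {
        key: None if isinstance(value, str) and value.strip().lower() == "na" else value
        for key, value in output.items()
    }
```

Note this only touches top-level values; nested structures pass through untouched, which is the part downstream consumers would need to confirm against their expectations.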


🔵 LOW: Socket timeout bump is global (5 → 16 min)

File: frontend/src/hooks/usePromptRun.js

SOCKET_TIMEOUT_MS changed from 5 minutes to 16 minutes. The comment explains it trails the server-side 900s LLM timeout — makes sense for agentic table extraction which is inherently slow. However, this timeout applies to all prompt runs including simple text/json/number prompts.

A regular prompt that hangs will now take 16 minutes before the UI gives up, instead of 5. Consider either:

  • Making the timeout conditional on enforce_type
  • Or adding a visible progress/elapsed-time indicator so users aren't staring at a spinner

Contributor

@jaseemjaskp left a comment

Additional Review Findings

Beyond what CodeRabbit and Greptile already flagged, here are additional issues found during a deeper review:


Critical

1. Silent incomplete export when payload_modifier plugin is missing
backend/prompt_studio/prompt_studio_registry_v2/prompt_studio_registry_helper.py — the new elif prompt.enforce_type == AGENTIC_TABLE block (around line 375)

When exporting an agentic_table prompt without the payload_modifier plugin available, the if payload_modifier_plugin: guard silently skips the call to export_agentic_table_settings. The export succeeds without agentic_table_settings, and the user only discovers this at document-processing time when structure_tool_task.py validation fails with "Re-export the tool from Prompt Studio."

This is a "fail later" anti-pattern — the failure should happen at export time:

elif prompt.enforce_type == PromptStudioRegistryKeys.AGENTIC_TABLE:
    payload_modifier_plugin = get_plugin("payload_modifier")
    if not payload_modifier_plugin:
        raise OperationNotSupported(
            "Agentic table export requires the payload_modifier plugin."
        )
    modifier_service = payload_modifier_plugin["service_class"]()
    output = modifier_service.export_agentic_table_settings(...)

Important

2. Missing prompt_key silently skips callback reshaping
workers/ide_callback/tasks.py — around line 397

When is_agentic_table=True but prompt_key is empty/missing, the if prompt_key: guard skips reshaping silently. The raw executor output ({"tables": [...], "page_count": ..., "headers": [...]}) gets persisted as-is with zero logging. Should log an error and fail explicitly rather than persisting malformed data.
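The suggested fail-loudly guard can be sketched like this — an illustrative function, not the actual `tasks.py` code; the reshaping of `{"tables": [...]}` into a per-prompt mapping is an assumption based on the payload shape described above:

```python
import logging

logger = logging.getLogger(__name__)


def reshape_agentic_output(raw_output: dict, prompt_key: str) -> dict:
    # Fail explicitly instead of silently persisting the raw executor
    # payload when the prompt key is missing.
    if not prompt_key:
        logger.error(
            "Agentic table callback received no prompt_key; refusing to "
            "persist raw executor output with keys=%s",
            sorted(raw_output),
        )
        raise ValueError("prompt_key is required to reshape agentic table output")
    # Reshape {"tables": [...], ...} into {prompt_key: [...]} for persistence.
    return {prompt_key: raw_output.get("tables", [])}
```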

3. No error handling around agentic table dispatch in views.py
backend/prompt_studio/prompt_studio_core_v2/views.py — lines ~512-562

The entire agentic table dispatch block (plugin instantiation, build_agentic_table_payload, dispatch_with_callback) runs without any try/except. Compare this to the existing indexing dispatch which wraps dispatch_with_callback in try/except with cleanup logic. If the cloud plugin's build_agentic_table_payload raises or the Celery broker is down, users get an opaque 500 with no actionable information.
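One way to mirror the indexing dispatch pattern — a sketch under the assumption that failures should surface as a single actionable exception type; all names (`AgenticTableDispatchError`, `dispatch_fn`) are illustrative:

```python
import logging

logger = logging.getLogger(__name__)


class AgenticTableDispatchError(Exception):
    """Raised when agentic table dispatch fails; maps to an actionable error response."""


def dispatch_agentic_table(plugin, payload_args: dict, dispatch_fn):
    # Wrap payload construction and Celery dispatch separately so the
    # error message tells the user which stage failed.
    try:
        payload = plugin.build_agentic_table_payload(**payload_args)
    except Exception as exc:
        logger.exception("Failed to build agentic table payload")
        raise AgenticTableDispatchError(f"Payload build failed: {exc}") from exc
    try:
        return dispatch_fn(payload)
    except Exception as exc:
        logger.exception("Failed to dispatch agentic table task")
        raise AgenticTableDispatchError(f"Dispatch failed: {exc}") from exc
```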

4. Single agentic_table failure aborts ALL remaining prompts
workers/file_processing/structure_tool_task.py — around line 430

In the agentic_table dispatch loop, if any single prompt fails (if not at_result.success: return at_result.to_dict()), the function returns immediately — all subsequent agentic prompts AND the entire regular legacy pipeline are abandoned. For a tool with 10 prompts where only 1 is agentic_table, a failure in that one prompt produces zero output for all 10. At minimum, log the broader impact (how many prompts were abandoned).
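A collect-and-continue variant of the loop could look like this — a sketch assuming per-prompt results can be gathered independently; whether partial results are acceptable downstream is a product decision the PR would need to confirm:

```python
import logging

logger = logging.getLogger(__name__)


def run_agentic_prompts(prompts: list[dict], dispatch_fn) -> dict:
    # Collect per-prompt results instead of returning on the first
    # failure, and log how many prompts would otherwise be abandoned.
    results = {}
    for index, prompt in enumerate(prompts):
        result = dispatch_fn(prompt)
        if not result["success"]:
            remaining = len(prompts) - index - 1
            logger.error(
                "Agentic prompt %r failed; continuing with %d remaining prompt(s)",
                prompt["name"],
                remaining,
            )
        results[prompt["name"]] = result
    return results
```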

5. 16-minute SOCKET_TIMEOUT_MS applies to ALL prompt types
frontend/src/hooks/usePromptRun.js — line 19

The timeout increase from 5→16 minutes is global. For regular text/number/email prompts that should complete in seconds, a stalled request now takes 16 minutes to surface a timeout error. Consider making the timeout type-aware (e.g. keep 5min for regular prompts, 16min for agentic_table).


Suggestions

6. Inaccurate comments referencing non-existent terminology

  • workers/executor/executors/legacy_executor.py: References "Layer 2 in workers/file_processing/structure_tool_task.py" — "Layer 2" doesn't appear anywhere in the codebase
  • workers/file_processing/structure_tool_task.py: References "populated by Layer 1 export" — same issue
  • workers/file_processing/structure_tool_task.py (~line 670): Comment says "Use local variables so tool_metadata[_SK.OUTPUTS] is preserved for METADATA.json serialization downstream in _write_tool_result" — this is factually incorrect. _write_tool_result() does not read tool_metadata[_SK.OUTPUTS]. The real reason is to feed only regular prompts into answer_params while keeping the full list for the agentic dispatch loop.

7. complete_vision() docstring omits key behavioral differences from complete()
unstract/sdk1/src/unstract/sdk1/llm.py — around line 488

The docstring says "Same error handling, usage tracking, and metrics as complete()" but doesn't mention:

  • Does NOT support extract_json or post_process_fn post-processing
  • Does NOT prepend the adapter's system prompt (unlike complete() which builds [{"role": "system", ...}, {"role": "user", ...}] internally)

Callers reading "same as complete()" might expect feature parity.
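A possible docstring revision incorporating both points — illustrative wording only; the signature and body below are stand-ins, not the real method in llm.py:

```python
def complete_vision(self, messages, **kwargs):
    """Run a vision completion against the configured LLM adapter.

    Shares error handling, usage tracking, and metrics with ``complete()``,
    with two deliberate differences callers should know about:

    * Does NOT apply ``extract_json`` or ``post_process_fn`` post-processing.
    * Does NOT prepend the adapter's system prompt; pass the full message
      list yourself (``complete()`` builds the system/user pair internally).
    """
    raise NotImplementedError  # body omitted; shown only for the docstring
```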

8. Significant test coverage gaps
No tests added for:

  • complete_vision() — 90-line new public method, zero coverage
  • Structure tool task partitioning/dispatch logic — core routing with zero tests
  • IDE callback agentic table reshaping — 2-3 test cases needed in existing TestIdePromptComplete
  • Legacy executor agentic_table guard — single test case needed

The IDE callback reshaping test is highest ROI: catches critical data-loss scenarios and the test infrastructure already exists in workers/tests/test_ide_callback.py.

Read from SOURCE instead of INFILE when dispatching to the
agentic_table executor. INFILE gets overwritten with JSON output
by the regular pipeline, causing PDFium parse errors when the
agentic_table executor tries to process it as a PDF.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@coderabbitai Bot left a comment
Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@workers/file_processing/structure_tool_task.py`:
- Around line 302-313: The error text claims "target_table / json_structure /
instructions" but the code only validates target_table and json_structure;
update the validation to either include instructions as required or remove it
from the message. Concretely, in the loop over agentic_table_outputs (variables
at_output, at_settings) adjust the if-condition to also check
at_settings.get("instructions") when instructions should be required, or change
the ExecutionResult.failure message (the f-string that references
at_output[_SK.NAME]) to only mention target_table / json_structure if
instructions are optional.
- Around line 492-498: The all-agentic branch currently sets pipeline_elapsed =
0.0 which causes METADATA.json to record zero pipeline time; instead measure
wall-clock time spent in the agentic dispatch and set pipeline_elapsed to that
duration before calling _write_tool_result. Specifically, around the agentic
dispatch loop that produces agentic_results (the "Step 6a" loop), capture start
= time.monotonic() before entering the loop and end = time.monotonic() after it
completes, compute pipeline_elapsed = end - start, and replace the hard-coded
0.0 in the else branch (where structured_output and metadata.agentic_only are
set) so _write_tool_result(...) receives the measured duration. Ensure you
import/time function usage is consistent with the rest of the module.
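The timing fix suggested above can be sketched as follows — a minimal illustration; `run_agentic_dispatch` and `dispatch_fn` are hypothetical names standing in for the "Step 6a" loop:

```python
import time


def run_agentic_dispatch(prompts: list[dict], dispatch_fn):
    # Measure wall-clock time around the agentic dispatch loop so
    # METADATA.json records a real duration instead of a hard-coded 0.0.
    start = time.monotonic()
    results = [dispatch_fn(prompt) for prompt in prompts]
    pipeline_elapsed = time.monotonic() - start
    return results, pipeline_elapsed
```

`time.monotonic()` is the right clock here: it cannot jump backwards on system clock adjustments, so elapsed durations stay non-negative.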
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5707d676-d319-4a8e-a6a3-b00c1cf1f27d

📥 Commits

Reviewing files that changed from the base of the PR and between d69a8f0 and 340963f.

📒 Files selected for processing (1)
  • workers/file_processing/structure_tool_task.py

@harini-venkataraman
Contributor Author

@jaseemjaskp
Thanks for the thorough review! Here's the status on each item:

  1. Silent incomplete export (Critical) — Fixed. Inverted the guard to if not payload_modifier_plugin: raise
    OperationNotSupported(...) so export fails immediately instead of silently producing incomplete output.

  2. Missing prompt_key silently skips reshaping (Important) — Fixed.

  3. No error handling around agentic dispatch (Important) — Fixed.

  4. Single agentic failure aborts all remaining prompts (Important) — Added logging.

  5. 16-minute SOCKET_TIMEOUT_MS applies to all prompt types (Important) — No change. The 16-min client timeout trails the
    server-side 900s LLM adapter timeout. Making it type-aware would require the frontend to know the prompt type before the request
    fires, and lowering it for regular prompts risks reintroducing false timeouts on slower LLMs. The tradeoff (longer timeout on
    stalled regular prompts) is acceptable given the alternative.

  6. Inaccurate comments (Suggestion) — Fixed all three: removed "Layer 2" / "Layer 1" references that don't exist in the
    codebase, and corrected the _write_tool_result claim (it doesn't read tool_metadata[_SK.OUTPUTS]).

  7. complete_vision() docstring (Suggestion) — Deferred. The current docstring is explanatory enough for now.

  8. Test coverage gaps (Suggestion) — Agreed, will address in a follow-up PR.

@harini-venkataraman
Contributor Author

@chandrasekharan-zipstack


_sanitize_null_values behavior change (Low) — This was intentional. The previous behavior of passing literal "NA" strings downstream caused issues with type coercion in destination connectors (e.g., a NUMBER field receiving the string "NA" instead of
null). Converting to None/null is the correct semantic representation and aligns with how JSON consumers expect missing values.
The tests were updated to reflect the intended behavior. That said, good callout — if any deployed tools have downstream logic
explicitly checking for the string "NA", they'd need to handle null instead. We consider this a bugfix rather than a breaking
change since "NA" was never a valid typed value.

Socket timeout bump is global (Low) — Same reasoning as comment #5 from the previous batch: the 16-min client timeout trails the
server-side 900s LLM adapter timeout.

@chandrasekharan-zipstack
Contributor

[Scope] Unrelated behavior changes bundled into this PR

Two changes have nothing to do with agentic_table and aren't mentioned in the PR description:

  1. workers/executor/executors/legacy_executor.py — email enforce type
    The if answer.lower() == "na": ... else: <run LLM extraction prompt> short-circuit is removed. Now "NA" answers will unconditionally be fed into an LLM prompt asking to extract an email from "NA". This is a behavior change, not a refactor — the _convert_scalar_answer helper that replaces it lacks the NA guard as well.

  2. workers/tests/test_answer_prompt.py — NA sanitization assertions flipped
    test_na_string_preserved → test_na_string_becomes_none. test_na_case_insensitive_preserved → test_na_case_insensitive. test_invalid_strategy_skips_retrieval now asserts field_a is None instead of "NA".
    This implies _sanitize_null_values behavior changed — but I don't see that code change in this PR's diff. Either these are tests catching up to a prior undocumented change, or there's a matching code change hiding elsewhere.

Please split these into a separate PR with a proper description of the intended behavior change. Keeps this PR's review/revert history clean and makes git blame actually useful for the agentic_table work.

@harini-venkataraman
Contributor Author

Re: "[Scope] Unrelated behavior changes bundled into this PR" (quoted in full above)

@chandrasekharan-zipstack This was merged to main in a previous commit. Will update the description.

Contributor

@coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@unstract/sdk1/src/unstract/sdk1/llm.py`:
- Around line 409-415: complete_vision currently calls litellm.completion
directly (using completion_kwargs) which bypasses the retry policy used by
complete() and streaming calls; change the implementation so the call to
litellm.completion is invoked through the same retry wrapper used by the
existing complete()/streaming code path (i.e., use the internal retry helper
that complete() uses) and pass messages and completion_kwargs (after popping
"cost_model") through that wrapper so transient provider/rate-limit errors are
retried consistently.
- Around line 401-403: Remove the per-call global mutation of LiteLLM by
deleting the assignment "litellm.drop_params = True" inside the try block in
unstract/sdk1/src/unstract/sdk1/llm.py; rely on the module-level initialization
(set at import) instead, and if per-call behavior is required use a local/config
variable rather than mutating the global litellm.drop_params flag.

In `@workers/file_processing/structure_tool_task.py`:
- Around line 416-437: The code currently does a hard lookup at at_output["llm"]
when building agentic_params, which can raise KeyError for older/malformed
exports; update the handling to either (a) validate presence of "llm" during
readiness checks for agentic_table_outputs (the same place that validates
target_table/json_structure) or (b) change the build of agentic_params in the
loop to access the key safely (e.g., use at_output.get("llm") and if missing
return the same user-friendly failure path via ExecutionResult.failure with a
clear message), ensuring any fallback behavior is documented in comments around
agentic_table_outputs/agentic_params and preserving the existing user-facing
re-export guidance.
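Option (b) from the comment above can be sketched like this — illustrative names throughout; the failure message reuses the existing user-facing re-export guidance described earlier in the thread:

```python
def build_agentic_params(at_output: dict):
    # Access the "llm" key safely so older/malformed exports surface a
    # clear, user-actionable error instead of a raw KeyError.
    llm_config = at_output.get("llm")
    if llm_config is None:
        return None, (
            f"Prompt '{at_output.get('name', '<unknown>')}' is missing its LLM "
            "configuration. Re-export the tool from Prompt Studio."
        )
    return {"llm": llm_config, "prompt": at_output.get("prompt")}, None
```

Validating "llm" up front during the readiness checks (option (a)) would catch the same problem one step earlier, at the same place target_table/json_structure are validated.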
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8344653e-1517-4cc7-b3e5-051e7675662a

📥 Commits

Reviewing files that changed from the base of the PR and between 340963f and 611ec9a.

📒 Files selected for processing (2)
  • unstract/sdk1/src/unstract/sdk1/llm.py
  • workers/file_processing/structure_tool_task.py

Signed-off-by: harini-venkataraman <115449948+harini-venkataraman@users.noreply.github.com>
@github-actions
Contributor

Frontend Lint Report (Biome)

All checks passed! No linting or formatting issues found.

@sonarqubecloud

Quality Gate failed

Failed conditions
7.0% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

@github-actions
Contributor

Test Results

Summary
  • Runner Tests: 11 passed, 0 failed (11 total)
  • SDK1 Tests: 230 passed, 0 failed (230 total)

Runner Tests - Full Report
All 11 tests in runner/src/unstract/runner/clients/test_docker.py passed:

  • test_logs, test_cleanup, test_cleanup_skip, test_client_init, test_get_image_exists, test_get_image, test_get_container_run_config, test_get_container_run_config_without_mount, test_run_container, test_get_image_for_sidecar, test_sidecar_container

TOTAL: 11 passed
SDK1 Tests - Full Report
tests/file_storage/test_impl_rm.py — 7 passed

  • TestRmHappyPath: test_bulk_delete_succeeds
  • TestRmFallback: test_missing_md5_triggers_individual_delete_via_rm_file, test_fallback_continues_on_per_file_error, test_fallback_swallows_rmdir_error, test_non_md5_error_propagates, test_md5_error_without_recursive_propagates
  • TestFallbackDoesNotReenterBulkDelete: test_only_singular_delete_called

tests/patches/test_litellm_cohere_timeout.py — 6 passed

  • TestPatchedEmbeddingSyncTimeoutForwarding: test_timeout_passed_to_client_post, test_none_timeout_passed_to_client_post, test_httpx_timeout_object_forwarded
  • TestMonkeyPatchApplied: test_cohere_handler_patched, test_bedrock_handler_patched, test_patch_module_loaded_via_embedding_import

tests/test_execution.py — all passed

  • TestExecutionContext: test_round_trip_serialization, test_json_serializable, test_enum_values_normalized, test_string_values_accepted, test_auto_generates_request_id, test_explicit_request_id_preserved, test_optional_organization_id, test_empty_executor_params_default, test_complex_executor_params, test_validation_rejects_empty_required_fields (4 parametrized cases), test_all_operations_accepted, test_from_dict_missing_optional_fields
  • TestExecutionResult: test_success_round_trip, test_failure_round_trip, test_json_serializable, test_failure_requires_error_message, test_success_allows_no_error, test_success_rejects_error, test_failure_factory, test_failure_factory_no_metadata, test_error_none_in_success_dict, test_error_in_failure_dict, test_default_empty_dicts, test_from_dict_missing_optional_fields, test_response_contract_extract, test_response_contract_index, test_response_contract_answer_prompt
  • TestBaseExecutor: test_cannot_instantiate_abstract, test_concrete_subclass_works, test_execute_returns_result
  • TestExecutorRegistry: test_register_and_get, test_get_returns_fresh_instance, test_register_as_decorator, test_list_executors, test_list_executors_empty, test_get_unknown_raises_key_error, test_get_unknown_lists_available, test_duplicate_name_raises_value_error, test_register_non_subclass_raises_type_error, test_register_non_class_raises_type_error, test_clear, test_execute_through_registry
  • TestExecutionOrchestrator: test_dispatches_to_correct_executor, test_unknown_executor_returns_failure, test_executor_exception_returns_failure, test_exception_result_has_elapsed_metadata, test_successful_result_passed_through, test_executor_returning_failure_is_not_wrapped
  • TestExecutionDispatcher: test_dispatch_sends_task_and_returns_result, test_dispatch_uses_default_timeout, test_dispatch_timeout_from_env, test_dispatch_explicit_timeout_overrides_env, test_dispatch_timeout_returns_failure, test_dispatch_generic_exception_returns_failure, test_dispatch_async_returns_task_id, test_dispatch_no_app_raises_value_error
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_async\_no\_app\_raises\_value\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_failure\_result\_from\_executor}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_context\_serialized\_correctly}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_sends\_link\_and\_link\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_success\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_error\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_no\_callbacks}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_returns\_async\_result}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_no\_app\_raises\_value\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_context\_serialized}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_custom\_task\_id}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutionDispatcher.test\_dispatch\_with\_callback\_no\_task\_id\_omits\_kwarg}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_platform\_api\_key\_returned}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_platform\_api\_key\_missing\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_other\_env\_var\_from\_environ}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_missing\_env\_var\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_empty\_env\_var\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_log\_routes\_to\_logging}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_log\_respects\_level}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_error\_and\_exit\_raises\_sdk\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_execution.py}}$$ $$\textcolor{#23d18b}{\tt{TestExecutorToolShim.test\_stream\_error\_and\_exit\_wraps\_original}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_model\_prefixes\_when\_missing}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_model\_does\_not\_double\_prefix}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_model\_blank\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_thinking\_disabled\_by\_default}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_excludes\_control\_fields\_from\_model}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_thinking\_enabled\_with\_budget}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_thinking\_overrides\_user\_temperature}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_thinking\_enabled\_without\_budget\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_thinking\_budget\_tokens\_invalid\_type\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_thinking\_budget\_tokens\_too\_small\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_preserves\_existing\_thinking\_config}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_validate\_does\_not\_mutate\_input}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_thinking\_controls\_not\_pydantic\_fields}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_api\_key\_is\_required}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_adapter\_identity}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_schema\_required\_fields}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_schema\_enable\_thinking\_default\_false}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_adapter.py}}$$ $$\textcolor{#23d18b}{\tt{test\_schema\_budget\_tokens\_conditional}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_adapter\_registration}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_get\_id\_format}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_get\_adapter\_type}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_get\_name}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_get\_provider}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_json\_schema\_loads}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_json\_schema\_required\_fields}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_json\_schema\_no\_batch\_size\_default}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_json\_schema\_api\_key\_password\_format}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_json\_schema\_model\_default}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_model\_adds\_prefix}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_model\_idempotent}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_model\_does\_not\_mutate\_input}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_does\_not\_mutate\_input}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_model\_empty\_string\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_model\_whitespace\_only\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_model\_none\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_model\_missing\_key\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_empty\_model\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_none\_model\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_missing\_api\_key\_raises}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_calls\_validate\_model}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_embed\_batch\_size\_none\_by\_default}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_embed\_batch\_size\_preserved}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_strips\_extra\_fields}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_validate\_includes\_base\_fields}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_gemini\_embedding.py}}$$ $$\textcolor{#23d18b}{\tt{TestGeminiEmbeddingAdapter.test\_metadata}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatFromLlm.test\_from\_llm\_reuses\_llm\_instance}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatFromLlm.test\_from\_llm\_returns\_llmcompat\_instance}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatFromLlm.test\_from\_llm\_sets\_model\_name}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatFromLlm.test\_from\_llm\_does\_not\_call\_init}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_complete\_delegates\_to\_llm}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_chat\_delegates\_to\_llm\_complete}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_chat\_forwards\_kwargs\_to\_llm}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_complete\_forwards\_kwargs\_to\_llm}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_acomplete\_delegates\_to\_llm}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_achat\_delegates\_to\_llm\_acomplete}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_stream\_chat\_not\_implemented}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_stream\_complete\_not\_implemented}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_astream\_chat\_not\_implemented}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_astream\_complete\_not\_implemented}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_metadata\_returns\_emulated\_type}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_get\_model\_name\_delegates}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_get\_metrics\_delegates}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestLLMCompatDelegation.test\_test\_connection\_delegates}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestEmulatedTypes.test\_message\_role\_values}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestEmulatedTypes.test\_chat\_message\_defaults}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestEmulatedTypes.test\_chat\_response\_message\_access}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestEmulatedTypes.test\_completion\_response\_text}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestEmulatedTypes.test\_llm\_metadata\_defaults}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestMessagesToPrompt.test\_single\_user\_message}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestMessagesToPrompt.test\_none\_content\_becomes\_empty\_string}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestMessagesToPrompt.test\_preserves\_all\_messages}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestMessagesToPrompt.test\_multi\_turn\_conversation}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestMessagesToPrompt.test\_empty\_messages\_returns\_empty\_string}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_llm\_compat.py}}$$ $$\textcolor{#23d18b}{\tt{TestMessagesToPrompt.test\_string\_role\_fallback}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_success\_on\_first\_attempt}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_retry\_on\_connection\_error}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_non\_retryable\_http\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_retryable\_http\_errors}}$$ $$\textcolor{#23d18b}{\tt{3}}$$ $$\textcolor{#23d18b}{\tt{3}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_post\_method\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_platform.py}}$$ $$\textcolor{#23d18b}{\tt{TestPlatformHelperRetry.test\_retry\_logging}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_prompt.py}}$$ $$\textcolor{#23d18b}{\tt{TestPromptToolRetry.test\_success\_on\_first\_attempt}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_prompt.py}}$$ $$\textcolor{#23d18b}{\tt{TestPromptToolRetry.test\_retry\_on\_errors}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/test\_prompt.py}}$$ $$\textcolor{#23d18b}{\tt{TestPromptToolRetry.test\_wrapper\_methods\_retry}}$$ $$\textcolor{#23d18b}{\tt{4}}$$ $$\textcolor{#23d18b}{\tt{4}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_connection\_error\_is\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_timeout\_is\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_retryable\_status\_codes}}$$ $$\textcolor{#23d18b}{\tt{3}}$$ $$\textcolor{#23d18b}{\tt{3}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_non\_retryable\_status\_codes}}$$ $$\textcolor{#23d18b}{\tt{5}}$$ $$\textcolor{#23d18b}{\tt{5}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_http\_error\_without\_response}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_os\_error\_retryable\_errno}}$$ $$\textcolor{#23d18b}{\tt{5}}$$ $$\textcolor{#23d18b}{\tt{5}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_os\_error\_non\_retryable\_errno}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestIsRetryableError.test\_other\_exception\_not\_retryable}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_exponential\_backoff\_without\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_exponential\_backoff\_with\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_max\_delay\_cap}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCalculateDelay.test\_max\_delay\_cap\_with\_jitter}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_successful\_call\_first\_attempt}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_retry\_after\_transient\_failure}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_max\_retries\_exceeded}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_retry\_with\_custom\_predicate}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_no\_retry\_with\_predicate\_false}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryWithExponentialBackoff.test\_exception\_not\_in\_tuple\_not\_retried}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_default\_configuration}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_environment\_variable\_configuration}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_max\_retries}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_base\_delay}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_invalid\_multiplier}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_jitter\_values}}$$ $$\textcolor{#23d18b}{\tt{2}}$$ $$\textcolor{#23d18b}{\tt{2}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_custom\_exceptions\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_custom\_predicate\_only}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_both\_exceptions\_and\_predicate}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestCreateRetryDecorator.test\_exceptions\_match\_but\_predicate\_false}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_retry\_platform\_service\_call\_exists}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_retry\_prompt\_service\_call\_exists}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_platform\_service\_decorator\_retries\_on\_connection\_error}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestPreconfiguredDecorators.test\_prompt\_service\_decorator\_retries\_on\_timeout}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_warning\_logged\_on\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_info\_logged\_on\_success\_after\_retry}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{tests/utils/test\_retry\_utils.py}}$$ $$\textcolor{#23d18b}{\tt{TestRetryLogging.test\_exception\_logged\_on\_giving\_up}}$$ $$\textcolor{#23d18b}{\tt{1}}$$ $$\textcolor{#23d18b}{\tt{1}}$$
$$\textcolor{#23d18b}{\tt{TOTAL}}$$ $$\textcolor{#23d18b}{\tt{230}}$$ $$\textcolor{#23d18b}{\tt{230}}$$


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
workers/file_processing/structure_tool_task.py (1)

433-468: Add per-prompt dispatch logging for the agentic loop.

The legacy branch logs a single Dispatching structure_pipeline: ... line at 475–482, but the agentic loop dispatches one executor call per prompt with no equivalent log. Given each call can run for minutes (EXECUTOR_TIMEOUT=3600s) and may be repeated across multiple prompts, a brief logger.info before/after dispatcher.dispatch per prompt (with prompt name and elapsed time) would meaningfully aid triage of stuck or slow runs without changing behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@workers/file_processing/structure_tool_task.py` around lines 433 - 468, Add
per-prompt dispatch logging inside the agentic loop around the call to
dispatcher.dispatch so each prompt logs when it starts and when it finishes with
elapsed time; specifically, just before calling dispatcher.dispatch(at_ctx,
timeout=EXECUTOR_TIMEOUT) log a brief logger.info including the prompt
identifier (use at_output[_SK.NAME] or at_settings.get("target_table") as
available) and execution_id/file_execution_id, capture start = time.time(), then
after the dispatch completes log another logger.info with the same identifiers
plus success status and elapsed = time.time() - start; keep behavior unchanged
(still return at_result.to_dict() on failure) and add only lightweight log lines
near the dispatcher.dispatch call in the agentic_table_outputs loop.
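The logging the review suggests could be sketched as follows. This is a minimal illustration, not the repository's code: `dispatcher`, `at_ctx`, and the identifier names are placeholders standing in for the real objects in `structure_tool_task.py`, and only `EXECUTOR_TIMEOUT=3600` is taken from the review itself.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

EXECUTOR_TIMEOUT = 3600  # seconds, per the review comment


def dispatch_with_logging(dispatcher, at_ctx, prompt_name, execution_id):
    """Wrap dispatcher.dispatch with per-prompt start/finish log lines."""
    logger.info(
        "Dispatching agentic_table prompt %r (execution_id=%s)",
        prompt_name, execution_id,
    )
    start = time.time()
    result = dispatcher.dispatch(at_ctx, timeout=EXECUTOR_TIMEOUT)
    logger.info(
        "Finished agentic_table prompt %r (execution_id=%s) in %.2fs",
        prompt_name, execution_id, time.time() - start,
    )
    return result


class _FakeDispatcher:
    """Stand-in for the real executor dispatcher, for illustration only."""

    def dispatch(self, ctx, timeout):
        return {"ok": True, "ctx": ctx}


result = dispatch_with_logging(_FakeDispatcher(), {"page": 1}, "invoice_table", "exec-1")
```

Behavior is unchanged on failure paths; only two lightweight log lines bracket each dispatch.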

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e937efe2-09e7-4f5c-b781-6350d62c7d14

📥 Commits

Reviewing files that changed from the base of the PR and between 611ec9a and 585199b.

📒 Files selected for processing (2)
  • workers/executor/executors/legacy_executor.py
  • workers/file_processing/structure_tool_task.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • workers/executor/executors/legacy_executor.py

Comment thread backend/prompt_studio/prompt_studio_core_v2/views.py
Comment thread docker/docker-compose.yaml
Comment thread workers/tests/test_answer_prompt.py
@jaseemjaskp jaseemjaskp self-requested a review April 27, 2026 11:40
@jaseemjaskp jaseemjaskp merged commit acf5573 into main Apr 28, 2026
8 of 9 checks passed
@jaseemjaskp jaseemjaskp deleted the feat/agentic-table-extractor branch April 28, 2026 05:28
kirtimanmishrazipstack pushed a commit that referenced this pull request Apr 29, 2026
…wered table extraction (#1914)

* Execution backend - revamp

* async flow

* Streaming progress to FE

* Removing multi hop in Prompt studio ide and structure tool

* UN-3234 [FIX] Add beta tag to agentic prompt studio navigation item

* Added executors for agentic prompt studio

* Added executors for agentic prompt studio

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* Removed redundant envs

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Removed redundant envs

* adding worker for callbacks

* adding worker for callbacks

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Pluggable apps and plugins to fit the new async prompt execution architecture

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Pluggable apps and plugins to fit the new async prompt execution architecture

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Pluggable apps and plugins to fit the new async prompt execution architecture

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* adding worker for callbacks

* fix: write output files in agentic extraction pipeline

Agentic extraction returned early without writing INFILE (JSON) or
METADATA.json, causing destination connectors to read the original PDF
and fail with "Expected tool output type: TXT, got: application/pdf".

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: replace hardcoded /tmp paths with secure temp dirs in tests (#1850)

* UN-3266 fix: replace hardcoded /tmp paths with secure temp dirs in tests

Replace hardcoded /tmp/ paths (SonarCloud S5443 security hotspots) with
pytest's tmp_path fixture or module-level tempfile.mkdtemp() constants
in all affected test files to avoid world-writable directory vulnerabilities.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
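The two replacements this commit describes can be sketched as below; the test body and names are illustrative, not taken from the affected files:

```python
import tempfile
from pathlib import Path

# Module-level secure temp dir (created with owner-only permissions),
# replacing a hardcoded "/tmp/..." constant flagged by SonarCloud S5443.
TEST_OUTPUT_DIR = tempfile.mkdtemp(prefix="test_output_")


def test_writes_output(tmp_path):
    # pytest's tmp_path fixture: a unique, per-test Path that pytest
    # creates securely and cleans up automatically.
    out_file = tmp_path / "result.json"
    out_file.write_text('{"ok": true}')
    assert out_file.read_text() == '{"ok": true}'
```

`mkdtemp()` suits module-level constants; the `tmp_path` fixture is preferable inside individual tests because it also handles cleanup.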

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update docs

* UN-3266 fix: remove dead code with undefined names in fetch_response

Remove unreachable code block after the async callback return in
fetch_response that still referenced output_count_before and response
from the old synchronous implementation, causing ruff F821 errors.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Un 3266 fix security hotspot tmp paths (#1851)

* UN-3266 fix: replace hardcoded /tmp paths with secure temp dirs in tests

Replace hardcoded /tmp/ paths (SonarCloud S5443 security hotspots) with
pytest's tmp_path fixture or module-level tempfile.mkdtemp() constants
in all affected test files to avoid world-writable directory vulnerabilities.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve ruff linting failures across multiple files

- B026: pass url positionally in worker_celery.py to avoid star-arg after keyword
- N803: rename MockAsyncResult to mock_async_result in test_tasks.py
- E501/I001: fix long line and import sort in llm_whisperer helper
- ANN401: replace Any with object|None in dispatcher.py; add noqa in test helpers
- F841: remove unused workflow_id and result assignments

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* UN-3266 fix: resolve SonarCloud bugs S2259 and S1244 in PR #1849

- S2259: guard against None after _discover_plugins() in loader.py
  to satisfy static analysis on the dict[str,type]|None field type
- S1244: replace float equality checks with pytest.approx() in
  test_answer_prompt.py and test_phase2h.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
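The S1244 fix replaces exact float comparison with `pytest.approx()`. A minimal illustration (assuming pytest is available; the function is a made-up example, not from the test files):

```python
import pytest


def total_cost(unit_price: float, qty: int) -> float:
    return unit_price * qty


# Exact equality on floats fails due to binary rounding:
# 0.1 * 3 == 0.30000000000000004, not 0.3.
assert total_cost(0.1, 3) != 0.3

# pytest.approx compares within a relative tolerance instead.
assert total_cost(0.1, 3) == pytest.approx(0.3)
```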

* UN-3266 fix: resolve SonarCloud code smells in PR #1849

- S5799: Merge all implicit string concatenations in log messages
  (legacy_executor.py, tasks.py, dispatcher.py, orchestrator.py,
   registry.py, variable_replacement.py, structure_tool_task.py)
- S1192: Extract duplicate literal to _NO_CELERY_APP_MSG constant in
  dispatcher.py
- S1871: Merge identical elif/else branches in tasks.py and
  test_sanity_phase6j.py
- S1186: Add comment to empty stub method in test_sanity_phase6a.py
- S1481: Remove unused local variables in test_sanity_phase6d/e/f/g/h/j
  and test_phase5d.py
- S117: Rename PascalCase local variables to snake_case in
  test_sanity_phase3/5/6i.py
- S5655: Broaden tool type annotation to StreamMixin in
  IndexingUtils.generate_index_key and PlatformHelper.get_adapter_config
- docker:S7031: Merge consecutive RUN instructions in
  worker-unified.Dockerfile
- javascript:S1128: Remove unused pollForCompletion import in
  usePromptRun.js

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: wrap long log message in dispatcher.py to fix E501

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve remaining SonarCloud S117 naming violations

Rename PascalCase local variables to snake_case to comply with S117:

- legacy_executor.py: rename tuple-unpacked _get_prompt_deps() results
  (AnswerPromptService→answer_prompt_svc, RetrievalService→retrieval_svc,
  VariableReplacementService→variable_replacement_svc, LLM→llm_cls,
  EmbeddingCompat→embedding_compat_cls, VectorDB→vector_db_cls) and
  update all downstream usages including _apply_type_conversion and
  _handle_summarize
- test_phase1_log_streaming.py: rename Mock* local variables to
  mock_* snake_case equivalents
- test_sanity_phase3.py: rename MockDispatcher→mock_dispatcher_cls
  and MockShim→mock_shim_cls across all 10 test methods
- test_sanity_phase5.py: rename MockShim→mock_shim, MockX2Text→mock_x2text
  in 6 test methods; MockDispatcher→mock_dispatcher_cls in dispatch test;
  fix LLM_cls→llm_cls, EmbeddingCompat→embedding_compat_cls,
  VectorDB→vector_db_cls in _mock_prompt_deps helper

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: resolve remaining SonarCloud code smells in PR #1849

- test_sanity_phase2/4.py, test_answer_prompt.py: rename PascalCase
  local variables in _mock_prompt_deps/_mock_deps to snake_case
  (RetrievalService→retrieval_svc, VariableReplacementService→
  variable_replacement_svc, Index→index_cls, LLM_cls→llm_cls,
  EmbeddingCompat→embedding_compat_cls, VectorDB→vector_db_cls,
  AnswerPromptService→answer_prompt_svc_cls) — fixes S117
- test_sanity_phase3.py: remove unused local variable "result" — fixes S1481
- structure_tool_task.py: remove redundant json.JSONDecodeError from
  except clause (subclass of ValueError) — fixes S5713
- shared/workflow/execution/service.py: replace generic Exception with
  RuntimeError for structure tool failure — fixes S112
- run-worker-docker.sh: define EXECUTOR_WORKER_TYPE constant and
  replace 10 literal "executor" occurrences — fixes S1192

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve SonarCloud cognitive complexity and code smell violations

- Reduce cognitive complexity in answer_prompt.py:
  - Extract _build_grammar_notes, _run_webhook_postprocess helpers
  - _is_safe_public_url: extracted _resolve_host_addresses helper
  - handle_json: early-return pattern eliminates nesting
  - construct_prompt: delegates grammar loop to _build_grammar_notes
- Reduce cognitive complexity in legacy_executor.py:
  - Extract _execute_single_prompt, _run_table_extraction helpers
  - Extract _run_challenge_if_enabled, _run_evaluation_if_enabled
  - Extract _inject_table_settings, _finalize_pipeline_result
  - Extract _convert_number_answer, _convert_scalar_answer
  - Extract _sanitize_dict_values helper
  - _handle_answer_prompt CC reduced from 50 to ~7
- Reduce CC in structure_tool_task.py: guard-clause refactor
- Reduce CC in backend: dto.py, deployment_helper.py,
  api_deployment_views.py, prompt_studio_helper.py
- Fix S117: rename PascalCase local vars in test_answer_prompt.py
- Fix S1192: extract EXECUTOR_WORKER_TYPE constant in run-worker.sh
- Fix S1172: remove unused params from structure_tool_task.py
- Fix S5713: remove redundant JSONDecodeError in json_repair_helper.py
- Fix S112/S5727 in test_execution.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: remove unused RetrievalStrategy import from _handle_answer_prompt

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: rename UsageHelper params to lowercase (N803)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* UN-3266 fix: resolve remaining SonarCloud issues from check run 66691002192

- Add @staticmethod to _sanitize_null_values (fixes S2325 missing self)
- Reduce _execute_single_prompt params from 25 to 11 (S107)
  by grouping services as deps tuple and extracting exec params
  from context.executor_params
- Add NOSONAR suppression for raise exc in test helper (S112)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UN-3266 fix: remove unused locals in _handle_answer_prompt (F841)

execution_id, file_hash, log_events_id, custom_data are now extracted
inside _execute_single_prompt from context.executor_params.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: resolve Biome linting errors in frontend source files

Auto-fixed 48 lint errors across 56 files: import ordering, block
statements, unused variable prefixing, and formatting issues.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: replace dynamic import of SharePermission with static import in Workflows

Resolves vite build warning about SharePermission.jsx being both
dynamically and statically imported across the codebase.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve SonarCloud warnings in frontend components

- Remove unnecessary try-catch around PostHog event calls
- Flip negated condition in PromptOutput.handleTable for clarity

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Address PR #1849 review comments: fix null guards, dead code, and test drift

- Remove redundant inline `import uuid as _uuid` in views.py (use module-level uuid)
- URL-encode DB_USER in worker_celery.py result backend connection string
- Remove misleading task_queues=[Queue("executor")] from dispatch-only Celery app
- Remove dead `if not tool:` guards after objects.get() (already raises DoesNotExist)
- Move profile_manager/default_profile null checks before first dereference
- Reorder ProfileManager.objects.get before mark_document_indexed in tasks.py
- Handle ProfileManager.DoesNotExist as warning, not hard failure
- Wrap PostHog analytics in try/catch so failures don't block prompt execution
- Handle pending-indexing 200 response in usePromptRun.js (clear RUNNING status)
- Reset formData when metadata is missing in ConfigureDs.jsx
- Fix test_should_skip_extraction tests: function now takes 1 arg (outputs only)
- Fix agentic routing tests: mock X2Text.process, remove stale platform_helper kwarg

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
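The DB_USER URL-encoding fix mentioned above follows a standard pattern; a sketch with hypothetical credentials (the real connection string in worker_celery.py may differ):

```python
from urllib.parse import quote_plus

db_user = "svc@acme"    # hypothetical user containing a reserved '@'
db_pass = "p:ss/w@rd"   # hypothetical password with reserved chars

# Without quote_plus, '@' and ':' inside credentials would be parsed
# as the userinfo/host delimiters of the URL itself.
backend_url = (
    f"db+postgresql://{quote_plus(db_user)}:{quote_plus(db_pass)}"
    f"@localhost:5432/celery_results"
)
```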

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix missing llm_usage_reason for summarize LLM usage tracking

Add PSKeys.LLM_USAGE_REASON to usage_kwargs in _handle_summarize() so
summarization costs appear under summarize_llm in API response metadata.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Fix single-pass extraction routing in LegacyExecutor

- Route _handle_structure_pipeline to _handle_single_pass_extraction when
  is_single_pass=True (was always calling _handle_answer_prompt)
- Delegate _handle_single_pass_extraction to cloud plugin via ExecutorRegistry,
  falling back to _handle_answer_prompt if plugin not installed

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fixing API deployment response mismatches

* Add complete_vision() method to SDK1 LLM for multimodal completions

Adds a new complete_vision() method alongside existing complete() that
accepts pre-built multimodal messages (text + image_url) in OpenAI-style
format. LiteLLM auto-translates for Anthropic/Bedrock/Vertex providers.
This enables the agentic table extractor plugin to send page images
alongside text prompts for VLM-based table detection and extraction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
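The "pre-built multimodal messages" this commit refers to follow the OpenAI-style text + image_url shape. A sketch of how a caller might assemble them (the helper name and PNG bytes are illustrative; the SDK's actual `complete_vision()` signature is not shown here):

```python
import base64


def build_vision_messages(prompt_text: str, png_bytes: bytes) -> list[dict]:
    """Build OpenAI-style multimodal messages (text + image_url parts)."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt_text},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{b64}"},
                },
            ],
        }
    ]


msgs = build_vision_messages("Detect tables on this page.", b"\x89PNG fake bytes")
```

LiteLLM accepts this shape directly and translates it for Anthropic/Bedrock/Vertex, which is what lets the plugin stay provider-agnostic.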

* UN-3266 [FIX] Gate Run button by agentic table readiness checklist

- PromptCardItems loads AgenticTableChecklist plugin and owns the
  isAgenticTableReady state, rendering the checklist above the prompt
  text area and delegating the settings gear visibility to the plugin.
- Header and PromptOutput disable their Run buttons when
  isAgenticTableReady is false (default true for non-agentic types).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [FIX] Use correct primary key field in prompt count subquery (#1905)

ToolStudioPrompt uses prompt_id as its primary key, not id.
Count("id") causes FieldError on the list endpoint (500).

Co-authored-by: Chandrasekharan M <chandrasekharan@zipstack.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* [FIX] Add agentic_table as valid enforce_type choice

The cloud build adds "agentic_table" to the prompt enforce_type
dropdown, but the OSS ToolStudioPrompt model rejected it as an
invalid choice. Add AGENTIC_TABLE to EnforceType and ship a
matching migration so the value can be persisted.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Wire agentic_table enforce_type to executor dispatch

The single-prompt run flow had no branch for prompts with
enforce_type=agentic_table, so clicking Run silently fell through to
the legacy prompt-service path and never invoked the agentic_table
executor. Adds an AGENTIC_TABLE constant to TSPKeys, includes it in
the OperationNotSupported guard, and dispatches to
PayloadModifier.execute_agentic_table when the plugin is available
so the result still flows through _handle_response.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Add agentic_table queue to executor worker defaults

The ExecutionDispatcher derives the queue name from the executor name
(celery_executor_{name}), so dispatches to the agentic_table executor
land on celery_executor_agentic_table. The local docker-compose default
only listed celery_executor_legacy and celery_executor_agentic, so no
worker consumed the new queue and dispatch hung for the full 1-hour
result timeout. Adds the missing queue to the docker-compose default.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Dispatch agentic_table prompts to executor on IDE Run

The IDE Run button was building a legacy answer_prompt payload for
agentic_table prompts, so the agentic table executor was never
invoked. Branch fetch_response on enforce_type so agentic_table
prompts are built via the cloud payload_modifier plugin and
dispatched directly to celery_executor_agentic_table. Add the
enforce_type to the OSS dropdown choices and the JSON-dump set in
OutputManagerHelper so the persisted output is parseable by the FE
table renderer.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* UN-3266 [FIX] Reshape agentic_table executor output in IDE callback

The agentic_table executor returns {"output": {"tables": [...],
"page_count": ..., "headers": [...], ...}}, but
OutputManagerHelper.handle_prompt_output_update reads
outputs[prompt.prompt_key] when persisting prompt output. Without a
reshape the table list never lands under the prompt key and the FE
sees an empty result.

When cb_kwargs carries is_agentic_table=True and prompt_key (set by
the cloud build_agentic_table_payload), reshape outputs to
{prompt_key: tables} before calling update_prompt_output. The
executor itself also shapes its envelope, so this is a defensive
double-keying that keeps the legacy answer_prompt path untouched.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
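The re-keying this commit describes can be sketched as a small pure function; the envelope layout is taken from the commit message, while the function name is illustrative:

```python
def reshape_agentic_output(outputs: dict, prompt_key: str) -> dict:
    """Re-key the agentic_table executor envelope under the prompt key,
    since downstream persistence reads outputs[prompt.prompt_key]."""
    tables = outputs.get("output", {}).get("tables", [])
    return {prompt_key: tables}


envelope = {"output": {"tables": [{"rows": [[1, 2]]}], "page_count": 3}}
reshaped = reshape_agentic_output(envelope, "invoice_items")
# → {"invoice_items": [{"rows": [[1, 2]]}]}
```

Applying it only when `is_agentic_table=True` and `prompt_key` are present in cb_kwargs keeps the legacy answer_prompt path untouched.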

* Fixing timeout issues

* API deployment fixes for Agentic table extractor

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fixing syntax issues

* Fix agentic_table executor reading INFILE after JSON overwrite

Read from SOURCE instead of INFILE when dispatching to the
agentic_table executor. INFILE gets overwritten with JSON output
by the regular pipeline, causing PDFium parse errors when the
agentic_table executor tries to process it as a PDF.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Signed-off-by: harini-venkataraman <115449948+harini-venkataraman@users.noreply.github.com>
Co-authored-by: Ghost Jake <89829542+Deepak-Kesavan@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Ritwik G <100672805+ritwik-g@users.noreply.github.com>
Co-authored-by: Chandrasekharan M <chandrasekharan@zipstack.com>