
[Enhancement] Support tools in LLM Node#60

Merged
psriramsnc merged 30 commits into main from scratch/new_model_response
Nov 12, 2025

Conversation

@psriramsnc (Collaborator) commented Nov 7, 2025

🚀 Support Tool Calls in LLM Node via New ModelResponse

🧩 Summary

This PR adds comprehensive support for tool calls in the LLM Node via a new ModelResponse abstraction. It enables better agent-style evaluation, structured responses, and multi-modal compatibility, all while maintaining backward compatibility and clean integration across components.


⚙️ Features Implemented

  • 🆕 Added ModelResponse class — All custom models now return a standardized ModelResponse object from the generate_response method.
  • 🧠 Tool call support in LLM Node — Enables evaluation of agentic model behaviors with native tool execution flow.
  • 🪄 Graph post-processor support — Integrates graph-based processing for flexible response handling.
  • 🌐 Enhanced HTTP Client — Supports json_payload as a boolean flag and allows custom headers via model config.
  • 🖼️ Multi-modal test message support — Ping messages now support multimodal formats by default.
  • 💬 Fixed multi-turn chat history injection — Properly injects user, assistant, and tool roles in conversation state.
  • 🧰 Added convert_openai_to_langchain_toolcall utility — Simplifies tool call conversion for chat history injection.
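As a rough sketch of the shape of these changes (the actual field names and class layout in the PR may differ — this is an illustration, not the merged code), a `ModelResponse` carrying tool calls and the OpenAI-to-LangChain conversion utility might look like:

```python
import json
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class ModelResponse:
    """Standardized return type for generate_response (illustrative sketch)."""
    content: Optional[str] = None          # plain-text completion, if any
    tool_calls: list = field(default_factory=list)  # OpenAI-style tool call dicts
    raw: Any = None                        # original provider response, for debugging


def convert_openai_to_langchain_toolcall(openai_call: dict) -> dict:
    """Map an OpenAI-format tool call to a LangChain-style ToolCall dict.

    OpenAI nests the name and JSON-encoded arguments under "function";
    LangChain expects flat "name"/"args"/"id" keys with type "tool_call".
    """
    fn = openai_call["function"]
    return {
        "id": openai_call["id"],
        "name": fn["name"],
        "args": json.loads(fn["arguments"]),
        "type": "tool_call",
    }
```

A model backend would then return `ModelResponse(content=None, tool_calls=[...])` for a tool-invoking turn, and the chat-history injector would convert each call before appending the assistant message to conversation state.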

🚄 Performance Impact

Tool calls are now captured as part of the ModelResponse and persisted in state.
No significant performance impact expected.


🧪 How to Test

  • Run tasks/examples/llm_node_tool_simulation and verify that tool_calls from the LLM Node are captured in the output data.
  • Add graph_post_process to graph_config to run the graph post-processing job. Verify that a new processed file, prefixed with the processor name, is stored alongside the output file.
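For the second step, the config addition might look something like the following (the exact schema and key names under graph_post_process are not shown in this PR, so treat this as a hypothetical shape):

```json
{
  "graph_config": {
    "graph_post_process": {
      "processor": "my_processor"
    }
  }
}
```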

📸 Screenshots

N/A


✅ Checklist

  • Lint fixes and unit tests completed
  • End-to-end task testing verified
  • Documentation updated

🗒️ Notes

This update lays the foundation for agentic model evaluation and structured response pipelines in the LLM Node, making it easier to benchmark tool-augmented models in complex workflows.

@psriramsnc psriramsnc changed the title Scratch/new model response Support Tools via new Model Response Nov 7, 2025
@psriramsnc psriramsnc self-assigned this Nov 7, 2025
@psriramsnc psriramsnc added the enhancement New feature or request label Nov 7, 2025
@psriramsnc psriramsnc changed the title Support Tools via new Model Response [Enhancement] Support Tools via new Model Response Nov 7, 2025
@psriramsnc psriramsnc changed the title [Enhancement] Support Tools via new Model Response [Enhancement] New Model Response Nov 10, 2025
@psriramsnc psriramsnc changed the title [Enhancement] New Model Response [Enhancement] New Model Response for LLM Node Nov 10, 2025
@psriramsnc psriramsnc changed the title [Enhancement] New Model Response for LLM Node [Enhancement] Support tools in LLM Node Nov 10, 2025
@psriramsnc psriramsnc marked this pull request as ready for review November 10, 2025 12:18
@psriramsnc psriramsnc requested a review from a team as a code owner November 10, 2025 12:18
bidyapati-p
bidyapati-p previously approved these changes Nov 11, 2025
@amitsnow (Collaborator) left a comment
LGTM 🚀

@psriramsnc psriramsnc merged commit ac21df9 into main Nov 12, 2025
6 checks passed
@psriramsnc psriramsnc deleted the scratch/new_model_response branch November 12, 2025 04:33

Labels

enhancement New feature or request

5 participants