
[Enhancement] Support Vertex AI, Bedrock Models#64

Merged
psriramsnc merged 14 commits into main from scratch/litellm_integration2
Nov 28, 2025

Conversation

@psriramsnc
Collaborator

@psriramsnc psriramsnc commented Nov 25, 2025

🚀 Add Support for Vertex AI, Bedrock Models

✨ Summary

This PR expands SyGra's support for LiteLLM-based models by adding several new backend integrations (Ollama, Azure, Vertex AI, Triton, Bedrock), refactoring the LiteLLM model abstractions into a cleaner, more modular model factory, and adding documentation and unit tests for the new model paths. This broadens SyGra's flexibility and makes it easier to hook into a variety of LLM-serving providers.

✅ Features implemented

  • Support for new LiteLLM providers
    • Ollama
    • Azure
    • Triton
    • Vertex AI
    • Bedrock
  • Refactored LiteLLM model factory / abstraction layer
    • Consolidated model instantiation logic so all providers reuse shared code paths
    • Cleaner, more maintainable, and easier to extend for future providers
  • Documentation updates
    • Added/updated docs to explain how to configure and use the newly supported LiteLLM providers.
  • Test coverage
    • Added unit tests for Vertex AI and other backend integrations to ensure correctness and stability
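The provider-registry factory pattern described above can be sketched roughly as follows. This is an illustrative sketch only: the class and function names (`LiteLLMModel`, `create_model`, `PROVIDERS`) are hypothetical and not SyGra's actual API, though the `"<provider>/<model>"` identifier format matches how LiteLLM routes requests.

```python
# Hypothetical sketch of a provider-registry model factory.
# Class/function names are illustrative, not SyGra's real API.
from dataclasses import dataclass, field


@dataclass
class LiteLLMModel:
    """Base wrapper sharing config handling across providers."""
    name: str
    params: dict = field(default_factory=dict)

    def provider_prefix(self) -> str:
        raise NotImplementedError

    def model_id(self) -> str:
        # LiteLLM routes on a "<provider>/<model>" identifier.
        return f"{self.provider_prefix()}/{self.name}"


class OllamaModel(LiteLLMModel):
    def provider_prefix(self) -> str:
        return "ollama"


class AzureModel(LiteLLMModel):
    def provider_prefix(self) -> str:
        return "azure"


class TritonModel(LiteLLMModel):
    def provider_prefix(self) -> str:
        return "triton"


class VertexAIModel(LiteLLMModel):
    def provider_prefix(self) -> str:
        return "vertex_ai"


class BedrockModel(LiteLLMModel):
    def provider_prefix(self) -> str:
        return "bedrock"


# Single registry: supporting a new provider means adding one entry here,
# which is the extensibility benefit the refactor aims for.
PROVIDERS = {
    "ollama": OllamaModel,
    "azure": AzureModel,
    "triton": TritonModel,
    "vertex_ai": VertexAIModel,
    "bedrock": BedrockModel,
}


def create_model(provider: str, name: str, **params) -> LiteLLMModel:
    """Instantiate the wrapper class registered for `provider`."""
    try:
        cls = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider!r}")
    return cls(name=name, params=params)
```

With this shape, all providers share one instantiation code path, and unsupported providers fail fast with a clear error.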

⚡ Performance impact (if any)

  • Minimal or none; this is primarily a refactoring and integration PR.
  • Cleaner abstractions may slightly reduce overhead; logging and config handling should remain efficient.
  • No regressions in runtime performance expected.

🧪 How to Test the feature

Steps for reviewers to verify the new backend support:

  1. Update models.yaml (or equivalent config) to point to one of the newly supported providers (e.g. Ollama or Vertex AI).
  2. Run a sample SyGra workflow (or minimal test case) that invokes an LLM node using the configured model.
  3. Verify that:
    • The model instantiates correctly via the LiteLLM factory.
    • The inference request completes successfully without errors.
    • The output matches expectations for that backend.
    • Logging and environment configuration (URLs, tokens, headers) works correctly.
  4. Run the unit tests (including those added for Vertex AI) and confirm they pass.
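As a concrete illustration of steps 1 and 3, the snippet below validates a `models.yaml`-style entry before wiring it into a workflow. The schema shown (keys like `provider`, `model`, `required_env`) is an assumption for illustration, not SyGra's documented config format; the `VERTEXAI_*` variables are the environment variables LiteLLM's Vertex AI route commonly relies on.

```python
# Illustrative pre-flight check for a models.yaml-style entry.
# The schema and helper names here are assumptions, not SyGra's API.
import os

config = {
    "my_vertex_model": {
        "provider": "vertex_ai",
        "model": "gemini-1.5-pro",
        # Env vars LiteLLM's Vertex AI route commonly expects:
        "required_env": ["VERTEXAI_PROJECT", "VERTEXAI_LOCATION"],
    }
}


def litellm_model_id(entry: dict) -> str:
    """Build the '<provider>/<model>' id LiteLLM expects."""
    return f"{entry['provider']}/{entry['model']}"


def missing_env(entry: dict) -> list[str]:
    """Return required environment variables that are not set."""
    return [v for v in entry.get("required_env", []) if v not in os.environ]
```

Running a check like `missing_env(config["my_vertex_model"])` before step 2 makes it easy to confirm the environment configuration point in step 3 without waiting for an inference request to fail.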

📸 Screenshots (if applicable)

N/A

📝 Checklist

  • Lint fixes and unit testing done
  • End-to-end task testing (e.g. full SyGra workflow)
  • Documentation updated

🔎 Notes

  • Backward compatibility remains intact; existing workflows should continue working unchanged.

@psriramsnc psriramsnc self-assigned this Nov 25, 2025
@psriramsnc psriramsnc added the enhancement New feature or request label Nov 25, 2025
@psriramsnc psriramsnc changed the title [Enhancement] LiteLLM Integration 2 [Enhancement] Support Vertex AI, Bedrock Models Nov 26, 2025
@psriramsnc psriramsnc marked this pull request as ready for review November 26, 2025 06:16
@psriramsnc psriramsnc requested a review from a team as a code owner November 26, 2025 06:16
bidyapati-p
bidyapati-p previously approved these changes Nov 28, 2025
@amitsnow amitsnow (Collaborator) left a comment

LGTM 🚀

@psriramsnc psriramsnc merged commit d18cb3f into main Nov 28, 2025
2 checks passed
@psriramsnc psriramsnc deleted the scratch/litellm_integration2 branch November 28, 2025 11:43