
fix: Add vllm-omni backend to video generation model detection (#8659) #8781

Merged
mudler merged 1 commit into mudler:master from localai-bot:fix-video-model-ui-8659 on Mar 5, 2026
Conversation

@localai-bot (Contributor) commented:

Problem

Video generation models using the vllm-omni backend (like vllm-omni-wan2.2-t2v) were not appearing in the Model Selector dropdown under the Video tab in the LocalAI UI.

Root Cause

The GuessUsecases function in core/config/model_config.go recognized only the diffusers and stablediffusion backends as supporting video generation (FLAG_VIDEO). The vllm-omni backend was missing from this list, so its models were never tagged as video-capable.

Solution

Added vllm-omni to the list of video-capable backends in the FLAG_VIDEO check.
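For illustration, the check can be sketched as below. This is a minimal standalone sketch, not the actual LocalAI source: the constant values, the helper name guessVideoFlag, and the list variable are all illustrative; only the three backend names and the FLAG_VIDEO concept come from the PR.

```go
package main

import (
	"fmt"
	"slices"
)

// Illustrative usecase flags; the real bitmask constants live in
// core/config/model_config.go.
const (
	FLAG_ANY   = 0
	FLAG_VIDEO = 1 << 1
)

// videoBackends is the list the fix extends: vllm-omni joins
// diffusers and stablediffusion as video-capable backends.
var videoBackends = []string{"diffusers", "stablediffusion", "vllm-omni"}

// guessVideoFlag sketches the FLAG_VIDEO branch of GuessUsecases:
// a model is flagged as video-capable when its backend is in the list.
func guessVideoFlag(backend string) int {
	if slices.Contains(videoBackends, backend) {
		return FLAG_VIDEO
	}
	return FLAG_ANY
}

func main() {
	fmt.Println(guessVideoFlag("vllm-omni") == FLAG_VIDEO) // true after the fix
	fmt.Println(guessVideoFlag("llama-cpp") == FLAG_VIDEO) // false
}
```

Before the fix, "vllm-omni" was absent from the list, so the first check returned FLAG_ANY and the UI's Video tab filtered those models out.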

Testing

  • Models with backend: vllm-omni and known_usecases: video_generation will now appear in the video model selector
  • This fix allows Wan2.2-T2V and other vllm-omni video models to be selected and used
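A model definition along these lines should now surface in the Video tab. This is a hypothetical example: the filename and the parameters.model value are illustrative; only the name, backend, and known_usecases values are taken from the PR description.

```yaml
# vllm-omni-wan2.2-t2v.yaml — illustrative LocalAI model definition
name: vllm-omni-wan2.2-t2v
backend: vllm-omni
known_usecases:
  - video_generation
parameters:
  model: Wan-AI/Wan2.2-T2V   # hypothetical model reference
```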

Fixes

Closes #8659

- Include vllm-omni in the list of backends that support FLAG_VIDEO
- This allows models like vllm-omni-wan2.2-t2v to appear in the video model selector UI
- Fixes issue mudler#8659 where video generation models using vllm-omni backend were not showing in the dropdown
@netlify

netlify bot commented Mar 4, 2026

Deploy Preview for localai ready!

Name | Link
🔨 Latest commit | aadcd1e
🔍 Latest deploy log | https://app.netlify.com/projects/localai/deploys/69a8b8dab8ef760008884ace
😎 Deploy Preview | https://deploy-preview-8781--localai.netlify.app

@mudler mudler merged commit 9fc7790 into mudler:master Mar 5, 2026
32 of 33 checks passed
localai-bot added a commit to localai-bot/LocalAI that referenced this pull request Mar 6, 2026
…r#8659) (mudler#8781)

fix: Add vllm-omni backend to video generation model detection

- Include vllm-omni in the list of backends that support FLAG_VIDEO
- This allows models like vllm-omni-wan2.2-t2v to appear in the video model selector UI
- Fixes issue mudler#8659 where video generation models using vllm-omni backend were not showing in the dropdown

Co-authored-by: team-coding-agent-1 <team-coding-agent-1@localai.dev>
@mudler mudler added the bug Something isn't working label Mar 14, 2026
localai-bot added a commit to localai-bot/LocalAI that referenced this pull request Mar 25, 2026
…r#8659) (mudler#8781)


Development

Successfully merging this pull request may close these issues.

Video Generation - model not selectable
