Commit f0614ac

localai-bot and team-coding-agent-1 committed
fix: Add vllm-omni backend to video generation model detection (mudler#8659) (mudler#8781)
- Include vllm-omni in the list of backends that support FLAG_VIDEO
- This allows models like vllm-omni-wan2.2-t2v to appear in the video model selector UI
- Fixes issue mudler#8659, where video generation models using the vllm-omni backend were not showing in the dropdown

Co-authored-by: team-coding-agent-1 <team-coding-agent-1@localai.dev>
1 parent b54adb7 commit f0614ac

1 file changed: core/config/model_config.go

Lines changed: 1 addition & 1 deletion
@@ -641,7 +641,7 @@ func (c *ModelConfig) GuessUsecases(u ModelConfigUsecase) bool {
 	}
 	if (u & FLAG_VIDEO) == FLAG_VIDEO {
-		videoBackends := []string{"diffusers", "stablediffusion"}
+		videoBackends := []string{"diffusers", "stablediffusion", "vllm-omni"}
 		if !slices.Contains(videoBackends, c.Backend) {
 			return false
 		}

0 commit comments