installing new models on timeshare server causes them to be unreadable #8119

@stubhead

Description

LocalAI version:
3.10

Environment, CPU architecture, OS, and Version:
AlmaLinux 9.6, x86_64 GNU/Linux, with an NVIDIA L40 GPU

Describe the bug
Installing a new model with local-ai models install from the command line causes all or part of the model files (.gguf, .yaml, ...) to be installed readable only by the user who happened to request the install: the files are owned by that user, with mode 0600. This prevents the local-ai server from loading the models.

To Reproduce
On a Linux timeshare (multi-user) system, not in your own private Docker container:

$ local-ai models install <modelname> # install the model locally
$ local-ai models list | grep installed # verify the model was installed
$ ls -l <model_install_location>

You will get something like this:
4 -rw-------. 1 stubhead devs 1481 Jan 19 18:59 qwen3-vl-2b-instruct.yaml

This occurs even if the user's umask is set to allow created files to be readable by group and/or world.
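That the umask has no effect is expected if the installer explicitly requests mode 0600 when creating the files (which the observed 0600 permissions suggest): the umask can only clear permission bits from a requested mode, never add them. A small shell demonstration of the difference:

```shell
demo=/tmp/umask-demo; rm -rf "$demo"; mkdir -p "$demo"
umask 022                                     # umask that allows group/world read

touch "$demo/honors-umask"                    # default 0666 masked by 022 -> 0644
install -m 0600 /dev/null "$demo/explicit-0600"  # app asked for 0600: umask is irrelevant

stat -c '%a %n' "$demo/honors-umask" "$demo/explicit-0600"
# 644 /tmp/umask-demo/honors-umask
# 600 /tmp/umask-demo/explicit-0600
```

So no umask setting on the user's side can make the installed files group- or world-readable; the requested mode in the installer itself would have to change.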

Expected behavior

  • the model and its support files should all be installed world-readable (mode 0644) rather than 0600.
  • ideally, they should be installed as owned by the user local-ai.
    4 -rw-r--r--. 1 local-ai local-ai 1481 Jan 19 18:59 qwen3-vl-2b-instruct.yaml

Logs

DEBUG context local model name not found, setting to the first model first model name="llama-3.2-1b-instruct:q4_k_m" caller={caller.file="/home/runner/work/LocalAI/LocalAI/core/http/middleware/request.go"  caller.L=115 }
WARN  Model Configuration File not found model="qwen3-vl-2b-instruct" error=failed loading model config (/usr/local/share/models/qwen3-vl-2b-instruct.yaml) ReadModelConfig cannot read config file "/usr/local/share/models/qwen3-vl-2b-instruct.yaml": readModelConfigFromFile cannot read config file "/usr/local/share/models/qwen3-vl-2b-instruct.yaml": open /usr/local/share/models/qwen3-vl-2b-instruct.yaml: permission denied caller={caller.file="/home/runner/work/LocalAI/LocalAI/core/http/middleware/request.go"  caller.L=146 }

Additional context

  • the problem can be fixed manually by resetting the permissions and ownership on the model files, after which the models load correctly. however, this work-around is sub-optimal.
  • in a multi-user environment, the best solution would be for the CLI to forward the download request to the running local-ai server (launched by systemd), which would then install the files as the user id it runs under ("local-ai"), with appropriate permissions.
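The manual workaround in the first bullet can be scripted. A sketch using a scratch directory to simulate the badly-installed file (on a real system you would instead target the actual models path, e.g. /usr/local/share/models, cover the .gguf as well, and likely need sudo for the chown to local-ai):

```shell
dir=/tmp/localai-fix-demo; rm -rf "$dir"; mkdir -p "$dir"

# simulate the badly-installed config file (owner-only, mode 0600)
install -m 0600 /dev/null "$dir/qwen3-vl-2b-instruct.yaml"

# workaround: make the model file world-readable so the server can load it
chmod 0644 "$dir/qwen3-vl-2b-instruct.yaml"

stat -c '%a' "$dir/qwen3-vl-2b-instruct.yaml"
# 644
```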
