Running out of disk space, how to troubleshoot? #46496
-
Hi everyone! I changed my codespace's machine type to 8-core to get 64 GB of disk space. I'm constantly running out of space, and this is what I get with […]. So it looks as though I should have plenty to spare, but then looking at […]. Any suggestions on what I could do? Last time I reached this point, I destroyed the codespace and started over, but I'd like to avoid doing that this time around.
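A minimal sketch of the usual first checks in a codespace, assuming standard GNU coreutils (these are not the poster's actual commands):

```bash
# How full are the filesystems the codespace actually writes to?
df -h / /workspaces /tmp

# Largest top-level consumers inside the workspace mount
sudo du -xsh /workspaces/* 2>/dev/null | sort -rh | head -10
```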
Replies: 8 comments 7 replies
-
After running `sudo ncdu /` instead, I was able to notice a lot of space taken up by dangling Docker build caches 🌞.
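If the build cache is the culprit, Docker's standard prune commands reclaim it; a minimal sketch (not the replier's exact steps):

```bash
# Remove dangling build-cache entries; add --all to drop the whole cache
docker builder prune

# Broader cleanup: stopped containers, dangling images, unused networks
docker system prune
```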
-
@langri-sha Would you mind if I ask a question? How did you get 64 GB of space on the `/` volume? I tried creating a new codespace and changing the machine type of an existing one, but the only disk option offered is 32 GB.
-
My workspace had JetBrains installed by default, and it was taking 12 GB. Is this normal? Also, please see the issue below with Docker.

```
@xxxxx ➜ /workspaces/xx (main) $ du -ah /workspaces | sort -rh | head -20
```

Please explain the output below:

```
@xxx ➜ /workspaces/xxx (main) $ df -h /var/lib/docker
```

```
Are you sure you want to continue? [y/N] y
```
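To confirm where that 12 GB actually sits, the editor folder mentioned later in this thread can be measured directly; a hedged sketch (the path comes from the longer answer below and may not exist on newer images):

```bash
# Size of the preinstalled JetBrains editors (may be absent or read-only on newer images)
sudo du -sh /workspaces/.codespaces/shared/editors/jetbrains

# Two-level breakdown of everything else under /workspaces
sudo du -xh --max-depth=2 /workspaces | sort -rh | head -20
```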
-
If you don't need any of the preinstalled editors, you can delete them (the JetBrains install lives under the path quoted in the longer reply below). That'll give you ~11 GB back.
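A hedged sketch of that cleanup, assuming the JetBrains path quoted below; note that the later reply warns this folder may be read-only or recreated on newer images:

```bash
# Remove the preinstalled JetBrains editors (may fail or be recreated on newer images)
sudo rm -rf /workspaces/.codespaces/shared/editors/jetbrains

# Check how much space came back
df -h /workspaces
```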
-
Just sharing: if you are using Docker, you can check its disk usage using this command.
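Presumably the command in question is Docker's built-in disk-usage report, `docker system df`; a minimal sketch:

```bash
# Summary of space used by images, containers, local volumes, and build cache
docker system df

# Verbose per-object breakdown
docker system df -v
```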
-
What you're seeing is actually normal behavior in GitHub Codespaces, even though it's confusing.

**What's going on?**

- `/` and `/workspaces` are backed by an overlay filesystem (mounted via `/dev/loopX`).
- Docker data lives on a separate loop-mounted disk at `/var/lib/docker`.
- You cannot access or delete `/var/lib/docker` directly (Permission denied is expected).
- `docker system prune`, `docker builder prune`, etc. only clean Docker-managed objects, not space reserved by the Codespace image itself.

So when `df -h` shows something like:

```
/dev/loop3   32G   15G used
```

but:

- `docker images` → empty
- `docker system prune` → 0B reclaimed

that means the space is preallocated or reserved by the base Codespace image, not dangling images or volumes.

**About JetBrains taking ~12 GB**

Older Codespaces images used to install JetBrains IDEs under:

```
/workspaces/.codespaces/shared/editors/jetbrains
```

That folder used to be removable, but on newer images it's read-only or recreated automatically, so deleting it no longer works reliably (or at all).

**Why ncdu doesn't show the usage**

- `ncdu /` without sudo cannot see protected mounts.
- Even with sudo, loop-backed storage and overlay layers can look "empty" while still consuming space.

**The uncomfortable truth 😅**

If:

- Docker prune reclaims nothing,
- you can't access `/var/lib/docker`,
- the JetBrains folders can't be removed, and
- disk usage stays high,

➡️ a full Codespace rebuild is the only way to reclaim that reserved space. This is a known limitation of Codespaces, not something you're doing wrong.

**Tips to reduce hitting this again**

- Use Dev Containers with minimal base images.
- Avoid large Docker builds inside Codespaces.
- Periodically do a Full Rebuild instead of a normal rebuild.
- Move large temp files to `/tmp` (which is on a different mount).

**TL;DR:** the "missing" space is reserved by the base Codespace image, not by anything Docker manages, so pruning won't recover it; a full rebuild is the only reliable fix (see the sketch right after this reply).
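A hedged verification sketch (standard commands, assumed rather than taken from the answer above) to confirm the usage is image-reserved rather than Docker-managed:

```bash
# Which loop devices back the workspace and Docker storage, and how full they are
df -h | grep -E 'loop|overlay'

# Docker's own accounting: if this reports (almost) nothing in use...
docker system df

# ...and this reclaims ~0B, the remaining usage is reserved by the base
# Codespace image, and only a full rebuild will release it.
docker system prune
```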
-
Use `ncdu` or `du -sh *` (Linux), or Storage Sense or WizTree (Windows), to identify large files, then clear logs, temp files, or caches to reclaim space.
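A minimal sketch of that workflow on the Linux side (the reclaim targets are examples, not recommendations from the reply above):

```bash
# Interactive, sortable view of disk usage; -x stays on the current filesystem
sudo ncdu -x /

# Non-interactive alternative: largest entries under the current directory
du -sh * | sort -rh | head -20

# Example reclaim targets once you know what is safe to delete
rm -rf ~/.cache/*
sudo rm -rf /tmp/*
```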

