Remember the last post? When I foolishly believed I’d solved my Gitlab woes? Yeah, so Gitlab took that personally and also evidently took it as a challenge.

So what happened? Well, in short, something was going real weird with storage. I did solve the memory issues, but the storage on disk was constantly filling up and I like to pretend that it wasn’t my fault (it was).

I like to pretend I’m good at documenting things, so when I went to check something in my repo-based wiki and couldn’t access it, I thought that was weird. Surely I hadn’t accidentally turned the VM off or stopped the container, right? No, I had not. But what I was greeted with was a disk full error from the VM. Not the container, but the VM itself. Now, it’s worth noting that this dumb thing had 100GB to chew through, and I was hosting TWO repositories. I’m not a savant at math, but that math ain’t mathing.

So, how do we solve this? Well, we first need to figure out what the heck is eating the disk. Let me introduce you to one of the most valuable commands you’ll ever use in Linux (outside of tools that are better at this): du -sh /* | sort -rh.

That will show you the directories and how much space they’re using, in a decently human-readable format, sorted by size.
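
A quick hedged aside: pointed at /*, du will also complain about /proc and friends, so I usually toss the errors. The sizes below are made up, just to show the shape of the output:

    # total size of each top-level directory, biggest first
    sudo du -sh /* 2>/dev/null | sort -rh | head
    # 41G   /var
    # 3.3G  /usr
    # 1.2G  /home
    # ...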

I discovered via this that /var/lib/docker was using 41GB of the available 100GB, but here’s the kicker: the remaining directories were not adding up to the missing 49GB, even when accounting for swap, which was set to 4GB at the time.

So now I had to figure out what the heck was going on. I remembered I’d run into this a few months prior and had solved it by allocating more space. Back then the culprit was logs (as it usually is), so I figured this time might be the same. Logs were part of it, but cleaning them up didn’t fix things: clearing all the logs out of Docker and Gitlab only saved me megabytes, not nearly enough.
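
For reference, this is roughly the kind of log cleanup I mean. The Docker path is the default json-file log location; the Gitlab path assumes its log volume is mounted at /srv/gitlab/logs, which may not match your setup:

    # zero out the per-container JSON logs (default json-file logging driver location)
    sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'

    # toss rotated Gitlab logs older than a week (path assumes the log volume lives at /srv/gitlab/logs)
    sudo find /srv/gitlab/logs -name '*.log.*' -mtime +7 -delete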

I managed to trace the usage further down into /var/lib/docker/overlay2. Now things were making sense, because as I searched the internet I realized I was not alone in this issue.
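
(Tracing it down is just the same du trick, pointed one level deeper each time:)

    sudo du -sh /var/lib/docker/* | sort -rh
    sudo du -sh /var/lib/docker/overlay2/* | sort -rh | head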

Let’s get a little technical: what IS overlay2? Simply put, overlay2 is Docker’s default storage driver, and /var/lib/docker/overlay2 is the directory where it keeps the filesystem layers (and metadata) for your images and containers. So basically, the parts your images and containers are actually made of. It’s good practice to keep an eye on it and prune unused stuff occasionally, or it gets real big real fast. The best way to do this is to be smart, plan ahead, and have a cron job or something that prunes orphaned or dangling images every so often. I did not have this because I am lazy and I evidently like it when systems cause issues or something. Idk.
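
If you’re less lazy than me, a cron entry along these lines does the trick; the schedule and log path here are just arbitrary examples:

    # /etc/cron.d/docker-prune -- weekly cleanup of dangling images, stopped containers, etc.
    0 3 * * 0  root  /usr/bin/docker system prune -f >> /var/log/docker-prune.log 2>&1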

Cleaning this directory up is usually pretty simple. You can do docker system prune -a -f to force-prune all unused objects: stopped containers, unused networks, all unused images (not just dangling ones), and the build cache.
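
It’s worth checking how much you actually got back, and Docker can tell you itself:

    # see what's eating space, prune, then compare
    docker system df
    docker system prune -a -f
    docker system df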

So what if that doesn’t solve the issue? Well, there are a few other things you can try. You can use the following commands for some more targeted cleanup: docker image prune, docker container prune, docker volume prune. These are pretty self-explanatory in what they each do. You can also check the overlay2 subfolders against the containers you actually use and manually delete any you don’t need (be careful doing this).
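
If you do go digging through overlay2 by hand, it helps to know which directories are actually in use. Something like this lists the writable layer for every container Docker knows about, so anything not referenced is a candidate (though I’d still trust prune over my own hands):

    # map each container to its overlay2 upper (writable) layer directory
    docker ps -aq | xargs docker inspect --format '{{.Name}} {{.GraphDriver.Data.UpperDir}}'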

You can also go the Hail Mary route (rough commands sketched after the list):

  1. Stop all containers
  2. Back up said containers
  3. Remove the containers you stopped and backed up
  4. Remove all images with docker image prune -a
  5. Stop Docker itself
  6. Rename or move the /var/lib/docker directory
  7. Start Docker back up (this will recreate a fresh /var/lib/docker folder)
  8. Reimport your images
  9. Redeploy your containers
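
In rough commands it looks something like this. I’m assuming a systemd host, and gitlab / gitlab-backup / gitlab-image.tar are placeholder names, so adapt to whatever you’re running:

    # 1-3: stop, back up, and remove the containers
    docker stop gitlab
    docker commit gitlab gitlab-backup       # or: docker export gitlab > gitlab.tar
    docker rm gitlab

    # 4: drop all images (docker save any you want to keep first)
    docker save -o gitlab-image.tar gitlab/gitlab-ce:latest
    docker image prune -a

    # 5-7: stop Docker, move the old data directory aside, start fresh
    sudo systemctl stop docker
    sudo mv /var/lib/docker /var/lib/docker.old
    sudo systemctl start docker              # recreates a clean /var/lib/docker

    # 8-9: bring images back and redeploy
    docker load -i gitlab-image.tar
    docker run -d --name gitlab ...          # or re-run your compose files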

If all of the above fails, you can do what I chose to do: give up.

Now, there’s a bit more to my decision to give up than just Docker woes. Honestly, I could have redeployed Gitlab as a from-source install instead of shoving it in Docker, but I had been having doubts for a while about whether I even needed Gitlab, and I ultimately decided to use this issue as justification for ditching it. So I got rid of Gitlab and went back to Gitea, which is substantially smaller and less resource-intensive. Don’t get me wrong, I like Gitlab, nothing against it, it’s just too much for my use in my environment. I’m not an enterprise and don’t need 90% of what Gitlab has baked into its free version.

I guess the remaining question is probably “why use du and not some other, better tool?” Well, here’s the thing: when your disk is full, you can’t even run apt upgrade, let alone apt install anything new. So you gotta use what’s already on the box :)

So lessons learned:

  1. If you don’t need a certain solution/tool, set it up, let it fail, learn a thing, then go back to your comfort zone
  2. Set up cron jobs for maintenance tasks, because preventative maintenance is better than reactive maintenance 9/10 times
  3. Don’t be lazy, unless being lazy solves an issue, then be lazy
  4. If all else fails, give up :D

So yeah, anyway, long post. It’s been a few months. I’m bad at updating. I moved back to Gitea, it’s disgustingly hot here, I want a nap. That’s all, I’ll update again when I break something else lol