So, you probably read the title and description and are thoroughly confused, allow me to draw you a picture:
Understand now? No? Good, we’re in the same sinking boat.
Okay, let’s back up a little and actually establish what I’m talking about. I switched from Gitea to Gitlab recently-ish. There’s no real reason for it; I’m not an enterprise and don’t need all the fancy metrics and stuff Gitlab provides, but I wanted to feel fancy, so I switched. I’m half regretting it, but it turns out that Gitlab is rife with opportunities to fix broken stuff, and self-hosting is fun (lol this is a joke. Jk, self-hosting is kinda fun).
I’m running Gitlab in a VM through Docker, and this is likely where my first actual issue comes in, because there are evidently some weird shenanigans in how Gitlab behaves when it’s semi-containerized. It works, it’s just sometimes wonky and requires a swift smack with a wrench to fix. One of the most notable issues is its incessant need to eat RAM. It’s almost as bad as Chrome in how it hoards RAM. Unlike Chrome though, there are some decently documented things you can do to fix it, or at least scale back the consumption.
Before we dig into those though, remember that Gitlab is aimed at enterprise use and isn’t really designed to run in smaller or more memory-constrained environments. It can run, and does run, fine in a homelab, but homelab use is not the primary use case for Gitlab. So if you don’t want to spend time fiddling and learning about how Gitlab works under the hood, just use the free website; don’t self-host it.
That out of the way, you CAN run Gitlab in very small environments and even on small devices (you can technically run it on a Pi but I would really advise against it). The minimum expected specs for Gitlab are as follows:
- Linux-based system (Debian or RHEL preferred)
- 4 CPU cores on ARM or 1 CPU core on AMD64 (unless you’re absolutely dead-set on running it on ARM, save yourself the trouble and just go AMD64)
- 2GB RAM + 1GB SWAP (there is a camp of people who don’t believe in SWAP. I don’t associate with those people)
- 20GB of free space on disk
- Ideally storage with good I/O performance like an SSD
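If you want to sanity-check a machine against those numbers before installing anything, a few stock Linux commands will do it (nothing here is Gitlab-specific):

```shell
#!/bin/sh
# Compare this host against Gitlab's minimum specs
echo "CPU cores: $(nproc)"
awk '/^MemTotal/  {printf "RAM:       %.1f GB\n", $2 / 1024 / 1024}' /proc/meminfo
awk '/^SwapTotal/ {printf "Swap:      %.1f GB\n", $2 / 1024 / 1024}' /proc/meminfo
df -h / | awk 'NR == 2 {print "Disk free: " $4 " on /"}'
```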
So, you have all of the above. Great, what next? Well, realistically you can run Gitlab by just following the basic install documentation for CE. You’re probably going to run into memory issues though, so go do that, wait an hour or so, then come back here with the realization of “wow, Gitlab really does eat RAM”.
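For reference, if you go the Docker route like I did, the install is basically the one docker run from Gitlab’s Docker docs. The hostname and host paths below are placeholders you’d swap for your own; I also map host port 2222 to the container’s SSH port so it doesn’t fight with the host’s own sshd:

```shell
# Official Omnibus image; adjust hostname, ports, and volume paths to your setup
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 2222:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  --shm-size 256m \
  gitlab/gitlab-ce:latest
```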
First thing you’re going to want to do is configure SWAP if you haven’t. Once you have SWAP, you’re going to want to edit your swappiness. I don’t pretend to understand all the intricacies of what this does, but the short version is that swappiness controls how aggressively the kernel pushes memory out to swap: lower values mean Linux prefers to keep application memory in RAM and reclaim file cache first, which tends to help interactive performance. The default swappiness of a system is 60 (on a range from 0 to 100). I set my swappiness to 10 on my Gitlab host. Your mileage may vary, so adjust as needed, but avoid setting it to 0, since on modern kernels that makes Linux avoid swapping almost entirely and can trigger the OOM killer instead.
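Here’s what that looks like in practice (the sysctl.d file name is just a convention, call it whatever you like):

```shell
# Apply immediately for the current boot
sudo sysctl vm.swappiness=10

# Persist the setting across reboots
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf

# Verify the active value
cat /proc/sys/vm/swappiness
```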
Now that you have SWAP figured out, we’ve got a few config changes to do to /etc/gitlab/gitlab.rb (this is the main config file and the ONLY config file you should be regularly editing for Gitlab). We’re going to do the following:
- optimize Puma
- optimize Sidekiq
- optimize Gitaly (optional)
- disable monitoring through Prometheus
- configure how Gitlab should manage memory
I’m going to just go through and define what each of those things means, explain the edits/changes, and then just paste the full config changes.
Puma is a multi-threaded HTTP server for Ruby, and it runs the core Rails application that provides the user-facing features of Gitlab. Since we’re running in a smaller environment, we don’t need all the fancy multi-process performance enhancements. In fact, we suffer from them more than we benefit, because Puma’s performance enhancements scale upward, not downward. So we solve this by setting worker_processes to 0, which runs Puma in single-process mode and saves us about half a gig of RAM by itself.
Sidekiq is the background processing daemon. It goes hand-in-hand with Puma: where Puma does the heavy lifting on the front-end, Sidekiq handles the heavy lifting on the back-end. Like Puma, it’s multi-threaded and runs concurrent processes to scale performance upward, but not downward. The default concurrency for Sidekiq is 20; we’re setting it significantly lower, around the 5 to 10 mark, which roughly halves the amount of memory Sidekiq would otherwise consume.
Gitaly is the storage service that provides concurrent access to Git repositories. For whatever reason, I was not able to get these config changes to work without breaking Gitlab, but the overall idea is the same as with Puma and Sidekiq: reduce concurrency and memory allocation to save memory. If you make these changes and your Gitlab instance breaks or fails to start, just remove them. You’ll still see noticeable RAM usage decreases from the other changes. I don’t have an answer for why this doesn’t work, but I’m guessing it has something to do with Docker.
As far as Prometheus monitoring goes, we don’t really need it. It’s a really cool feature that lets you monitor a lot of stuff, but we’re running a small instance and don’t need all of that, so we just set monitoring to false. This by itself saves about 200MB of RAM, according to Gitlab’s documentation.
Now the heavy-handed part: configuring how Gitlab itself will handle memory. Gitlab is written in Ruby and Go and runs on Rails. Rails is the biggest consumer of memory and uses jemalloc as its memory allocator. jemalloc preallocates memory in bigger chunks and holds onto it for longer periods with the aim of increasing performance, which is great in larger instances and, once again, scales upward but not downward. The MALLOC_CONF decay settings in the config tell jemalloc to return unused memory to the OS after about a second instead of hoarding it. These configurations might result in a slight performance decrease, but if it’s just you using it in your homelab, you honestly won’t notice a difference.
There’s also one setting to change in the actual interface: go to Admin Area > Settings > Metrics and profiling > Metrics - Prometheus, disable Prometheus Metrics, and hit Save.
Here’s the full set of gitlab.rb changes:
```ruby
# configure Puma concurrency
puma['worker_processes'] = 0

# configure Sidekiq concurrency
sidekiq['concurrency'] = 5

# disable Prometheus monitoring
prometheus_monitoring['enable'] = false

# configure Gitlab Rails memory allocation and freeing
gitlab_rails['env'] = {
  'MALLOC_CONF' => 'dirty_decay_ms:1000,muzzy_decay_ms:1000'
}

# OPTIONAL - configure concurrency in Gitaly
gitaly['configuration'] = {
  concurrency: [
    {
      'rpc' => "/gitaly.SmartHTTPService/PostReceivePack",
      'max_per_repo' => 3,
    },
    {
      'rpc' => "/gitaly.SSHService/SSHUploadPack",
      'max_per_repo' => 3,
    },
  ],
  cgroups: {
    repositories: {
      count: 2,
    },
    mountpoint: '/sys/fs/cgroup',
    hierarchy_root: 'gitaly',
    memory_bytes: 500000, # note: this limit is in bytes
    cpu_shares: 512,
  },
}

# configure Gitaly memory allocation and freeing
gitaly['env'] = {
  'MALLOC_CONF' => 'dirty_decay_ms:1000,muzzy_decay_ms:1000',
  'GITALY_COMMAND_SPAWN_MAX_PARALLEL' => '2'
}
```
After you’ve made the changes, you’ll need to restart Gitlab so it picks them up. If you’re running it in Docker, you can just stop/start the container (the Omnibus image reconfigures itself on startup); otherwise, run sudo gitlab-ctl reconfigure to have Gitlab grab the changes.
In my environment I saw about a 40-50% memory decrease by making these changes.
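If you want to check your own numbers, grab a snapshot before and after the changes:

```shell
# Overall memory picture on the host
free -h

# If Gitlab is in Docker, this shows per-container usage instead:
#   docker stats --no-stream gitlab
```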
As mentioned earlier, your mileage may vary. Gitlab has an entire documentation area for things like this, so go read those for more tips and tricks. (This is honestly one of the biggest reasons I switched to Gitlab, the documentation is so much better and more refined than Gitea’s. Gitea isn’t bad by any means, it’s just not as mature. And evidently there’s some turmoil going on about its future? Idk.)
Anywho, hope that helped you tame your hungry hungry Gitlab.