Today I've been working with a colleague to track down the source of stress on one of our servers. We are deploying multiple solutions to safeguard the continuity of this and other servers and processes. Capping the resources on some virtualized machines was part of this process.
Testing this was interesting. I ran into some great ways to stress your system using everyday commands. Normally I would use stress-ng for stress testing, but today I couldn't easily get it to work in my Alpine container. I decided not to put too much effort into getting stress-ng to work, looked a little further, and found a few other ways to stress my system.
We are using Docker to run our applications, and we ran these without resource limitations. There was no good reason for this; we just never came close to the resource cap. Then we ran into a few issues that ate memory like sweet cookies, so it became a real risk and problem. On top of that, we've scaled up to a point where we need to safeguard the resources of the server. The project has grown too big to go unchecked.
```shell
# Utilize CPU
sha1sum /dev/zero

# Load bytes into memory (3 GB in this case)
yes | tr \\n x | head -c $((1024*1024*3000)) | grep n
```
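Why the second pipeline eats memory: `yes` emits an endless stream of `y` lines, `tr` replaces every newline with an `x`, so `grep` receives one giant line containing no `n` and has to buffer the whole thing while searching it. A small-scale sketch of the same idea, using 1 KB instead of 3 GB (the size here is just for illustration):

```shell
# yes -> endless "y\n" lines; tr turns each newline into 'x',
# producing one unbroken stream with no line breaks;
# head caps it at 1024 bytes, and wc confirms the byte count.
yes | tr '\n' x | head -c 1024 | wc -c   # prints 1024
```

Scaling `head -c` back up to `$((1024*1024*3000))` is what turns this harmless snippet into a 3 GB memory hog.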
You can easily set the CPU and memory limits with `--cpus` and `--memory=2G`. You can pass `--cpus` a float and assign `0.5` as a value, for example. Memory can also be expressed in `M` if you prefer megabytes.
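Putting the flags together in a `docker run` invocation looks like this; the image name `myapp` and the specific limits are placeholders, not from our actual setup:

```shell
# Cap the container at half a CPU core and 2 GB of RAM
docker run --cpus=0.5 --memory=2G myapp

# The same memory limit expressed in megabytes
docker run --cpus=0.5 --memory=2048M myapp
```

With limits like these in place, a container that eats memory like sweet cookies gets stopped at its cap instead of starving the whole server.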