Running out of disk space on the MySQL partition? A quick rescue.

No space left on device – this can happen to anyone. Sooner or later you may face a situation where a database has either already run out of disk space or is only minutes away from doing so. What many people do in such cases is start looking for semi-random things to remove – perhaps a backup, a few older log files, or pretty much anything that seems redundant. However, this means acting under a lot of stress and without much thinking, so it would be great if there were a way to avoid that. Often there is. And what if there isn't anything to remove?

While xfs is usually the recommended filesystem for a MySQL data partition on Linux, the extended filesystem family remains very popular, as it is the default in all major Linux distributions. There is a feature specific to ext3 and ext4 that can help resolve a full disk situation.

Unless explicitly changed during filesystem creation, both reserve five percent of the volume capacity for the superuser (root) by default. This helps prevent non-privileged processes from filling up the entire disk and leaving no room for system logging or system applications. However, such a reservation only makes sense for system volumes, while MySQL often sits on its own dedicated partition, so there is no real reason to keep any blocks away from it there.
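For a dedicated data partition, the reservation can be avoided up front when the filesystem is created. A sketch (the device path is hypothetical), together with a quick calculation of what the default five percent costs on a 145G volume like the one in this article:

```shell
# When creating a dedicated MySQL filesystem, the root reservation can be
# set (or skipped) right away; the device path below is hypothetical:
# mkfs.ext4 -m 1 /dev/sdb1

# What the default 5% reservation hides on a 145 GB data volume:
total_gb=145
reserved=$(awk -v t="$total_gb" 'BEGIN { printf "%.2f", t * 0.05 }')
echo "Reserved for root at 5%: ${reserved} GB"
```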

So if your database files are stored on a partition formatted with ext3 or ext4 and MySQL runs out of disk space, you may be in luck: there may be some hidden extra capacity the database can use.

How to enable it?

I had a server that ran out of space on the MySQL volume. The system was reporting just 5.7M free, and MySQL was essentially blocked, waiting for the opportunity to complete its writes:

[root@db4 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
                       30G   25G  3.4G  89% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sda1             243M   47M  183M  21% /boot
                      145G  145G  5.7M 100% /vol/mysql

A quick verification of the filesystem used for that volume:

[root@db4 ~]# mount
/dev/mapper/vg_centos-lv_mysql on /vol/mysql type ext4 (rw,noatime,nodiratime)

As the next step, I had to verify whether the volume had any reserved blocks that could be freed. I have not seen many servers where the default setting was actually changed during installation, so in most cases there should be something:

[root@db4 ~]# dumpe2fs /dev/mapper/vg_centos-lv_mysql | grep 'Reserved block count'
dumpe2fs 1.41.12 (17-May-2010)
Reserved block count: 1927884

It turned out that 1927884 4 KB blocks were reserved for the superuser, which was exactly five percent of the volume capacity. I was able to free this space and make it available to MySQL:
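As a quick sanity check of those numbers, 1927884 blocks of 4 KiB each come out to roughly 7.35 GiB, which lines up with the roughly 7.3G that df reports as freed further down:

```shell
# Verify the dumpe2fs figure: 1927884 reserved blocks of 4 KiB each.
blocks=1927884
block_size=4096
bytes=$(( blocks * block_size ))
gib=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / 1024 / 1024 / 1024 }')
echo "${blocks} blocks = ${bytes} bytes = ${gib} GiB"
```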

[root@db4 ~]# tune2fs -m 0 /dev/mapper/vg_centos-lv_mysql
tune2fs 1.41.12 (17-May-2010)
Setting reserved blocks percentage to 0% (0 blocks)

The change takes effect instantaneously: applications simply start to see more disk space available.

[root@db4 ~]# df -h         
Filesystem            Size  Used Avail Use% Mounted on
                       30G   25G  3.4G  89% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sda1             243M   47M  183M  21% /boot
                      145G  145G  7.3G  95% /vol/mysql

Without removing a single file, I managed to free over seven gigabytes of space, which allowed MySQL to resume operations. That did not solve the problem entirely, but it bought me a lot of time to figure out a long-term solution.
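Once a long-term fix is in place, the reservation can be restored with the same tool; a sketch using the device path from this article (run it only on a healthy volume, not during the emergency):

```shell
# Restore the reservation later, after the real problem is fixed:
# tune2fs -m 5 /dev/mapper/vg_centos-lv_mysql

# How many blocks a given percentage reserves on this volume
# (38557680 total blocks, derived from 1927884 being exactly 5%):
total_blocks=38557680
pct=1
echo "At ${pct}%: $(( total_blocks * pct / 100 )) blocks reserved"
```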

The method is a quick remedy for the emergency when a database runs out of disk space, and I have used it numerous times while helping people solve such problems. Of course, it is not a proper solution, but rather something that buys you time to evaluate the options. It also only works once in a system's lifetime: after you remove the entire reservation, there is nothing left to free if the server runs out of space a second time, so make sure you never face the same problem twice. Learn your lesson and implement proper monitoring that alerts you early enough.
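The monitoring does not have to be sophisticated to start with. A minimal sketch (the mount point and threshold are examples; a real deployment would feed a proper alerting system rather than echo to the console):

```shell
# Minimal disk-usage check; mount point and threshold are examples.
check_usage() {
    mount_point=$1
    threshold=$2
    # Column 5 of POSIX df -P output is the capacity percentage, e.g. "89%".
    usage=$(df -P "$mount_point" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')
    if [ "$usage" -ge "$threshold" ]; then
        echo "ALERT: ${mount_point} is ${usage}% full"
    else
        echo "OK: ${mount_point} is ${usage}% full"
    fi
}

check_usage / 90
```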

About Maciej Dobrzanski

A MySQL consultant with the primary focus on systems, databases and application stacks performance and scalability. Expert on open source technologies such as Linux, BSD, Apache, nginx, MySQL, and many more. @linkedin


  1. I typically would suggest lowering this from 5%, but not to 0%, during a full disk scenario. The reason is that your disk might fill up again very quickly, perhaps due to some rapidly growing log file. Give yourself more than one chance to fix this by lowering to 3%, and then to 1% if necessary. Generally, don't go to 0% (to avoid file fragmentation issues).

    If you are planning to move to a bigger filesystem because of this, perhaps by promoting a new slave, consider using xfs :) Be sure to mount it with nobarrier if you have a battery-backed RAID.

    • You can of course choose any value from 0% to 4%, depending on the situation, and you could even restore it back to 5% once the problems have been addressed. However, it is important to understand that this is only meant as a quick way out of trouble, to buy some time to work out the real solution. Clearly there is no point in extending the volume size by a few gigs and doing nothing else; that would indeed be asking for more problems in the future.

      • Yes, I have used this trick many times. What I mean to say is that in my experience, this can get full again in just a few seconds if the problem is some massively growing catalina.out or similar file. Giving your applications a little room to write (but not every last block) sometimes allows you to quickly determine that some file is growing much more quickly than expected.
        A way you might do this is:

        sysctl vm.block_dump=1
        tune2fs -m 3 /dev/sda5

        You might see just a couple of seconds later that the disk is again full, but dmesg will show, based on the pdflush write block messages, that the 45G catalina.out file just grew. At that point you can make a determination, such as truncating the open file or stopping some extra service, and then go a little lower on reserved blocks to let real work get done. Sorry if this is rantish or does not make a lot of sense, but it has worked out well for me in the past. TLDR, don't assume that an extra 5% will last more than a few seconds in all scenarios.

        Another thing to remember is that if you are using InnoDB with innodb_file_per_table=0 and have lots of Data_free, an almost-full datadir disk might not be of immediate concern. TLDR, don't panic.
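The truncation idea mentioned in the comments above is worth a closer look: deleting a file that a process still holds open does not free any space until the last file descriptor is closed, while truncating it in place releases the blocks immediately. A quick demonstration on a temporary file:

```shell
# Truncating in place frees the blocks at once, even if a process still
# holds the file open; deleting it would not, until the writer closes it.
log=$(mktemp)
head -c 1048576 /dev/zero > "$log"   # simulate a 1 MiB runaway log
before=$(wc -c < "$log")
: > "$log"                           # truncate without removing the file
after=$(wc -c < "$log")
echo "before: ${before} bytes, after: ${after} bytes"
rm -f "$log"
```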
