18 January 2013
Late last year I re-installed the home server, a HP ProLiant Microserver running Ubuntu 12.04 LTS, to get rid of the original 250GB disk and instead use the 2 x 2TB Samsung drives I had in a Linux software RAID 1 array. I decided to stay away from the on-board RAID controller as all my research suggested it's a "fake" RAID controller, so I wouldn't have gained much, if anything, over software RAID. Plus I knew I could always take a disk out of the array and mount it on another Linux box if I ever needed to (and I have needed to in the past). Had I used the onboard controller that might not have been so easy, according to this forum post I found.
After installing Ubuntu 12.04 LTS on the array I noticed something strange while I was waiting for the array to sync (monitored via /proc/mdstat) and for the file system to be lazily initialised: the "df" command appeared to be reporting more disk space used than I was expecting. I wasn't sure if this was down to the RAID or the file system initialisation, so I left it for the time being in case it righted itself. It didn't, and it's been bugging me ever since, to the point where I recently did some proper investigation.
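For anyone wanting to keep an eye on a sync themselves, it's just a case of reading /proc/mdstat or asking mdadm for detail; these are generic commands rather than a capture from my machine:

# show sync/resync progress for all software RAID arrays
cat /proc/mdstat

# fuller status of a specific array
sudo mdadm --detail /dev/md0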
According to df I had 1859GB in total space with about 789GB used and 977GB free, so I'd lost 93GB somewhere.
me@server:~$ df
Filesystem      1K-blocks      Used  Available Use% Mounted on
/dev/md0       1949324764 827062456 1024696468  45% /
udev              1000084         4    1000080   1% /dev
tmpfs              403636       396     403240   1% /run
none                 5120         0       5120   1% /run/lock
none              1009084         0    1009084   1% /run/shm
Thanks to this great reply on ServerFault I discovered the reason, using the dumpe2fs command:
me@server:~$ sudo dumpe2fs -h /dev/md0
Block size:               4096
...
Using the correct block size with the df command I discovered I was missing 24391460 4K-blocks:
me@server:~$ df -B 4K
Filesystem     4K-blocks      Used Available Use% Mounted on
/dev/md0       487331191 206765616 256174115  45% /
udev              250021         1    250020   1% /dev
tmpfs             100909        99    100810   1% /run
none                1280         0      1280   1% /run/lock
none              252271         0    252271   1% /run/shm
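Doing the subtraction on those numbers: 487331191 total - 206765616 used - 256174115 available = 24391460 blocks unaccounted for.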
Which was exactly the same number of blocks that was listed under Reserved block count:
me@server:~$ sudo dumpe2fs -h /dev/md0
Filesystem OS type:       Linux
Reserved block count:     24391460
Free blocks:              280565614
Free inodes:              121836462
First block:              0
Block size:               4096
...
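Those 24391460 reserved blocks at 4KB each come to 97565840KB, which is just over 93GB, the same amount df said I was missing.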
The next question was, why is 93GB reserved? After a bit more research I discovered the answer: by default Linux reserves 5% of an ext file system's blocks for the root user, and (1859 / 100) * 5 = 92.95, which is my missing 93GB. This can be tweaked using the tune2fs command, but I'm quite happy to leave it alone now that I know where my missing space has gone. If you do want to tweak the setting, do some further research first so you know what the consequences are.
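For reference, the setting itself is controlled with tune2fs; something along these lines should do it (the device name is mine and the 1% figure is just an arbitrary example, not a recommendation):

# see the current reserved block settings
sudo tune2fs -l /dev/md0 | grep -i reserved

# drop the reservation to 1% of the file system (example value only)
sudo tune2fs -m 1 /dev/md0

The blocks are reserved so that root can still write to the disk (logs and the like) when it fills up, and to give the file system a bit of headroom against fragmentation, which is why leaving it alone on a root partition is generally the safe choice.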