# Some disk space lost?



## talien (Jan 19, 2013)

Hi there. So I'm trying out FreeBSD. So far so good.

Here's something I don't understand:


```
jack# gpart show
=>      63  15662241  ad0  MBR  (7.5G)
        63  15662241    1  freebsd  [active]  (7.5G)

=>       0  15662241  ad0s1  BSD  (7.5G)
         0  15662241      1  freebsd-ufs  (7.5G)

=>        63  1953525105  ad4  MBR  (931G)
          63  1953525105    1  freebsd  [active]  (931G)

=>         0  1953525105  ad4s1  BSD  (931G)
           0          16         - free -  (8.0k)
          16  1953525089      1  !0  (931G)
```


```
jack# df
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1a    7.2G    6.2G    462M    93%    /
devfs          1.0k    1.0k      0B   100%    /dev
/dev/ad4s1a    902G    7.7G    894G     1%    /music
```

Why is the file system size 902G on a 1 TB disk? It should be 931G, which is what's shown as the partition (or slice, or label, I don't really know) size.

Same for ad0, 7.5G vs 7.2G.


----------



## wblock@ (Jan 19, 2013)

A quick answer from the FAQ: "How is it possible for a partition to be more than 100% full?"


----------



## talien (Jan 19, 2013)

Wrong answer.


----------



## wblock@ (Jan 19, 2013)

Are you going by the title, or did you read it?


----------



## talien (Jan 19, 2013)

Yes, I read it before posting this thread.

It seems to me this lost space is not the 8% reserved space. For starters, it's about 5%. Second, I could clearly see that 8% reserve as the difference between the filesystem size and the available space shown by df. Third, I set it to 0% immediately anyway, as it seems useless to me.

I'm using 8.3, in case that matters.

Looking back through the scrollback buffer, I noticed that the size is correct here:

```
jack# newfs ad4s1a
/dev/ad4s1a: 953869.7MB (1953525088 sectors) block size 16384, fragment size 2048
```


----------



## wblock@ (Jan 19, 2013)

There is a similar 5% difference here:

```
902G    531G    298G    64%    /other
```

Seems like I've worked this out in the past, but can't recall right now.  Maybe filesystem overhead.


----------



## kpa (Jan 19, 2013)

The bookkeeping information, where the used and free blocks are recorded, along with the other information about the directory tree and files, has to go somewhere.
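As a rough back-of-the-envelope check (assuming the UFS2 defaults of that era: one inode per 8192 bytes of data space and 256-byte on-disk inodes; these numbers are assumptions, not from the thread), the inode tables alone account for nearly all of the missing ~30 GB on ad4s1a:

```python
# Back-of-envelope estimate of UFS2 inode-table overhead for the ad4s1a
# partition above. Assumed defaults: one inode per 8192 bytes (-i 8192)
# and 256 bytes per on-disk UFS2 inode.
SECTOR = 512
sectors = 1953525088            # from the newfs output in this thread
total_bytes = sectors * SECTOR

INODE_SIZE = 256                # bytes per UFS2 on-disk inode (assumed)
INODE_DENSITY = 8192            # default bytes of data per inode (assumed)

inode_bytes = total_bytes * INODE_SIZE // INODE_DENSITY
usable_gib = (total_bytes - inode_bytes) / 2**30

print(f"partition:     {total_bytes / 2**30:.1f} GiB")   # ~931.5 GiB
print(f"inode tables:  {inode_bytes / 2**30:.1f} GiB")   # ~29.1 GiB
print(f"left for data: {usable_gib:.1f} GiB")            # ~902.4 GiB
```

Which lines up suspiciously well with the 902G that df reports, so "bookkeeping" is a plausible culprit here.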


----------



## talien (Jan 19, 2013)

Well, in that case having bookkeeping information must be quite unique to UFS, as I haven't noticed this with other file systems.

So far this seems like a very serious bug or very bad design to me.


----------



## kpa (Jan 19, 2013)

Yeah, in other systems the computer works by pure guesswork and with the help of psychics to keep tabs on where the files are stored.


----------



## Savagedlight (Jan 19, 2013)

I'm not seeing anything even remotely near that loss of space on FreeBSD 9.1 (AMD64) with a UFS2-formatted partition. I'm really curious what's causing it for you. ><

`# df -h -t ufs`

```
Filesystem                Size    Used   Avail Capacity  Mounted on
/dev/gpt/Bay1.2-system    7.9G    4.0G    3.2G    56%    /
/dev/md0                   15M     24k     14M     0%    /tmp
```
`# gpart show -l | grep Bay1.2-system`

```
2097442   16777216     3  Bay1.2-system  (8.0G)
```


----------



## Toast (Jan 20, 2013)

talien said:

> Yes I read it, before posting this thread.
> 
> It seems to me this lost space is not the 8% reserved space. For starters, it's about 5%. Second, I could clearly see that 8% reserve as difference in filesystem size and available space as shown by df. Third, I set it to 0% immediately anyway as it seems useless for me.
> 
> ...

talien said:

> Well, in that case having bookkeeping information must be quite unique to UFS, as I haven't noticed this with other file systems.
> 
> So far this seems like a very serious bug or very bad design to me.


http://en.wikipedia.org/wiki/Inode


> In computing, an inode (index node) is a data structure found in many Unix file systems. Each inode stores all the information about a file system object (file, device node, socket, pipe, etc.), except data content and file name.


----------



## talien (Jan 20, 2013)

OK, I investigated a little and solved the problem for myself by tuning the newfs parameters. Using the defaults, it created way too many inodes, or cylinder groups, or something else, or all of the above; I didn't bother to research further.


```
newfs -b 65536 -f 8192 -i 1000000 -m 0 -n ad4s1a
```

So, larger block size and fewer inodes. This seems to work fine, and the resulting file system size is now what you would expect.

The issue as such remains, though. But as these adventures are taking place on 8.3, it may be fixed in newer releases. From the man page I can see that at least the default block size has been increased.
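For comparison, here is a quick sketch of how much `-i 1000000` shrinks the inode tables on the same partition (again assuming 256-byte UFS2 on-disk inodes and an 8192-byte default density; both numbers are assumptions, not taken from the man page):

```python
# Compare inode-table overhead at the assumed default density (-i 8192)
# vs. the tuned value (-i 1000000) on the same 1953525088-sector partition.
total_bytes = 1953525088 * 512

def inode_overhead(density, inode_size=256):
    """Fraction of the partition consumed by inode tables."""
    return inode_size / density

for density in (8192, 1_000_000):
    frac = inode_overhead(density)
    print(f"-i {density}: {frac:.4%} of disk ≈ {total_bytes * frac / 2**30:.1f} GiB")
# -i 8192:    3.1250% of disk ≈ 29.1 GiB
# -i 1000000: 0.0256% of disk ≈ 0.2 GiB
```

The trade-off, of course, is that one inode per megabyte caps the filesystem at roughly a million files; that's fine for a disk of big music files, but not for general use.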


----------

