# growfs: we are not growing



## pennello (Jun 19, 2010)

Hi all!

I'm running FreeBSD 8.0, and I recently migrated a hardware RAID from 11 TB to 15 TB (I added some new hard drives). I figured out how to resize the one GPT partition (just blow away the partition and add a new one, voila!).
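For reference, the partition dance was roughly the following (device name and partition index are hypothetical; re-creating the partition at the same start block leaves the filesystem data untouched):

```
# gpart show da1
# gpart delete -i 1 da1
# gpart add -t freebsd-ufs -i 1 da1
```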

The issue is resizing the one UFS filesystem on the GPT partition.


```
# growfs /dev/da1p1
growfs: we are not growing (5859342319->296263663)
```

296263663 < 5859342319, so this seems to me like a 32-bit vs. 64-bit issue.  I found this page, but it looks woefully out of date (the last update on growfs is from 2004!).  I also found this old thread on the subject from 2007.
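As a quick sanity check on the 32-bit theory (the exact arithmetic growfs does internally is an assumption on my part), the two numbers do straddle the 32-bit boundary:

```shell
# Sizes taken from the growfs error message above.
old=5859342319     # requested size: does not fit in 32 bits
new=296263663      # size growfs computed: fits in 32 bits
limit=$((1 << 32)) # 4294967296
echo "old < 2^32: $((old < limit))"   # prints 0 (false)
echo "new < 2^32: $((new < limit))"   # prints 1 (true)
```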

The thread included a patch for growfs to resolve the issue.  I tried naively applying the patch, but was unable to get it to compile.

Any thoughts?  What other debugging information would be helpful to figure out what's going on, if it's unclear?  Anyone get that patch working?  Is there any way to expand a giant UFS filesystem to an even giant-er one?

Thanks!


----------



## pennello (Jun 22, 2010)

Is everyone on FreeBSD just using ZFS for giant volumes nowadays?


----------



## phoenix (Jun 22, 2010)

Yeah.  Once you get above 2 TB, you really don't want to use UFS.  Especially if you have to fsck it for any reason.


----------



## pennello (Jun 22, 2010)

phoenix said:

> Especially if you have to fsck it for any reason.



Interesting.  I have an 8T UFS filesystem right now that I use regularly and I've fsck'd it before.  Are there known issues with fsck for volumes larger than 2T?


----------



## pennello (Jun 23, 2010)

Regardless, I found this useful thread, and will start playing around with ZFS.

Thanks!


----------



## Matty (Jun 23, 2010)

pennello said:

> Interesting.  I have an 8T UFS filesystem right now that I use regularly and I've fsck'd it before.  Are there known issues with fsck for volumes larger than 2T?



Guess it's more a time issue when fscking large volumes.


----------



## pennello (Jun 23, 2010)

Matty said:

> Guess it's more a time issue when fscking large volumes.



Ah, sure--that makes sense.  They did take forever.


----------



## phoenix (Jun 23, 2010)

Yeah, it's just the amount of time required to do the fsck.  There's nothing wrong, per se, with using UFS for large filesystems.  Just be prepared to spend a lot of time waiting if things fail.

Now that SUJ and gjournal have been committed, though, this may not be as big of an issue.
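For reference, enabling either mechanism looks roughly like this (hypothetical device name; both require a FreeBSD version that actually ships them):

```
# tunefs -j enable /dev/da1p1

# gjournal load
# gjournal label da1p1
# newfs -J /dev/da1p1.journal
```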

When you get above 2 TB, you really want to start using journalled filesystems, or transactional filesystems, to eliminate the fsck as much as possible.  Using a volume manager helps a lot, as well, for managing large amounts of storage.

ZFS is pretty much built for just this purpose.


----------



## pennello (Jun 23, 2010)

phoenix said:

> ZFS is pretty much built for just this purpose.



I started playing around with it last night, and it's pretty magical so far.

Indeed!


----------



## fasznyak (Aug 25, 2010)

You can't expand a raidz1 array with a single disk; that's a REAL problem with ZFS right now.  I wrote to the developer list, and they told me that this functionality isn't going to be implemented.

So ZFS is still not the best solution, which is really sad, because it would have been good for me that way.

I'm also having a problem growing a GPT'ed UFS filesystem with growfs.

I set up a 3-disk (245 MB each) RAID5 array with gvinum, then added another 245 MB disk, but when I run growfs it does not expand by the full 245 MB I added.  Of course this was done in VMware.

growfs prints a warning that 251376 sectors cannot be allocated, which is about 100 MB of space.  So the new size is 613 MB instead of the approx. 700 MB I see when I newfs the 4-disk array directly.

WTF?!


----------



## pennello (Aug 26, 2010)

fasznyak said:

> You can't expand a raidz1 array with a single disk; that's a REAL problem with ZFS right now.  I wrote to the developer list, and they told me that this functionality isn't going to be implemented.
> 
> So ZFS is still not the best solution, which is really sad, because it would have been good for me that way.



I don't think that's how zfs is meant to be used.  I think the idea for expansion is to add another vdev into the pool, rather than to expand the capacity of a single vdev.

So, for example, if you have a pool with one raidz1, the way to expand it is to add another raidz1 of the same configuration (same number of disks, same size).
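Something like this, with hypothetical pool and disk names:

```
# zpool create tank raidz1 da1 da2 da3
# zpool add tank raidz1 da4 da5 da6
```

The `zpool add` attaches a second raidz1 vdev, and the pool's capacity grows immediately; ZFS stripes new writes across both vdevs.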


----------

