# Expand gmirror on FreeBSD 7.2



## TTABKATA (Nov 1, 2011)

Hello friends,
I have one question. The system runs FreeBSD 7.2 with 1 x 120 GB + 2 x 1.5 TB HDDs. The first HDD is for the system; the other two are in a RAID1 gmirror. Free space is now very low and I want to expand it with two new identical 1.5 TB HDDs.
Can someone tell me how to do it without losing any information from the RAID?
Thank you in advance.
Best regards


----------



## aragon (Nov 1, 2011)

I don't see that being possible. The closest option would be to create another gmirror instance with the two new drives and mount it at a new location in the file system, perhaps even below the directory hierarchy of your existing gmirror instance.


----------



## TTABKATA (Nov 2, 2011)

Thanks for the reply, *aragon*, but how do I do it? I need a little guidance.
Thanks in advance,


----------



## fluca1978 (Nov 2, 2011)

You can do it the same way you did for the first mirror.
Let's say your new drives are ad3 and ad4, and that ad3 is already formatted (and partitioned). Something like the following should work; of course, be careful with the disk names to avoid destroying your data.


```
gmirror label -v -b round-robin gm1 /dev/ad3
gmirror insert gm1 /dev/ad4
mount /dev/mirror/gm1s1a /mnt/mirror1
# then copy the data from one mirror to the other
```
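
The copy step above could be done with tar(1), for example; the /mnt/mirror0 (old mirror) and /mnt/mirror1 (new mirror) mount points here are assumptions for illustration, not names from the thread:

```shell
# Sketch: replicate one mounted filesystem onto another with tar,
# preserving permissions. /mnt/mirror0 (source) and /mnt/mirror1
# (destination) are hypothetical mount points.
tar -C /mnt/mirror0 -cf - . | tar -C /mnt/mirror1 -xpf -
```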


----------



## TTABKATA (Nov 3, 2011)

If I make a backup of the data, can I create a new gmirror with four HDDs? If yes, how do I do it?
Thank you in advance.


----------



## fluca1978 (Nov 3, 2011)

Of course you can, but you have two sets of disks, so you cannot mirror all four disks at once, only one set at a time. Let's say da0 and da1 are the 120 GB drives and da2 and da3 are the 1.5 TB ones. You can mirror da0+da2 over da1+da3, or mirror da0 over da1 and da2 over da3, or mirror da0+da1 plus part of da2 over part of da3, but I don't recommend the last option.
I suggest you create stripes of da0+da2 and da1+da3 and mirror the two stripes one over the other. I don't have a virtual machine to test each command, but something like this should work (please read the man pages carefully and *back up* your data first!):


```
gstripe label -v stripe0 da0 da2
gstripe label -v stripe1 da1 da3
gmirror label -v -b round-robin gm1 /dev/stripe/stripe0
gmirror insert gm1 /dev/stripe/stripe1
bsdlabel -w /dev/mirror/gm1
newfs -U /dev/mirror/gm1a
```


----------



## wblock@ (Nov 3, 2011)

RAID 1+0 or "10" (mirror, then stripe) is preferred over RAID 0+1 (stripe, then mirror) for data safety.

But here, it's not worth bothering with the two 120G drives.  Replace the 120G mirror with a 1.5T mirror of the new drives, using dump/restore to copy the old data.  Then use the 120G drives for something else.
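
The dump(8)/restore(8) step might look roughly like this; it is only a sketch, and the gm1 label, the /mnt/newroot mount point and the ad4s1a source partition are assumptions for illustration:

```
# Sketch only: copy the old root filesystem onto the new mirror with
# dump(8)/restore(8). The gm1 label, /mnt/newroot mount point and
# ad4s1a source partition are assumptions; repeat for each
# filesystem (/, /usr, /var, ...).
bsdlabel -w /dev/mirror/gm1
newfs -U /dev/mirror/gm1a
mount /dev/mirror/gm1a /mnt/newroot
cd /mnt/newroot
dump -0Laf - /dev/ad4s1a | restore -rf -
```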


----------



## fluca1978 (Nov 3, 2011)

wblock@ said:

> RAID 1+0 or "10" (mirror, then stripe) is preferred over RAID 0+1 (stripe, then mirror) for data safety.



Correct! It would be better to mirror the disks first and then stripe. Better still would be to put them all in a ZFS RAID.
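
The ZFS idea could look like the sketch below: one pool striped across two mirrored pairs, which is effectively RAID 10. The pool name and device names are assumptions, and note that ZFS on FreeBSD 7.2 was still considered experimental:

```
# Sketch: a ZFS pool striped over two mirrored pairs (RAID 10).
# "tank" and the ad* device names are assumptions; adjust to your setup.
zpool create tank mirror ad6 ad8 mirror ad10 ad12
zpool status tank
```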


----------



## TTABKATA (Nov 4, 2011)

Now I see a problem with one of my HDDs:

`#egrep '^ad[0-9]|^da[0-9]' /var/run/dmesg.boot`

```
ad4: 114473MB <Seagate ST3120026AS 3.18> at ata2-master SATA150
ad5: 1430799MB <Seagate ST31500341AS CC1H> at ata2-slave SATA150
ad6: 1430799MB <Seagate ST31500341AS CC1H> at ata3-master SATA150
```

`#dmesg | grep ad4`

```
ad4: 114473MB <Seagate ST3120026AS 3.18> at ata2-master SATA150
GEOM_LABEL: Label for provider ad4s1a is ufsid/4af58edc6a7768ca.
GEOM_LABEL: Label for provider ad4s1d is ufsid/4af58f38a544a08c.
GEOM_LABEL: Label for provider ad4s1e is ufsid/4af58f324eaf8217.
GEOM_LABEL: Label for provider ad4s1f is ufsid/4af58f3200820ad4.
Trying to mount root from ufs:/dev/ad4s1a
GEOM_LABEL: Label for provider ad4s1a is ufsid/4af58edc6a7768ca.
GEOM_LABEL: Label ufsid/4af58f3200820ad4 removed.
ad4: TIMEOUT - READ_DMA retrying (1 retry left) LBA=55050463
ad4: TIMEOUT - READ_DMA retrying (0 retries left) LBA=55050463
ad4: FAILURE - READ_DMA timed out LBA=55050463
g_vfs_done():ad4s1f[READ(offset=19651395584, length=16384)]error = 5
```

`#dmesg | grep ad5`

```
ad5: 1430799MB <Seagate ST31500341AS CC1H> at ata2-slave SATA150
```

`#dmesg | grep ad6`

```
ad6: 1430799MB <Seagate ST31500341AS CC1H> at ata3-master SATA150
GEOM_MIRROR: Component ad6 (device gm0) broken, skipping.
GEOM_LABEL: Label for provider ad6s1 is ufsid/4af58f0ceec40539.
```

I will replace the broken HDD (ad6) and add two new ST31500341AS drives. I will back up my data and then:


```
#gmirror label gm0 ad5 ad6
#gmirror label gm1 ad7 ad8
#gstripe label stripe0 gm0 gm1
```


----------



## TTABKATA (Nov 13, 2011)

Hello again,

I created the two gmirrors, gm0 and gm1, but I can't create the stripe with gstripe:

```
gstripe: can't find gm0: No such file
```
Can FreeBSD 7.2-RELEASE-amd64 support more than 2 TB? If not, how do I deal with this issue?

Thank you in advance.


----------



## fluca1978 (Nov 14, 2011)

What does `gstripe status` report? I suspect you have to use /dev/mirror/gm0 to reference your mirrored device when passing it to gstripe.


----------



## TTABKATA (Nov 17, 2011)

Using the full /dev/mirror paths to gm0 and gm1 I managed to create stripe0, but now I am limited to 2^32-1 sectors per disk, i.e. 2 TB. Is there a solution, or must I install a newer version of FreeBSD?
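
The 2^32-1 sector ceiling works out to just under 2 TB, assuming the usual 512-byte sectors; a quick sketch of the arithmetic:

```shell
# A 32-bit sector count with 512-byte sectors tops out just under 2 TiB.
max_sectors=$(( (1 << 32) - 1 ))
max_bytes=$(( max_sectors * 512 ))
echo "$max_sectors sectors = $max_bytes bytes"
echo "$(( max_bytes / 1000000000 )) GB"
```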


----------



## TTABKATA (Nov 18, 2011)

I resolved my issue with UFS journaling through GEOM:


```
#gmirror label gm0 ad6 ad8
#gmirror label gm1 ad10 ad12
#dd if=/dev/zero of=/dev/mirror/gm0 bs=1k count=1
#dd if=/dev/zero of=/dev/mirror/gm1 bs=1k count=1

#gstripe label stripe0 /dev/mirror/gm0 /dev/mirror/gm1
#dd if=/dev/zero of=/dev/stripe/stripe0 bs=1k count=1

#gjournal load
#gjournal label -f /dev/stripe/stripe0
#newfs -O 2 -J /dev/stripe/stripe0.journal
#mount /dev/stripe/stripe0.journal /mnt

#ee /boot/loader.conf
	geom_mirror_load="YES"
	geom_stripe_load="YES"
	geom_journal_load="YES"
```
Thanks to all for helping me
Best Regards


----------

