# FreeBSD SW RAID 1



## romeor (Apr 11, 2011)

Hello folks,

I've got into a situation where both RAID 1 consumers are now bigger than the provider:


```
Providers:
1. Name: mirror/gm0
   Mediasize: 80026361344 (75G)
   Sectorsize: 512
   Mode: r2w2e5
Consumers:
1. Name: ad2
   Mediasize: 160041885696 (149G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: DIRTY
   GenID: 0
   SyncID: 2
   ID: 1089895960
2. Name: ad3
   Mediasize: 160041885696 (149G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: DIRTY
   GenID: 0
   SyncID: 2
   ID: 996281232
```

I got into this situation when the first drive failed to start; I replaced it with a bigger one, and then the same story repeated with the second drive.

So, is there any way to use the free space on these consumers or not?


----------



## da1 (Apr 11, 2011)

growfs(8) comes to mind, but I've never actually done it myself.
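For what it's worth, a rough sketch of how growfs(8) is typically invoked. The device name `mirror/gm0s1f` is an assumption for illustration; on the FreeBSD releases of that era growfs could only be run on an unmounted filesystem, and you should back up first:

```shell
# Hypothetical sketch -- device name mirror/gm0s1f is an assumption.
# growfs(8) expands a UFS filesystem to fill its (now larger) provider.
umount /dev/mirror/gm0s1f        # the filesystem must not be mounted
growfs /dev/mirror/gm0s1f        # grow the UFS filesystem into the free space
fsck -t ufs /dev/mirror/gm0s1f   # sanity-check before remounting
mount /devv/mirror/gm0s1f /mnt   # remount and verify with df(1)
```

Note that growfs only helps once the partition and the gmirror provider underneath it have actually been enlarged; on its own it cannot see past an 80 GB provider.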


----------



## romeor (Apr 11, 2011)

Sorry?


----------



## mav@ (Apr 11, 2011)

I think you should first destroy and recreate the gmirror. Then either create another partition in the additional space or, fingers crossed, modify the existing one and use growfs.
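A hedged sketch of that destroy-and-recreate step, assuming the consumer names from the `gmirror list` output above. gmirror keeps its metadata in the last sector of each consumer, so recreating the mirror on the larger disks makes the provider span their full 149 GB; this is a sketch, not a tested procedure, and it should be done from single-user mode or rescue media with backups in hand:

```shell
# Sketch only -- consumer names ad2/ad3 taken from the gmirror list output.
gmirror stop gm0           # stop the mirror (filesystems must be unmounted)
gmirror clear ad2 ad3      # wipe the old gmirror metadata from both disks
gmirror label -v gm0 ad2 ad3   # recreate; provider now sized to the 149 GB disks
```

After this, the partition table and filesystem on the new provider still have to be adjusted (new partition, or growfs on the existing one), which is the risky part mentioned above.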


----------



## da1 (Apr 11, 2011)

@mav: My idea was that since he already has a mirror, he can play it quite safe on one HDD. Worst-case scenario, he erases it and inserts it back into the gmirror, leading to a resync.
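That "play safe on one disk" idea might look roughly like this sketch (consumer names are assumptions from the output above; not a tested recipe):

```shell
# Sketch -- experiment on one consumer while the other keeps serving data.
gmirror remove gm0 ad3     # detach one consumer from the mirror
# ...experiment on ad3 (repartitioning, trial growfs, etc.)...
gmirror forget gm0         # worst case: forget the detached consumer...
gmirror insert gm0 ad3     # ...and re-insert it, triggering a full resync
```

The mirror runs degraded on a single disk while you experiment, so a backup is still strongly advisable.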


----------



## romeor (Apr 12, 2011)

Could you guys give me more detailed hints on how to do that? It's a production server, so I'm quite unsure about this.


----------



## mix_room (Apr 12, 2011)

romeor said:

> Could you guys give me more detailed hints on how to do that? It's a production server, so I'm quite unsure about this.



Since it is a production server, you obviously have backups. Then just destroy the mirror and recreate it.

Have you read the Google results?
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2007-08/msg01773.html


----------



## romeor (Apr 13, 2011)

Thanks for the link and the hint. It seems I can't do it without shutting down. Damn.


----------

