# Add external SATA drive without reboot.



## woodson2 (Jun 22, 2009)

Running FreeBSD 7.2

I have an external hot-swappable hard drive enclosure that connects via a PCI-e SATA card. FreeBSD sees the drive when I reboot, but I need to be able to swap a drive out and create a new filesystem without rebooting. After some research I've come across the following utilities that I think can get me there. The problem is that I need help sorting this out, since the man pages for camcontrol and atacontrol warn of disk corruption and data loss if you're not careful. This is a blank disk, so I'm not worried about the new drive; I just want to make sure I don't wreck the OS drive. Thanks.

camcontrol
atacontrol
kldload 


After removing the old device I see this in /var/log/messages:


```
Jun 22 09:16:47 BSD kernel: ata3: reset timeout - no device found
```

OK so I tried 


```
atacontrol reinit ata3         
Master:      no device present
Slave:       no device present
```

AND 


```
atacontrol attach ata3
atacontrol: ioctl(IOCATAATTACH): File exists
```


----------



## woodson2 (Jun 22, 2009)

OK...So I think I might be in business after running


```
atacontrol detach ata3
```

Then


```
atacontrol attach ata3
Master:  ad6 <WDC WD2500AAJS-00L7A0/01.03E01> SATA revision 2.x
Slave:       no device present
```
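For later reference, that working sequence can be sketched as a small dry-run script. The channel name (ata3) and the atacontrol subcommands are the ones from the posts above; the `run` wrapper only prints each command instead of executing it, since detaching the wrong channel can take down a live disk.

```shell
#!/bin/sh
# Dry-run sketch of the hot-swap sequence from this thread.
# Pass the real channel (e.g. ata3) as $1; nothing is executed --
# each step is only printed so it can be reviewed first.
CHAN=${1:-ata3}
run() { echo "would run: $*"; }

run atacontrol detach "$CHAN"   # release the old disk from the channel
# ...physically swap the drive in the enclosure here...
run atacontrol attach "$CHAN"   # probe the channel for the new disk
run atacontrol list             # confirm the new device (e.g. ad6) shows up
```

Drop the `run` wrapper only once you've double-checked the channel against `atacontrol list`.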


----------



## woodson2 (Jun 22, 2009)

OK, looks good. I've created a new filesystem and mounted it. I have a minor concern, though.

The disk is a 250GB SATA drive.

When I create an ext3 filesystem in Linux on the same drive I get about 218GB of disk space.

When I create the filesystem on FreeBSD I get about 208GB of usable space.

I created the filesystem using sysinstall, so the newfs command was run automatically for me. I'm wondering: if I created the slice and filesystem from the command line, would I be able to get more usable space using the -m option to newfs? Or is this just a difference in the space requirements of UFS2 and ext3?
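For anyone wanting to do it by hand instead of through sysinstall, the usual sequence on 7.x is fdisk, bsdlabel, then newfs; `-m` sets the minfree percentage. The device names below follow the attach output earlier in the thread (ad6), but treat this as a sketch: it only prints the commands, because run for real they destroy everything on the target disk.

```shell
#!/bin/sh
# Dry-run sketch: slice, label, and newfs a blank disk by hand.
# DISK follows the attach output above (ad6); verify the target
# before running anything for real.
DISK=ad6
MINFREE=8   # newfs -m percentage; 8 is the default reserve
run() { echo "would run: $*"; }

run fdisk -BI "/dev/$DISK"                    # one slice covering the disk
run bsdlabel -w "/dev/${DISK}s1"              # write a standard label
run newfs -U -m "$MINFREE" "/dev/${DISK}s1a"  # -U enables soft updates
```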


----------



## Vye (Jun 22, 2009)

If you run mount you can see if you have soft-updates enabled on your UFS file system.

example:

```
/dev/da1s1d on /mnt/storage (ufs, local, soft-updates)
```

soft-updates requires approximately 8% of your usable space.
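A quick way to script that check is to match the flags in the mount(8) output; the sample line below is the one from this post, and in practice you'd feed it from `mount | grep <mountpoint>`.

```shell
#!/bin/sh
# Check a mount(8) output line for the soft-updates flag.
# LINE is the example from this post; in real use, substitute
# the output of: mount | grep '/mnt/storage'
LINE='/dev/da1s1d on /mnt/storage (ufs, local, soft-updates)'
case "$LINE" in
    *soft-updates*) STATUS="enabled" ;;
    *)              STATUS="disabled" ;;
esac
echo "soft-updates: $STATUS"
```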


----------



## woodson2 (Jun 22, 2009)

Vye said:

> If you run mount you can see if you have soft-updates enabled on your UFS file system.
> 
> example:
> 
> ...




Ahh..Thanks

Can you point me in the right direction so I can read about soft-updates and what it does?


----------



## DutchDaemon (Jun 22, 2009)

http://www.usenix.org/publications/library/proceedings/usenix99/mckusick.html

BTW, I don't think softupdates hijacks 8% of your disk, or you couldn't turn softupdates on or off whenever you like. Filesystems _themselves_ do reserve 8-10% of disk space for the root user, to enable it to still perform necessary tasks on a disk that is full to regular users. The numbers you see (~15%) do look very high to me, but simply using -m to force the reserve down may have adverse effects; see tunefs(8).
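The arithmetic roughly bears this out. A "250GB" (decimal) drive is about 232 GiB of raw space; after filesystem metadata, an 8% minfree reserve brings the user-visible figure down near the ~208GB observed earlier. The post-metadata size below is an assumption for illustration, not a measured value.

```shell
#!/bin/sh
# Rough reserve arithmetic (sketch, integer GiB):
# 250 GB (decimal) is ~232 GiB; assume ~226 GiB after UFS2 metadata.
RAW_GIB=232
AFTER_META=226   # assumed, for illustration only
for MINFREE in 8 5 0; do
    AVAIL=$((AFTER_META * (100 - MINFREE) / 100))
    echo "minfree=${MINFREE}% -> ~${AVAIL} GiB shown as available"
done
```

With the 8% default this lands at ~207 GiB, close to the ~208GB the original poster saw.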


----------



## woodson2 (Jun 22, 2009)

Thanks..


----------



## woodson2 (Jun 22, 2009)

DutchDaemon said:

> http://www.usenix.org/publications/library/proceedings/usenix99/mckusick.html
> 
> BTW, I don't think softupdates hijacks 8% of your disk, or you couldn't turn softupdates on or off whenever you like. Filesystems _themselves_ do reserve 8-10% of disk space for the root user, to enable it to still perform necessary tasks on a disk that is full to regular users. The numbers you see (~15%) do look very high to me, but simply using -m to force the reserve down may have adverse effects; see tunefs(8).



```
tunefs -n disable /dev/ad6s1d
tunefs: soft updates cleared
tunefs: /dev/ad6s1d: failed to write superblock
```

So when I run mount I still see soft updates enabled:

```
/dev/ad6s1d on /BACKUPS (ufs, local, soft-updates)
```

I think I'm going to keep soft updates on, but I just wanted to see the effects of turning them off. Any ideas why it's failing to disable? Do I need to unmount the filesystem and remount?


----------



## DutchDaemon (Jun 22, 2009)

Don't disable soft updates; they're way too much of a performance boost to give up. If you do want to disable them, you need to unmount the filesystem before using tunefs. Read the 'description' in tunefs(8) ...
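The safe order, per tunefs(8), can be sketched as another dry run; the device and mount point are the ones from the post above, and again nothing is executed.

```shell
#!/bin/sh
# Dry-run sketch: tunefs must operate on an unmounted filesystem,
# so unmount first, change the flag, then remount.
DEV=/dev/ad6s1d
MNT=/BACKUPS
run() { echo "would run: $*"; }

run umount "$MNT"
run tunefs -n disable "$DEV"   # clear soft updates ('enable' to set)
run mount "$DEV" "$MNT"
```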


----------



## woodson2 (Jun 22, 2009)

DutchDaemon said:

> Don't disable soft updates; they're way too much of a performance boost to give up. If you do want to disable them, you need to unmount the filesystem before using tunefs. Read the 'description' in tunefs(8) ...



Got it..Thanks again..I think I'll just leave the filesystem as is..


----------



## aragon (Jun 27, 2009)

DutchDaemon said:

> Filesystems _themselves_ do reserve 8-10% of disk space for the root user to enable it to still perform necessary taks on a disk that is full to regular users.


I think the reserve is actually to help prevent fragmentation from occurring.


----------



## Beastie (Jun 27, 2009)

aragon said:

> I think the reserve is actually to help prevent fragmentation from occurring.



No, officially it's as DutchDaemon said.

Fragmentation _always_ occurs on _any_ file system (the fragmentation rate for each of your UFS partitions can be seen when starting up the machine).
What differs is the ability of the OS's block allocation algorithms to deal with it efficiently (or not).

But it's true that the fragmentation problem increases as the last blocks and frags get allocated, because it inhibits the proper work of the above-mentioned algorithms.


----------



## aragon (Jun 27, 2009)

Beastie said:

> No, officially it's as DutchDaemon said.
> 
> But it's true that the fragmentation problem increases as the last blocks and frags get allocated because it inhibits the proper work of the above mentioned algorithms.


The way I read DutchDaemon's explanation, the reserve is there as a built-in soft quota, allowing presumably important root processes to still write to a filesystem by preventing regular users from filling it completely. To be more specific, I think it is there to prevent users from inhibiting a filesystem's ability to avoid fragmentation and maintain good write performance. tunefs(8) agrees with me:


```
-m minfree
     Specify the percentage of space held back from normal users; the
     minimum free space threshold.  The default value used is 8%.
     Note that lowering the threshold can adversely affect
     performance:

     o   Settings of 5% and less force space optimization to always be
         used which will greatly increase the overhead for file
         writes.

     o   The file system's ability to avoid fragmentation will be
         reduced when the total free space, including the reserve,
         drops below 15%.  As free space approaches zero, throughput
         can degrade by up to a factor of three over the performance
         obtained at a 10% threshold.
```
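To see what a given filesystem is actually set to, `tunefs -p` prints the current tuning values (minfree among them) without changing anything; a dry-run sketch on the device from this thread:

```shell
#!/bin/sh
# Dry-run sketch: inspect (not change) UFS tuning values.
# -p only reports; -m would change minfree, which per the man
# page excerpt above should not be lowered casually.
DEV=/dev/ad6s1d
run() { echo "would run: $*"; }

run tunefs -p "$DEV"    # report minfree, soft updates, etc.
run tunefs -m 8 "$DEV"  # example: (re)set the default 8% reserve
```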


----------

