# could use some advice on partitioning



## rtsiresy (Mar 25, 2019)

hey all, 
I am trying to install FreeBSD on a server machine with five 4 TB SATA hard drives... what partitioning scheme should I use?

I was thinking of the ZFS one but really need advice from pros...


----------



## Phishfry (Mar 25, 2019)

I am no pro, but if you are new to ZFS then use a small OS drive and put the ZFS array on the 5 disks.
That gives you a good chance to learn without messing up the whole machine. Easy recovery too.
ZFS is much more than a partitioning scheme.
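As a rough sketch of that layout (device names are assumptions; the five data disks are taken to be da1 through da5, with the OS on its own separate drive), the array could be created like this:

```shell
# Assumption: the five 4 TB data disks appear as da1..da5;
# the OS lives on a separate small drive.
# Create a RAID-Z2 pool named "tank" spanning all five disks
# (it survives two simultaneous disk failures):
zpool create tank raidz2 da1 da2 da3 da4 da5

# Verify the pool:
zpool status tank
zpool list tank
```

The disk names here are placeholders; check `dmesg` or `camcontrol devlist` to see what your drives are actually called.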


----------



## emensee (Mar 25, 2019)

How much RAM is in the server machine?


----------



## rtsiresy (Mar 25, 2019)

emensee said:


> How much RAM is in the server machine?


8 GB


----------



## emensee (Mar 25, 2019)

I'm not an expert on ZFS either, but I have read that it is RAM-intensive. There are different configurations to make it less so, I believe.


----------



## SirDice (Mar 25, 2019)

ZFS should be fine if you have more than 2GB. It likes memory, a lot, but can work just fine with a limited amount.


----------



## rtsiresy (Mar 25, 2019)

ok, thanks


----------



## linux->bsd (Mar 26, 2019)

ZFS is a lifesaver for new users (was for me at least). Just remember to snapshot your datasets before you go mucking about, and you can safely roll back your changes with very little hassle. The FreeBSD installer has a guided ZFS installation option, but won't presume which RAID type you should use; you have to choose.

Edit to add: I recommend sysutils/zfsnap2 for quick and easy snapshotting and automatic expiration.
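The snapshot-then-rollback workflow mentioned above looks roughly like this (the dataset name is a placeholder):

```shell
# Take a snapshot of a dataset before experimenting
# (zroot/usr/home is a placeholder dataset name):
zfs snapshot zroot/usr/home@before-experiment

# List existing snapshots:
zfs list -t snapshot

# Roll the dataset back if things go wrong
# (this discards changes made after the snapshot):
zfs rollback zroot/usr/home@before-experiment
```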


----------



## tommyhp2 (Apr 1, 2019)

I'm assuming you're using FreeBSD 12.0-RELEASE.  I suggest you use the guided Auto (ZFS) install, select accordingly, and go with RAID-Z2 (roughly equivalent to RAID 6) if you can spare the additional HDD for reliability.  At the very least, go with RAID-Z1 (RAID 5).  ZFS will then utilize all of your HDDs.  ZFS is a unique file system with somewhat built-in partitioning (so to speak), aka datasets.  You create each dataset via

```
zfs create <pool_name>/<data_set1>/<data_set2>
```
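Datasets can also have properties set at creation time, and children inherit them; for instance (pool and dataset names here are placeholders):

```shell
# Create a dataset with lz4 compression and a custom mountpoint
# ("tank" and "data" are placeholder names):
zfs create -o compression=lz4 -o mountpoint=/data tank/data

# Child datasets inherit properties unless overridden:
zfs create tank/data/projects
zfs get compression tank/data/projects
```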

The default install will create ZFS filesystem hierarchy like this:


```
root@fbsd12:~ # df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
fbsd12/ROOT/default     19G    2.7G     16G    14%    /
devfs                  1.0K    1.0K      0B   100%    /dev
fbsd12                  16G     88K     16G     0%    /fbsd12
fbsd12/tmp              16G    172K     16G     0%    /tmp
fbsd12/usr/home         16G    124K     16G     0%    /usr/home
fbsd12/usr/ports        16G     88K     16G     0%    /usr/ports
fbsd12/var/audit        16G     88K     16G     0%    /var/audit
fbsd12/var/crash        16G     88K     16G     0%    /var/crash
fbsd12/var/log          16G    132K     16G     0%    /var/log
fbsd12/var/mail         16G     88K     16G     0%    /var/mail
fbsd12/var/tmp          16G     88K     16G     0%    /var/tmp
```

You can make some modifications to the dataset hierarchy like this:


```
root@d-build-fbsd:~ # df -h | grep -v poud
Filesystem                                                  Size    Used   Avail Capacity  Mounted on
d_build_fbsd/ROOT/default                                    88G    3.4G     85G     4%    /
devfs                                                       1.0K    1.0K      0B   100%    /dev
d_build_fbsd                                                 85G     88K     85G     0%    /d_build_fbsd
d_build_fbsd/tmp                                             85G    128K     85G     0%    /tmp
d_build_fbsd/usr/home                                        85G     88K     85G     0%    /usr/home
d_build_fbsd/usr/local                                       85G    153M     85G     0%    /usr/local
d_build_fbsd/usr/local/www                                   85G     88K     85G     0%    /usr/local/www
d_build_fbsd/usr/ports                                       85G    800M     85G     1%    /usr/ports
d_build_fbsd/usr/ports/distfiles                             87G    2.5G     85G     3%    /usr/ports/distfiles
d_build_fbsd/usr/src                                         86G    1.3G     85G     2%    /usr/src
d_build_fbsd/var/audit                                       85G     88K     85G     0%    /var/audit
d_build_fbsd/var/crash                                       85G     88K     85G     0%    /var/crash
d_build_fbsd/var/db                                          85G    223M     85G     0%    /var/db
d_build_fbsd/var/db/mysql                                    85G     88K     85G     0%    /var/db/mysql
d_build_fbsd/var/db/mysql/data                               85G     88K     85G     0%    /var/db/mysql/data
d_build_fbsd/var/db/mysql/innodata                           85G     88K     85G     0%    /var/db/mysql/innodata
d_build_fbsd/var/db/mysql/innolog                            85G     88K     85G     0%    /var/db/mysql/innolog
d_build_fbsd/var/db/pgsql                                    85G     88K     85G     0%    /var/db/pgsql
d_build_fbsd/var/log                                         85G    460K     85G     0%    /var/log
d_build_fbsd/var/mail                                        85G     88K     85G     0%    /var/mail
d_build_fbsd/var/tmp                                         85G     88K     85G     0%    /var/tmp
linprocfs                                                   4.0K    4.0K      0B   100%    /compat/linux/proc
```

depending on your needs.  You can also create a separate ZFS pool for each use, say MySQL/MariaDB and www (Apache/nginx), and mount the pools accordingly:


```
root@www:~ # df -h
Filesystem                      Size    Used   Avail Capacity  Mounted on
fbsd11/ROOT/default              15G    4.9G    9.8G    33%    /
devfs                           1.0K    1.0K      0B   100%    /dev
fbsd11                          9.8G     88K    9.8G     0%    /fbsd11
fbsd11/tmp                      9.8G    100K    9.8G     0%    /tmp
fbsd11/usr/home                  10G    211M    9.8G     2%    /usr/home
fbsd11/usr/local                 10G    671M    9.8G     6%    /usr/local
www                              19G    244K     19G     0%    /usr/local/www
www/apache24                     19G    519M     19G     3%    /usr/local/www/apache24
fbsd11/usr/ports                9.8G     88K    9.8G     0%    /usr/ports
fbsd11/usr/ports/distfiles      9.8G     88K    9.8G     0%    /usr/ports/distfiles
fbsd11/usr/src11.2               11G    1.3G    9.8G    11%    /usr/src11.2
fbsd11/usr/src12.0               10G    682M    9.8G     6%    /usr/src12.0
fbsd11/var/audit                9.8G     88K    9.8G     0%    /var/audit
fbsd11/var/crash                9.8G     88K    9.8G     0%    /var/crash
fbsd11/var/db                    11G    1.3G    9.8G    12%    /var/db
fbsd11/var/db/mysql             9.8G     88K    9.8G     0%    /var/db/mysql
fbsd11/var/db/mysql/data        9.8G    140K    9.8G     0%    /var/db/mysql/data
fbsd11/var/db/mysql/innodata    9.8G     88K    9.8G     0%    /var/db/mysql/innodata
fbsd11/var/db/mysql/innolog     9.8G     88K    9.8G     0%    /var/db/mysql/innolog
fbsd11/var/db/pgsql             9.8G     88K    9.8G     0%    /var/db/pgsql
fbsd11/var/log                  9.8G    432K    9.8G     0%    /var/log
fbsd11/var/log/apache24          10G    457M    9.8G     4%    /var/log/apache24
fbsd11/var/mail                 9.8G     88K    9.8G     0%    /var/mail
fbsd11/var/tmp                  9.8G     88K    9.8G     0%    /var/tmp
fdescfs                         1.0K    1.0K      0B   100%    /dev/fd
linprocfs                       4.0K    4.0K      0B   100%    /compat/linux/proc
```

The ZFS pool fbsd11 is where the original FreeBSD 11 was installed, and another pool, www, on separate HDDs is used for the web server.


```
root@www:~ # zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
fbsd11  19.9G  9.44G  10.4G        -         -    59%    47%  1.00x  ONLINE  -
www     19.9G   528M  19.4G        -         -    11%     2%  1.00x  ONLINE  -
```
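Creating and mounting a second pool like the www one above could look like this (disk names are assumptions):

```shell
# Assumption: ada2 and ada3 are the spare disks set aside for the web pool.
# Create a mirrored pool mounted under /usr/local/www:
zpool create -m /usr/local/www www mirror ada2 ada3

# Give Apache its own dataset inside the new pool:
zfs create www/apache24
```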

HTH,
Tommy

[Edit] PS:  ZFS gives you the flexibility of not worrying about the correct size for a partition you've created for a specific use.  Everything goes into the 'pool' of hard drives.  The only drawback I see with ZFS is fragmentation, which is really hard to resolve since ZFS is unlike any other filesystem.  It's worth it, though: for that one drawback you get flexibility in the layout (especially when upgrading to larger HDDs; traditional RAID requires you to move the data to temporary storage, recreate the RAID, and move the data back), tuning for different purposes, reliability, etc.
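The upgrade-in-place Tommy describes (growing a pool by swapping in larger HDDs, with no temporary storage needed) goes roughly like this, one disk at a time; the pool and device names are placeholders:

```shell
# Let the pool grow once every member disk has been enlarged
# ("tank" and the device names are placeholders):
zpool set autoexpand=on tank

# Replace each old disk with a bigger one, waiting for the
# resilver to finish before moving on to the next disk:
zpool replace tank da1 da6
zpool status tank     # wait until resilvering completes

# After the last disk is replaced, the extra capacity becomes available.
```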


----------

