# Managing keys for root



## mefizto (Feb 25, 2020)

Greetings all,

some inter-machine processes require root privileges.  However, for key-based ssh(1) authentication, it is recommended to encrypt the client key pair.

How does one handle automated processes, when one is not at the computer to enter the password?

Kindest regards,

M


----------



## zader (Feb 25, 2020)

as a general rule, one would never log in directly as root .. even to start a job etc .. that being said, there are a number of ways to remotely execute a command as root .. the most common would be to create a user, allow them doas(1) privileges, generate a key pair and put it on the machine as normal

ie:

```
ssh-keygen -t rsa
# press Enter for an empty passphrase
chmod 400 /home/user/.ssh/nameofkey
```

then

```
ssh-copy-id 192.168.10.102
# enter the password
```

it should say something like "id copied to server 192.168.x ..."
then you can issue commands directly via ssh:

```
ssh USER@HOST 'COMMAND1 | COMMAND2 | COMMAND3'

ssh jimmy@192.168.10.102 'uptime; df -h'
ssh jimmy@192.168.10.102 'doas mycommand'
```

another option would be to add a cron job on the machine

```
su -
crontab -e
```

this will bring up root's crontab .. you could add the job in there and it will run as root automatically
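for example, a nightly job in root's crontab could look like this (the script path is just a placeholder):

```
# min  hour  mday  month  wday  command
30     2     *     *      *     /root/bin/nightly-backup.sh
```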

not sure exactly what you're after, but hopefully these examples help you out.


----------



## mefizto (Feb 25, 2020)

Hi zader,

thank you for the reply.  I would like to use some automated backup, using net/rsync or sysutils/zrep, which both require root privileges.  I am now looking at delegation.

Kindest regards,

M


----------



## ShelLuser (Feb 25, 2020)

mefizto said:


> thank you for the reply.  I would like to use some automated backup, using net/rsync or sysutils/zrep, which both require root privileges.  I am now looking at delegation.


What I usually do is make the backup process on the host (run as root) dump its data to stdout and then pipe that into an ssh command where I start dd on the remote computer in order to store the datastream. Obviously the latter is done using a regular user account.

A somewhat simplistic example from memory: `# zfs send zroot/home | ssh shell@backup "dd of=/opt/backup/host_home.zfs"`.
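Restoring is the same pipe in reverse; a sketch using the same host and file as above (the target dataset name here is made up):

```
# on the host, as root: read the stored stream back and receive it into a new dataset
ssh shell@backup "dd if=/opt/backup/host_home.zfs" | zfs receive zroot/home_restored
```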


----------



## rootbert (Feb 26, 2020)

you could generate a general ssh key for your manual admin work, which is password-protected. I suggest having a look at ssh-agent(1).
For automated stuff you can generate a separate ssh key without a password and restrict that key to executing certain commands on the receiving side ... have a look at https://www.freebsd.org/cgi/man.cgi?sshd(8) at the section AUTHORIZED_KEYS FILE FORMAT, especially at command="command"
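A minimal sketch of such a restricted entry in the automation user's ~/.ssh/authorized_keys on the receiving side (the script path is hypothetical and the key material is truncated):

```
command="/usr/local/bin/backup-receive.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... automation@client
```

Whatever the client asks to run, sshd executes only the forced command; the originally requested command line is passed to the script in the SSH_ORIGINAL_COMMAND environment variable.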


----------



## SirDice (Feb 26, 2020)

To run rsync(1) through a normal user account and have root on the other side you need to use a command like `rsync --rsync-path='/usr/local/bin/sudo /usr/local/bin/rsync' {...}`. Then make sure that user account is able to sudo(1) rsync(1) _without_ a password.
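Filled in, that could look something like this (host, paths, and user name are placeholders; the sudoers line goes on the remote side via visudo):

```
# on the client, as the unprivileged user:
rsync -av --rsync-path='/usr/local/bin/sudo /usr/local/bin/rsync' /data/ backupuser@remote:/backup/

# on the remote side, in sudoers:
backupuser ALL=(ALL) NOPASSWD: /usr/local/bin/rsync
```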


----------



## zader (Feb 26, 2020)

ah yup, I do the same thing for my backups .. granted it's all zfs send/receive/replication now. (eventually most systems will have zfs datasets, so rsync is useless when your pools/datasets/zvols are done up right)

If it helps, this is what I used to do with rsync .. before I deleted rsync forever

`vi rsync.sh`

```
#!/bin/sh

# find files modified in the last 4 hours and rsync them to the remote host
find /BACKUP/ -depth -type f -mmin -240 -exec rsync --ignore-existing --progress --exclude '/BACKUP/lost+found' -rapv {} USER@192.168.1.100:/ftp/incoming/ \;
```

add crontab
`vi /etc/crontab`

```
0       */4     *       *       *       USER    /home/rsync.sh
```

In short..

the crontab runs the script every 4 hrs .. 12/4/8 etc ..
the one-liner essentially finds all new files from the last 4 hrs .. rips them out of whatever directory they are in and rsyncs them to my ftp server .. where another job archives it to tape and sends another copy to tarsnap

you may need to modify the script to pull/send directories, or similar, but it's a simple, easy way to use rsync across multiple machines

because it's just a crontab and/or a simple script .. this can be pushed out very easily with puppet ..
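if you want to see what the find part of the script actually selects, here's a harmless stand-in you can run anywhere (echo instead of rsync, temp files instead of /BACKUP):

```
#!/bin/sh
# create one fresh file and one old file, then select only files
# modified within the last 240 minutes, like the script above
dir=$(mktemp -d)
touch "$dir/fresh.txt"
touch -t 202001010000 "$dir/old.txt"
selected=$(find "$dir" -depth -type f -mmin -240)
echo "$selected"
rm -r "$dir"
```

only fresh.txt gets printed; old.txt falls outside the 4-hour window.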


if you want to get more hardcore .. I would create a sudo policy with PAM and go crazy like that .. but SirDice already beat me to it with the user trick above


----------



## mefizto (Feb 26, 2020)

Greetings gentlemen,

thank you all very much, I knew that there would be a way to do it.

Hi zader,

does send/receive not also need root?

Kindest regards,

M


----------



## zader (Feb 26, 2020)

tbh .. I recommend getting MWL's ZFS storage books for a more complete ZFS base .. https://mwl.io/nonfiction/os. Chapter 4 in the advanced book goes over replication specifically.

In a nutshell .. replication allows you to create an exact backup of an entire dataset or zvol .. it doesn't matter if it's on the same machine, same network, or across the internet .. there are several advantages, such as zfs only sending changed blocks, so it's very fast and super efficient compared to rsync .. most of the time it may take rsync a few minutes just to generate a list of files to send .. in that time zfs is done.

replication, snapshots, and the combination of commands that are needed are controlled by zfs properties .. essentially you delegate access to the datasets and assign permissions as needed, then run simple zfs commands

replication has a few basic use cases ..
you can replicate a dataset, ie make a backup of the entire dataset somewhere else .. a good example .. I have a laptop with a 1tb drive .. whenever I join my wifi, part of the process checks a 1tb dataset on my file server .. it essentially checks for changes .. if the laptop is out of date it does a receive .. if the backup is out of date it sends the changes to the store ..

********
DO NOT COPY-PASTE commands, they are for example only .. as well, I highly recommend going through those books and actually mapping out datasets, pools and zvols .. this is a super basic example .. a live production system will be MUCH more complicated, but the concept is the same.
********

behind the scenes

I created a replication user on the laptop and the server:

```
# on both local and remote:
pw user add replicator -m -s /bin/sh

# delegate send/snapshot rights on the dataset:
zfs allow -u replicator send,snapshot zroot/backup/bsd-laptop-42

# set up the key as the replication user:
su replicator
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub bsd-laptop-42
```

then on the server, something like:

```
zfs create -o mountpoint=/backup zroot/backup
chown replicator:replicator /backup
sysctl vfs.usermount=1
```

if you're automating the process you may consider adding `destroy` so it can clean up after itself.

then allow the user:

```
zfs allow -u replicator compression,mountpoint,create,mount,receive zpool/backup
```


you could also manually do the same thing, once permissions are configured, with something like this:

```
zfs send zroot/usr/homedirectory@2020-02-26_06.15.00--2d | ssh user@host zfs receive remotepool/backup
```
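after the first full send, subsequent runs would normally be incremental; a sketch with made-up snapshot names:

```
zfs send -i zroot/usr/homedirectory@monday zroot/usr/homedirectory@tuesday | ssh user@host zfs receive remotepool/backup
```

zfs send -i only transfers the blocks that changed between the two snapshots, which is what makes replication so much faster than rsync.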


so in summary

you create your storage datasets and give the user permissions to them .. generally the first time you would copy the entire dataset into the target backup .. then on the machine you want to replicate you use a snapshotting tool like zfSnap and zfs send your snapshots to the storage server.

this is also incredibly handy for replication offsite .. for example the cost is minimal to replicate to tarsnap


again, highly recommend the MWL books as a solid base .. 

cheers


----------



## mefizto (Feb 26, 2020)

Hi zader,

first, thank you again for your reply and especially the example.  By the way, I do have MWL's books, but since I am not very smart, I need a little help from above, er, the forum.

Kindest regards,

M


----------



## zader (Feb 26, 2020)

actually just pulled it off the shelf to have a look and noticed he specifically shows you how to run zxfer in both pull (from a server) and push (to a server) mode .. that may help also.

cheers


----------



## mefizto (Feb 26, 2020)

Hi zader,

awesome, I will look it up.

Kindest regards,

M


----------

