# Automatic shutdown of server when task complete?



## JayArr (Feb 6, 2017)

Hi All

My power bills continue to climb and I'm looking at ways to save by running some of the servers for fewer hours. (We're a small business, but I have a rack with 6 HP ProLiant DL360 and DL380 servers.)

The DL380 server is just a repository; all other computers/servers in the business back up to it and store on it. I'd like to install Amanda on it, turn it on once a day (at noon, when everything else is on), have it back up the various computers, and then have it completely power off until I hit the power button the next day.

Is this a crazy idea? I know it then becomes my responsibility to make sure the backups get done, but I'm good at that so it's no worry. I'm thinking I could reduce this server's power consumption from 16 hours a day to about 30 minutes.

My question is: Can I put the `shutdown` command into a cron job and turn the server into a "one-shot"?

JayArr


----------



## SirDice (Feb 6, 2017)

Most backup applications allow a script to be fired off when a job finishes. You could use that to script the shutdown when the job's finished.

Using a cron job is going to be tricky: what if the backup job is still running when the shutdown fires?
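A minimal sketch of the post-job approach, assuming a POSIX shell; `run_backup` is a placeholder for whatever your backup application actually invokes:

```
#!/bin/sh
# Hypothetical post-backup wrapper: power off only if the backup succeeded,
# so a failed run leaves the server up for inspection.
if /usr/local/bin/run_backup; then
    shutdown -p now                    # FreeBSD: -p powers the machine off
else
    logger -t backup "backup failed; leaving server running"
fi
```

Because the shutdown only happens after the backup command returns, there is no race with a still-running job.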


----------



## Datapanic (Feb 6, 2017)

If the DL380 is storage and you shut it down, how do its clients handle it when it comes back up? For example, with NFS, the client side will typically end up with a stale mount and have to remount before it can access it again. Are you virtualizing your servers? With 6 DL360s you have plenty of horsepower to do so, and could possibly take one or more of those machines out of the picture.


----------



## JayArr (Feb 6, 2017)

Hi Datapanic

Keep in mind this is only a half-baked idea so far, but as I understand it, Amanda fetches from the other computers, so they would no longer need to mount any shares from the backup server; they wouldn't notice whether it was up or down, and there wouldn't be any mounts to go stale. I'll have to go read up on Amanda now to see if my assumption is correct.

SirDice

Good point, I hadn't thought of that. I think I need to do some Amanda research before I continue.


----------



## PacketMan (Feb 7, 2017)

Consider BitTorrent Sync, since rebranded Resilio; you can find it in ports at net-p2p/btsync. Your various 'shares' can be distributed across the users' machines and servers, and when a machine comes online it will automatically sync with the others. Even if this works well for you, I would still keep one machine doing 'traditional' backups. It just might allow you to have some servers powered down, but I do suggest having all sync machines online on a regular basis. And of course, test thoroughly before a full deployment.


----------



## jalla (Feb 8, 2017)

I'd suggest something like the following.
Make a small script that runs your backup jobs, and end it with

```
shutdown -p +1
```
Put the script in crontab to be run at boot

```
@reboot /usr/local/bin/myscript
```

Finally, schedule one of the other hosts to wake the backup server daily

```
0 12 * * * /usr/sbin/wake <iface> <ether>
```
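For completeness, the `myscript` run at boot might look something like this (a sketch; the Amanda config name, user, and delay are assumptions):

```
#!/bin/sh
# Hypothetical /usr/local/bin/myscript: run the backup after boot, then
# schedule power-off. "DailyBackup" is an assumed Amanda config name.
sleep 60                                       # let network/services settle
su -m amanda -c '/usr/local/sbin/amdump DailyBackup'
shutdown -p +1                                 # power off one minute later
```

This pairs with the `@reboot` crontab entry: the server wakes, backs up, and powers itself down with no manual step.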


----------



## sko (Feb 9, 2017)

A backup strategy that incorporates _any_ manual work (like manually switching on the backup server) is not a strategy but a failure waiting to happen. And it will most likely fail right before the day you really badly need that latest backup.

Are those 6 other machines on full load 24/7? I'd rather look into aggregating services from 2 or more servers onto one machine (-> jails). If you can run the services of 2 hosts on a single machine during most of the day (or night), you might consider running these services load-balanced if possible, and only switching on the second machine during load-heavy hours. But _always_ turn them back on automatically, e.g. via a simple cron-initiated shell script on another host calling ipmitool to remotely start the machine. Every recurring job requiring manual intervention is bound to fail, and it will fail at the most inconvenient time.
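For example, the cron entry on an always-on host could be as simple as this (a sketch; the BMC address, credentials, and schedule are placeholders for your own IPMI setup):

```
# /etc/crontab on another host: power the backup server on for the
# load-heavy hours via its BMC. Needs sysutils/ipmitool and IPMI-over-LAN
# enabled on the target's management controller.
0 7 * * * root /usr/local/bin/ipmitool -I lanplus -H bmc-backup.example.com -U admin -P secret chassis power on
```

The iLO on a ProLiant speaks IPMI, so no one ever has to walk over and press the button.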


As for AMANDA: because it is basically a collection of scripts wrapped around standard system tools, it is highly modular and can do/run almost anything you want. That said, don't fiddle with Amanda's scripts directly; it offers a simple scripting API to plug in new scripts very easily. The wiki at wiki.zmanda.org has a quick introduction to the API.
Amanda is called by cron, so a very crude way to shut down after backups would be to just add `&& shutdown -p now` to the crontab entry. Be aware this would also fire if Amanda didn't back up all (or any!) systems: Amanda fails gracefully, sends you a short (or thorough, depending on your configuration) report of what went wrong, and moves on to the next task if there is one.
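That crude variant might look like this in the crontab (a sketch; the config name, user, and time are assumptions):

```
# /etc/crontab: run the nightly Amanda dump, then power off. Note that
# amdump can exit 0 even when individual clients failed, so this may
# power off after a partial backup - read the mailed report!
30 12 * * * root su -m amanda -c '/usr/local/sbin/amdump DailyBackup' && shutdown -p now
```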

Amanda doesn't require any mounts on the client systems it backs up - for the most basic configuration, all it really needs is an SSH user able to access the filesystems you want to back up, plus tar or dump. There are also Amanda scripts for ZFS available (amzfs-snapshot and amzfs-sendrecv) to do proper ZFS-based backups. I can highly recommend this strategy on all systems using ZFS - backup times drop drastically (e.g. from ~3 hours down to ~30 minutes on one of my machines), and recoveries/rollbacks or restoring single files are extremely simple and fast.
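A minimal client entry in Amanda's `disklist` illustrates how little the clients need to expose (hostname and dumptype here are just examples from the stock sample config):

```
# disklist: one line per filesystem to back up - no shares mounted,
# Amanda contacts the client itself and runs tar/dump there.
fileserver.example.com  /home  comp-user-tar
```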

One thing I think you should be aware of when first looking at Amanda: you really have to grasp its concept of backup levels and its dump cycles. It might also feel strange at the beginning to refer to "tapes" even when using disks, directories or even different ZFS datasets as targets. It also helps to have at least a basic understanding of the "standard backup tools" like tar, dump/restore, cpio and dd (and zfs/zpool), as these are what Amanda uses, and you will need to know them if you want to restore things manually (or e.g. only a few files instead of a whole backup). The beauty of Amanda using standard UNIX tools is that you don't need Amanda for recovery. Use an external target (S3 bucket, tarsnap, ZFS pool on a remote host) as an additional backup target and you can even recover from a complete destruction of all your local gear, without first having to set up a host with your backup software just to read some special or even proprietary backup format. I really love Amanda for this, as it gives you the flexibility you absolutely need when things go downhill and everyone is shouting.
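As a concrete illustration (a sketch, assuming Amanda's default 32 kB header block and a hypothetical dump filename), a dump file can be unpacked with nothing but dd and tar:

```
# An Amanda dump file is a plain tar (or dump) stream behind a 32 kB
# Amanda header; skip the header and hand the rest straight to tar.
# No Amanda tools needed for the restore.
dd if=fileserver._home.0.tar bs=32k skip=1 | tar -xpf -
```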

IMHO one of the best introductions to Amanda and the whole topic of backups is given by W. Curtis Preston in his book "Backup & Recovery" [0]. It might be a few years old, but it really gives you the right mindset for doing backups "the right way". Even if you're only interested in Amanda, I can highly recommend also reading chapters 1-3 about the bare fundamentals and concepts behind backups and manual backup/restore using standard UNIX tools. These skills are very handy on a daily basis and invaluable when things get rough and you have to save the day (or the company) with your backups.
Oh, and the book in printed form, with its 700-odd pages, makes a great tool to reaffirm your points when discussing backups with your boss.

[0] http://shop.oreilly.com/product/9780596102463.do


----------



## SirDice (Feb 9, 2017)

sko said:


> Are those 6 other machines on full load 24/7? I'd rather look into aggregating services from 2 or more servers to one machine (->jails).


I'd probably build a XenServer cluster from them: free up one host, install XenServer, and migrate the other machines to virtual. When you've 'freed' another host, you can add it to the pool. I'm pretty sure you can get things down to 3 or 4 physical machines. It will be a more efficient use of the available hardware, with better fault tolerance, and it's easy to maintain and back up (snapshots!).


----------

