# A backup story



## Bobbla (Nov 14, 2011)

When I read this... well, I thought about you people. 

It's a short article about someone who did not back up their data. In short, they stored everything on a server (apparently source code and everything) and then it died or something. I've heard two stories: one is that the server died; the other is that they thought the best way to solve some problem was to delete the game off the server, but they forgot to make a backup first. I dunno what to think of this.... :\

the fail story


----------



## fluca1978 (Nov 14, 2011)

This brings up another discussion: in recent years, distributed version control systems have reached the hall of fame. This could mean that (a) almost everybody (or at least a lot of people) has a copy of the repository (and this is good), or (b) only you have a copy of your work because nobody has checked out the repository. And what about the cloud? Do you need to back up your repo and/or data that is living in the cloud?
I have no right answer; just as the other post reminds us, do backups! And let other people know where they are in case they need them.
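A full copy of a repository is itself a backup. As a minimal sketch of that idea (the function name and paths here are illustrative, not from the thread), a `git clone --mirror` keeps every branch and tag of a remote repo, and re-running the same function refreshes the copy:

```shell
#!/bin/sh
# mirror_repo URL DEST: keep a full local backup of a remote repository.
# --mirror copies *all* refs (branches, tags), not just the current branch.
mirror_repo() {
    url=$1
    dest=$2
    if [ -d "$dest" ]; then
        # Mirror already exists: just fetch whatever changed upstream.
        git -C "$dest" remote update --prune
    else
        # First run: make a complete bare copy of the repository.
        git clone --mirror "$url" "$dest"
    fi
}
```

Run from cron, this gives you the "somebody else has a copy" property even when nobody has bothered to check the project out.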


----------



## Simba7 (Nov 14, 2011)

*Wow..*

How do you screw up that badly? That is why backups and redundant servers are a good thing. Did they really think that they could host it on a single box? Are they crazy?

I wonder whose rear got chewed off that day.


----------



## SirDice (Nov 15, 2011)

Simba7 said:

> How do you screw up that badly?


If I received a euro for every time that happened I'd be a very wealthy man.

People think of digital data the same way as books. If it's written down it's safe. If it's stored on disk it's safe. They never realize hardware breaks until it's too late. Partly because they simply do not understand the mechanics involved.

Somebody usually needs to lose a bunch of data before they realize backups are a good idea.


----------



## Dies_Irae (Nov 15, 2011)

SirDice said:

> If I received a euro for every time that happened I'd be a very wealthy man.


Me too.



SirDice said:


> People think of digital data the same way as books. If it's written down it's safe. If it's stored on disk it's safe. They never realize hardware breaks until it's too late. Partly because they simply do not understand the mechanics involved.
> 
> Somebody usually needs to lose a bunch of data before they realize backups are a good idea.


Absolutely right.

I remember that a few years ago I was called by a customer because "the PC won't start anymore". It was a 10+ year old dusty Win'95 box with 10+ years of billing data on it.
Of course, the PC wouldn't start because the (single) hard disk was dead.
When I asked, "You do have a backup, right?", they answered, "What is a backup?"


----------



## fluca1978 (Nov 16, 2011)

What is also awkward is that people often tie backups to data, while it is really important to also have configuration backups. Not long ago a colleague of mine screwed up the configuration of a server without having a good, recent backup, and it took hours to get everything working again.
And this leads me to think that people realize what backups are for not only when they lose data, but also when their network stops working.


----------



## fonz (Nov 17, 2011)

fluca1978 said:


> What is also awkward is that people often tie backups to data, while it is really important to also have configuration backups.


Very good point indeed. Either keep a backup of /etc and any other important config files (such as /usr/local/etc/, possibly), or document (which is more verbose than just logging) every setup procedure. I usually do both: I keep regular backups of /etc *and* write documents about every non-trivial setup task, kinda like a personal HOWTO collection. Of which I keep backups, of course.

Fonz
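Fonz's habit of keeping regular, dated copies of /etc can be sketched as a tiny script (the function name and paths are illustrative, not from the post) that copies a config directory into a per-date folder, preserving permissions:

```shell
#!/bin/sh
# backup_config SRC DEST: copy the config directory SRC into DEST/<date>/,
# preserving permissions and modes, and print the snapshot path.
backup_config() {
    src=$1
    dest=$2
    stamp=$(date +%Y-%m-%d)
    mkdir -p "$dest/$stamp"
    # "src/." copies the directory's contents, including dotfiles;
    # -R recurses, -p keeps permissions and timestamps.
    cp -Rp "$src/." "$dest/$stamp/"
    echo "$dest/$stamp"
}

# Typical use, e.g. from a daily cron job:
#   backup_config /etc /var/backups/config
```

Because each day lands in its own directory, you can diff today's rc.conf against last week's without unpacking anything.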


----------



## phoenix (Nov 18, 2011)

This is why I advocate full-system backups. Back up the entire server, every single file on it. Don't try to be "smart" and only back up certain files. Back it *all* up. Every day.

But do it in such a way that the individual files are accessible. IOW, don't use tarballs, don't use binary backup formats of any kind, unless there's an easy-to-use way to access the individual files. Ideally via a web GUI or a CLI (I prefer the CLI).

ZFS + rsync + snapshots works beautifully for this. rsync the entire remote server into a ZFS filesystem, then snapshot the filesystem using the date as the name. Then repeat every day.
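A rough sketch of one such cycle, assuming a pool named `tank` and a host called `server` (both hypothetical), root SSH access to the remote machine, and an rsync built with ACL/xattr support; this is a command fragment, not a complete backup system:

```shell
#!/bin/sh
# One backup cycle: pull the whole remote filesystem into a local ZFS
# dataset, then freeze it as a snapshot named after today's date.
HOST=server                     # hypothetical remote machine
DATASET=tank/backup/$HOST       # ZFS dataset receiving the copy

# -a: archive mode; -H: hard links; -A: ACLs; -X: extended attributes;
# --delete mirrors remote deletions so the copy matches the server exactly.
rsync -aHAX --delete "root@$HOST:/" "/$DATASET/"
zfs snapshot "$DATASET@$(date +%Y-%m-%d)"

# Individual files stay directly browsable in each snapshot, e.g.:
#   less /$DATASET/.zfs/snapshot/2011-11-18/etc/rc.conf
```

The hidden `.zfs/snapshot` directory is what satisfies the "individual files accessible, no tarballs" requirement: every day's state is a plain read-only directory tree.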


----------



## fluca1978 (Nov 18, 2011)

phoenix said:


> This is why I advocate full-system backups. Back up the entire server, every single file on it. Don't try to be "smart" and only back up certain files. Back it *all* up. Every day.



True: one day you will discover you have lost that complex shell script you placed somewhere outside the standard script directory... and you don't have a backup. By the way, I tend to avoid backing up binaries.


----------



## Slurp (Nov 18, 2011)

fluca1978 said:


> And what about the cloud? Do you need to back up your repo and/or data that is living in the cloud?



Certainly. Cloud reliability is highly variable and, overall, no better than that of your own servers (as long as you follow good practices). You need to know your provider's architecture to correctly assess how much you can trust them.
A recent example of a failure (of a cloud backup company):
http://www.theregister.co.uk/2011/11/17/livedrive_backify_dispute/


----------



## fluca1978 (Nov 21, 2011)

Depending on the cloud service, your backup strategy must range from relaxed to paranoid. I tend to assume that some cloud services are good, and therefore I reduce my backup frequency, while some (free) cloud services simply give you space somewhere with very few guarantees. I have also witnessed situations where ISPs lost entire web sites and, due to the lack of recent backups, paying customers had to restore old versions. 
I guess, after all, each time you place a bit somewhere you should have a copy under your bed...


----------

