Typical Data Backup Mistakes

Data is one of the most important assets of any IT company. Backing up and protecting corporate data has become more challenging in recent years due to several tech trends: the explosive growth of data, the growing number of mobile and stationary devices, the BYOD phenomenon, and the continued rise of software-as-a-service (SaaS) applications.

Performing regular backups is critical to any organization’s data management. Yet even certified IT specialists sometimes make mistakes. This post covers some of the most common mistakes made while performing data backups.


1. Erasing the previous backup copy before creating a new one

This mistake is commonly made by junior developers who don’t understand the main purpose of a backup copy: to minimize the downtime of the information system. Deleting the old backup first leaves the system unprotected in the window between the removal of the last backup and the creation of the new one. And because a backup can take a long time to create, this window is the perfect time for Murphy’s Law to show its face.

Recommendation: Do not remove the previous backup until you create a new one.
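The safe order of operations is easy to script. Here is a minimal Python sketch (the directory, the retention count, and the create_backup callable are placeholders for your own environment) that creates the new copy first and prunes old ones only afterwards:

```python
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/var/backups/mydb")   # placeholder location
KEEP_LAST = 3                            # how many older copies to retain

def rotate_backup(create_backup) -> Path:
    """Create the new backup FIRST, then prune old ones."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    new_copy = BACKUP_DIR / f"mydb-{stamp}.fbk"
    create_backup(new_copy)              # raises on failure, old copies untouched

    # Only after the new backup exists do we delete the oldest copies.
    copies = sorted(BACKUP_DIR.glob("mydb-*.fbk"))
    for old in copies[:-KEEP_LAST]:
        old.unlink()
    return new_copy
```

Keeping several generations, rather than exactly one, also protects you when the newest backup itself turns out to be bad.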

2. Overwriting an existing database when restoring from a backup copy

This mistake is not as common as the previous one, but the consequences can be much worse. If a backup copy hasn’t been tested and turns out to be damaged, then after overwriting you won’t have the previous database to go back to once you discover that your backup is not valid. Usually this happens on a Friday night, so a whole weekend can be ruined. Firebird, an SQL database management system, protects you from this mistake: its gbak tool’s standard create switch won’t perform the restore if the specified file name points to an existing database.

Recommendation: Do not overwrite an existing database without a written order from company management.
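For Firebird specifically, a restore script can lean on that behavior. The sketch below assumes gbak is on the PATH and that credentials come from the environment (for example ISC_USER and ISC_PASSWORD); treat it as an illustration, not a drop-in script:

```python
import subprocess
from pathlib import Path

def restore_firebird(backup_file: str, target_db: str) -> None:
    """Restore with gbak's create switch, which refuses to overwrite."""
    if Path(target_db).exists():
        raise FileExistsError(
            f"{target_db} already exists; restore to a new file and switch "
            "over only after the restored database has been verified."
        )
    # gbak -c creates a new database and fails if target_db already exists;
    # never put the -r (replace) switch into an unattended script.
    subprocess.run(["gbak", "-c", backup_file, target_db], check=True)
```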

3. Using a one-step backup and restore, without creating an intermediate backup file

In almost all DBMSs, standard input and output allow for an interesting trick: performing a streaming backup with an immediate restore into a new database. As a result, no intermediate backup file is created. This is fine for routine tasks and test restores (provided there is another backup copy), but it should never be used for automatic backups.

If a serious failure occurs during such a process (a drive failure, for example), the source database can be damaged before the new one has been fully created, leaving you with neither.

Recommendation: Do not perform a one-step backup and restore in automatic mode, and always check for the most recent backup copy.
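With Firebird’s gbak, for example, the one-step trick is a pipe between a backup to stdout and a restore from stdin; the safe automated variant keeps the intermediate file. A sketch with placeholder paths:

```python
import subprocess

# The risky one-step form (acceptable for a manual test restore only):
#   gbak -b /data/mydb.fdb stdout | gbak -c stdin /tmp/restored.fdb
# If anything fails mid-stream, there is no backup file to fall back on.

def backup_then_restore(db: str, backup_file: str, restored_db: str) -> None:
    """Safe two-step variant for automation: keep the intermediate file."""
    subprocess.run(["gbak", "-b", db, backup_file], check=True)
    subprocess.run(["gbak", "-c", backup_file, restored_db], check=True)
    # backup_file is deliberately NOT deleted: it is the recovery point.
```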

4. Keeping your backup and database on the same storage

Someone may say that this is the basics of any system administration. And they would be right. But with the rise of cloud services, the database and its backup can easily end up on the same underlying storage, which can fail at the most inopportune moment for the company. And there are still people who believe that if they’re using RAID (level 1 or higher), nothing can happen to their data.

Recommendation: Do not keep backup files and the database on the same storage, no matter how safe it may seem.
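Even a simple post-backup copy to a second, physically separate target (a NAS mount, another machine, or a cloud bucket mounted locally; the path below is a placeholder) removes the single point of failure:

```python
import shutil
from pathlib import Path

OFFSITE = Path("/mnt/offsite-backups")    # placeholder second storage target

def copy_offsite(backup_file: str) -> Path:
    """Copy the finished backup to separate storage, with a size sanity check."""
    src = Path(backup_file)
    OFFSITE.mkdir(parents=True, exist_ok=True)
    dest = OFFSITE / src.name
    shutil.copy2(src, dest)               # copy2 preserves timestamps
    if dest.stat().st_size != src.stat().st_size:
        raise IOError(f"size mismatch copying {src} to {dest}")
    return dest
```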

5. Not checking if the backup was successful

This is a common mistake among system administrators and heads of IT departments alike. If you don’t check the result of a backup, there is no point in performing it at all. Make sure you receive an email notification about every successful backup, or better yet an SMS. Then the absence of a notification itself tells you that something went wrong with the backup.

Recommendation: Use automated backup tools that track successful and unsuccessful backups, notify users about errors, and provide a useful control panel.
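If your tool can’t do this, a thin wrapper can. A minimal sketch using Python’s standard smtplib, where the addresses, the SMTP host, and the backup routine passed in are all placeholders:

```python
import smtplib
from email.message import EmailMessage

def notify(success: bool, detail: str) -> None:
    """Email the outcome of every run, on success AND on failure."""
    msg = EmailMessage()
    msg["Subject"] = f"[backup] {'OK' if success else 'FAILED'}"
    msg["From"] = "backup@example.com"               # placeholder addresses
    msg["To"] = "admin@example.com"
    msg.set_content(detail)
    with smtplib.SMTP("smtp.example.com") as smtp:   # placeholder host
        smtp.send_message(msg)

def run_with_notification(run_backup) -> None:
    """Wrap any backup routine so every outcome produces a notification."""
    try:
        run_backup()
        notify(True, "Backup completed successfully")
    except Exception as exc:
        notify(False, f"Backup failed: {exc}")
        raise
```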


6. No backup validation

It’s really important to validate your backups from time to time to be sure that they were created correctly, weren’t damaged somehow, weren’t copied to /dev/null, and so on.

Recommendation: Check everything and everyone.
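A periodic test restore is the only real proof, but cheap automated checks catch the most embarrassing failures. A sketch, assuming you record a SHA-256 checksum when each backup is created:

```python
import gzip
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_backup(path: str, expected_sha256: str) -> None:
    p = Path(path)
    if not p.exists() or p.stat().st_size == 0:
        raise ValueError(f"{path} is missing or empty")
    if sha256_of(path) != expected_sha256:
        raise ValueError(f"{path} is corrupted (checksum mismatch)")
    if path.endswith(".gz"):
        # Fully decompress compressed backups to catch truncated archives.
        with gzip.open(path, "rb") as f:
            while f.read(1 << 20):
                pass
```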

7. No control of free space for the backup

This is a classic. The backup consumes all the available space on the disk and crashes when there is not enough free space left. If the backup is on the same disk as the database, the database may stop working; if the backup is on the system disk, it may cause damage to the operating system.

Recommendation: Use backup tools that can predict the size of the backup and notify you about a possible lack of free space.
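The standard library is enough for a pre-flight check. The sketch below uses a crude estimate, namely that a backup won’t exceed the database size times a safety margin; tune both for your own data:

```python
import shutil
from pathlib import Path

def check_free_space(db_path: str, backup_dir: str, margin: float = 1.2) -> None:
    """Refuse to start if the target disk likely can't hold the backup."""
    estimated = Path(db_path).stat().st_size * margin
    free = shutil.disk_usage(backup_dir).free
    if free < estimated:
        raise OSError(
            f"only {free / 2**30:.1f} GiB free in {backup_dir}, "
            f"about {estimated / 2**30:.1f} GiB needed"
        )
```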

8. No control of the backup duration

Last time the backup took 40 minutes, and now it takes 3 hours. Hmm, there must be a reason for this. Perhaps the database has grown, the write speed has degraded badly, or a well-meaning colleague added another backup job. If you don’t monitor backup performance, it’s easy to overlook a problem and miss the chance to fix it.

Recommendation: Monitor the duration of the backup process.
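A fixed threshold is the simplest control; comparing against a rolling baseline of past runs is better. A minimal sketch of the former, where the threshold and the backup routine passed in are placeholders:

```python
import logging
import time

log = logging.getLogger("backup")
MAX_EXPECTED_SECONDS = 60 * 60            # alert threshold; tune for your data

def timed_backup(run_backup) -> float:
    """Run the backup and flag runs that take abnormally long."""
    start = time.monotonic()
    run_backup()                          # placeholder for the actual backup
    elapsed = time.monotonic() - start
    log.info("backup finished in %.0f s", elapsed)
    if elapsed > MAX_EXPECTED_SECONDS:
        # Route this into the same alerting channel as in mistake 5.
        log.warning("backup took %.0f minutes, investigate", elapsed / 60)
    return elapsed
```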

9. Running a database backup while updating the OS

This is a common problem, especially with automatic Windows updates enabled. In the best case, the update just slows the process down, but if the OS restarts after updating, the backup will be ruined. Fortunately, operating systems don’t update every single day.

Recommendation: If you can’t turn off system updates, schedule them for a time when they won’t interfere with the backup.
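On Windows, the backup job itself can also be made defensive. One commonly used (though not exhaustive) pending-reboot indicator is a registry key under Component Based Servicing; a sketch:

```python
import winreg  # Windows-only standard library module

# One common indicator that Windows Update is waiting for a reboot.
CBS_KEY = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
           r"\Component Based Servicing\RebootPending")

def reboot_pending() -> bool:
    try:
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CBS_KEY))
        return True                       # the key exists => reboot pending
    except FileNotFoundError:
        return False

if reboot_pending():
    raise SystemExit("Reboot pending after OS updates; postponing the backup.")
```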

10. Replacing backups with replication

Backup and replication both serve to increase the reliability of data and to prevent data loss, but they differ a lot from each other. Everyone likes replication for its ability to synchronize data to a different server with minimal delay. But only a backup protects you from situations like the accidental deletion of data: replication will faithfully propagate the deletion to every replica.

Recommendation: If you have configured replication, do not forget to use backups as well.
