Are docker-hosted databases somehow excluded from backup best practices?

As far as I knew, for MS SQL Server, PostgreSQL and even MySQL databases (so, I assumed, for RDBMS engines in general), you cannot just copy the file system they live on; you need to back up at the SQL level to have any hope of internal consistency, and hence any hope of actually being able to restore.

But then I came across an answer, and the official Docker docs it references, which assume that the database data can simply be tarred up:

docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

These two ideas seem to contradict each other. Is there something special about the way Docker works that makes it unnecessary to use SQL-level backups? If not, what am I missing in my understanding? (Why is something being used as an official example when you can't use it to back up a production database? That can't be right ...)
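For reference, the kind of backup I assumed you always need is a SQL-level dump taken through the running server, something like this (the container name dbcontainer and database name mydb are just placeholders):

docker exec dbcontainer pg_dump -U postgres mydb > backup.sql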



1 answer


Under certain circumstances, it is safe to use an on-disk database image:

  • The database server is not running.
  • All persistent storage is included in the backup (logs, tablespaces, temporary storage).
  • All components are restored together.
  • You restore the image on the same server, at the same paths.


This last condition is important because some aspects of the database configuration can be stored in operating system files.
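
As a sketch of the cold-backup case, assuming the database runs in a container named db that uses the dbdata volumes from the question, you would stop the server before imaging the files:

docker stop db
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
docker start db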

You need to take the backup through the database whenever the server is running. The server is responsible for the internal consistency of the data, and a disk image taken while it runs can be incomplete or require recovery. If the server is not running, the state of the database in persistent storage should be consistent.
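
If the server must keep running, back up through the database's own tools instead; for example, assuming MYSQL_ROOT_PASSWORD is set in the container's environment (as with the official mysql image) and an illustrative container named db:

docker exec db sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --single-transaction mydb' > mydb.sql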







