backup_log
Querying in ClickHouse Cloud
The data in this system table is held locally on each node in ClickHouse Cloud. Obtaining a complete view of all data, therefore, requires the clusterAllReplicas function. See here for further details.
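A minimal sketch of such a query; the cluster name default is an assumption and may differ for your service:

```sql
-- Read backup_log entries from every replica; 'default' is an assumed cluster name.
SELECT
    hostname,
    id,
    name,
    status,
    start_time,
    end_time
FROM clusterAllReplicas('default', system.backup_log)
ORDER BY event_time_microseconds DESC
LIMIT 10;
```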
Contains logging entries with information about BACKUP and RESTORE operations.
Columns:
hostname (LowCardinality(String)) — Hostname of the server executing the query.
event_date (Date) — Date of the entry.
event_time (DateTime) — The date and time of the entry.
event_time_microseconds (DateTime64) — Time of the entry with microseconds precision.
id (String) — Identifier of the backup or restore operation.
name (String) — Name of the backup storage (the contents of the FROM or TO clause).
status (Enum8) — Operation status. Possible values: 'CREATING_BACKUP', 'BACKUP_CREATED', 'BACKUP_FAILED', 'RESTORING', 'RESTORED', 'RESTORE_FAILED'.
error (String) — Error message of the failed operation (empty string for successful operations).
start_time (DateTime) — Start time of the operation.
end_time (DateTime) — End time of the operation.
num_files (UInt64) — Number of files stored in the backup.
total_size (UInt64) — Total size of files stored in the backup.
num_entries (UInt64) — Number of entries in the backup, i.e. the number of files inside the folder if the backup is stored as a folder, or the number of files inside the archive if the backup is stored as an archive. It is not the same as num_files if it is an incremental backup or if it contains empty files or duplicates. The following is always true: num_entries <= num_files.
uncompressed_size (UInt64) — Uncompressed size of the backup.
compressed_size (UInt64) — Compressed size of the backup. If the backup is not stored as an archive, it equals uncompressed_size.
files_read (UInt64) — Number of files read during the restore operation.
bytes_read (UInt64) — Total size of files read during the restore operation.
Example
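A sketch of how entries are produced and then inspected, assuming a configured backup disk named backups and a table test_db.test_table (both hypothetical):

```sql
-- Back up a table to a configured 'backups' disk (disk and table names are assumptions).
BACKUP TABLE test_db.test_table TO Disk('backups', '1.zip');

-- Inspect the entries written for the operation.
-- A completed backup typically yields two rows: CREATING_BACKUP followed by BACKUP_CREATED.
SELECT id, name, status, error, start_time, end_time, num_files, total_size
FROM system.backup_log
ORDER BY event_time_microseconds DESC
LIMIT 2
FORMAT Vertical;
```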
This is essentially the same information that is written in the system table system.backups.
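A minimal sketch of the equivalent lookup:

```sql
-- Fetch the most recent operation from system.backups for comparison.
SELECT *
FROM system.backups
ORDER BY start_time DESC
LIMIT 1
FORMAT Vertical;
```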
See Also