08-24-2018 05:24 AM
Checkpointing is the process of flushing all pending page updates from the page cache to the store files. It ensures that the number of transactions needing replay during recovery stays reasonable, primarily to reduce recovery time after an improper shutdown. Regardless of when checkpoints happen, database operations remain safe, since any transaction not confirmed to have had its changes persisted to storage will be replayed on the next database startup. That, however, depends on the availability of the set of changes those transactions consist of, information that is kept in the transaction logs. Keeping the list of non-applied transactions long (i.e. keeping the interval between checkpoints large) results in an accumulation of transaction logs, since they are necessary for recovery. Checkpointing writes a special "Checkpointing" entry in the transaction log which marks the last transaction at which checkpointing happened. This entry is used to determine which transaction logs are no longer necessary, since all the transactions they contain have been safely persisted to the store files.
The process of removing transaction logs that are no longer necessary for recovery is called pruning. From the description above it is apparent that pruning depends on checkpointing: checkpointing determines which logs can be pruned or, put differently, when pruning can happen, since if a checkpoint does not take place, the set of prunable transaction log files cannot have changed. This dependency is expressed by triggering pruning (which may or may not include a check for prunable files, as discussed below) whenever checkpointing happens. This relationship is invariant and applies to the entirety of the discussion that follows.
Checkpointing, which is the driving event for pruning, can be triggered in a few different ways.
The simplest method, called "periodic", checks by default every 15 minutes whether there are changes pending flushing (i.e. transactions that have not been checkpointed yet). If so, it performs a checkpoint and subsequently triggers a log prune. Note that if no checkpoint is performed, no pruning happens either. This is the default behaviour and the only one available in community edition.
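As a sketch of how the periodic policy is tuned, the checkpoint interval can be adjusted in neo4j.conf. The setting names below follow the Neo4j 3.x configuration reference; verify them against the documentation for your version:

```
# Check every 15 minutes (the default) whether a checkpoint is needed
dbms.checkpoint.interval.time=15m

# Additionally trigger a checkpoint once this many transactions
# have been committed since the last one (default 100000)
dbms.checkpoint.interval.tx=100000
```

Lowering these values makes recovery after a crash faster at the cost of more frequent flushing.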
In the enterprise edition, there are two additional checkpointing policies. The simplest is called "continuous" and, as the name implies, it constantly checks if a checkpoint is possible (that is, if any transactions committed since the last successful checkpoint) and if so, it performs it. Pruning is triggered immediately after it completes, just like in the periodic policy.
The third and final checkpointing policy is "volumetric". It checks every 10 seconds whether any logs are available for pruning and, if so, it triggers a checkpoint and subsequently prunes the logs. This policy appears to invert the control between checkpointing and pruning, but in reality it only changes the criteria for when checkpointing must happen. Instead of relying on a time trigger, as the previous two do, it relies on a pruning check. Pruning will still happen after checkpointing has occurred, as with the other two policies. Nevertheless, since the check depends on the existence of prunable transaction log files, this policy depends on the pruning configuration, as described in the next section.
The policy to be used is controlled by the setting dbms.checkpoint and it defaults to "periodic".
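For illustration, selecting a policy in neo4j.conf looks like the following (remember that "continuous" and "volumetric" are only honoured in the enterprise edition):

```
# One of: periodic (default), continuous, volumetric
dbms.checkpoint=periodic
```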
Transaction log pruning configuration primarily deals with specifying the number of transaction logs that should remain available. The primary reason for leaving more than the absolute minimum amount required for recovery comes from the requirements of clustered deployments and online backup. Since database updates are communicated between cluster members and backup clients through the transaction logs, keeping more than the minimum amount necessary allows for transferring just the incremental changes (in the form of transactions) instead of the whole store files, which can lead to substantial savings in time and network bandwidth. This is true for HA deployments, backups, and Read Replicas in Causal Clusters. However, in the case of Core members in Causal Clustering it is not the transaction logs that matter, but rather the Raft log contents. That scenario is covered in a separate KB article.
The amount of transaction logs left after a pruning operation is controlled by the setting dbms.tx_log.rotation.retention_policy and it can take a variety of values. They are of the form <numerical value> <measurement>, where <measurement> can be "files", "size", "txs", "entries", "hours", or "days".
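A few illustrative values for this setting are sketched below; the specific numbers are arbitrary examples, not recommendations:

```
# Keep transaction logs covering the last 7 days (a common default)
dbms.tx_log.rotation.retention_policy=7 days

# Alternatively, keep at most roughly 500M worth of transaction logs
#dbms.tx_log.rotation.retention_policy=500M size

# Or keep the most recent 100k transactions
#dbms.tx_log.rotation.retention_policy=100k txs
```

Larger retention values favour incremental backups and cluster catch-up at the cost of disk space, per the trade-off described above.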