
How to restore Neo4j Backups on Kubernetes

Following on an earlier article dealing with backup in Kubernetes environments, here's a new article published on the Google Cloud medium site on how to handle restore operations in Kubernetes.


greta
Graph Fellow

Thanks for submitting!

I've added a tag that allows your blog to be displayed on the community home page!

Hi david.allen, I've loved this blog post and used it heavily to set up our restore-from-backup via initContainers. However, while the restore script runs and its logs read perfectly, when I check what data is actually in graph.db on the causal cluster, I'm not finding the same data the logs say is there. Has anyone else reported this?

It hasn't been reported - I'm not sure I follow what you're saying. Do you mean that after restore the data you expect isn't there? Or are you saying that during restore it's logging one thing when different data is present?

What is present and what's expected? Maybe an output paste would help.

david.allen, thank you so much for your response, and apologies for the lack of clarity. I'll add the logs from the restore container here, and then separately an ls of the graph.db dir to show the contrast; maybe you can point out what I'm doing wrong or where I'm looking incorrectly.

Logs from restore container:

~$ kubectl logs infrastructure-neo4j-neo4j-core-0 -c restore-from-file
*********************************************************************************************
* You have not specified BACKUP_SET_DIR -- this means that if your archive set uncompresses *
* to a different directory than the file is named, this restore may fail                    *
* See logs below to ensure the right path was selected.                                     *
*********************************************************************************************
=============== Neo4j Restore ===============================
Beginning restore process
REMOTE_BACKUPSET=s3://knowledge-graph-staging-backup/knowledge-graph-staging-backup-2019-04-16.tar.gz
BACKUP_SET_DIR=
FORCE_OVERWRITE=true
============================================================
No existing graph database found at /data/databases/graph.db
We will be force-overwriting any data present
Making restore directory
Copying s3://knowledge-graph-staging-backup/knowledge-graph-staging-backup-2019-04-16.tar.gz -> /data/backupset
download: s3://knowledge-graph-staging-backup/knowledge-graph-staging-backup-2019-04-16.tar.gz to data/backupset/knowledge-graph-staging-backup-2019-04-16.tar.gz
Backup size pre-uncompress:
45M	/data/backupset
total 46060
drwxr-xr-x 2 root root     4096 Apr 16 15:46 .
drwxr-xr-x 4 root root     4096 Apr 16 15:46 ..
-rw-r--r-- 1 root root 47157070 Apr 16 00:01 knowledge-graph-staging-backup-2019-04-16.tar.gz
Untarring backup file: knowledge-graph-staging-backup-2019-04-16.tar.gz
<all the files untarred>
BACKUP_SET_DIR was not specified, so I am assuming this backup set was formatted by my backup utility
BACKUP_FILENAME=knowledge-graph-staging-backup-2019-04-16.tar.gz
UNTARRED_BACKUP_DIR=knowledge-graph-staging-backup-2019-04-16
UNZIPPED_BACKUP_DIR=
RESTORE_FROM=/data/backupset/data/knowledge-graph-staging-backup-2019-04-16
Set to restore from /data/backupset/data/knowledge-graph-staging-backup-2019-04-16
Post uncompress backup size:
/data/backupset
total 46064
drwxr-xr-x 3 root root     4096 Apr 16 15:46 .
drwxr-xr-x 4 root root     4096 Apr 16 15:46 ..
drwxr-xr-x 3 root root     4096 Apr 16 15:46 data
-rw-r--r-- 1 root root 47157070 Apr 16 00:01 knowledge-graph-staging-backup-2019-04-16.tar.gz
/data/backupset
135M	/data/backupset/data/knowledge-graph-staging-backup-2019-04-16
Dry-run command
neo4j-admin restore --from=/data/backupset/data/knowledge-graph-staging-backup-2019-04-16 --database=graph.db --force
Volume mounts and sizing
Filesystem      Size  Used Avail Use% Mounted on
overlay          20G  9.8G   11G  49% /
tmpfs           7.8G     0  7.8G   0% /dev
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/nvme1n1    296G  244M  295G   1% /data
/dev/nvme0n1p1   20G  9.8G   11G  49% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           7.8G   12K  7.8G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           7.8G     0  7.8G   0% /sys/firmware
Now restoring
Rehoming database
Restored to: 
total 4
drwxr-xr-x 6 root root 4096 Apr 16 15:46 graph.db
-rw-r--r-- 1 root root    0 Apr 16 15:46 store_lock
Final permissions
total 53544
drwxr-xr-x 6 neo4j neo4j     4096 Apr 16 15:46 .
drwxr-xr-x 3 neo4j neo4j     4096 Apr 16 15:46 ..
-rw-r--r-- 1 neo4j neo4j   169084 Apr 16 15:46 debug.log
-rw-r--r-- 1 neo4j neo4j   326343 Apr 16 15:46 debug.log.1555372827544
drwxr-xr-x 3 neo4j neo4j     4096 Apr 16 15:46 index
-rw-r--r-- 1 neo4j neo4j     1391 Apr 16 15:46 index.db
drwxr-xr-x 2 neo4j neo4j     4096 Apr 16 15:46 metrics
-rw-r--r-- 1 neo4j neo4j     8192 Apr 16 15:46 neostore
-rw-r--r-- 1 neo4j neo4j     6560 Apr 16 15:46 neostore.counts.db.a
-rw-r--r-- 1 neo4j neo4j     6560 Apr 16 15:46 neostore.counts.db.b
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.id
-rw-r--r-- 1 neo4j neo4j   278528 Apr 16 15:46 neostore.labelscanstore.db
-rw-r--r-- 1 neo4j neo4j     8190 Apr 16 15:46 neostore.labeltokenstore.db
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.labeltokenstore.db.id
-rw-r--r-- 1 neo4j neo4j     8192 Apr 16 15:46 neostore.labeltokenstore.db.names
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.labeltokenstore.db.names.id
-rw-r--r-- 1 neo4j neo4j  2072070 Apr 16 15:46 neostore.nodestore.db
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.nodestore.db.id
-rw-r--r-- 1 neo4j neo4j     8192 Apr 16 15:46 neostore.nodestore.db.labels
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.nodestore.db.labels.id
-rw-r--r-- 1 neo4j neo4j 38396254 Apr 16 15:46 neostore.propertystore.db
-rw-r--r-- 1 neo4j neo4j     8192 Apr 16 15:46 neostore.propertystore.db.arrays
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.propertystore.db.arrays.id
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.propertystore.db.id
-rw-r--r-- 1 neo4j neo4j     8190 Apr 16 15:46 neostore.propertystore.db.index
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.propertystore.db.index.id
-rw-r--r-- 1 neo4j neo4j     8192 Apr 16 15:46 neostore.propertystore.db.index.keys
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.propertystore.db.index.keys.id
-rw-r--r-- 1 neo4j neo4j  3956736 Apr 16 15:46 neostore.propertystore.db.strings
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.propertystore.db.strings.id
-rw-r--r-- 1 neo4j neo4j   188025 Apr 16 15:46 neostore.relationshipgroupstore.db
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.relationshipgroupstore.db.id
-rw-r--r-- 1 neo4j neo4j  9212640 Apr 16 15:46 neostore.relationshipstore.db
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.relationshipstore.db.id
-rw-r--r-- 1 neo4j neo4j     8190 Apr 16 15:46 neostore.relationshiptypestore.db
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.relationshiptypestore.db.id
-rw-r--r-- 1 neo4j neo4j     8192 Apr 16 15:46 neostore.relationshiptypestore.db.names
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.relationshiptypestore.db.names.id
-rw-r--r-- 1 neo4j neo4j    32768 Apr 16 15:46 neostore.schemastore.db
-rw-r--r-- 1 neo4j neo4j        9 Apr 16 15:46 neostore.schemastore.db.id
-rw-r--r-- 1 neo4j neo4j       70 Apr 16 15:46 neostore.transaction.db.1
drwxr-xr-x 2 neo4j neo4j     4096 Apr 16 15:46 profiles
drwxr-xr-x 3 neo4j neo4j     4096 Apr 16 15:46 schema
Final size
135M	/data/databases/graph.db
Purging backupset from disk

graph.db contents when exec'd onto core pod "0":

total 224
drwxr-xr-x    5 neo4j    neo4j         4096 Apr 16 15:49 .
drwxr-xr-x    3 neo4j    neo4j         4096 Apr 16 15:46 ..
drwxr-xr-x    2 neo4j    neo4j         4096 Apr 16 15:49 index
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore
-rw-r--r--    1 neo4j    neo4j           96 Apr 16 15:49 neostore.counts.db.a
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.id
-rw-r--r--    1 neo4j    neo4j        40960 Apr 16 15:49 neostore.labelscanstore.db
-rw-r--r--    1 neo4j    neo4j         8190 Apr 16 15:49 neostore.labeltokenstore.db
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.labeltokenstore.db.id
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.labeltokenstore.db.names
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.labeltokenstore.db.names.id
-rw-r--r--    1 neo4j    neo4j            0 Apr 16 15:49 neostore.nodestore.db
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.nodestore.db.id
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.nodestore.db.labels
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.nodestore.db.labels.id
-rw-r--r--    1 neo4j    neo4j            0 Apr 16 15:49 neostore.propertystore.db
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.propertystore.db.arrays
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.propertystore.db.arrays.id
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.propertystore.db.id
-rw-r--r--    1 neo4j    neo4j         8190 Apr 16 15:49 neostore.propertystore.db.index
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.propertystore.db.index.id
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.propertystore.db.index.keys
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.propertystore.db.index.keys.id
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.propertystore.db.strings
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.propertystore.db.strings.id
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.relationshipgroupstore.db
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.relationshipgroupstore.db.id
-rw-r--r--    1 neo4j    neo4j            0 Apr 16 15:49 neostore.relationshipstore.db
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.relationshipstore.db.id
-rw-r--r--    1 neo4j    neo4j         8190 Apr 16 15:49 neostore.relationshiptypestore.db
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.relationshiptypestore.db.id
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.relationshiptypestore.db.names
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.relationshiptypestore.db.names.id
-rw-r--r--    1 neo4j    neo4j         8192 Apr 16 15:49 neostore.schemastore.db
-rw-r--r--    1 neo4j    neo4j            9 Apr 16 15:49 neostore.schemastore.db.id
-rw-r--r--    1 neo4j    neo4j          108 Apr 16 15:49 neostore.transaction.db.0
drwxr-xr-x    2 neo4j    neo4j         4096 Apr 17 14:58 profiles
drwxr-xr-x    2 neo4j    neo4j         4096 Apr 16 15:49 temp-copy

You can see the difference in timestamps, the file/directory sizes, and the absence of some directories.

In case it helps, this is running as the third initContainer via a Helm chart standing up a 3-core causal cluster.

Thanks again,

Tommy

Thanks, this was very helpful. I'm almost 100% sure the issue here is that your initContainer and your regular Neo4j pod are not sharing a common volume mount. So what's happening is this:

  • Init container starts, does restore, everything is great.
  • Neo4j Pod starts with a totally different /data mount where nothing is present. So it thinks it's empty, and writes a default database at startup.

This would explain why your files are so small and why stuff is missing.

The reason the initContainer approach works is that /data on the initContainer and /data on the Neo4j pod end up being the same volume. So check your YAML and verify this is the case; one way to hand-check it is sketched below.
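For example, a quick check (my own sketch, not from the article; the pod name is taken from the logs above) is to dump the live pod spec and confirm that the initContainers and the main container all mount the same named volume, and that the pod actually declares that volume:

    # List the volumes the pod declares, then the volumeMounts of the
    # initContainers and of the main containers, so the names can be compared.
    kubectl get pod infrastructure-neo4j-neo4j-core-0 -o jsonpath='{.spec.volumes[*].name}{"\n"}'
    kubectl get pod infrastructure-neo4j-neo4j-core-0 \
      -o jsonpath='{range .spec.initContainers[*]}{.name}: {.volumeMounts[*].name}{"\n"}{end}'
    kubectl get pod infrastructure-neo4j-neo4j-core-0 \
      -o jsonpath='{range .spec.containers[*]}{.name}: {.volumeMounts[*].name}{"\n"}{end}'

If the same volume name (e.g. datadir) doesn't appear in all three outputs, the restore and the running database are writing to different places.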

Thanks again for your quick response!

I've suspected this for a while but the names look the same so I didn't pursue it.

Both the core and the initContainers are configured as follows:

    volumeMounts:
    - name: datadir
      mountPath: /data

Oh, I should add too: you want an initContainer in front of all 3 of your Neo4j pods, and they should all do the same restore. That way they'll be in sync when they start and have the same content. You don't want one initContainer on just one of the three pods; that will end up in a confused state where two members have a different (empty) database and one member has the full database. You'll get errors for sure.

Interesting. As it is set up at the moment, the initContainer is described in the values.yaml of the Neo4j Helm chart, in the initContainer entry for the cores section, so I assumed it runs for each core as it sets them up.

I was just wondering if there might be an issue with it being the last of three initContainers that run (the others do things like pulling plugins from S3 and the like).

Yes, if you've got it set up in values it should run for all 3, not just one. Sorry if I was confusing the issue; as you describe it, that shouldn't be a problem. Double-checking the shared volume mount is the thing to do next.
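For reference, the kind of values.yaml entry being described here might look roughly like the sketch below. This assumes the chart exposes a per-core initContainers list (as the older stable/neo4j chart did); the image name is a placeholder, and only the datadir mount is taken from the snippet above:

    core:
      initContainers:
      - name: restore-from-file
        image: my-restore-image:latest   # placeholder image for the restore utility
        volumeMounts:
        - name: datadir                  # must be the same volume name the Neo4j container mounts
          mountPath: /data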

Awesome. That's what I'd thought, so I appreciate the sanity check.

Aside from ensuring they're named consistently, as in the YAML above, is there a way to hand-check whether they're sharing the volume (which, I agree, it seems they're not)? A better way of asking what I'm trying to figure out: how do I ensure they're using a shared volume?

Here's a working example of how you can share a volume across two pods:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-...

You can see the example, but it has 3 critical parts:

  • a volumeMount on the initContainer
  • a volumeMount on the pod
  • a volume declaration on the pod spec itself

The third one you may be missing (see the sketch below).
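To make those three parts concrete, here is a minimal sketch of a pod spec that has all of them. It is not the Helm chart's actual template; the image tags and claim name are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: neo4j-core-example           # placeholder name, not from the chart
    spec:
      initContainers:
      - name: restore-from-file
        image: my-restore-image:latest   # placeholder restore image
        volumeMounts:
        - name: datadir                  # (1) volumeMount on the initContainer
          mountPath: /data
      containers:
      - name: neo4j
        image: neo4j:3.5.3-enterprise    # placeholder tag
        volumeMounts:
        - name: datadir                  # (2) volumeMount on the main container
          mountPath: /data
      volumes:                           # (3) volume declaration on the spec itself
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir-neo4j-core-0   # placeholder PVC name

In a StatefulSet (which is how the core pods here appear to be deployed, given the -0 suffix), part (3) usually comes from volumeClaimTemplates rather than an explicit volumes entry, but the principle is the same: both containers must reference the same named volume.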

This is extremely helpful. Thanks a lot for pointing me in this direction. Assuming that's the issue (and it makes a lot of sense for it to be), I need to either not use Helm, fork the chart, or suggest a change, since, sadly, I see no place to add a volume at the spec level. I'll figure out the best way and keep you updated if you're curious.

Hello again! As an update, I've succeeded in getting Neo4j to stand up based on an initContainer restore, which is very exciting. However, if it ever has to restart, I get the following error:

ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@f88bfbe' was successfully initialized, but failed to start. Please see the attached cause exception "Unable to find transaction 23289 in any of my logical logs: Couldn't find any log containing 23289". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@f88bfbe' was successfully initialized, but failed to start. Please see the attached cause exception "Unable to find transaction 23289 in any of my logical logs: Couldn't find any log containing 23289".

The only information I can find on the error is here: link

Have you ever run into needing to do anything to the backup, even if the backup was taken on an instance running the same version, @david.allen?

I'm also finding that I need to add a line to remove the auth file if it falls over and Kubernetes restarts it again. Have you ever run into that as well?

Thanks for any insights you might have!

Tommy

On the specifics of the error, I believe this is caused when the database is trying to do integrity checking, or applying a transaction, and the transaction log files are missing -- but I am not entirely sure, and you might ask further in the clustering topic.

In terms of restoring, you should be able to restore and start off of any good backup, assuming the backup was taken with the integrity checks enabled, the backup succeeded, and so forth.
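As a hedged illustration of what that means in practice (not from this thread; flags are from the neo4j-admin tooling in the 3.5 line and may differ in other versions, and the host is a placeholder):

    # Take a backup with consistency checking enabled.
    neo4j-admin backup --backup-dir=/backups --name=graph.db --from=<core-host>:6362 --check-consistency=true
    # Check an already-downloaded backup set before restoring it.
    neo4j-admin check-consistency --backup=/data/backupset/data/knowledge-graph-staging-backup-2019-04-16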

In terms of the auth file -- worth noting is that auth information stored in the cluster isn't included as part of the backup process with Neo4j, so you may be experiencing issues either from an auth file that's left over, or needing to replace an auth file.
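If the problem is a leftover auth file, the kind of cleanup line being described might look like the sketch below; this is my own illustration, not part of the restore utility, and the path assumes a Neo4j 3.x layout under the shared /data volume:

    # Hypothetical cleanup step in the restore initContainer: remove the stale auth file
    # so Neo4j recreates it (or is re-seeded, e.g. via the Docker image's NEO4J_AUTH env var).
    rm -f /data/dbms/auth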

Hey david,
just like aguyhasnoname at the beginning of this thread, I'm facing the issue that after a restore the expected data isn't there. I'm using the new, supported Helm charts and configured an initContainer just like with the deprecated Helm charts. The backup is stored in a cloud provider (Azure) bucket, and the volumes and volumeMounts of the initContainer and the deployment itself match each other. The log of the initContainer / restore looks fine too.
Any ideas?

Thanks in advance, Theresa