12-15-2020 07:27 AM
In our database, we have several graphs, each identified by a unique attribute. At any one time we have about 200 subgraphs, each with between 20K and 200K nodes.
Periodically, we do some housecleaning, where we drop the old subgraphs and add new ones. The command we use to delete is very simple: `MATCH (a {compileunit:'this set'}) DETACH DELETE a;`
Our issue is that after we have deleted about 10 subgraphs, the database first disconnects and then fails to restart. We have checked the storage: disk usage sits between 50-80%, although it peaks around 90% while we are deleting.
This happened about 2 months ago, and we decided it was a fluke, deleted the WHOLE database, and rebuilt. Fortunately it was our test db, so it had lots of backups. After the delete and rebuild it was fine, although that's a lousy solution.
Yesterday, we were doing the same operation and the database locked up again. We went through the same recovery steps and even increased the disk space to make sure we were not topped out. Still no recovery. We are going to have to rebuild once again. This can't happen in production (obviously).
Is there some defect in Neo4j that doesn't like this approach? Both times it failed after roughly the same number of deletions (10).
12-16-2020 07:19 AM
To fix this, you first need to identify the cause of the lock-up. Check the debug.log
file and look for stack traces from around the time the database locked up. You're describing several possible causes that need to be sorted out.
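As an aside, an unbounded `DETACH DELETE` over a 20K-200K-node subgraph runs in a single transaction, which can exhaust the heap and is a common cause of this kind of lock-up. A batched delete is one alternative worth trying; this is a sketch assuming the APOC plugin is installed (the `compileunit` value is the one from your example):

```cypher
// Sketch: delete the subgraph in committed batches of 10,000 nodes
// instead of one huge transaction. Requires the APOC plugin.
CALL apoc.periodic.iterate(
  "MATCH (a {compileunit:'this set'}) RETURN a",
  "DETACH DELETE a",
  {batchSize: 10000}
);
```

Each batch commits separately, so memory use stays bounded regardless of subgraph size; the debug.log check above will confirm whether memory was actually the culprit.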