

The stability of Neo4j CE server

limelyj
Node Link

Neo4j Server version: 4.0.0 (Community)
Neo4j Browser version: 4.0.1
Operating System: macOS Mojave 10.14.6
settings:
- NEO4J_dbms_memory_pagecache_size=2G

Hi Community,

I'm using the Docker version of Neo4j server. Everything is fine, except that the server crashes and restarts when a long-running query is executed.

Examples include MATCH (n) DETACH DELETE n when there are a lot of nodes, or when a complicated search query is run.

Has anyone faced this kind of problem, or is something wrong with my settings?

1 ACCEPTED SOLUTION

anthapu
Graph Fellow

Welcome to the community.

When you run a query like that, it runs as a single transaction, so you need enough heap memory to complete the whole operation.

How much system memory do you have available?

If the system is crashing on a complex query, it could be for the same reason: not enough heap memory available.

There are three options here:

  1. Modify the query to run in smaller batches.
  2. Increase the heap memory.
  3. For stability purposes, add a memory guard for queries.
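As an illustration of option 1, a large delete can be split into batches with APOC instead of one giant transaction. This is a sketch, assuming the APOC plugin is installed; the batch size of 10,000 is just an example value to tune against your heap:

```cypher
// Delete all nodes in batches of 10,000 rather than in a single transaction.
// Requires the APOC plugin; lower batchSize if you still hit heap pressure.
CALL apoc.periodic.iterate(
  "MATCH (n) RETURN n",
  "DETACH DELETE n",
  {batchSize: 10000}
);
```

Each batch commits separately, so the heap only ever has to hold one batch's worth of transaction state.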

By default the heap allocated to a query is unlimited.

Please take a look at these configuration settings:

https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/#config_dbms.memor...

By setting dbms.memory.transaction.max_size you limit how much memory a single transaction can use. If a transaction exceeds that size, the query is killed.
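Since you are running the Docker image, settings like this can be passed as environment variables (dots become underscores, and underscores in the setting name are doubled), the same convention as your existing pagecache setting. The values below are illustrative examples, not recommendations, and dbms.memory.transaction.max_size may require a newer 4.x release than 4.0:

```conf
# Example Docker environment variables (values are illustrative)
NEO4J_dbms_memory_pagecache_size=2G
NEO4J_dbms_memory_heap_max__size=2G
NEO4J_dbms_memory_transaction_max__size=1G
```

With a cap like this, a runaway query is killed with an error instead of taking the whole server down.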


2 REPLIES


Thanks for the answer!

I'm trying this on my own laptop with many other processes running, so only about 2G of memory is left. Setting dbms.memory.transaction.max_size helps a lot.