
Neo4j Crashing due to 'java.lang.OutOfMemoryError: Java heap space'

Hi All,
I am running Neo4j Enterprise on Amazon AWS as a single-node VM on a t2.xlarge instance (16 GB RAM, 4 vCPUs).

However, every day or two my heap memory gets exhausted. I even ran neo4j-admin memrec and assigned the heap and page cache sizes in accordance with its recommendations.

Neo4j then crashes with 'java.lang.OutOfMemoryError: Java heap space'.

After investigating further, I enabled query logging on the server and customized neo4j.template a bit:

dbms.memory.heap.initial_size=5g

dbms.memory.heap.max_size=5g

dbms.memory.pagecache.size=7g

dbms.logs.query.enabled=true

dbms.logs.query.parameter_logging_enabled=true

dbms.logs.query.time_logging_enabled=true

dbms.logs.query.allocation_logging_enabled=false

dbms.logs.query.page_logging_enabled=false

dbms.track_query_allocation=true

cypher.query_max_allocations.size=1G

I have set the parameter cypher.query_max_allocations.size=1G. Will this restrict the maximum allocation size for each query?

If not, is there any other way to limit the maximum size allocated to each query?

Currently we have 267,717 nodes, 13,428 relationships, and 395 properties, and this could grow 10x in the near future.

I have been stuck with this error for a while; every time it happens I have to take an AWS backup and restore it to a new VM.

Any help will be appreciated.

1 REPLY

Sounds like it's time to do some query tuning.

Use EXPLAIN on your queries to make sure they're using indexes when they're supposed to be (instead of AllNodesScans or NodeByLabelScans), and add any missing indexes accordingly.
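
For example, with a hypothetical :Person label and an email property (adapt the label, property, and value to your own model):

EXPLAIN MATCH (p:Person {email: 'alice@example.com'})
RETURN p;

In the plan output, look for a NodeIndexSeek; if you see AllNodesScan or NodeByLabelScan instead, the lookup property has no index. On Neo4j 3.x (which your dbms.* settings suggest you're on) you can add one with:

CREATE INDEX ON :Person(email);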

It's a good idea to figure out which of your queries may need batching. Remember that Neo4j is a transactional database: all transactional changes must be held in memory until the query finishes, then applied to the graph all at once in an atomic commit. If there are too many transactional changes being applied at once, that can result in the out-of-heap error you're seeing.
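
To illustrate (the :Person label and migrated property here are made up), a single statement like this keeps the change to every matched node in heap until the final commit:

MATCH (p:Person)
SET p.migrated = true;

At your current node counts that may be survivable, but at 10x the size this pattern is exactly what can exhaust a 5g heap, since the transaction's footprint grows with the number of changes.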

In that case, look for ways to batch your query, whether at the client level or via APOC with apoc.periodic.iterate().
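
Here's a minimal sketch using apoc.periodic.iterate() on the same hypothetical update, assuming the APOC plugin is installed. The first statement streams the nodes to change, the second is applied per batch, and each batch is committed separately, so only one batch's worth of changes lives in heap at a time:

CALL apoc.periodic.iterate(
  'MATCH (p:Person) RETURN p',
  'SET p.migrated = true',
  {batchSize: 10000, parallel: false});

A batchSize of 10000 is a reasonable starting point; tune it down if you still see heap pressure.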