
Memory limit exceeded while running embedded database server

pinox101

I'm getting this error locally while running tests for my user-defined procedures:

org.neo4j.driver.exceptions.TransientException: Can't allocate extra 262144 bytes due to exceeding memory limit; used=2147436544, max=2147483648

Is there a way to increase memory limits for the embedded database server? I tried this but it didn't work:

neo4j = Neo4jBuilders.newInProcessBuilder()
                .withConfig(GraphDatabaseSettings.pagecache_memory, "3000m")
                .withDisabledServer()
                .withProcedure(MyUDP1.class)
                .withProcedure(MyUDP1.class)
                .withFixture(MODEL_STATEMENT)
                .build();
  • neo4j version 4.1.1
  • apoc version 4.1.0.3

There are different memory settings you can configure in Neo4j. The way you're approaching this looks to me like it should work, but the issue is that you're configuring the page cache. The page cache controls how much of the graph is kept hot in memory. It doesn't help you with large transactions run against the database.

For an overview of memory segmentation, please check this site -- it will explain the different kinds of memory and what they do. I suspect you need to configure the heap to be larger to help with a client transaction. The page cache helps speed queries up, but doesn't affect how large they can be.
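
To make the distinction concrete, here is a rough sketch against the 4.1 test harness (the class name and the sizes are placeholders, not recommendations):

import org.neo4j.configuration.GraphDatabaseSettings;
import org.neo4j.harness.Neo4j;
import org.neo4j.harness.Neo4jBuilders;

class MemoryKnobsSketch {
    // Note: the heap of an in-process (embedded) instance is the test JVM's own heap,
    // so it is raised with -Xmx on the test runner rather than through withConfig(...).
    static Neo4j start() {
        return Neo4jBuilders.newInProcessBuilder()
                // Page cache: how much of the store files is kept hot in memory.
                // It speeds queries up but does not limit how much state a transaction can build.
                .withConfig(GraphDatabaseSettings.pagecache_memory, "512m")
                // Off-heap transaction state: caps how much state running transactions
                // may accumulate off-heap (value in bytes).
                .withConfig(GraphDatabaseSettings.tx_state_max_off_heap_memory, 3_000_000_000L)
                .withDisabledServer()
                .build();
    }
}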

Thanks so much @david.allen,

Increasing GraphDatabaseSettings.tx_state_max_off_heap_memory solved the issue:

neo4j = Neo4jBuilders.newInProcessBuilder()
                .withConfig(GraphDatabaseSettings.tx_state_max_off_heap_memory, 3_000_000_000L) // bytes (default 2147483648)
                .withDisabledServer()
                .withProcedure(MyUDP1.class)
                .withProcedure(MyUDP1.class)
                .withFixture(MODEL_STATEMENT)
                .build();

Just wondering: if this is happening locally with test data (taken from the real data we need to process), and considering that in production we could have bigger data sets or simply more traffic, I guess this is an indication that we should also consider setting this value on the Neo4j server, right?

Also, this could be related to a similar issue: Out of memory issue after 2G incremental memory usage despite plenty free memory available

Is dbms.memory.off_heap.max_size the neo4j.conf equivalent of GraphDatabaseSettings.tx_state_max_off_heap_memory?

So we could set something like this in the config file:

dbms.tx_state.memory_allocation = OFF_HEAP
dbms.memory.off_heap.max_size = 0

Source: https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/

Correct. The same settings apply whether it's server or embedded. Memory sizing is a very important thing to do with any database product, and "out of memory" errors are a dead giveaway that it hasn't been properly tuned for the workload.
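
As a rough sketch of the server-side equivalent (placeholder sizes; setting names as listed on the 4.1 configuration-settings page referenced above), that could look like:

# Placeholder sizes - tune them for the real data set and workload
dbms.memory.heap.initial_size = 2g
dbms.memory.heap.max_size = 2g
dbms.memory.pagecache.size = 4g
dbms.tx_state.memory_allocation = OFF_HEAP
dbms.memory.off_heap.max_size = 4g

neo4j-admin memrec can also print suggested starting values for the heap and page cache based on the machine's available RAM.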

I see, good to know. Thanks so much for the help.