12-08-2020 03:07 PM
I’m getting this error locally while running tests for my user-defined procedures:
org.neo4j.driver.exceptions.TransientException: Can't allocate extra 262144 bytes due to exceeding memory limit; used=2147436544, max=2147483648
Is there a way to increase memory limits for the embedded database server? I tried this but it didn't work:
neo4j = Neo4jBuilders.newInProcessBuilder()
.withConfig(GraphDatabaseSettings.pagecache_memory, "3000m")
.withDisabledServer()
.withProcedure(MyUDP1.class)
.withFixture(MODEL_STATEMENT)
.build();
12-09-2020 05:26 AM
There are different memory settings you can configure in Neo4j. The way you're approaching this looks to me like it should work, but the issue is that you're configuring the page cache. The page cache controls how much of the graph is kept hot in memory. It doesn't help you with large transactions run against the database.
For an overview of memory segmentation, see the memory configuration section of the operations manual -- it explains the different kinds of memory and what each does. I suspect you need to configure the heap to be larger to accommodate a large client transaction. The page cache speeds queries up, but doesn't affect how large they can be.
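A minimal sketch of the distinction, assuming the Neo4j 4.x test-harness API (the sizes are illustrative only, not from the original posts):
import org.neo4j.configuration.GraphDatabaseSettings;
import org.neo4j.harness.Neo4j;
import org.neo4j.harness.Neo4jBuilders;

// The page cache holds hot store pages (read speed); the off-heap limit
// caps how much state a single open transaction may accumulate (write size).
Neo4j neo4j = Neo4jBuilders.newInProcessBuilder()
.withConfig(GraphDatabaseSettings.pagecache_memory, "512m") // hot graph pages
.withConfig(GraphDatabaseSettings.tx_state_memory_allocation,
        GraphDatabaseSettings.TransactionStateMemoryAllocation.OFF_HEAP)
.withConfig(GraphDatabaseSettings.tx_state_max_off_heap_memory, 3_000_000_000L) // tx state cap in bytes
.withDisabledServer()
.build();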
12-09-2020 06:41 AM
Thanks so much @david.allen, increasing GraphDatabaseSettings.tx_state_max_off_heap_memory solved the issue:
neo4j = Neo4jBuilders.newInProcessBuilder()
.withConfig(GraphDatabaseSettings.tx_state_max_off_heap_memory, 3_000_000_000L) // bytes (default 2147483648)
.withDisabledServer()
.withProcedure(MyUDP1.class)
.withFixture(MODEL_STATEMENT)
.build();
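If a raw byte count reads poorly, the same long value can be built with org.neo4j.io.ByteUnit, which ships with Neo4j 4.x; a small sketch (note gibiBytes(3) is 3 × 1024³ bytes, slightly more than the 3,000,000,000 above):
import org.neo4j.io.ByteUnit;

// Same cap expressed in units rather than digits
.withConfig(GraphDatabaseSettings.tx_state_max_off_heap_memory, ByteUnit.gibiBytes(3))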
12-09-2020 07:40 AM
Just wondering: this is happening locally with test data (taken from the real data we need to process), and in production we could have bigger data sets or simply more traffic, so I guess this indicates we should set this value on the Neo4j server as well, right?
This could also be related to a similar thread: Out of memory issue after 2G incremental memory usage despite plenty free memory available
Is dbms.memory.off_heap.max_size in the neo4j.conf file the equivalent of GraphDatabaseSettings.tx_state_max_off_heap_memory?
So we could set something like this in the config file (where 0 means "unlimited"):
dbms.tx_state.memory_allocation=OFF_HEAP
dbms.memory.off_heap.max_size=0
Source: https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
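A hedged sketch of how these might sit together in neo4j.conf with the other memory settings (names per the 4.x operations manual; the sizes are placeholders, not recommendations):
# JVM heap: query execution and, by default, transaction state
dbms.memory.heap.initial_size=2G
dbms.memory.heap.max_size=2G
# Page cache: how much of the store is kept hot in memory
dbms.memory.pagecache.size=4G
# Move transaction state off-heap and cap it, mirroring the test fix
# (0 instead of 3G would mean "unlimited")
dbms.tx_state.memory_allocation=OFF_HEAP
dbms.memory.off_heap.max_size=3G
On a server install, the bundled neo4j-admin memrec command prints suggested values for the heap and page cache lines above, based on the host's available RAM.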
12-09-2020 08:56 AM
Correct. The same settings apply whether it's a server or an embedded instance. Memory sizing is an important part of operating any database product, and "out of memory" errors are a dead giveaway that it hasn't been properly tuned for the workload.
12-09-2020 09:56 AM
I see, good to know, thanks so much for the help.