01-30-2019 07:08 AM
I have a database that is roughly 1200GB. I am running it on a server with 128GB of RAM, and 32GB of swap memory. These are my neo4j.conf settings:
dbms.memory.heap.initial_size=16g
dbms.memory.heap.max_size=16g
dbms.memory.pagecache.size=64g
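As a rough sketch of what those settings add up to (the ~15% native/JVM overhead figure below is an assumption for illustration, not official sizing guidance):

```python
# Rough memory-budget sketch for the settings above (numbers in GiB).
# The 15% "overhead" multiplier is an assumption, not an official rule.
ram = 128
heap = 16        # dbms.memory.heap.max_size
pagecache = 64   # dbms.memory.pagecache.size

committed = heap + pagecache          # memory Neo4j claims directly
overhead = round(committed * 0.15)    # assumed JVM/native overhead
budget = committed + overhead

print(f"committed: {committed} GiB, with overhead: ~{budget} GiB")
print(f"left for OS and other processes: ~{ram - budget} GiB")
```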
Over time (roughly 2-3 weeks), the swap space on my server gradually fills up until it hits the maximum swap size. At this point the server starts "thrashing" and slows down noticeably. Here's a screenshot:
When this happens I usually have to do a hard-kill of Neo4j. After that the swap space gets freed up:
I'm guessing that for some reason a lot of the pagecache is being moved from RAM to Swap Memory over time, eventually exceeding the swap space available.
Is it possible to prevent Neo4j from moving so much memory to the swap space? Ideally I'd like for as much of the pagecache as possible to remain in RAM.
I have tried changing the swappiness setting on my Ubuntu server to 10 (and then to 5), but the swap memory still seems to be filling up with the Neo4j cache.
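For reference, a swappiness change like the one described above is typically made persistent with a sysctl drop-in (the filename below is an arbitrary choice; 10 matches the value tried here):

```
# /etc/sysctl.d/99-neo4j.conf (filename is an arbitrary choice)
# Lower swappiness so the kernel prefers reclaiming cache over swapping.
vm.swappiness = 10
```

Apply with `sudo sysctl --system` or a reboot. As this thread shows, though, low swappiness alone did not stop the swap growth here.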
02-06-2019 06:36 AM
I would just disable swap on the server. Swap is usually appropriate for end-user machines rather than servers, as on a server you actually never want stuff from RAM swapped to disk.
So your Neo4j server should use 80G of memory in total (plus some extra that Lucene grabs behind the scenes, which should go away in 3.4/3.5 with the native indexes).
02-06-2019 08:06 AM
Hey there,
There's a KB article detailing Neo4j's memory consumption (here: https://neo4j.com/developer/kb/understanding-memory-consumption/).
It'd be good to first see exactly what is using the memory. If you run Neo4j with:
dbms.jvm.additional=-XX:NativeMemoryTracking=detail
added to the neo4j.conf file, you should then be able to execute
jcmd <PID> VM.native_memory summary
(obviously substituting your Neo4j process's PID)
to see where the memory is being used.
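To illustrate what you're looking for in that report (the sample lines below are invented for illustration — a real NMT summary lists many more categories), a small script can pull out the headline totals:

```python
import re

# Hypothetical excerpt of `jcmd <PID> VM.native_memory summary` output;
# the numbers here are made up for illustration.
sample = """\
Total: reserved=84011223KB, committed=83224510KB
-                 Java Heap (reserved=16777216KB, committed=16777216KB)
-                     Class (reserved=1090519KB, committed=45875KB)
"""

def nmt_totals(report: str) -> dict:
    """Extract reserved/committed KB from an NMT summary's Total line."""
    m = re.search(r"Total: reserved=(\d+)KB, committed=(\d+)KB", report)
    if not m:
        raise ValueError("no Total line found")
    return {"reserved_kb": int(m.group(1)), "committed_kb": int(m.group(2))}

totals = nmt_totals(sample)
print(f"committed: {totals['committed_kb'] / 1024 / 1024:.1f} GiB")
```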
If it is Native Memory using it up, you can change the Max Direct Memory allocated but...
WARNING WARNING
This is something we strongly suggest you talk to us about before changing - these are sensitive settings
With that said, you'd need to set 2 settings:
dbms.jvm.additional=-XX:MaxDirectMemorySize=#g
(where # is a number of gigabytes)
and
dbms.jvm.additional=-Dio.netty.maxDirectMemory=0
You need to carefully monitor the server and look in the logs for OutOfMemory messages etc., as you may need to tweak the values up if you were too aggressive.
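To make that log check concrete (a minimal sketch — the sample log line below is invented, and in a real deployment you would point grep at Neo4j's logs/debug.log instead):

```shell
# Illustrative only: create a tiny sample log, then scan it the way you
# would scan Neo4j's logs/debug.log for OutOfMemory messages.
LOG=sample_debug.log
printf '%s\n' \
  '2019-02-06 INFO  started' \
  '2019-02-06 ERROR java.lang.OutOfMemoryError: Direct buffer memory' \
  > "$LOG"
COUNT=$(grep -ci 'OutOfMemory' "$LOG")
echo "OutOfMemory entries: $COUNT"
rm -f "$LOG"
```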
I hope that helps 😕
02-06-2019 08:09 AM
So I guess setting
dbms.jvm.additional=-XX:MaxDirectMemorySize=6g
dbms.jvm.additional=-Dio.netty.maxDirectMemory=0
should help, Greg?
02-06-2019 08:14 AM
Yes, but I would probably start with 50% of the heap and then work down - depends on how much 'tweakability' you have 🙂
02-06-2019 09:05 AM
Thank you both. I have turned swap off, and everything is running nicely at the moment.
sudo swapoff -a
I will look into the more advanced changes you have mentioned. Thank you.
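One caveat worth noting: `swapoff -a` only lasts until the next reboot. To make it permanent, the swap entry in /etc/fstab also needs to be commented out (the line below is a typical example, not necessarily what this server's fstab contains):

```
# /etc/fstab — comment out the swap line so it isn't re-enabled at boot
# /swapfile  none  swap  sw  0  0
```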