06-16-2022 07:47 AM
Hello,
I am trying to import huge volumes of data into a Neo4j Community Edition graph database that runs as a standalone server on Linux. My approach is to load the data through the Python driver using transactions. The Python client application I wrote works, but it performs very badly. Can anyone suggest an appropriate combination of settings in the /etc/neo4j/neo4j.conf configuration file to accelerate the import through transactions? I already use transactions in batches of 1,000,000. Any idea about what slows down the execution of my batched queries would be appreciated. Thanks in advance for your time.
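For reference, a batched import along these lines is usually done with a parameterized `UNWIND` query, one batch per transaction. The sketch below is an assumption about what such a client could look like (the `Person` label, property keys, and connection details are made up for illustration; `execute_write` is the 5.x driver API, older 4.x drivers use `write_transaction`):

```python
# Minimal sketch of a batched import with the Neo4j Python driver.
# Label, properties, and connection details are illustrative assumptions.

def chunks(rows, size):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# One UNWIND statement per batch lets Neo4j process many rows in a
# single transaction instead of running one query per row.
IMPORT_QUERY = """
UNWIND $rows AS row
MERGE (p:Person {id: row.id})
SET p.name = row.name
"""

def import_rows(driver, rows, batch_size=50_000):
    """Import `rows` (a list of dicts) in transactions of `batch_size`."""
    with driver.session() as session:
        for batch in chunks(rows, batch_size):
            # Default binding `b=batch` captures the current batch.
            session.execute_write(
                lambda tx, b=batch: tx.run(IMPORT_QUERY, rows=b).consume()
            )

# Usage (requires a running Neo4j server):
#   from neo4j import GraphDatabase
#   driver = GraphDatabase.driver("bolt://localhost:7687",
#                                 auth=("neo4j", "password"))
#   rows = [{"id": i, "name": f"person-{i}"} for i in range(200_000)]
#   import_rows(driver, rows)
#   driver.close()
```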
06-16-2022 11:04 AM
What's your total data volume?
Use about 50k updates per transaction. 1M updates need 2-3 GB of heap per transaction.
On a 16 GB server I'd suggest 4 GB heap and 10 GB page cache.
Use batches with parameters, but keep them to 50k rows, otherwise you blow the transport layer.
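As a sketch, the memory suggestion above would translate into something like the following in /etc/neo4j/neo4j.conf (these setting names are for the Neo4j 4.x line; adjust the sizes to your own hardware):

```properties
# Sketch for a 16 GB machine, per the suggestion above.
dbms.memory.heap.initial_size=4g
dbms.memory.heap.max_size=4g
dbms.memory.pagecache.size=10g
```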