09-18-2022 09:13 AM
Hello
I need some help, please.
I am loading the SSB (Star Schema Benchmark) data into a Neo4j database. To do so, I am using a "USING PERIODIC COMMIT ... LOAD CSV ..." script on files I generated from the SSB data generator. However, one of the files contains 23 million lines and the load takes too long.
I have a virtual machine with 64 GB of RAM running Ubuntu, and I have configured my neo4j.conf as follows:
dbms.memory.heap.initial_size=40960m
dbms.memory.heap.max_size=40960m
dbms.memory.pagecache.size=20480m
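For reference, my loading script is roughly along these lines (simplified; the file name, label, and properties here are placeholders, not my exact statement, and SSB's .tbl files are pipe-delimited):

USING PERIODIC COMMIT 10000
LOAD CSV FROM 'file:///lineorder.tbl' AS row FIELDTERMINATOR '|'
CREATE (:Lineorder {
  orderkey: toInteger(row[0]),
  linenumber: toInteger(row[1]),
  quantity: toInteger(row[8])
});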
Could you suggest an optimal configuration to speed up the loading?
Thank you in advance,
redha
09-18-2022 12:00 PM
Hello @redha_benhisse1 😀
For large CSV files, you should use neo4j-admin import.
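For example, something like this (a sketch assuming Neo4j 4.x and placeholder file names; adjust paths, labels, and the delimiter to your export, and note that neo4j-admin import only works on a new, empty database):

bin/neo4j-admin import --database=neo4j \
    --delimiter="|" \
    --nodes=Customer=import/customer_header.csv,import/customer.tbl \
    --nodes=Lineorder=import/lineorder_header.csv,import/lineorder.tbl \
    --relationships=ORDERED_BY=import/lineorder_customer.csv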
Moreover, do you have unique constraints on your nodes?
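If you stay with LOAD CSV and MERGE, create the constraints first, so each MERGE is an index lookup instead of a full label scan. For example (Neo4j 4.4 syntax; the label and property are placeholders):

CREATE CONSTRAINT lineorder_key IF NOT EXISTS
FOR (l:Lineorder) REQUIRE l.orderkey IS UNIQUE;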
Finally, you can use neo4j-admin memrec to help you configure the database memory.
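For example, on your 64 GB machine (Neo4j 4.x; it prints suggested heap and page cache values for neo4j.conf):

bin/neo4j-admin memrec --memory=64g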
Regards,
Cobra
09-23-2022 12:17 PM
Hi
neo4j-admin import solved my problem.
Thank you very much!