08-02-2022 09:31 PM
Currently there are 313k files, and it is taking around 670GB of memory to cache all the data. Is there a way to reduce the system configuration with the same set of data? I am new to this, so please shed some light on it.
Can we use Redis here, or do I need to split the data across multiple instances?
Current system config: 94 cores and 750GB of RAM
08-03-2022 03:53 AM
313k files?
Can you provide more detail?
What version of Neo4j is in play here?
>> it is taking around 670GB of memory to cache all the data
Is your database 670GB on disk? How have you configured dbms.memory.pagecache.size?
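For context, Neo4j's memory footprint is governed mainly by two neo4j.conf settings: the page cache, which holds the store files, and the JVM heap, used for query execution and transaction state. A minimal sketch of the relevant settings, with purely illustrative values rather than recommendations:

    # neo4j.conf (illustrative values only)
    # Page cache: size it to the hot portion of the store files;
    # it does not have to hold the entire database on disk.
    dbms.memory.pagecache.size=200g
    # JVM heap: used for queries and transaction state,
    # usually far smaller than the page cache.
    dbms.memory.heap.initial_size=31g
    dbms.memory.heap.max_size=31g

The key point is that the page cache does not need to fit the whole store; a smaller cache trades some read latency for a much lower memory requirement.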
08-04-2022 04:12 AM - edited 08-04-2022 09:03 PM
Hey @dana_canzano, thanks for the reply.
We have around 623GB of data, the Neo4j version is 4.2.6, and dbms.memory.pagecache.size is set to 690G. Kindly suggest how this can be handled better. Redis or any other way?
PS: The dataset being loaded is in CSV format.
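A quick way to sanity-check sizing is neo4j-admin memrec, which prints recommended heap and page cache values for a target amount of machine memory. A sketch, where the 512g figure is an arbitrary example, not a sizing recommendation:

    # Ask Neo4j for heap/page-cache recommendations for a 512GB machine
    bin/neo4j-admin memrec --memory=512g

And since the source data is CSV, note that an initial bulk load can go through the offline importer, which streams the files rather than caching everything through transactions. A minimal sketch, assuming a fresh (empty) database and hypothetical file names nodes.csv and rels.csv with header rows inside the files:

    # Offline bulk import into an empty database (file names are examples)
    bin/neo4j-admin import --database=neo4j \
        --nodes=nodes.csv \
        --relationships=rels.csv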