
Heads up! These forums are read-only. All users and content have migrated. Please join us at community.neo4j.com.

Saving large graphs with Spring Neo4j

mrksph
Node Clone

Hi all,

I'm encountering performance problems while trying to save a relatively big graph using Spring Data Neo4j's .save() method, passing the aggregate root. The following image shows an example (the graph in the image is not complete; the real one is a little larger than that).

[attached image: partial view of the graph]
Is there any other way to speed up the save?

I tried saving the nodes at depth 1 or depth 2 first, using concurrency, but I don't think that will work.
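For anyone experimenting with the save-by-depth idea, here is a minimal sketch of the ordering logic only. The Node record and the idea of calling repository.saveAll per level are assumptions for illustration, not from this thread: group nodes by depth and save the deepest levels first, so that children already exist before their parents are persisted.

```java
import java.util.*;
import java.util.stream.*;

// Sketch: order nodes so the deepest ones are saved first.
// "Node" is a stand-in for your own entity class (an assumption,
// not from the thread); the actual saving is left as a comment.
public class BottomUpSaveOrder {

    record Node(String id, List<Node> children) {}

    // Group nodes by depth (root = level 0) and return the levels
    // deepest-first.
    static List<List<Node>> levelsDeepestFirst(Node root) {
        List<List<Node>> levels = new ArrayList<>();
        List<Node> current = List.of(root);
        while (!current.isEmpty()) {
            levels.add(current);
            List<Node> next = new ArrayList<>();
            for (Node n : current) next.addAll(n.children());
            current = next;
        }
        Collections.reverse(levels); // deepest level first
        return levels;
    }

    public static void main(String[] args) {
        Node leaf1 = new Node("leaf1", List.of());
        Node leaf2 = new Node("leaf2", List.of());
        Node mid = new Node("mid", List.of(leaf1, leaf2));
        Node root = new Node("root", List.of(mid));
        for (List<Node> level : levelsDeepestFirst(root)) {
            // In the real application you would call
            // repository.saveAll(level) here, possibly splitting one
            // level across several threads.
            System.out.println(level.stream().map(Node::id)
                    .collect(Collectors.joining(",")));
        }
    }
}
```

Whether this actually helps depends on how SDN resolves already-persisted children when the parent is saved, so it is worth measuring before committing to it.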


6 REPLIES

TrevorS
Community Team

Hello @mrksph 

Are you still encountering this issue? If you are, please create a ticket at https://github.com/neo4j and reply here with your ticket link so others can also track the progress.

If you were able to solve the issue, can you please reply back with your solution so I may mark it as resolved?

Thank you!

TrevorS
Community Specialist

Hi Trevor

Yes, we are still experiencing the same issue. We haven't yet been able to try another solution, such as saving recursively from the lowest level and then going up.
OK, I'm creating an issue. I suppose you meant to link the Spring Data Neo4j repository, right?

Thank you

Hi @mrksph,

I've got the same issue as you! Could you please share your solution or ticket number?

Hi @mraleksei, here is the issue: https://github.com/spring-projects/spring-data-neo4j/issues/2587

How did you encounter this problem? Maybe we can discuss an alternative solution. I have yet to test the changes they made to improve the save method.


Thanks for the link. As far as I can see, there is no solution yet. I upgraded to the latest Spring version and nothing changed.

I also have a big graph, about 5 levels deep with a lot of leaves (thousands). After turning on trace logs, I can see that there is no Spring magic: each node and relationship is stored separately. It looks like I will write a custom save function with a native query that bulk-inserts the nodes first and then bulk-inserts the relationships.
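A minimal sketch of that bulk-insert approach, under stated assumptions: the Cypher text, the Item label, the CHILD_OF relationship type and the property names are all hypothetical placeholders, not from this thread. The idea is to send batched UNWIND queries instead of letting SDN store each node and relationship separately.

```java
import java.util.*;

// Sketch of the bulk-insert idea: one UNWIND query for nodes, a
// second one for relationships, each fed in fixed-size batches.
// Labels, property names and Cypher text are illustrative assumptions.
public class BulkGraphWriter {

    static final String INSERT_NODES = """
            UNWIND $rows AS row
            CREATE (n:Item {id: row.id, name: row.name})
            """;

    static final String INSERT_RELS = """
            UNWIND $rows AS row
            MATCH (a:Item {id: row.from}), (b:Item {id: row.to})
            CREATE (a)-[:CHILD_OF]->(b)
            """;

    // Split a large parameter list into fixed-size batches so each
    // transaction stays small.
    static <T> List<List<T>> batches(List<T> rows, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            out.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> nodeRows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            nodeRows.add(Map.of("id", i, "name", "node-" + i));
        }
        // With Spring's Neo4jClient (not shown here, to keep this
        // self-contained) each batch would be run roughly as:
        //   neo4jClient.query(INSERT_NODES)
        //       .bind(batch).to("rows")
        //       .run();
        for (List<Map<String, Object>> batch : batches(nodeRows, 1000)) {
            System.out.println("would insert " + batch.size() + " nodes");
        }
    }
}
```

Inserting all nodes before any relationships, as described above, keeps each query simple and lets the MATCH in the relationship query rely on an index on the id property.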

Yes,

I think that's the approach we will follow too.
Could you please keep us updated on how your approach performs?

Thank you!