03-26-2021 10:29 AM
We have built a GDS graph containing our key nodes and relationships, which we want to use for a deep learning algorithm. We found that using a FastRP graph embedding as a feature vector produced predictive power.
However, after running it again, we realised that the Neo4j FastRP implementation was giving us different embedding values on the same graph, without anything having changed. This lack of consistency is worrying. Is there any way to ensure that we get the same embeddings when we run it twice on the same data?
03-29-2021 03:13 PM
FastRP has some stochastic elements in its internal calculations (such as calculating the similarity matrix). What you can do now is write your embeddings back to your database with .write mode, but in our next release (1.6) we have added a randomSeed parameter that will let you generate the same results across multiple runs. The pre-release is here, and the docs are here.
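For reference, a minimal sketch of what this could look like in Cypher, assuming an in-memory graph named 'my-graph', an embedding dimension of 128, and a write property called 'embedding' (these names and values are illustrative, not from the thread):

CALL gds.fastRP.write('my-graph', {
  embeddingDimension: 128,
  randomSeed: 42,            // available from GDS 1.6 onwards
  writeProperty: 'embedding' // persists the embeddings on the nodes
})
YIELD nodePropertiesWritten

With the same graph, configuration, and randomSeed, repeated runs should then produce identical embeddings. On versions before 1.6, writing the embeddings once and reusing the stored node property is the way to keep them stable across your pipeline.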