I have a Community Edition 4.2.5 database that I'd like to be able to query via Scala code. When I pop the db open using the web interface and launch a query, it works great. However, when I launch the same query using Neo4j embedded, no results are returned. The code seems to be connecting to the database, as I don't see a "Database Not Found" error, but I have been unable to retrieve any data from it using this method. If it matters, I'm building the project with SBT version 1.5.0. Code snippet below:
import java.io.File
import org.neo4j.dbms.api.DatabaseManagementServiceBuilder

def search_db(): Unit = {
  // Open the DBMS from the store directory and config file (paths are placeholders).
  val dir = new File("path_to_db_dir")
  val managementService = new DatabaseManagementServiceBuilder(dir)
    .loadPropertiesFromFile("path_to_config_file")
    .build()
  val graphDb = managementService.database("db_name")

  // Run the query in a transaction; this prints "false" even though the
  // same query returns rows in the web interface.
  val transaction = graphDb.beginTx()
  val query: String = "MATCH (n:Page) RETURN n LIMIT 25"
  val result = transaction.execute(query)
  println(result.hasNext())
}
Neo4j dependency in build.sbt:
libraryDependencies ++= Seq(
  "org.neo4j" % "neo4j" % "4.2.5"
)
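For completeness, here is the fuller lifecycle I believe the embedded API expects, iterating the result while the transaction is still open and then committing and shutting down; the paths, database name and label are placeholders just like above:

import java.io.File
import org.neo4j.dbms.api.DatabaseManagementServiceBuilder

object EmbeddedQuery extends App {
  // Placeholder home directory, config path and database name.
  val managementService = new DatabaseManagementServiceBuilder(new File("path_to_db_dir"))
    .loadPropertiesFromFile("path_to_config_file")
    .build()
  try {
    val graphDb = managementService.database("db_name")
    val tx = graphDb.beginTx()
    try {
      val result = tx.execute("MATCH (n:Page) RETURN n LIMIT 25")
      // Consume the rows before the transaction is closed.
      while (result.hasNext()) println(result.next())
      tx.commit()
    } finally tx.close()
  } finally managementService.shutdown()
}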
Hi, my name is Wilson and I'm working at a fintech company. I'm currently researching whether we need to create a graph database. Is it wise to create one really big graph covering several use cases (e.g. transaction relationships, P2P relationships, etc.)? It would have more than ten million nodes, with multiple labels and properties.
This graph database would be used both for daily operations (querying data) and for analytics. An example analytics workload would be Jaccard similarity. How does Neo4j's Jaccard performance compare to a Python package such as SciPy's cdist, and could Neo4j handle, say, 1,000 nodes for Jaccard?
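To make it concrete, by Jaccard I mean |A ∩ B| / |A ∪ B| over, for example, each node's set of counterparties, which for 1,000 nodes is roughly 500,000 pairs. A tiny sketch of the computation I have in mind (the node IDs and neighbour sets below are made up):

object JaccardSketch extends App {
  // Hypothetical input: each node's set of neighbour IDs (e.g. P2P counterparties).
  val neighbours: Map[Long, Set[Long]] = Map(
    1L -> Set(10L, 11L, 12L),
    2L -> Set(11L, 12L, 13L),
    3L -> Set(20L, 21L)
  )

  // Jaccard similarity: size of the intersection over size of the union.
  def jaccard(a: Set[Long], b: Set[Long]): Double = {
    val union = (a union b).size
    if (union == 0) 0.0 else (a intersect b).size.toDouble / union
  }

  // All-pairs comparison: n * (n - 1) / 2 pairs, i.e. ~500,000 for 1,000 nodes.
  val ids = neighbours.keys.toVector
  for {
    i <- ids.indices
    j <- (i + 1) until ids.size
  } println(s"${ids(i)} ~ ${ids(j)}: ${jaccard(neighbours(ids(i)), neighbours(ids(j)))}")
}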
Thanks
I would like to know whether it is possible to handle a few hundred databases (or even more, if possible) with the multi-database feature.
Does anyone have experience with this?
I ran an experiment with default settings on my Mac, and I realised that after roughly 250 databases had been created inside one instance, the server started to struggle and throw "Too many open files" exceptions, and the newly created databases went into a FAILED state.
I tried adjusting the maximum number of open files and the maximum files per process, but it still fails after about that number of databases.
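In case it helps to reproduce, here is a sketch of how the experiment could be scripted with the official Java driver from Scala against a local instance that supports multiple databases; the URI, credentials and database names below are made up:

import org.neo4j.driver.{AuthTokens, GraphDatabase, SessionConfig}
import scala.jdk.CollectionConverters._

object MultiDbExperiment extends App {
  // Made-up connection details for a local instance.
  val driver = GraphDatabase.driver("bolt://localhost:7687", AuthTokens.basic("neo4j", "password"))
  val session = driver.session(SessionConfig.forDatabase("system"))
  try {
    // Create a few hundred databases, then report each database's state.
    (1 to 300).foreach(i => session.run(f"CREATE DATABASE db$i%03d IF NOT EXISTS").consume())
    session.run("SHOW DATABASES").list().asScala.foreach { record =>
      println(s"${record.get("name").asString()} -> ${record.get("currentStatus").asString()}")
    }
  } finally {
    session.close()
    driver.close()
  }
}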
Any hints on this topic?
Hello everyone,
I am working on an academic project where I am developing a recommender system. For now, the recommender system uses matrix factorization to generate embeddings of user-item data and is a general collaborative-filtering recommender system. The architecture also involves deploying the recommender system in Kubernetes. I am trying to understand a few things:
Is a graph database helpful when it comes to a batch recommendation engine? This recommender system uses quite a lot of data and takes 3-4 hours to produce recommendations, and every time a new customer is added it has to regenerate embeddings in order to produce recommendations for that user. So I was wondering whether there is a way to build more efficient collaborative-filtering recommender systems using Neo4j.
I have read a lot about Neo4j producing real-time recommendations, but most of the resources only describe the process used to get those recommendations and don't really talk about deployment. I was wondering how one could deploy this in a production environment on a cloud platform. For example, if I am trying to show products that a customer will like based on their session history (using their hits/clicks), what exactly would the architecture look like? I am using Google Cloud Platform for now, so it would really help if someone could shed light on what the high-level architecture would look like (especially how the recommendations will be shown to consumers).
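To make the use case concrete, the kind of query I picture serving at request time is a simple collaborative-filtering pattern over click data, roughly like the sketch below; the (:User)-[:CLICKED]->(:Product) schema, connection details and IDs are all made up:

import org.neo4j.driver.{AuthTokens, GraphDatabase, Values}
import scala.jdk.CollectionConverters._

object SessionRecommender extends App {
  // Made-up connection details and a made-up click-stream schema.
  val driver = GraphDatabase.driver("bolt://localhost:7687", AuthTokens.basic("neo4j", "password"))
  // "Customers who clicked what you clicked also clicked..." over session history.
  val query =
    """MATCH (u:User {id: $userId})-[:CLICKED]->(:Product)<-[:CLICKED]-(:User)-[:CLICKED]->(rec:Product)
      |WHERE NOT (u)-[:CLICKED]->(rec)
      |RETURN rec.id AS productId, count(*) AS score
      |ORDER BY score DESC LIMIT 10""".stripMargin
  val session = driver.session()
  try {
    val records = session.run(query, Values.parameters("userId", "user-42")).list().asScala
    // Assumes string product ids for the sake of the example.
    records.foreach(r => println(s"${r.get("productId").asString()} (score ${r.get("score").asLong()})"))
  } finally {
    session.close()
    driver.close()
  }
}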
Any help would be really appreciated!
Hello there
I'm actively working on a database project right now and I was wondering if any of you have a quick tip on how to answer the famous question: how many CPU and memory resources will I need for a database of x size, or x nodes and relationships?
Is there a tool or rule of thumb for that? I still have to go through the administration course on this, but I'm running a bit short on time.
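For context, the only estimate I can do myself is a back-of-the-envelope one from the fixed record sizes of the standard store format as I understand them (roughly 15 B per node, 34 B per relationship, 41 B per property), like the sketch below with made-up counts; I don't know how far that gets me for CPU:

object StoreSizeEstimate extends App {
  // Made-up graph size for illustration.
  val nodes: Long          = 10000000L // 10 M nodes
  val relationships: Long  = 50000000L // 50 M relationships
  val propsPerEntity: Long = 4L        // properties per node/relationship

  // Approximate fixed record sizes of the standard store format (long strings/arrays cost extra).
  val storeBytes = nodes * 15 + relationships * 34 + (nodes + relationships) * propsPerEntity * 41

  println(f"Approximate store size: ${storeBytes / math.pow(1024, 3)}%.1f GiB")
  // Memory then splits into page cache (ideally large enough to cover the store plus growth),
  // the JVM heap, and headroom for the OS.
}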
Thank you
Hi,
I am trying to execute a dynamic query, whose number of parameters can vary, against Neo4j using Spring Boot.
I've tried using Neo4jClient and Neo4jTemplate, but the Neo4jTemplate methods don't work for me, so I'm left with Neo4jClient and its awkward mapping system, which is not the easiest way to map a path into a List of objects.
What I'm trying right now is to somehow save the segments and then later retrieve each node separately, but I have yet to see whether this will work.
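For reference, here is a simplified sketch of the kind of mapping I'm attempting with Neo4jClient's fluent API, written in Scala; the Page label, LINKS_TO relationship type, domain case class and Cypher are made up for illustration:

import org.springframework.data.neo4j.core.Neo4jClient
import scala.jdk.CollectionConverters._

// Hypothetical domain class for the nodes along the returned path.
case class PageNode(id: Long, title: String)

class PathQueries(neo4jClient: Neo4jClient) {

  // The parameter map can carry however many values the dynamic query needs.
  def findPaths(params: Map[String, AnyRef]): Iterable[List[PageNode]] =
    neo4jClient
      .query("MATCH p = (a:Page {id: $fromId})-[:LINKS_TO*..5]->(b:Page {id: $toId}) RETURN p")
      .bindAll(params.asJava)
      .fetchAs(classOf[List[PageNode]])
      .mappedBy { (_, record) =>
        // Unpack the driver Path into plain objects while the record is in scope.
        record.get("p").asPath().nodes().asScala
          .map(n => PageNode(n.id(), n.get("title").asString()))
          .toList
      }
      .all()
      .asScala
}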
Any help is appreciated.
Thanks