
Can I expect Neo4j to work faster as I scale up (more expensive hosting)?

Here is a video of an old version of my project.

At the end, when I apply the rule found by the "match subgraph" search (which I brute-force coded in Cypher using a "longest path matching first" approach, via the django-neomodel library and its classes), the rule application takes a while to load.

The slow part is not passing the data, since that's just a short bit of JSON. The slow part is Neo4j / my Django Python code.

So I'm wondering: clearly things will slow down as I add more data to the DB, but will they speed up with more than $5/month hosting?

I expect the rule application to feel instantaneous. I think I could manage that by returning the data first and then writing to the database in the background, since the rule's input graph has already been found, and therefore so has the output graph (connected via a DiagramRule django-neomodel class).

I'm not sure how to do that, though. Regardless, I'd like the search itself to be fast, so my question still stands.
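
For what it's worth, here is a minimal sketch of that "respond first, write later" idea using a background thread (the view and `persist_output_graph` are placeholder names, not my real code; a task queue like Celery would be the sturdier choice):

```python
import threading

from django.http import JsonResponse


def persist_output_graph(output_graph):
    """Placeholder for the real neomodel writes (the slow part)."""
    ...


def apply_rule_view(request):
    # By this point the input graph is already matched, so the output
    # graph is already known; this dict stands in for that result.
    output_graph = {"nodes": [], "edges": []}

    # Kick off the Neo4j writes in the background...
    threading.Thread(
        target=persist_output_graph, args=(output_graph,), daemon=True
    ).start()

    # ...and return the already-computed result to the client immediately.
    return JsonResponse(output_graph)
```

One caveat I'm aware of: a daemon thread dies with the process, so writes can be lost on a restart, which is why a proper task queue would be safer.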

Also, does anyone know of a faster way than the obvious "search for longest paths first / memoize nodes" method in Cypher? I have never used a Neo4j community plugin. The matching needs to work with regexes, since that's how it's currently variable-substitution aware.
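
For reference, the queries I mean are roughly this shape (illustrative only; the :Node label, :CONNECTS relationship type, and $pattern parameter are placeholders, not my actual schema):

```python
from neomodel import db

query = """
MATCH p = (start:Node)-[:CONNECTS*1..8]->(end:Node)
WHERE ALL(n IN nodes(p) WHERE n.name =~ $pattern)
RETURN p
ORDER BY length(p) DESC
"""

# Longest candidate paths come back first, so the first row wins.
results, meta = db.cypher_query(query, {"pattern": "var_.*"})
```

(From what I've read, APOC's path-expander procedures are the community plugin people usually suggest for this kind of traversal, but I haven't tried them.)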

Let me know if you'd like to see the code that does the rule matching on the user's drawn diagram and I'll grab it from GitHub.

3 REPLIES

Hello

The generic answer is to create appropriate indexes so that queries execute faster. But you need to PROFILE the query and understand the relationships involved before you tune anything.
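
A sketch of what that looks like through neomodel (the :Node label and name property are placeholders for whatever the profile points at):

```python
from neomodel import db

# Neo4j 3.x index syntax; 4.x+ uses: CREATE INDEX FOR (n:Node) ON (n.name)
db.cypher_query("CREATE INDEX ON :Node(name)")
```

To read the plan, prefix the query with PROFILE in the Neo4j Browser and check whether it shows an index seek or a full label scan. One caveat: as far as I know, regex predicates (=~) cannot use a schema index, so an index will only help once the filter becomes an equality, STARTS WITH, or range predicate.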

Thanking you
Yours faithfully
Sameer S Gijare

Hi,
If you control the machines hosting the database, you can try increasing the memory on those machines, in addition to profiling the queries and adding indexes.
You can also connect the database to a monitoring tool: by watching for CPU peaks, and whatever trigger produces them, you can isolate and treat those peaks individually and understand the root cause.
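
For reference, the memory settings live in neo4j.conf; these keys are from the 3.x config, and the values below are purely illustrative, to be sized against the machine's RAM:

```
dbms.memory.heap.initial_size=2g
dbms.memory.heap.max_size=2g
dbms.memory.pagecache.size=4g
```
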
Regards
Rafi

Hello

You can observe the CPU load and analyze the entities in your data model yourself, aiming for the outcome your company wants.

Thanking you
Sameer Sudhir Gijare