
Receive data via bolt from remote machine?

Hello folks,

I have a setup of 4 PCs, 1 master and 3 slaves, that communicate via RabbitMQ. Each of them produces small amounts of data, so they run small queries, adding one relationship to a node at a time whenever they find something interesting, e.g.:

MERGE (:Person {name:'Andrei'})-[:LOVES]->(:DB {name:'Neo4j'})
(The queries are no more complex than this)

The master PC hosts the Neo4j server (version 3.5.12). In the development environment everything was on the same PC, so the processes were sending data through the Bolt driver in Python to the URL bolt://127.0.0.1:7687. My question is this:

If I change the URL from 127.0.0.1 to my master machine's IP, will the data be sent from the slave to the master, and will everything work the same? (I can't test this myself, as opening ports involves a long bureaucratic process at my company, so I need to be sure which ones I really need to open.)
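For context, here is a minimal sketch of what the slave-side code could look like after the change. The master IP (192.168.1.10), the credentials, and the function names are placeholders I made up; only the host in the Bolt URL differs from the localhost setup.

```python
# Sketch of sending the MERGE from a slave to the master over Bolt.
# Host, user, and password below are placeholder assumptions.

def bolt_uri(host, port=7687):
    """Build the Bolt URL; only the host changes vs. the localhost setup."""
    return "bolt://{}:{}".format(host, port)

def send_relationship(host, user, password, person, db_name):
    # Imported here so the pure helper above works even without the
    # driver installed (pip install neo4j).
    from neo4j import GraphDatabase

    # Parameterized version of the query from the post.
    query = "MERGE (:Person {name: $person})-[:LOVES]->(:DB {name: $db})"
    driver = GraphDatabase.driver(bolt_uri(host), auth=(user, password))
    try:
        with driver.session() as session:
            session.run(query, person=person, db=db_name)
    finally:
        driver.close()

if __name__ == "__main__":
    # Same call as in development, just pointing at the master
    # instead of 127.0.0.1.
    send_relationship("192.168.1.10", "neo4j", "secret", "Andrei", "Neo4j")
```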

And about performance: if I have around 15 scripts on every slave, each sending around 10 requests every second, will the master be able to handle all of those 250/500 requests? Or should I add a new RabbitMQ consumer on the master PC, so that my slaves send the queries there and this new consumer (or consumers) then sends the data to localhost, as I did before?
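To make the alternative concrete, a consumer on the master could look roughly like this. The queue name ("queries"), the JSON message format, and the credentials are all my own assumptions for illustration, not part of the existing setup.

```python
# Rough sketch: a consumer on the master drains a RabbitMQ queue and
# replays each message against the local Neo4j, so the slaves never
# talk to Bolt directly. Queue name, message shape, and credentials
# are assumptions.
import json

def decode_petition(body):
    """Messages are assumed to be JSON like {"query": ..., "params": {...}}."""
    msg = json.loads(body)
    return msg["query"], msg.get("params", {})

def run_consumer(rabbit_host="localhost", queue="queries"):
    import pika                      # pip install pika
    from neo4j import GraphDatabase  # pip install neo4j

    driver = GraphDatabase.driver("bolt://127.0.0.1:7687",
                                  auth=("neo4j", "secret"))

    def on_message(ch, method, properties, body):
        query, params = decode_petition(body)
        with driver.session() as session:
            session.run(query, **params)
        # Ack only after the write succeeded, so a crash mid-write
        # leaves the message in the queue for redelivery.
        ch.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters(rabbit_host))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)  # survive broker restarts
    channel.basic_consume(queue=queue, on_message_callback=on_message)
    channel.start_consuming()

if __name__ == "__main__":
    run_consumer()
```

Since delivery matters more than latency here, a durable queue with manual acks is the part doing the real work: nothing is acknowledged until the write has landed in Neo4j.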

PS: there's no need for the data to be processed instantly; all I want is to make sure it's delivered.

I'm open to any other suggestion.

Thank you for your attention,

1 ACCEPTED SOLUTION

You may need to edit your neo4j.conf with dbms.connectors.default_listen_address=0.0.0.0 to make sure it's not just listening for localhost connections, but otherwise, yes, it will work the same.
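For reference, the relevant neo4j.conf lines for 3.5 might look like the following; the second line is optional (7687 is already the Bolt default) but makes explicit which port needs to be opened in the firewall:

```
# neo4j.conf — listen on all interfaces instead of just localhost
dbms.connectors.default_listen_address=0.0.0.0
# Bolt listens on port 7687 by default — this is the port to open
dbms.connector.bolt.listen_address=:7687
```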

As long as the server machine isn't older than sand or bogged down by other processes, it should be able to handle that many queries just fine — just make sure you index any properties you're using in the MERGE clause in your query to help it out as the dataset grows.
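In Neo4j 3.5 syntax, indexing the properties used in the MERGE from the question would look like this (one index per label/property pair):

```cypher
// Neo4j 3.5 index syntax — speeds up the MERGE lookups as data grows
CREATE INDEX ON :Person(name);
CREATE INDEX ON :DB(name);
```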


2 REPLIES


Thank you a lot for your answer, and for the tip!