
Unpredictable connection errors with driver transactions

I have a batch of data imports for Neo4j that run through the neo4j-driver for JavaScript. The script executes a whole series of queries that import a lot of data.

My problem is that I often receive random "Neo4jError: connect ECONNREFUSED" errors. Quite often, some imports succeed and then suddenly a whole bunch fail. I run the script again, and some succeed while others fail. At other times, they all succeed. It's incredibly frustrating to have these frequent, unpredictable errors. And before you think I'm on a really unreliable internet connection: I'm running Neo4j locally in a Docker container, and the script runs locally in that same Docker container.

To combat this, I've already switched from reusing the same session for everything to creating a separate session per query and using transactions. From what I understand, a transaction function should retry the query a number of times, and indeed I see five logs from my own error handler before the driver gives up.

I've tried playing around with connectionTimeout and maxTransactionRetryTime, but with no success.
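For reference, those knobs (and a couple of related pool settings) are passed as the third argument to neo4j.driver(). A sketch with hypothetical values, not a recommendation for any particular numbers:

```javascript
const neo4j = require('neo4j-driver');

// URL and credentials are placeholders; all values below are examples.
const driver = neo4j.driver(
  'bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password'),
  {
    maxConnectionPoolSize: 100,           // cap on pooled connections
    connectionAcquisitionTimeout: 60000,  // ms to wait for a free pooled connection
    connectionTimeout: 30000,             // ms to establish a TCP connection
    maxTransactionRetryTime: 30000,       // ms a transaction function keeps retrying
  }
);
```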

I'm utterly baffled as to why this keeps going wrong so often. I don't run the queries in parallel, either; I async/await literally every query (partly because some imports depend on data from previous imports).

Here's the function that gets called for every query:

const runQuery = (query, params) => {
  return dbSession().writeTransaction((tx) => {
    return tx.run(query, params)
      .then(successHandler, errorHandler);
  });
};

const dbSession = () => {
  const driver = neo4j.driver(neo4jUrl, neo4j.auth.basic(login, password));
  return driver.session();
};

Did I do something terribly wrong that this connection within a local Docker container is so incredibly unstable? It's not that the configuration is wrong; quite often it works fine. But sometimes it doesn't. I'm utterly baffled what could cause this. Is there a way to improve the reliability of this connection? Should I reuse sessions instead of creating a new one every time?

2 REPLIES

From one of my colleagues

A few things strike me as odd in their sample code:

1) They're creating a new driver and returning a session, which implies they create a new driver every time they call runQuery(). That doesn't make sense.
2) The session is never closed.
3) It's not clear how they're using runQuery, but the batchy bulk success/failure they report looks like the classic async/await issue where a mistake causes the work to queue up all at once instead of advancing fluidly.
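The queueing pitfall in point 3 can be sketched with a stub runQuery (hypothetical, no database involved) so the ordering is visible:

```javascript
// Records the interleaving of query starts and ends.
const order = [];

// Stand-in for a real query; just simulates a bit of async I/O.
const runQuery = async (query) => {
  order.push(`start:${query}`);
  await new Promise((resolve) => setTimeout(resolve, 10));
  order.push(`end:${query}`);
};

// Pitfall: .map() starts every query immediately, so they all compete
// for connections, even though each callback awaits internally.
const importAllAtOnce = (queries) =>
  Promise.all(queries.map((q) => runQuery(q)));

// Intended behaviour: a plain loop awaits each query before starting
// the next, matching "I async/await literally every query".
const importSequentially = async (queries) => {
  for (const q of queries) {
    await runQuery(q);
  }
};
```

With two queries, importSequentially produces start:a, end:a, start:b, end:b, while importAllAtOnce produces start:a, start:b, end:a, end:b; if the real code accidentally uses the second shape, every import hits the connection pool at once.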

So please:

  • create only one driver in your application and reuse it
  • create a new session per unit of work and close the session after the work is done
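Applied to the question's runQuery, the two rules above might look like this sketch (neo4jUrl, login, and password are the same placeholders used in the question; successHandler/errorHandler omitted for brevity):

```javascript
const neo4j = require('neo4j-driver');

// One driver for the whole application lifetime.
const driver = neo4j.driver(neo4jUrl, neo4j.auth.basic(login, password));

// One short-lived session per unit of work, always closed afterwards.
const runQuery = async (query, params) => {
  const session = driver.session();
  try {
    return await session.writeTransaction((tx) => tx.run(query, params));
  } finally {
    await session.close();
  }
};

// When the import script finishes:
// await driver.close();
```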

Makes sense. I can see how having too many sessions open could be a problem. Creating only a single driver and closing each session should be simple enough.

The speed at which this executes isn't a big problem; it's a nightly import from various other systems. It's more important that some queries don't start before others have finished than that it runs fast.