Heads up! These forums are read-only. All users and content have migrated. Please join us at community.neo4j.com.

C# Driver - Session.CloseAsync() not closing the connections effectively

nijas_rawther
Node Link

We are using the .NET driver for Neo4j.

Environment: .NET 6
Neo4j Server version: 4.3.2
Driver version: Neo4j.Driver 4.4.0

We create a single driver instance with the following code snippet and reuse it across all sessions.

 

Neo4j.Driver.IDriver _driver = GraphDatabase.Driver("neo4j://*.*.*.*:7687", AuthTokens.Basic("neo4j", "*****"));

 

And we open and close a session for each transaction like this:

 

var session = _driver.AsyncSession(o => o.WithDatabase("pdb00"));
try
{
    return await session.ReadTransactionAsync(async tx =>
    {
        var result = await tx.RunAsync(query, parameters);

        var res = await result.ToListAsync();
        var counters = await result.ConsumeAsync();

        Console.WriteLine("Time taken to read query " + index + ": " + counters.ResultConsumedAfter.TotalMilliseconds);
        return res;
    });
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    throw;
}
finally
{
    await session.CloseAsync();
}

 

However, when we monitor the number of active connections to the Neo4j server using the following command:

 

CALL dbms.listConnections()

 

We see as many connections as sessions created, and the connections are not dropped until the driver is closed.

For instance, if we run the transaction 100 times, the number of active connections increases by 100 and stays there, even though Session.CloseAsync() is invoked for each session.

After Driver.CloseAsync() is invoked, all the connections are dropped.

Under heavy load, we run into server overload and port exhaustion because of this behavior.
Snapshot of current connections:


Are we missing something here ?

Thanks in advance. 


Hi,

The driver uses connection pooling. When a session is created it takes a connection from the pool; if none is available, a new one is added to the pool and used. If you have used 100 sessions simultaneously, your driver's connection pool will contain 100 connections. On closing the session, any open transactions are disposed of, any streaming results are disposed of, and the connection is returned to the pool.

You can configure the behaviour of the connection pool to control resource usage when creating the driver, using:

 

ConfigBuilder.WithMaxConnectionPoolSize //Defaults to 100.
ConfigBuilder.WithMaxIdleConnectionPoolSize //Defaults to infinite.
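
A minimal sketch of setting both options when creating the driver (the URI, credentials, and pool values here are illustrative, not recommendations):

Neo4j.Driver.IDriver driver = GraphDatabase.Driver(
    "neo4j://localhost:7687",
    AuthTokens.Basic("neo4j", "password"),
    builder => builder
        .WithMaxConnectionPoolSize(50)        // hard cap on connections the driver will hold open
        .WithMaxIdleConnectionPoolSize(10));  // idle connections kept around for reuse

With these values, up to 50 connections can be in use at once, but once returned to the pool only 10 idle connections are retained.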

 

 

It is worth noting that from 4.4 onwards the session object implements IAsyncDisposable, which helps reduce boilerplate code when using sessions. E.g.

 

await using var session = _driver.AsyncSession(o => o.WithDatabase("pdb00"));
return await session.ReadTransactionAsync(async tx =>
{
    var result = await tx.RunAsync(query, parameters);

    var res = await result.ToListAsync();
    var counters = await result.ConsumeAsync();

    Console.WriteLine("Time taken to read query " + index + ": " +
        counters.ResultConsumedAfter.TotalMilliseconds);
    return res;
});

 

Hope this helps.

Thanks for the quick reply.

As per your comment:
"On closing the session, any open transactions are disposed of, any streaming results are disposed of, and the connection is returned to the pool."

This is currently not happening. Even though we close each session in the finally block, the connections are still not released until Driver.CloseAsync() is called when the application shuts down.

We expect the number of connections in use to come down as soon as the respective sessions are closed.

The results and transactions make use of the connection, but they have a different lifecycle from it.

The connection is not closed when it is no longer in use by a session; it is returned to the connection pool that the driver maintains. This improves performance: when a session needs a connection and there is an unused one in the pool, it can use that without the overhead of opening and verifying a new one.

To control the number of connections that you are seeing the driver holding open use the two configuration options I mentioned above.

ConfigBuilder.WithMaxConnectionPoolSize //Defaults to 100.
ConfigBuilder.WithMaxIdleConnectionPoolSize //Defaults to infinite.

The first is a hard limit: the driver will not hold more connections open than this value. The second is probably the configuration you should change. Once a connection is returned to the pool it is considered idle; if there are too many idle connections, the number kept open is reduced to the configured value.

Thanks for the response. 

So in a high-traffic environment with lots of parallel reads and writes, what is the ideal configuration for MaxConnectionPoolSize and MaxIdleConnectionPoolSize?

Can we max them both out and give a 5-second idle connection timeout using the ConnectionIdleTimeout setting?

Something like:

Neo4j.Driver.IDriver _driver = GraphDatabase.Driver("neo4j://*.*.*.*:7687", AuthTokens.Basic("neo4j", "*****"),
    config =>
    {
        config.WithMaxConnectionPoolSize(Neo4j.Driver.Config.Infinite);
        config.WithMaxIdleConnectionPoolSize(Neo4j.Driver.Config.Infinite);
        config.WithConnectionIdleTimeout(new TimeSpan(0, 0, 5));
    });


Will the ConnectionIdleTimeout value help us close idle connections after 5 seconds, so they are not kept around longer than really required?

Can you please advise? Thanks.

Yes, adjusting the idle timeout would help. An alternative is to reduce the number of idle connections allowed in the pool, though this may add overhead if the figure is too low, as new connections will be created more often than necessary. As with anything like this, some experimentation with the values and fine-tuning to your environment can pay off.
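
As a sketch of that alternative, here is a bounded-pool variant of your snippet: cap the idle pool rather than leaving both limits infinite, combined with a less aggressive idle timeout (the URI, credentials, and all values below are illustrative placeholders to tune for your environment):

Neo4j.Driver.IDriver driver = GraphDatabase.Driver(
    "neo4j://localhost:7687",
    AuthTokens.Basic("neo4j", "password"),
    config =>
    {
        config.WithMaxConnectionPoolSize(200);                       // hard upper bound under peak load
        config.WithMaxIdleConnectionPoolSize(25);                    // trim idle connections beyond this
        config.WithConnectionIdleTimeout(TimeSpan.FromSeconds(30));  // close connections idle longer than this
    });

Bounding the idle pool keeps a warm set of connections for steady traffic, while the timeout cleans up the extra connections opened during bursts.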