
Bloom 1.6.1 Poor Connection Issue

andy_hegedus
Graph Fellow

Hi,
Installed Bloom 1.6.1 along with Desktop 1.4.4.
In Bloom, a custom search that previously worked no longer returns results. Instead I am getting a port connection retrying warning. The database is local.
Andy

15 REPLIES

Hi @andy.hegedus

I installed Bloom 1.6.1 and Desktop 1.4.4 on macOS 11.3.
I created a simple custom search against the local Movie database.
It worked correctly.
I do not get the port connection retrying warning.

Can you view the DB in a browser like Chrome?
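
If Browser also struggles to connect, it may be worth checking the Bolt connector settings in neo4j.conf. These are the 4.x defaults, shown here only as a reference point, not a confirmed cause:

dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7687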

Hi Koji,
My objective is to use Bloom for some visualization studies. What worked before the update is now giving this:

(screenshots of the connection retry warning)

It seems intermittent. I got the error a few times; relaunching Bloom a couple of times got it to work.
I am executing this custom search.
MATCH (a:cpc {subgroup: $cpcstart})
CALL apoc.path.expand(a, "Reports_to", ">cpc", 1, $numlevels)
YIELD path
RETURN path
Andy

Hi @andy.hegedus

How about testing with a simple Cypher query first?

MATCH (a:cpc {subgroup: $cpcstart})
RETURN a
LIMIT 25
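
To run your original search in Neo4j Browser, you can set the parameters by hand first. The values below are placeholders only; substitute values that exist in your data.

:param cpcstart => 'A01B'
:param numlevels => 3

MATCH (a:cpc {subgroup: $cpcstart})
CALL apoc.path.expand(a, "Reports_to", ">cpc", 1, $numlevels)
YIELD path
RETURN path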

By the way, do you find the answers here helpful?

Hi,
Still getting poor connection issues. This is a local database on my Mac system. Single user.
Andy

Hi @andy.hegedus

You can profile the query with EXPLAIN or PROFILE.

PROFILE MATCH (a:cpc {subgroup: $cpcstart})
CALL apoc.path.expand(a, "Reports_to", ">cpc", 1, $numlevels)
YIELD path
RETURN path

If the search takes too long, add an index.

CREATE INDEX cpc_subgroup FOR (n:cpc) ON (n.subgroup)
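
In the PROFILE output, a NodeIndexSeek on :cpc(subgroup) means the index is being used; a NodeByLabelScan means it is not. You can also list the indexes that already exist (SHOW INDEXES needs Neo4j 4.2+; on older versions use CALL db.indexes()):

SHOW INDEXES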

If the query is not the problem and your memory settings are low, you can change them in neo4j.conf.

Hi Koji,

I already have indexes set up. It is sometimes hard to reproduce.


I do get it on a query that reads the company name property. Note there are only 1267 company nodes, so it should not be a heavy lift.

My typical workaround is to close and reopen Bloom. It sometimes takes two or three tries, usually when I am doing a demo for a client.

Andy

Hi @andy.hegedus

What are the values for each of these in your neo4j.conf?
The following are the default values.

dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=1G

# The amount of memory to use for mapping the store files.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 50% of RAM minus the Java heap size.
dbms.memory.pagecache.size=512m
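
You can also read the effective values from the running database instead of the file. On 4.x the prefix filter is optional:

CALL dbms.listConfig('dbms.memory')
YIELD name, value
RETURN name, value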

By the way, are your Neo4j database and Bloom on the same PC, not remote?

I am getting the same issue on Bloom 1.7.0.

andy_hegedus
Graph Fellow

Hi Koji,

dbms.memory.heap.initial_size=2G
dbms.memory.heap.max_size=5G

The database is local, accessed via the Desktop browser and Bloom.

Andy

Hi @andy.hegedus

You can get a memory settings recommendation like this:

Neo4j Desktop > Project > Database > ... > Terminal

$ bin/neo4j-admin memrec
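
If you want a recommendation for a specific RAM budget rather than for the whole machine, memrec in 4.x also accepts a --memory flag:

$ bin/neo4j-admin memrec --memory=16g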

This is the memory settings recommendation for one of my databases.

# Based on the above, the following memory settings are recommended:
dbms.memory.heap.initial_size=12g
dbms.memory.heap.max_size=12g
dbms.memory.pagecache.size=12g
#
# It is also recommended turning out-of-memory errors into full crashes,
# instead of allowing a partially crashed database to continue running:
dbms.jvm.additional=-XX:+ExitOnOutOfMemoryError
#
# The numbers below have been derived based on your current databases located at: '/Users/koji/Library/Application Support/com.Neo4j.Relate/Data/dbmss/dbms-<something>/data/databases'.
# They can be used as an input into more detailed memory analysis.
# Total size of lucene indexes in all databases: 0k
# Total size of data and native indexes in all databases: 920400k

Hi Koji,

# Based on the above, the following memory settings are recommended:
dbms.memory.heap.initial_size=15g
dbms.memory.heap.max_size=15g
dbms.memory.pagecache.size=16g
#
# It is also recommended turning out-of-memory errors into full crashes,
# instead of allowing a partially crashed database to continue running:
dbms.jvm.additional=-XX:+ExitOnOutOfMemoryError
#
# The numbers below have been derived based on your current databases located at: '/Users/andreashegedus/Library/Application Support/com.Neo4j.Relate/Data/dbmss/dbms-f42ddd4f-37ef-4b85-b62a-01682f3e7323/data/databases'.
# They can be used as an input into more detailed memory analysis.
# Total size of lucene indexes in all databases: 0k
# Total size of data and native indexes in all databases: 145100k

Andy

Hi @andy.hegedus

It is better to allocate a little more memory, and to keep the initial and maximum heap settings the same from the beginning.

dbms.memory.heap.initial_size=15g
dbms.memory.heap.max_size=15g
dbms.memory.pagecache.size=16g

If your PC only has about 16GB of memory, you should set these a little lower (10g, perhaps).

andy_hegedus
Graph Fellow

Hi Koji,

This Mac has 40GB of built-in memory.

I have set the memory allocations to the recommended values. I will try it and report back. The issue is intermittent, so I may need to test for a while before a resolution is clear.

Andy

Hi @andy.hegedus

I think the total size of your data and native indexes is not so big.
I hope it will work well.

andy_hegedus
Graph Fellow

After some testing with the higher memory allocations, I can say it is improved but not completely resolved.
It still seems to lose the connection when I am using a custom query whose parameter references a label key. In my test cases the number of nodes being searched is rather modest, either 12,000 or 70,000.
Andy