04-29-2021 12:33 PM
Hi,
Installed Bloom 1.6.1 along with Desktop 1.4.4.
In Bloom, a custom search that previously worked no longer returns results. I am getting a port connection retrying warning. The database is local.
Andy
04-29-2021 01:56 PM
I installed Bloom 1.6.1, Desktop 1.4.4. on macOS 11.3.
And created a simple custom search for local Movie.
It worked correctly.
I did not get the port connection retrying warning.
Are you able to view the DB in a browser such as Chrome?
04-29-2021 02:56 PM
Hi Koji,
My objective is to use Bloom for some visualization studies. What worked before the update is now giving this:
It seems intermittent. I got the error a few times. Relaunched Bloom a couple of times and got it to work.
I am executing this custom search.
MATCH (a:cpc {subgroup: $cpcstart})
CALL apoc.path.expand(a, "Reports_to", ">cpc", 1, $numlevels)
YIELD path
RETURN path
Andy
04-29-2021 07:45 PM
How about testing with a simple Cypher first?
MATCH (a:cpc {subgroup: $cpcstart})
RETURN a
LIMIT 25
By the way, do you find the answers here helpful?
07-09-2021 08:24 AM
Hi,
Still getting poor connection issues. This is a local database on my Mac system. Single user.
Andy
07-10-2021 03:08 PM
You can profile the query with EXPLAIN or PROFILE.
PROFILE MATCH (a:cpc {subgroup: $cpcstart})
CALL apoc.path.expand(a, "Reports_to", ">cpc", 1, $numlevels)
YIELD path
RETURN path
If the search takes too long, add an index.
CREATE INDEX cpc_subgroup FOR (n:cpc) ON (n.subgroup)
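If you are unsure whether the index already exists and is online, you can check before creating it. A minimal sketch, assuming Neo4j 4.x (on 3.x, use `CALL db.indexes()` instead of `SHOW INDEXES`):

```cypher
// List all indexes with their state (Neo4j 4.2+ syntax);
// the cpc_subgroup index should show state "ONLINE".
SHOW INDEXES;

// Create the index only if it is not already there (4.1.3+).
CREATE INDEX cpc_subgroup IF NOT EXISTS FOR (n:cpc) ON (n.subgroup);
```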
If the query itself is fast but your memory settings are low, you can change neo4j.conf.
07-10-2021 03:18 PM
Hi Koji,
I already have indexes set up. It is sometimes hard to reproduce.
My typical workaround is to close Bloom and reopen. It sometimes takes two or three tries, usually when I am doing a demo for a client.
Andy
07-10-2021 03:56 PM
What are the values for each of these in neo4j.conf?
The following are the default values.
dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=1G
# The amount of memory to use for mapping the store files.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 50% of RAM minus the Java heap size.
dbms.memory.pagecache.size=512m
By the way, are your Neo4j database and Bloom on the same PC, not remote?
07-10-2021 12:20 PM
I am getting the same issue on Bloom 1.7.0.
07-10-2021 04:33 PM
Hi Koji,
dbms.memory.heap.initial_size=2G
dbms.memory.heap.max_size=5G
The database is local, accessed via the Desktop browser and Bloom.
Andy
07-10-2021 06:13 PM
You can get a memory settings recommendation:
Neo4j Desktop > Project > Database > ... > Terminal
$ bin/neo4j-admin memrec
This is the memory settings recommendation for one of my databases.
# Based on the above, the following memory settings are recommended:
dbms.memory.heap.initial_size=12g
dbms.memory.heap.max_size=12g
dbms.memory.pagecache.size=12g
#
# It is also recommended turning out-of-memory errors into full crashes,
# instead of allowing a partially crashed database to continue running:
dbms.jvm.additional=-XX:+ExitOnOutOfMemoryError
#
# The numbers below have been derived based on your current databases located at: '/Users/koji/Library/Application Support/com.Neo4j.Relate/Data/dbmss/dbms-<something>/data/databases'.
# They can be used as an input into more detailed memory analysis.
# Total size of lucene indexes in all databases: 0k
# Total size of data and native indexes in all databases: 920400k
07-10-2021 07:42 PM
Hi Koji,
dbms.memory.heap.initial_size=15g
dbms.memory.heap.max_size=15g
dbms.memory.pagecache.size=16g
dbms.jvm.additional=-XX:+ExitOnOutOfMemoryError
Andy
07-10-2021 10:11 PM
It is better to allocate a little more memory and to keep the initial and maximum heap settings the same from the beginning.
dbms.memory.heap.initial_size=15g
dbms.memory.heap.max_size=15g
dbms.memory.pagecache.size=16g
If your PC has about 16GB of memory, you can set it a little lower (10g??).
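As a rough budget (a rule of thumb, not an official formula): heap + page cache + everything else (OS, Desktop, Bloom) should fit within physical RAM. An illustrative neo4j.conf fragment for a ~16GB machine (values are assumptions for the example, not memrec output):

```
# Illustrative sizing for a ~16GB machine:
# heap (5g) + pagecache (5g) leaves roughly 6GB for the OS, Desktop, and Bloom.
dbms.memory.heap.initial_size=5g
dbms.memory.heap.max_size=5g
dbms.memory.pagecache.size=5g
```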
07-11-2021 05:51 AM
Hi Koji,
This Mac has 40GB of built-in memory.
I have set the memory allocations to the recommended values. I will try and report back. The issue is intermittent, so I may need to test for a while before a resolution is clear.
Andy
07-11-2021 04:11 PM
I think the total size of data and native indexes is not so big.
I hope it will work well.
07-20-2021 08:57 AM
After some testing with the higher memory allocations, I can say it is improved but not completely resolved.
It seems to lose connection when I am using a custom query and the parameter is referencing a label-key. In my test case the number of nodes being searched is still rather modest, either 12,000 or 70,000.
Andy