05-04-2022 03:50 AM
{message: The allocation of an extra 15.1 MiB would use more than the limit 100.0 MiB. Currently using 88.4 MiB. dbms.memory.transaction.global_max_size threshold reached}
Am I not paying for 1 GB? Am I doing something terribly stupid or wrong? I've read the graph databases book (the one with the octopus). What else should I read?
I've fallen into this beautiful rabbit hole and find new insights each day, but I have to deliver and be practical.
I'd appreciate any practical guidance and book/learning-resource recommendations.
Again, sorry if this is the wrong place to ask!
server_uri: neo4j+s://id:7687
admin_user: neo4j
admin_pass: pass
files:
  # concepts
  - url: /home/gocandra/workspace/uma/deep-learning/research/graphs/snomed-loader/csv/Concept_Snapshot.csv
    compression: none
    skip_file: false
    chunk_size: 100
    cql: |
      WITH $dict.rows as rows UNWIND rows as row
      MERGE (c:Concept {conceptId:row.id,term:row.term,descType:row.descType})
      ON CREATE SET c.conceptId = row.id, c.term = row.term, c.descType = row.descType
      ON MATCH SET c.conceptId = row.id, c.term = row.term, c.descType = row.descType
  ## concept synonym generator
  - url: /home/gocandra/workspace/uma/deep-learning/research/graphs/snomed-loader/csv/Concept_Snapshot_add.csv
    compression: none
    skip_file: false
    chunk_size: 50
    cql: |
      WITH $dict.rows as rows UNWIND rows as row
      MATCH (dest:Concept) WHERE dest.conceptId = row.id
      CREATE (c:Concept:Synonym {
        conceptId: row.id,
        term: row.term,
        descType: row.descType
      })-[r:IS_A {
        relId: '116680003',
        term: 'Is a (attribute)',
        descType: '900000000000003001'
      }]->(dest);
  # relationships
  - url: /home/gocandra/workspace/uma/deep-learning/research/graphs/snomed-loader/csv/Concept_Snapshot_add.csv
    compression: none
    skip_file: false
    chunk_size: 50
    cql: |
      WITH $dict.rows as rows UNWIND rows as row
      MATCH (source:Concept) WHERE source.conceptId = row.sourceId
      MATCH (dest:Concept:FSA) WHERE dest.conceptId = row.destinationId
      CREATE (source)-[r:row.relLabel {relId: row.typeId, term: row.term, descType: row.descType}]->(dest)
That's the config.yml with all the queries (I'm chunking to try to avoid this issue).
{code: Neo.TransientError.General.MemoryPoolOutOfMemoryError} {message: The allocation of an extra 7.3 MiB would use more than the limit 100.0 MiB. Currently using 99.0 MiB. dbms.memory.transaction.global_max_size threshold reached}
Now I get this error. I'm not running any other queries on the database, nor is anyone else (I'm the only one with credentials).
05-10-2022 04:21 AM
Hi @gocampo!
You may want to check the RAM usage of your queries. Your RAM is also used for the page cache.
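For example, while an import chunk is running, you can list the active queries and what they have allocated so far. A sketch, assuming Neo4j 4.x, where per-query allocation tracking is on by default (allocatedBytes may be null if dbms.track_query_allocation is disabled):

// List running queries ordered by tracked heap allocation
CALL dbms.listQueries()
YIELD queryId, query, allocatedBytes, elapsedTimeMillis
RETURN queryId,
       left(query, 80) AS queryStart,
       allocatedBytes / 1024 / 1024 AS allocatedMiB,
       elapsedTimeMillis
ORDER BY allocatedMiB DESC;

That should show you which of the import blocks is the one pushing against the transaction memory limit.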
Bennu
06-02-2022 04:51 AM - edited 06-02-2022 05:03 AM
You can try to reduce your chunk sizes.
It might also be good to MERGE on a single property (backed by a uniqueness constraint) only. Here, since your id is row.id, the other fields should not be part of the MERGE but go into an ON CREATE SET:
MERGE (c:Concept {conceptId:row.id}) ON CREATE SET ...
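A sketch of that change applied to the first import block. The constraint name is made up for illustration, and the FOR ... REQUIRE constraint syntax assumes Neo4j 4.4+ (older versions use ON ... ASSERT); run the constraint once before the import:

// Unique constraint so the MERGE lookup is an index seek, not a scan
CREATE CONSTRAINT concept_id IF NOT EXISTS
FOR (c:Concept) REQUIRE c.conceptId IS UNIQUE;

// Merge on the key only; fill in the remaining fields on create
WITH $dict.rows AS rows UNWIND rows AS row
MERGE (c:Concept {conceptId: row.id})
ON CREATE SET c.term = row.term, c.descType = row.descType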
Do you see which of the import queries causes the memory issue?
Sometimes AuraDB Free works better in terms of memory limits, as it doesn't have to support a clustered environment; give it a try.
06-02-2022 08:46 AM
I noticed one thing with your query:
# relationships
- url: /home/gocandra/workspace/uma/deep-learning/research/graphs/snomed-loader/csv/Concept_Snapshot_add.csv
  compression: none
  skip_file: false
  chunk_size: 50
  cql: |
    WITH $dict.rows as rows UNWIND rows as row
    MATCH (source:Concept) WHERE source.conceptId = row.sourceId
    MATCH (dest:Concept:FSA) WHERE dest.conceptId = row.destinationId
    CREATE (source)-[r:row.relLabel {relId: row.typeId, term: row.term, descType: row.descType}]->(dest)
The [r:row.relLabel] won't resolve. Pyingest uses regular Cypher parameter substitution, which can't supply a relationship type. To create a relationship with a dynamic type, you need something like apoc.create.relationship.
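A sketch of the relationships cql rewritten that way (assumes apoc.create.relationship is available on your Aura instance; Aura exposes a subset of APOC, so worth verifying):

WITH $dict.rows AS rows UNWIND rows AS row
MATCH (source:Concept) WHERE source.conceptId = row.sourceId
MATCH (dest:Concept:FSA) WHERE dest.conceptId = row.destinationId
// The relationship type comes from the row, so it has to go through the procedure
CALL apoc.create.relationship(source, row.relLabel,
  {relId: row.typeId, term: row.term, descType: row.descType}, dest)
YIELD rel
RETURN count(rel)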