
failing to use apoc.periodic.iterate() with apoc.load.json

luk801
I tried to combine apoc.periodic.iterate() with the Cypher query below and keep getting errors. The JSON file has nodes and edges; the edges should create relationships between the nodes using source and target ids, which correspond to the node ids. The query runs fine with a sample file but runs out of memory with a large file. I am new to Neo4j, and any tips on how to break down the query are welcome!
 
WITH 'file:/file_name' AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.nodes AS node
CREATE (n:Node {id: node.id, name: node.name, type: node.type})
WITH node, value.edges AS edges
UNWIND edges AS edge
MATCH (source:Node {id: edge.source}), (target:Node {id: edge.target})
MERGE (source)<-[rel:is_part_of]-(target)
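
For reference, the file has roughly this shape (the values here are simplified):

{
  "nodes": [
    { "id": 1, "name": "Widget", "type": "part" },
    { "id": 2, "name": "Assembly", "type": "part" }
  ],
  "edges": [
    { "source": 1, "target": 2 }
  ]
}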
 
 

4 REPLIES

I am assuming the JSON file has a list of node info associated with value.nodes and a list of edge info associated with value.edges. The query unwinds the list of node info and creates a node for each element. It then passes each created node along in its own row, with the list of edge info appended to each row, so every node row carries an identical copy of the edge list. The 'UNWIND edges AS edge' then creates a new row for each edge info in the list, per node row, so that unwind is repeated over and over, once for each created node. Since you are matching/merging to create the relationship, you don't realize the MERGE operation is being repeated over and over.

Removing 'node' from the WITH will result in only one row being passed, instead of one for each node, so the 'UNWIND edges AS edge' will occur just once. This should help with the issue. (A minimal illustration of the row multiplication follows the revised query below.)

WITH 'file:/file_name' AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.nodes AS node
CREATE (n:Node {id: node.id, name: node.name, type: node.type})
WITH value.edges AS edges
UNWIND edges AS edge
MATCH (source:Node {id: edge.source}), (target:Node {id: edge.target})
MERGE (source)<-[rel:is_part_of]-(target)
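
To see the row multiplication in isolation, here is a minimal standalone example (nothing to do with the file): three node rows times a two-element edge list yields six rows, so count(*) returns 6 rather than 2.

UNWIND [1, 2, 3] AS node        // one row per node
WITH node, ['a', 'b'] AS edges  // the same edge list is attached to every node row
UNWIND edges AS edge            // 3 x 2 = 6 rows
RETURN count(*)                 // 6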

Thank you for your help, glilienfield!

I am still running into the same problem and running out of memory:

Neo.TransientError.General.MemoryPoolOutOfMemoryError

The allocation of an extra 5.3 MiB would use more than the limit 716.8 MiB. Currently using 712.9 MiB. dbms.memory.transaction.total.max threshold reached
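
The limit named in the error can be raised in neo4j.conf, though with a bigger file that would presumably just hit the new ceiling (the value here is illustrative):

dbms.memory.transaction.total.max=2g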

I noticed one oversight. Try these two; they should do the same thing:

WITH 'file:/file_name' AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.nodes AS node
CREATE (n:Node {id: node.id, name: node.name, type: node.type})
WITH DISTINCT value
UNWIND value.edges AS edge
MATCH (source:Node {id: edge.source}), (target:Node {id: edge.target})
MERGE (source)<-[rel:is_part_of]-(target)

 

WITH 'file:/file_name' AS url
CALL apoc.load.json(url) YIELD value
CALL {
    WITH value
    UNWIND value.nodes AS node
    CREATE (n:Node {id: node.id, name: node.name, type: node.type})
}
CALL {
    WITH value
    UNWIND value.edges AS edge
    MATCH (source:Node {id: edge.source}), (target:Node {id: edge.target})
    MERGE (source)<-[rel:is_part_of]-(target)
}

The first one ran out of memory but the second one worked fine, thank you!
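
Since the thread started from apoc.periodic.iterate(), the edge step could also be batched with it once the nodes exist; a sketch along the lines of the queries above, assuming APOC is installed (the batch size is illustrative):

CALL apoc.periodic.iterate(
  "CALL apoc.load.json('file:/file_name') YIELD value
   UNWIND value.edges AS edge
   RETURN edge",
  "MATCH (source:Node {id: edge.source}), (target:Node {id: edge.target})
   MERGE (source)<-[:is_part_of]-(target)",
  {batchSize: 10000}
)

Each batch of 10,000 edge rows is committed in its own transaction, which keeps the transaction memory bounded.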