08-10-2022 09:44 AM
I am able to perform spark.read and save the result into a dataframe, and I can also write a dataframe from Spark to Neo4j as nodes or relationships. I would like to build a pipeline where I can run most of the creation and deletion work from Spark (without using the Neo4j Browser). However, I cannot use:
spark.read....option("query", "CREATE CONSTRAINT IF NOT EXISTS ON (a: APP) ASSERT (a.app_name) IS NODE KEY")
for example. Is there a way to run the above constraint creation through the Spark connector against Neo4j, without executing it in the Neo4j Browser?
08-22-2022 02:46 AM
Hi @learner, you can create indexes/constraints only when you're ingesting data:
https://neo4j.com/docs/spark/current/writing/#_schema_optimization_operations
Trying to create them as part of a read is an error; schema operations should always happen upfront, as part of the write that ingests the data, since that is what guarantees constraint consistency.
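For reference, here is a minimal sketch of what that documentation page describes, assuming PySpark, a dataframe df with an app_name column, and placeholder connection details. The constraint is derived from the node.keys option rather than from an arbitrary "query":

# A minimal PySpark sketch, assuming the Neo4j Spark Connector is on the
# classpath; URL and credentials are placeholders.
(
    df.write
    .format("org.neo4j.spark.DataSource")
    .mode("Overwrite")  # schema optimization requires SaveMode.Overwrite
    .option("url", "neo4j://localhost:7687")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "password")
    .option("labels", ":APP")
    .option("node.keys", "app_name")
    # Creates the constraint on (:APP {app_name}) before the data is written
    .option("schema.optimization.type", "NODE_CONSTRAINTS")
    .save()
)

Note that schema.optimization.type only takes effect with SaveMode Overwrite; the connector creates the constraint on the keys listed in node.keys before ingesting the dataframe.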
Cheers
Andrea