
DatabaseError.General.UnknownError using apoc.static.get/set

TL;DR: the apoc.static.get/set/list procedures stop working after a simple SyntaxError raised through the neo4j-dotnet-driver

  • Neo4j database version: 4.0.3
  • neo4j-dotnet-driver version: 4.0.1
  • apoc procedures installed: apoc-4.0.0.7-all.jar

Hello, I ran into an error and I can't figure out how to deal with it.

I'm using this C# snippet in my project:

string rootNodeKey = "someStaticKey1";
string executableKey = "someStaticKey2";

using (ISession session = driver.Session())
{
    foreach (NodeGraph nodeGraph in filteredNodeGraphsFromOneXml)
    {
        ITransaction tx = session.BeginTransaction();
        try
        {
            // Create the root node and store it under a static key for the later queries.
            tx.Run(
                "CREATE (n:RootNode $parameters) WITH n CALL apoc.static.set($rootNodeKey, n) YIELD value RETURN value",
                new { parameters = nodeGraph.rootNode.Parameters, rootNodeKey = rootNodeKey }).Consume();

            foreach (<some foreach cycle through main nodes>)
            {
                // Fetch the stored root node, create/update the Executable node, link the two,
                // and store the Executable under its own static key.
                tx.Run(
                    "WITH apoc.static.get($rootNodeKey) as r " +
                    "MERGE (n {customID: $customID}) " +
                    "ON CREATE SET n:Executable, n = $parameters " +
                    "ON MATCH SET n:Executable, n += $parameters " +
                    "WITH n, r CALL apoc.static.set($executableKey, n) YIELD value " +
                    "CREATE (n)-[:HAS_ROOT_NODE]->(r)",
                    new
                    {
                        customID = executable.CustomID,
                        rootNodeKey = rootNodeKey,
                        parameters = executable.Parameters,
                        executableKey = executableKey
                    }).Consume();

                foreach (<some other foreach or condition to create secondary nodes>)
                {
                    // Fetch the stored Executable and attach a Detection node to it.
                    tx.Run(
                        "WITH apoc.static.get($executableKey) as e CREATE (n:Detection $parameters)<-[:HAS_DETECTION]-(e)",
                        new { executableKey = executableKey, parameters = detection.Value.Parameters }).Consume();
                }
            }

            <some other cycles or conditions for adding another nodes and relationships>

            tx.Commit();
        }
        catch
        {
            tx.Rollback();
            tx.Dispose();
            session.Dispose();
            throw;
        }
    }
}

As you can see, I'm creating nodes and using the apoc.static.get/set procedures to temporarily store the created RootNode or Executable (or other node types), so I can retrieve them later when creating secondary nodes (in this example Detection) connected to them.

I also call .Consume() at the end of each query, so any problem with node creation throws an exception right away. This works well for me, because node creation stops immediately and the code jumps straight to the catch block.

There, as you can see, I Rollback and Dispose the transaction, Dispose the session, and rethrow the exception to the main code of the project, which also Disposes the whole driver.

BUT, here comes the problem:

If an exception is thrown, for example when I purposely mess up the query (adding adsfasdfas at the start of the second query), the code correctly jumps to the catch block with a SyntaxError, and everything is closed and disposed.

But when I start the program again, the code throws an UnknownError exception with the text "The transaction has been closed" at the FIRST query statement (where there should not be any error).

I have discovered that this problem is related to apoc.static, for these reasons:

  • the same query executes without problems when apoc.static.get/set is not used
  • after the SyntaxError has been thrown and the database is in the state where the next program run fails with this UnknownError, I can go to the web UI of the database and I cannot even call something like apoc.static.list("") (see the sketch below)
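
For reference, the same check expressed through the .NET driver would be roughly this (just a sketch; I actually ran the apoc.static.list call from the Neo4j Browser):

    // In the broken state, even listing the stored static values does not work.
    using (ISession session = driver.Session())
    {
        session.Run("CALL apoc.static.list('')").Consume();
    }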

Here are the details of the exception:


StackTrace:

   at Neo4j.Driver.Internal.MessageHandling.ResponsePipelineError.EnsureThrownIf(Func`2 predicate)
   at Neo4j.Driver.Internal.MessageHandling.ResponsePipelineError.EnsureThrown()
   at Neo4j.Driver.Internal.Result.ResultCursorBuilder.<ConsumeAsync>d__24.MoveNext()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Neo4j.Driver.Internal.BlockingExecutor.RunSync[T](Func`1 task)
   at Neo4j.Driver.Internal.InternalResult.Consume()
   at Nebula.Neo4j.Neo4jDriver.ImportNodeGraphs(IEnumerable`1 filteredNodeGraphsFromOneXml, Int32 threadNumber, StringBuilder& log) in C:\Users\muzikant\Documents\GitHub\Nebula\Neo4j\Neo4jDriver.cs:line 47

Here are related lines from debug.log:

2020-04-20 13:44:46.556+0000 ERROR [o.n.b.r.s.i.ErrorReporter] Client triggered an unexpected error [Neo.DatabaseError.General.UnknownError]: The transaction has been closed., reference c18c0ddb-ee1c-4e56-a837-6500aefb34b4.
2020-04-20 13:44:46.556+0000 ERROR [o.n.b.r.s.i.ErrorReporter] Client triggered an unexpected error [Neo.DatabaseError.General.UnknownError]: The transaction has been closed., reference c18c0ddb-ee1c-4e56-a837-6500aefb34b4. The transaction has been closed.
org.neo4j.graphdb.NotInTransactionException: The transaction has been closed.
	at org.neo4j.kernel.impl.coreapi.TransactionImpl.checkInTransaction(TransactionImpl.java:667)
	at org.neo4j.kernel.impl.coreapi.TransactionImpl.kernelTransaction(TransactionImpl.java:548)
	at org.neo4j.kernel.impl.core.NodeEntity.getLabels(NodeEntity.java:635)
	at org.neo4j.kernel.impl.util.NodeEntityWrappingNodeValue.labels(NodeEntityWrappingNodeValue.java:111)
	at org.neo4j.kernel.impl.util.NodeEntityWrappingNodeValue.populate(NodeEntityWrappingNodeValue.java:85)
	at org.neo4j.cypher.internal.runtime.ValuePopulation.populate(ValuePopulation.java:42)
	at org.neo4j.cypher.internal.runtime.interpreted.pipes.ProduceResultsPipe.produceAndPopulate(ProduceResultsPipe.scala:51)
	at org.neo4j.cypher.internal.runtime.interpreted.pipes.ProduceResultsPipe.$anonfun$internalCreateResults$1(ProduceResultsPipe.scala:35)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:455)
	at org.neo4j.cypher.internal.runtime.interpreted.PipeExecutionResult.serveResults(PipeExecutionResult.scala:76)
	at org.neo4j.cypher.internal.runtime.interpreted.PipeExecutionResult.request(PipeExecutionResult.scala:63)
	at org.neo4j.cypher.internal.result.StandardInternalExecutionResult.request(StandardInternalExecutionResult.scala:88)
	at org.neo4j.cypher.internal.result.ClosingExecutionResult.request(ClosingExecutionResult.scala:135)
	at org.neo4j.bolt.runtime.AbstractCypherAdapterStream.handleRecords(AbstractCypherAdapterStream.java:105)
	at org.neo4j.bolt.v3.messaging.ResultHandler.onPullRecords(ResultHandler.java:41)
	at org.neo4j.bolt.v4.messaging.PullResultConsumer.consume(PullResultConsumer.java:42)
	at org.neo4j.bolt.runtime.statemachine.impl.TransactionStateMachine$State.consumeResult(TransactionStateMachine.java:511)
	at org.neo4j.bolt.runtime.statemachine.impl.TransactionStateMachine$State$2.streamResult(TransactionStateMachine.java:355)
	at org.neo4j.bolt.runtime.statemachine.impl.TransactionStateMachine.streamResult(TransactionStateMachine.java:92)
	at org.neo4j.bolt.v4.runtime.InTransactionState.processStreamResultMessage(InTransactionState.java:73)
	at org.neo4j.bolt.v4.runtime.AbstractStreamingState.processUnsafe(AbstractStreamingState.java:49)
	at org.neo4j.bolt.v4.runtime.InTransactionState.processUnsafe(InTransactionState.java:60)
	at org.neo4j.bolt.v3.runtime.FailSafeBoltStateMachineState.process(FailSafeBoltStateMachineState.java:48)
	at org.neo4j.bolt.runtime.statemachine.impl.AbstractBoltStateMachine.nextState(AbstractBoltStateMachine.java:143)
	at org.neo4j.bolt.runtime.statemachine.impl.AbstractBoltStateMachine.process(AbstractBoltStateMachine.java:91)
	at org.neo4j.bolt.messaging.BoltRequestMessageReader.lambda$doRead$1(BoltRequestMessageReader.java:90)
	at org.neo4j.bolt.runtime.DefaultBoltConnection.lambda$enqueue$0(DefaultBoltConnection.java:151)
	at org.neo4j.bolt.runtime.DefaultBoltConnection.processNextBatchInternal(DefaultBoltConnection.java:240)
	at org.neo4j.bolt.runtime.DefaultBoltConnection.processNextBatch(DefaultBoltConnection.java:175)
	at org.neo4j.bolt.runtime.DefaultBoltConnection.processNextBatch(DefaultBoltConnection.java:165)
	at org.neo4j.bolt.runtime.scheduling.ExecutorBoltScheduler.executeBatch(ExecutorBoltScheduler.java:212)
	at org.neo4j.bolt.runtime.scheduling.ExecutorBoltScheduler.lambda$scheduleBatchOrHandleError$2(ExecutorBoltScheduler.java:195)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:834)

Here is the neo4j.conf file:

#*****************************************************************
# Neo4j configuration
#
# For more details and a complete list of settings, please see
# https://neo4j.com/docs/operations-manual/4.0/reference/configuration-settings/
#*****************************************************************

# The name of the default database
#dbms.default_database=neo4j

# Paths of directories in the installation.
dbms.directories.data=/var/lib/neo4j/data
dbms.directories.plugins=/var/lib/neo4j/plugins
dbms.directories.logs=/var/log/neo4j
dbms.directories.lib=/usr/share/neo4j/lib
dbms.directories.run=/var/run/neo4j

# This setting constrains all `LOAD CSV` import files to be under the `import` directory. Remove or comment it out to
# allow files to be loaded from anywhere in the filesystem; this introduces possible security problems. See the
# `LOAD CSV` section of the manual for details.
dbms.directories.import=/var/lib/neo4j/import

# Whether requests to Neo4j are authenticated.
# To disable authentication, uncomment this line
#dbms.security.auth_enabled=false

# Enable this to be able to upgrade a store from an older version.
#dbms.allow_upgrade=true

# Java Heap Size: by default the Java heap size is dynamically
# calculated based on available system resources.
# Uncomment these lines to set specific initial and maximum
# heap size.
dbms.memory.heap.initial_size=31g
dbms.memory.heap.max_size=31g

# The amount of memory to use for mapping the store files, in bytes (or
# kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 50% of RAM minus the max Java heap size.
dbms.memory.pagecache.size=364600m
dbms.tx_state.max_off_heap_memory=8g
dbms.jvm.additional=-XX:+ExitOnOutOfMemoryError



#*****************************************************************
# Network connector configuration
#*****************************************************************

# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
#dbms.default_listen_address=0.0.0.0

# You can also choose a specific network interface, and configure a non-default
# port for each connector, by setting their individual listen_address.

# The address at which this server can be reached by its clients. This may be the server's IP address or DNS name, or
# it may be the address of a reverse proxy which sits in front of the server. This setting may be overridden for
# individual connectors below.
#dbms.default_advertised_address=localhost

# You can also choose a specific advertised hostname or IP address, and
# configure an advertised port for each connector, by setting their
# individual advertised_address.

# By default, encryption is turned off.
# To turn on encryption, an ssl policy for the connector needs to be configured
# Read more in SSL policy section in this file for how to define a SSL policy.

# Bolt connector
dbms.connector.bolt.enabled=true
#dbms.connector.bolt.tls_level=DISABLED
#dbms.connector.bolt.listen_address=:7687

# HTTP Connector. There can be zero or one HTTP connectors.
dbms.connector.http.enabled=true
#dbms.connector.http.listen_address=:7474

# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=false
#dbms.connector.https.listen_address=:7473

# Number of Neo4j worker threads.
#dbms.threads.worker_count=

#*****************************************************************
# SSL policy configuration
#*****************************************************************

# Each policy is configured under a separate namespace, e.g.
#    dbms.ssl.policy.<scope>.*
#    <scope> can be any of 'bolt', 'https', 'cluster' or 'backup'
#
# The scope is the name of the component where the policy will be used
# Each component where the use of an ssl policy is desired needs to declare at least one setting of the policy.
# Allowable values are 'bolt', 'https', 'cluster' or 'backup'.

# E.g if bolt and https connectors should use the same policy, the following could be declared
#   dbms.ssl.policy.bolt.base_directory=certificates/default
#   dbms.ssl.policy.https.base_directory=certificates/default
# However, it's strongly encouraged to not use the same key pair for multiple scopes.
#
# N.B: Note that a connector must be configured to support/require
#      SSL/TLS for the policy to actually be utilized.
#
# see: dbms.connector.*.tls_level

# SSL settings (dbms.ssl.policy.<scope>.*)
#  .base_directory       Base directory for SSL policies paths. All relative paths within the
#                        SSL configuration will be resolved from the base dir.
#
#  .private_key          A path to the key file relative to the '.base_directory'.
#
#  .private_key_password The password for the private key.
#
#  .public_certificate   A path to the public certificate file relative to the '.base_directory'.
#
#  .trusted_dir          A path to a directory containing trusted certificates.
#
#  .revoked_dir          Path to the directory with Certificate Revocation Lists (CRLs).
#
#  .verify_hostname      If true, the server will verify the hostname that the client uses to connect with. In order
#                        for this to work, the server public certificate must have a valid CN and/or matching
#                        Subject Alternative Names.
#
#  .client_auth          How the client should be authorized. Possible values are: 'none', 'optional', 'require'.
#
#  .tls_versions         A comma-separated list of allowed TLS versions. By default only TLSv1.2 is allowed.
#
#  .trust_all            Setting this to 'true' will ignore the trust truststore, trusting all clients and servers.
#                        Use of this mode is discouraged. It would offer encryption but no security.
#
#  .ciphers              A comma-separated list of allowed ciphers. The default ciphers are the defaults of
#                        the JVM platform.

# Bolt SSL configuration
#dbms.ssl.policy.bolt.enabled=true
#dbms.ssl.policy.bolt.base_directory=certificates/bolt
#dbms.ssl.policy.bolt.private_key=private.key
#dbms.ssl.policy.bolt.public_certificate=public.crt

# Https SSL configuration
#dbms.ssl.policy.https.enabled=true
#dbms.ssl.policy.https.base_directory=certificates/https
#dbms.ssl.policy.https.private_key=private.key
#dbms.ssl.policy.https.public_certificate=public.crt

# Cluster SSL configuration
#dbms.ssl.policy.cluster.enabled=true
#dbms.ssl.policy.cluster.base_directory=certificates/cluster
#dbms.ssl.policy.cluster.private_key=private.key
#dbms.ssl.policy.cluster.public_certificate=public.crt

# Backup SSL configuration
#dbms.ssl.policy.backup.enabled=true
#dbms.ssl.policy.backup.base_directory=certificates/backup
#dbms.ssl.policy.backup.private_key=private.key
#dbms.ssl.policy.backup.public_certificate=public.crt

#*****************************************************************
# Logging configuration
#*****************************************************************

# To enable HTTP logging, uncomment this line
#dbms.logs.http.enabled=true

# Number of HTTP logs to keep.
#dbms.logs.http.rotation.keep_number=5

# Size of each HTTP log that is kept.
#dbms.logs.http.rotation.size=20m

# To enable GC Logging, uncomment this line
#dbms.logs.gc.enabled=true

# GC Logging Options
# see https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-BE93ABDC-999C-4CB5-A88B-1994AAAC74D5
#dbms.logs.gc.options=-Xlog:gc*,safepoint,age*=trace

# Number of GC logs to keep.
#dbms.logs.gc.rotation.keep_number=5

# Size of each GC log that is kept.
#dbms.logs.gc.rotation.size=20m

# Log level for the debug log. One of DEBUG, INFO, WARN and ERROR. Be aware that logging at DEBUG level can be very verbose.
#dbms.logs.debug.level=INFO

# Size threshold for rotation of the debug log. If set to zero then no rotation will occur. Accepts a binary suffix "k",
# "m" or "g".
#dbms.logs.debug.rotation.size=20m

# Maximum number of history files for the internal log.
#dbms.logs.debug.rotation.keep_number=7

#*****************************************************************
# Miscellaneous configuration
#*****************************************************************

# Enable this to specify a parser other than the default one.
#cypher.default_language_version=3.5

# Determines if Cypher will allow using file URLs when loading data using
# `LOAD CSV`. Setting this value to `false` will cause Neo4j to fail `LOAD CSV`
# clauses that load data from the file system.
#dbms.security.allow_csv_import_from_file_urls=true


# Value of the Access-Control-Allow-Origin header sent over any HTTP or HTTPS
# connector. This defaults to '*', which allows broadest compatibility. Note
# that any URI provided here limits HTTP/HTTPS access to that URI only.
#dbms.security.http_access_control_allow_origin=*

# Value of the HTTP Strict-Transport-Security (HSTS) response header. This header
# tells browsers that a webpage should only be accessed using HTTPS instead of HTTP.
# It is attached to every HTTPS response. Setting is not set by default so
# 'Strict-Transport-Security' header is not sent. Value is expected to contain
# directives like 'max-age', 'includeSubDomains' and 'preload'.
#dbms.security.http_strict_transport_security=

# Retention policy for transaction logs needed to perform recovery and backups.
dbms.tx_log.rotation.retention_policy=1 days

# Only allow read operations from this Neo4j instance. This mode still requires
# write access to the directory for lock purposes.
#dbms.read_only=false

# Comma separated list of JAX-RS packages containing JAX-RS resources, one
# package name for each mountpoint. The listed package names will be loaded
# under the mountpoints specified. Uncomment this line to mount the
# org.neo4j.examples.server.unmanaged.HelloWorldResource.java from
# neo4j-server-examples under /examples/unmanaged, resulting in a final URL of
# http://localhost:7474/examples/unmanaged/helloworld/{nodeId}
#dbms.unmanaged_extension_classes=org.neo4j.examples.server.unmanaged=/examples/unmanaged

# A comma separated list of procedures and user defined functions that are allowed
# full access to the database through unsupported/insecure internal APIs.
#dbms.security.procedures.unrestricted=my.extensions.example,my.procedures.*

# A comma separated list of procedures to be loaded by default.
# Leaving this unconfigured will load all procedures found.
#dbms.security.procedures.whitelist=apoc.coll.*,apoc.load.*

#********************************************************************
# JVM Parameters
#********************************************************************

# G1GC generally strikes a good balance between throughput and tail
# latency, without too much tuning.
dbms.jvm.additional=-XX:+UseG1GC

# Have common exceptions keep producing stack traces, so they can be
# debugged regardless of how often logs are rotated.
dbms.jvm.additional=-XX:-OmitStackTraceInFastThrow

# Make sure that `initmemory` is not only allocated, but committed to
# the process, before starting the database. This reduces memory
# fragmentation, increasing the effectiveness of transparent huge
# pages. It also reduces the possibility of seeing performance drop
# due to heap-growing GC events, where a decrease in available page
# cache leads to an increase in mean IO response time.
# Try reducing the heap memory, if this flag degrades performance.
dbms.jvm.additional=-XX:+AlwaysPreTouch

# Trust that non-static final fields are really final.
# This allows more optimizations and improves overall performance.
# NOTE: Disable this if you use embedded mode, or have extensions or dependencies that may use reflection or
# serialization to change the value of final fields!
dbms.jvm.additional=-XX:+UnlockExperimentalVMOptions
dbms.jvm.additional=-XX:+TrustFinalNonStaticFields

# Disable explicit garbage collection, which is occasionally invoked by the JDK itself.
dbms.jvm.additional=-XX:+DisableExplicitGC

# Restrict size of cached JDK buffers to 256 KB
dbms.jvm.additional=-Djdk.nio.maxCachedBufferSize=262144

# More efficient buffer allocation in Netty by allowing direct no cleaner buffers.
dbms.jvm.additional=-Dio.netty.tryReflectionSetAccessible=true

# Exits JVM on the first occurrence of an out-of-memory error. It is preferable to restart the VM in case of out-of-memory errors.
# dbms.jvm.additional=-XX:+ExitOnOutOfMemoryError

# Remote JMX monitoring, uncomment and adjust the following lines as needed. Absolute paths to jmx.access and
# jmx.password files are required.
# Also make sure to update the jmx.access and jmx.password files with appropriate permission roles and passwords,
# the shipped configuration contains only a read only role called 'monitor' with password 'Neo4j'.
# For more details, see: http://download.oracle.com/javase/8/docs/technotes/guides/management/agent.html
# On Unix based systems the jmx.password file needs to be owned by the user that will run the server,
# and have permissions set to 0600.
# For details on setting these file permissions on Windows see:
#     http://docs.oracle.com/javase/8/docs/technotes/guides/management/security-windows.html
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.port=3637
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.authenticate=true
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.ssl=false
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.password.file=/absolute/path/to/conf/jmx.password
#dbms.jvm.additional=-Dcom.sun.management.jmxremote.access.file=/absolute/path/to/conf/jmx.access

# Some systems cannot discover host name automatically, and need this line configured:
#dbms.jvm.additional=-Djava.rmi.server.hostname=$THE_NEO4J_SERVER_HOSTNAME

# Expand Diffie Hellman (DH) key size from default 1024 to 2048 for DH-RSA cipher suites used in server TLS handshakes.
# This is to protect the server from any potential passive eavesdropping.
dbms.jvm.additional=-Djdk.tls.ephemeralDHKeySize=2048

# This mitigates a DDoS vector.
dbms.jvm.additional=-Djdk.tls.rejectClientInitiatedRenegotiation=true

# Enable remote debugging
#dbms.jvm.additional=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005

# This filter prevents deserialization of arbitrary objects via java object serialization, addressing potential vulnerabilities.
# By default this filter whitelists all neo4j classes, as well as classes from the hazelcast library and the java standard library.
# These defaults should only be modified by expert users!
# For more details (including filter syntax) see: https://openjdk.java.net/jeps/290
#dbms.jvm.additional=-Djdk.serialFilter=java.**;org.neo4j.**;com.neo4j.**;com.hazelcast.**;net.sf.ehcache.Element;com.sun.proxy.*;org.openjdk.jmh.**;!*

#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
#  using this configuration file has been installed as a service.
#  Please uninstall the service before modifying this section.  The
#  service can then be reinstalled.

# Name of the service
dbms.windows_service_name=neo4j

#********************************************************************
# Other Neo4j system properties
#********************************************************************
dbms.security.procedures.unrestricted=apoc.*
apoc.import.file.enabled=true

The solution is to restart the database with service neo4j restart, but this is very time consuming. After that, queries with apoc.static.get work fine again until another (normal, properly reported) exception is thrown.

I can't seem to figure out how to deal with this problem. As a temporary workaround I have moved my project away from the apoc.static procedures, but I think this should still be reported.

Thanks, Peter.

(Btw, without .Consume() on each query the same problem occurs when tx.Commit() is called; the program just tries to finish creating all the other nodes first, so it is a longer process that leads to the same outcome.)

3 REPLIES

I think the problem here is that you need to rebind a node/relationship instance when re-using it in a different transaction. When another tx uses a node from apoc.static.get, that node is bound to an already closed transaction context. Therefore you need to rebind it.

The simplest way would be to store the internal node id instead of the node instance and reload the node by its id:

CREATE (n:RootNode $parameters) WITH n CALL apoc.static.set($rootNodeKey, id(n)) YIELD value RETURN value

and

MATCH (r) WHERE id(r)= apoc.static.get($rootNodeKey)
MERGE (n {customID: $customID}) 
ON CREATE SET n:Executable, n = $parameters 
ON MATCH SET n:Executable, n += $parameters 
WITH n, r CALL apoc.static.set($executableKey, id(n)) YIELD value 
CREATE (n)-[:HAS_ROOT_NODE]->(r)
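
Wired into the C# snippet from your question, this would look roughly like the following (just a sketch; your loops, parameters and error handling stay as they are, and the Detection query is adjusted the same way):

// Store the internal id of the root node instead of the node instance.
tx.Run(
    "CREATE (n:RootNode $parameters) WITH n CALL apoc.static.set($rootNodeKey, id(n)) YIELD value RETURN value",
    new { parameters = nodeGraph.rootNode.Parameters, rootNodeKey = rootNodeKey }).Consume();

// Re-MATCH the root node by its stored id and store the Executable's id as well.
tx.Run(
    "MATCH (r) WHERE id(r) = apoc.static.get($rootNodeKey) " +
    "MERGE (n {customID: $customID}) " +
    "ON CREATE SET n:Executable, n = $parameters " +
    "ON MATCH SET n:Executable, n += $parameters " +
    "WITH n, r CALL apoc.static.set($executableKey, id(n)) YIELD value " +
    "CREATE (n)-[:HAS_ROOT_NODE]->(r)",
    new
    {
        customID = executable.CustomID,
        rootNodeKey = rootNodeKey,
        parameters = executable.Parameters,
        executableKey = executableKey
    }).Consume();

// Same idea for the secondary nodes: look the Executable up by its stored id.
tx.Run(
    "MATCH (e) WHERE id(e) = apoc.static.get($executableKey) " +
    "CREATE (n:Detection $parameters)<-[:HAS_DETECTION]-(e)",
    new { executableKey = executableKey, parameters = detection.Value.Parameters }).Consume();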

Hi, thanks for the answer.

Do you think this will be more time consuming than your other advice from the other thread, i.e. returning id(n) of the created node, obtaining it with .First()["id"].As<long>(), and then passing it to the next query as a parameter?
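
For clarity, I mean something roughly like this (just a sketch):

// Alternative sketch: return the new node's id from the query and keep it in a C# variable
// instead of in apoc.static (requires System.Linq for .First()).
long rootNodeId = tx.Run(
        "CREATE (n:RootNode $parameters) RETURN id(n) AS id",
        new { parameters = nodeGraph.rootNode.Parameters })
    .First()["id"].As<long>();

tx.Run(
    "MATCH (r) WHERE id(r) = $rootNodeId " +
    "MERGE (n {customID: $customID}) " +
    "ON CREATE SET n:Executable, n = $parameters " +
    "ON MATCH SET n:Executable, n += $parameters " +
    "CREATE (n)-[:HAS_ROOT_NODE]->(r)",
    new { rootNodeId = rootNodeId, customID = executable.CustomID, parameters = executable.Parameters }).Consume();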


Peter

No, a node lookup by internal id is basically a seek operation, so it is very cheap.
