
Neo4j Considerations in Orchestration Environments

As more workloads are being run in Docker containers, using orchestration environments like Kubernetes, Mesos, and Docker Swarm is also gaining in popularity.

Neo4j already provides public Docker containers, and has published a commercial application on Google's GKE marketplace. We also have a public helm chart that customers use to deploy the graph database in Kubernetes.

Databases in Container Orchestrators

Orchestration environments, though, introduce some special architectural challenges for databases, and for Neo4j in particular. This article covers some of those challenges and the extra considerations you may want to keep in mind. Most of the examples here discuss Kubernetes specifics, but architecturally the considerations are the same whether it's Kubernetes or any other orchestration manager.

We’ll take a look at the architecture of a Neo4j deployment, both from “inside of Kubernetes” and “outside” perspectives, and talk about how that impacts querying.

Questions? Comments? Come discuss on the Neo4j Community Site on this thread.

Looking at Neo4j inside of Kubernetes

The diagram below shows the internal deployment of a typical Neo4j cluster in Kubernetes. Each pod is placed in a StatefulSet depending on whether it is a core or read replica node. Each pod has a persistent volume claim which maps to an underlying disk that stores the data.

Neo4j on Google Kubernetes Engine (GKE) architecture diagram
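
As a concrete (and purely illustrative) sketch, a core StatefulSet with per-pod persistent volume claims might look like the following. The names, image tag, and storage size are assumptions, not the values used by the official helm chart:

```yaml
# Illustrative sketch only -- names, image tag, and sizes are assumptions,
# not the values used by the official Neo4j helm chart.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: neo4j-core
spec:
  serviceName: neo4j          # headless service giving each pod a stable DNS name
  replicas: 3                 # three core members for a minimal causal cluster
  selector:
    matchLabels:
      app: neo4j-core
  template:
    metadata:
      labels:
        app: neo4j-core
    spec:
      containers:
        - name: neo4j
          image: neo4j:4.4-enterprise
          ports:
            - containerPort: 7687   # bolt
            - containerPort: 7474   # http
          volumeMounts:
            - name: data
              mountPath: /data      # Neo4j stores the graph under /data
  volumeClaimTemplates:             # one PVC per pod, retained across pod restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```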

Looking at Neo4j from outside of Kubernetes

Within Kubernetes, everything is straightforward; the pods can be connected to directly via internal DNS. The "app2" in the box below has no trouble dealing with the cluster, because the auto-assigned internal DNS names route correctly for it.

App1 talking to Neo4j inside of Kubernetes
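
For example, an in-cluster client like app2 can simply point its driver at the cluster's internal service DNS name. The service name and namespace below are hypothetical:

```yaml
# Hypothetical ConfigMap for an in-cluster client; the service name "neo4j"
# and the "default" namespace are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app2-config
data:
  # Resolvable only inside the cluster; a bolt+routing driver will then
  # receive a routing table of similarly internal, resolvable addresses.
  NEO4J_URI: "bolt+routing://neo4j.default.svc.cluster.local:7687"
```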

The outside app1, though, has a different set of issues. The Kubernetes cluster manager has some public ingress IP address, and we need a way of routing traffic to the right cluster node; this is where things become a bit tricky.

Tip

This section assumes some familiarity with how bolt+routing works in Neo4j. If you need a brush-up on how Neo4j query routing works, please have a look at Querying Neo4j Clusters. In that article, we described how the "public advertised address" of a node interacts with the routing table shown in the diagram above.

In Kubernetes setups, there is a specific problem: the internal DNS names in the routing table above cannot be resolved or routed by app1 outside of the cluster. So even if there is an IP ingress, attempts to use bolt+routing Neo4j drivers will fail: app1 will receive a routing table full of un-routable addresses, and will fail to connect.

Possible Solutions

  1. Determine public IP addresses or DNS names that you can use for your pods, and set those in your Docker configuration, so that pods advertise proper external addresses
  2. Set up a NodePort service to route traffic from the outside to a specific pod, and set the advertised address of the pod to be the ingress IP. Make sure to use “port spreading”. That is, if bolt is usually on 7687, then pod 1 would be on port 7687, pod 2 on port 7688, pod 3 on port 7689, and so on, so that they can all use the same address without conflicting.
  3. Set up a service that points only to the leader of the cluster, and then use only a bolt driver, and not a bolt+routing driver.

Keeping in mind that cluster topology can change and leader re-elections can happen, option #3 is usually not a good choice. Options #1 and #2 tend to be superior, but they require extra configuration after you deploy Neo4j. Those configuration steps tend to fall into these categories:

  • Register DNS for each node
  • Configure Neo4j pods to advertise that DNS
  • Configure port spreading ingress services as needed (a sketch of this appears below)
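
As a sketch of options #1 and #2 combined, the following shows a NodePort service that targets one specific pod, plus a comment indicating how that pod's advertised address would be set. All names and ports here are assumptions; since NodePorts must fall in the 30000-32767 range by default, the port spreading uses 30687/30688/30689 rather than 7687/7688/7689:

```yaml
# Illustrative sketch: one NodePort service per core pod, spreading bolt
# traffic across node ports 30687-30689. Pod and label names are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: neo4j-core-0-bolt
spec:
  type: NodePort
  selector:
    # This label is added automatically by the StatefulSet controller,
    # letting the service target exactly one pod.
    statefulset.kubernetes.io/pod-name: neo4j-core-0
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
      nodePort: 30687   # repeat with 30688 / 30689 for neo4j-core-1 / -2
---
# Each pod then advertises its externally routable address, e.g. via the
# Neo4j Docker image's environment variable convention (hypothetical host):
# NEO4J_dbms_connector_bolt_advertised__address=my.public.dns:30687
```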

Now that we’ve covered Neo4j networking and bolt+routing specifically, let’s get into other things that make databases different and what to look out for.

Orchestration Managers and Databases

As orchestration managers were born and grew up, they typically handled workloads composed of many stateless microservices. Developers would create small containers, each holding a single service. Typically that service did not store any data (if it did, it talked to a database deployed outside). Because these services were stateless, they could be deployed in fleets, and killed and restarted at will. Typical deployments might have many replica containers fronted by a load balancer. Clients would talk to the load balancer and end up forwarded to any of the microservice instances; it didn't matter which.

Orchestration manager features play directly to this sweet spot: Kubernetes, for example, has ReplicaSets (sets of replicas of a given container) and LoadBalancers (allowing access to any replica through a single network ingress).
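
For contrast, here is a minimal sketch of the stateless pattern these features were built for, using a hypothetical "hello" microservice. A Deployment manages the ReplicaSet, and a LoadBalancer service fronts the replicas; all names and the image are illustrative:

```yaml
# The classic stateless pattern: many interchangeable replicas behind a
# load balancer. Names and image are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment          # manages a ReplicaSet under the hood
metadata:
  name: hello
spec:
  replicas: 5             # any replica can serve any request
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginxdemos/hello   # stateless "hello world" web service
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer      # clients hit one ingress; traffic goes to any replica
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
```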

Databases are Different

Databases (including Neo4j) are different in three key ways!

They're Stateful: They must store data; that's their whole reason for being. So in orchestration managers we need to think about concepts like disks, persistent volume claims, and so on.


Database pods use Persistent Volume Claims to keep long-term storage of state

Their Pods are Not Interchangeable: They either host different services or have different capabilities. In the Neo4j architecture, for example, there are leaders and followers, and leaders must process the writes.

Different nodes play different roles in clustered database architectures

If you need to talk to a particular cluster member, though, this limits the usefulness of abstractions like load balancers, because they don't know or care about any differences between members. In several databases, this necessitates the whole idea of a "routing driver" that handles this logic at the application or driver level, which in turn complicates the orchestration networking configuration.
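
One Kubernetes building block that helps when members are not interchangeable is a headless Service, which gives each StatefulSet pod its own stable DNS name (for example, neo4j-core-0.neo4j.default.svc.cluster.local) that a routing driver can hand out. A minimal sketch, with illustrative names matching the earlier StatefulSet:

```yaml
# A headless Service (clusterIP: None) does no load balancing; instead it
# gives each StatefulSet pod an individually addressable DNS name, which is
# what you want when cluster members play different roles. Names illustrative.
apiVersion: v1
kind: Service
metadata:
  name: neo4j
spec:
  clusterIP: None        # headless: per-pod DNS records, no single virtual IP
  selector:
    app: neo4j-core
  ports:
    - name: bolt
      port: 7687
```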

They Scale Differently: With microservices, suppose I have a function hello which returns "hello world". I can scale this service up and down very quickly: launching the container takes seconds, and there's no stored data to synchronize. With databases, when you "scale up" you may need to copy most or all of the database contents to the new node before it can fully participate in the cluster. That "scale up" may also place extra load on the other cluster members while they replicate data and check in with their new peer. When you scale down, your cluster finds that a member has gone missing; it will recover, but each of the other instances needs to update its member list.

None of that happens with a typical microservice. As a result, up-front capacity planning for databases is more important, as it isn't going to be a good idea to rapidly add or remove 10 instances of a 5TB database.

Noisy Neighbors, Co-Residency, and HA

In an orchestrator it is also important to think about affinity and anti-affinity rules to spread cluster members out across physical hardware. With microservices that have many copies, this may matter less; but if you're running a 3-node clustered database for high availability, it is very important to ensure that not all 3 pods are running on the same physical server. If they are, and the physical server dies, then the database is completely down despite your attempts to guarantee HA!


If all three nodes of your cluster live on the same physical server, and that server burns, you have no high availability (HA)!

Spreading things out (and avoiding co-residency) also helps to insulate you from so-called “Noisy Neighbors” or other workloads on the same machine that are soaking up shared resources.
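
In Kubernetes, this kind of spreading is expressed with pod anti-affinity rules. Here is a minimal sketch of the relevant pod-spec fragment, assuming the pods carry an app: neo4j-core label as in the earlier StatefulSet example:

```yaml
# Illustrative pod-spec fragment (goes under spec.template.spec of the
# StatefulSet): never schedule two Neo4j core pods onto the same node.
# The label values are assumptions.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: neo4j-core
        topologyKey: kubernetes.io/hostname   # one core pod per host
```

Using a zone label (such as topology.kubernetes.io/zone) as the topologyKey instead would spread pods across failure domains rather than individual hosts.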

Disks: Thin vs. Thick Provisioning

In virtualized environments, whether in the cloud or on-prem, when you create a persistent volume claim you take for granted that you get a disk and it “just works”. But how does that actually work?

Under the covers, your virtual disk is usually allocated from some kind of storage solution, such as a SAN. Many of those use thin provisioning by default.

Over-allocation or over-subscription is a mechanism that allows a server to view more storage capacity than has been physically reserved on the storage array itself. This allows flexibility in growth of storage volumes, without having to predict accurately how much a volume will grow. Instead, block growth becomes sequential. Physical storage capacity on the array is only dedicated when data is actually written by the application, not when the storage volume is initially allocated. The servers, and by extension the applications that reside on them, view a full size volume from the storage but the storage itself only allocates the blocks of data when they are written.


Thin provisioning

This is great for a wide range of applications but very bad for databases. In some storage environments, it's possible for your database to try to write data and fail, because the underlying storage isn't actually available. This can occur when the storage layer becomes over-subscribed, and it is a catastrophic problem for the database, whose core underlying assumption, that it has a disk it can use, turns out not to be true. If you have a clustered Neo4j setup and this happens to only one node, you probably won't lose data (the HA capabilities of the database keep it safe on the other two nodes), but you will see lots of strange errors that make no sense, and you'll have operational issues.

The fix is to ensure that when you allocate disk, you do it in a "thick" provisioned way, meaning that when the Persistent Volume is created, you're guaranteed an exclusive hold on the underlying storage. How to do this differs between cloud environments and storage solutions, so depending on your Kubernetes provider or method of hosting, this may or may not apply to you; either way, it's something to keep a careful eye on.
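
For example, with the in-tree vSphere provisioner, one of the few where the disk format is directly exposed, a StorageClass can request eager thick provisioning. Parameter names differ per storage backend, so treat this purely as a sketch and check your provider's documentation:

```yaml
# Illustrative StorageClass requesting thick provisioning. The diskformat
# parameter is specific to the in-tree vSphere provisioner; other backends
# expose (or hide) this choice differently.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: neo4j-thick
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: eagerzeroedthick   # fully allocate and zero blocks up front
```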




Replies

Since this was posted, some of the techniques described in this article have been implemented in this repo:

@david.allen
Would it be accurate to say that neo4j-helm is the recommended way to deploy a cluster, as compared to using the Docker image to set up separate core servers?

neo4j-helm is the only way to deploy clusters to Kubernetes, but I wouldn't say it's the recommended way, primarily because there are so many other ways to deploy Neo4j; it just depends on your infrastructure. If you've settled on using Docker and containers, and you have a requirement for a causal cluster, I'd use neo4j-helm for that purpose. But many customers and users run on VMs rather than containers, so it's far from the only way to deploy a cluster overall.
