
Access cluster in Kubernetes using ssh tunnels

wergeland
Node Clone

We have set up a causal cluster using Kubernetes in Google Cloud, using private IPs and DNS discovery (with the DNS service provided by Kubernetes). The cluster runs fine within Kubernetes, but when accessing it remotely, e.g. with the Neo4j Desktop browser through an SSH tunnel, I have to manually locate the cluster leader and connect to it directly for write operations. Is there a way to use bolt+routing through SSH tunnels (given that I set up tunnels to all the instances), or to use a load balancer in Kubernetes that is aware of the leader?

Best regards,

Øyvind Wergeland

2 REPLIES

It is possible to set this up, but unfortunately it is a limitation of the current marketplace entry that it doesn't do this out of the box. For details, see: https://github.com/neo-technology/neo4j-google-k8s-marketplace/blob/master/user-guide/USER-GUIDE.md#...

To set this up, the approach I would recommend is:

  • Use other Kubernetes mechanisms to establish valid external DNS names for your pods.
  • Configure your Neo4j pods to advertise those DNS names (see the config sketch after this list). You can find all of the Helm charts and everything else you need in the same GitHub repo linked above.
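For illustration, here is roughly what the per-pod advertised-address configuration could look like in neo4j.conf for a Neo4j 3.x causal cluster. The hostname neo4j-core-0.example.com is hypothetical (you would need one external DNS record per pod), and the ports shown are the stock 3.x defaults (7687 bolt, 5000 discovery, 6000 transaction, 7000 raft), so adjust for your setup:

    # Hypothetical externally resolvable name for this pod:
    dbms.connectors.default_advertised_address=neo4j-core-0.example.com
    dbms.connector.bolt.advertised_address=neo4j-core-0.example.com:7687
    # Causal clustering endpoints on the 3.x default ports:
    causal_clustering.discovery_advertised_address=neo4j-core-0.example.com:5000
    causal_clustering.transaction_advertised_address=neo4j-core-0.example.com:6000
    causal_clustering.raft_advertised_address=neo4j-core-0.example.com:7000

With the official Docker image these are usually set as environment variables instead (dots become underscores and literal underscores are doubled, e.g. NEO4J_dbms_connector_bolt_advertised__address).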

You can't use SSH tunnels directly because of how the setup works: when a pod starts, it only gets an internal Kubernetes DNS address. Google's marketplace doesn't know how to configure externally valid DNS for these pods, because that depends on your domain-name ownership and other site-specific details.

The way bolt+routing works is that you connect to a single node, and it hands you a "routing table" (essentially the same information you would get by running CALL dbms.cluster.overview(); in Cypher). The routing table lists the advertised addresses of all cluster members. In a k8s environment those are all internal private DNS names, so external use of bolt+routing fails: your external client can't resolve those names.
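To make this concrete, here is the call and a sketch of what it returns when run inside the cluster. The output is illustrative rather than from a real deployment; the hostnames just follow the usual StatefulSet pattern pod.service.namespace.svc.cluster.local:

    CALL dbms.cluster.overview();

    // Illustrative output (columns abridged):
    // addresses                                                          role
    // ["bolt://neo4j-core-0.neo4j.default.svc.cluster.local:7687", ...]  "LEADER"
    // ["bolt://neo4j-core-1.neo4j.default.svc.cluster.local:7687", ...]  "FOLLOWER"
    // ["bolt://neo4j-core-2.neo4j.default.svc.cluster.local:7687", ...]  "FOLLOWER"

An external client using bolt+routing will try to open connections to exactly these advertised addresses, and fails because *.svc.cluster.local only resolves inside the cluster.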

The solution, then, is to give the pods valid external DNS names if you want to connect from the outside. Because of the limitations of the Google marketplace and the differences between people's configurations, this wasn't something we could set up in the marketplace entry without a lot of extra machinery.

Hope this helps.

Thank you for your thorough reply.

We are not using the marketplace entry; we are running the Docker image in a separate Kubernetes cluster, and it works well with bolt+routing for clients in another Kubernetes cluster (in the same Google Cloud project), for both read-only and write sessions.

However, we also occasionally need to connect to the cluster from outside Google Cloud, and there we are limited to SSH tunnels. If we need to write, it looks like we first have to establish which node is the leader and then connect to that one. Great tip about using CALL dbms.cluster.overview(); to figure it out.
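If you do end up locating the leader by hand through the tunnels, a slightly quicker check than reading the full overview is to ask each instance for its own role. Assuming Neo4j 3.x, connect with plain bolt:// through each tunnel in turn and run:

    CALL dbms.cluster.role();

    // Returns the role of the instance you are connected to:
    // "LEADER", "FOLLOWER" or "READ_REPLICA". Send writes to the
    // instance that answers "LEADER".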

Best regards,

Øyvind Matheson Wergeland