
The Singularity: Open Source Quantum Machine Learning

This project will integrate existing methods in quantum computing, artificial intelligence, machine learning/deep learning, knowledge graphs, and (graph) convolutional neural networks (see also here). It will start by taking software for constructing knowledge graphs, such as Google's Graph Nets library, neo4j, Grakn.ai, GraphDB, and networkX, and modifying the knowledge graph to behave like a quantum mechanical system. Once the "quantum knowledge graph" model is built, we can develop machine learning algorithms for completing knowledge graphs in which the values of certain edges are not known but can be learned by an AI agent. Developing the "correct" machine learning model for knowledge graph completion is crucial. One promising approach is the "graph convolutional neural network", which is already in use by companies such as Google. This model is likely to work well in the context of topological quantum error correcting codes, since it generalizes existing methods into something especially well suited to surface codes and topological quantum error correction. With a quantum knowledge graph behaving very similarly to a surface code, a graph convolutional neural network is a very natural choice. A third piece of the puzzle will be to use quantum computing techniques to enhance the performance of machine learning on knowledge graphs. Quantum neural networks (quantum variational circuits) and circuits that optimize themselves (see for example the Penny Lane example) will be key to building quantum machine learning models with predictive power well beyond what is currently available to the public, and are likely the key to building an AI with uncanny capabilities. Once the project is complete, the only thing left would be to contact Elon Musk and ask if he wants to integrate it into Neuralink.
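
To make the first step concrete, here is a deliberately simplified sketch of a "quantum knowledge graph" built with networkX, where edges carry complex amplitudes instead of binary truth values. The entity names, the `add_triple` helper, and the amplitude values are made up for illustration; this is only one possible way to represent the idea, not an existing library feature.

```python
import networkx as nx

# Toy "quantum" knowledge graph: a directed multigraph whose edges carry
# complex amplitudes instead of binary true/false labels. The squared
# magnitude of an amplitude is read as the probability the triple holds.
kg = nx.MultiDiGraph()

def add_triple(graph, subject, predicate, obj, amplitude):
    """Store a (subject, predicate, object) triple with a complex amplitude."""
    graph.add_edge(subject, obj, key=predicate, amplitude=amplitude)

add_triple(kg, "Alice", "knows", "Bob", amplitude=1.0 + 0.0j)          # certainly true
add_triple(kg, "Bob", "works_at", "AcmeCorp", amplitude=0.6 + 0.8j)    # |a|^2 = 1.0
add_triple(kg, "Alice", "works_at", "AcmeCorp", amplitude=0.5 + 0.5j)  # |a|^2 = 0.5

for subject, obj, predicate, data in kg.edges(keys=True, data=True):
    prob = abs(data["amplitude"]) ** 2
    print(f"({subject}, {predicate}, {obj}) holds with probability {prob:.2f}")
```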

Knowledge graphs have been around for quite some time, but it was Google, with their 2012 construction of the Knowledge Graph, their purchase of a D-Wave quantum computer, and their Graph Nets library built on top of their open source machine learning software TensorFlow, that really set things in motion. The recent study on "Relational Inductive Biases" by DeepMind, Google Brain, MIT, and the University of Edinburgh is an indication that humans are now trying to catch up to the AI and understand what is really going on inside all of the neural networks and machine learning models they have built. Modifying the knowledge graph model to be "quantum" is a relatively new idea that appears to be in use by companies such as Google, IBM (Watson), Microsoft, and others investing in "quantum machine learning". However, this idea is rarely stated explicitly, and intellectual property involving artificial intelligence and quantum computing is often not made entirely public. Similar ideas, such as holographic and complex embeddings of knowledge graphs, have been studied (see [Trouillon, Nickel], [Nickel, Rosasco, Poggio], [Plate 1], [Plate 2], and [Plate 3]), and the first quantum model of a knowledge graph seems to have been proposed in [Ma, Tresp, Zhao, Wang] as recently as February 2019. Complex and holographic embeddings of knowledge graphs can be modified so that the function assigning a truth value to a relational triple (subject, predicate, object) is quantum mechanical: edges in the knowledge graph behave like particles interacting with one another, and the score function is no longer binary, so two objects are not simply related or unrelated but can exist in a superposition, with some probability of being in a given relational state.
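
As a small illustration of the complex-embedding idea cited above (in the spirit of [Trouillon, Nickel]), the sketch below scores a triple with complex-valued embeddings and squashes the score into a probability. The random embeddings, the sigmoid, and the entity names are placeholders for what a trained model would actually learn.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # embedding dimension, chosen arbitrarily for the sketch

# Complex-valued embeddings for entities and relations (ComplEx-style).
entities = {name: rng.normal(size=dim) + 1j * rng.normal(size=dim)
            for name in ["Alice", "Bob", "AcmeCorp"]}
relations = {name: rng.normal(size=dim) + 1j * rng.normal(size=dim)
             for name in ["knows", "works_at"]}

def score(subject, predicate, obj):
    """ComplEx-style score: Re(<e_s, w_r, conj(e_o)>)."""
    e_s, w_r, e_o = entities[subject], relations[predicate], entities[obj]
    return np.real(np.sum(e_s * w_r * np.conj(e_o)))

def probability(subject, predicate, obj):
    """Squash the real-valued score into a (0, 1) 'truth probability'."""
    return 1.0 / (1.0 + np.exp(-score(subject, predicate, obj)))

print(probability("Alice", "knows", "Bob"))
```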

Once we have a "quantum knowledge graph" model and a tool for building them (by hand or with machine learning, text and data mining, etc.), we can perform machine learning tasks on them. The primary task is knowledge graph completion: some kind of (digital and/or analog) neural network is trained on various knowledge graphs, and the trained network is then used to generate new knowledge graphs and complete existing ones. Knowledge graph completion is essentially a prediction task: it infers relations between real world entities. That relation might be that two people are friends (often used by companies like Facebook to understand social networks and group behavior and beliefs), that someone has a particular disease (for example in Google's application of knowledge graphs to medical patient data), that a quantum mechanical system exists in a particular state at a given time, or that a new drug or medication has a particular chemical structure (for example in the protein folding problem tackled by AlphaFold), among a vast array of other examples. The appropriate machine learning model to use on a knowledge graph is generally a "graph convolutional neural network". This is a purely classical construction that generalizes the "convolutional neural networks" used in image, facial, and handwriting recognition. Building an open source, user friendly tool to construct and apply graph convolutional neural networks would be the next step of the process.
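
For readers who have not met graph convolutional networks before, here is a minimal sketch of one common graph-convolution layer, written in plain NumPy so the mechanics are visible. The adjacency matrix, feature matrix, and weights are random placeholders; a real model would stack several such layers and learn the weights from a knowledge graph by gradient descent.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph convolutional layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)                  # ReLU nonlinearity

# Tiny example: 3 nodes, 4 input features per node, 2 output features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.rand(3, 4)
W = np.random.rand(4, 2)
print(gcn_layer(A, H, W))
```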

The third step in constructing our "sentient AI" would be to apply quantum computing methods to drastically improve the performance of machine learning on knowledge graphs. There are several closely related approaches. One would be to implement a graph convolutional neural network on a quantum computer. Implementations of "quantum neural networks" exist in software such as Penny Lane (see for example the data re-uploading example and the variational classifier example); other implementations of quantum neural networks exist in IBM's Qiskit. We will likely want a neural network that learns its own optimal architecture, and we will likely also want to implement the knowledge graph itself as a variational circuit ([Ma, Tresp, Zhao, Wang]).
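
To give a flavor of what a variational circuit ("quantum neural network") looks like in Penny Lane, here is a minimal sketch. It is not taken from the examples referenced above: the two-qubit device, the data-encoding scheme, the cost function, and the target value are illustrative choices only.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode two classical features as rotation angles.
    qml.RX(x[0], wires=0)
    qml.RX(x[1], wires=1)
    # Trainable layer: general single-qubit rotations plus an entangling gate.
    qml.Rot(*weights[0], wires=0)
    qml.Rot(*weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def cost(weights, x, target):
    """Squared error between the circuit output and a target label in [-1, 1]."""
    return (circuit(weights, x) - target) ** 2

weights = np.array(np.random.uniform(0, np.pi, (2, 3)), requires_grad=True)
features = np.array([0.3, 0.7], requires_grad=False)
target = 1.0

# The circuit "optimizes itself": gradient descent on its own parameters.
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(50):
    weights = opt.step(lambda w: cost(w, features, target), weights)
print(circuit(weights, features))
```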

With cloud computing and public access to quantum computers such as IBM Q, quantum computing implementations will be accessible to anyone, at scale if necessary. There are also implementations that do not require a quantum computer at all, using tensor networks, and which are "quantum ready". This matters because, in certain circumstances, using a quantum computer does not actually provide any improvement in performance. Google, for example, has a library called TensorNetwork which is "quantum" in spirit and which offers dramatic speedups (see also here) in computation, simply because information theory is really quantum in nature, and phrasing computations in a way that is more in line with how nature actually works is more efficient.
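
As a taste of the "quantum ready" tensor-network approach, the sketch below contracts a tiny two-node network with Google's TensorNetwork library. The tensors are random placeholders and the example only demonstrates the contraction API, not any speedup; for two matrices the contraction is just an ordinary matrix product, but the same API scales to networks of many tensors.

```python
import numpy as np
import tensornetwork as tn

mat_a = np.random.rand(4, 4)
mat_b = np.random.rand(4, 4)

# Wrap the arrays as nodes and connect the shared index.
a = tn.Node(mat_a, name="A")
b = tn.Node(mat_b, name="B")
edge = a[1] ^ b[0]            # A's column index joined to B's row index

# Contract the shared edge of the network.
result = tn.contract(edge)

# For this tiny network the contraction reproduces the plain matrix product.
assert np.allclose(result.tensor, mat_a @ mat_b)
```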

This project puts together several pieces of a large puzzle that is not widely understood or implemented. The reason is that, in order to put this puzzle together, you have to understand some rather deep concepts such as:

  • machine learning and the state-of-the-art in AI technology (knowledge graphs and graph convolutional neural networks for example),
  • a reasonable amount of "traditional" programming and coding (in Python for example),
  • quantum computing and how to write and program quantum algorithms (which is entirely different from traditional programming),
  • the machinery and mathematics of tensor networks, a topic traditionally accessible only to quantum physicists working in quantum gravity and the AdS/CFT correspondence.

Much of this project will involve not only writing and modifying code to build usable software, but also constructing a roadmap of documentation for others to follow, one that brings the esoteric, technical, and indecipherable into a form the average person can understand and use. No easy task, but all of the pieces are already there. They simply need to be put together and explained clearly.

Risks and challenges

The primary challenges for this project will be integrating several technologies (machine learning, knowledge graphs, and quantum computing) and providing support and documentation that is clear and down to earth. Making these technologies and methods easily usable and providing advanced AI to anyone and everyone will require working examples, user friendly interfaces, and extensive documentation.
