ryan_boyd
Neo4j

[First of these livestreams is TOMORROW, see below]

Hi there,

We just announced a few new Online Meetups. If you're not familiar with Online Meetups, they're livestream events with members of the Neo4j Community. Most online meetups are hosted by @mark.needham or @neo4j_devrel.

To stay up to date with all events, please Join the Neo4j Online Meetup group.

We hope you enjoy them, and please let us know if you have suggestions for future presentations!

Cheers,
-Ryan

Football (Soccer) exploration with Neo4j and RLang
Tomorrow @ 8:30am SF, 16:30 London
@chucheria Bea Hernandez, Data Scientist at Olympic Channel and co-organiser of R-Ladies Madrid.

Bea will show how she used the new Neo4j R-driver to analyze home advantage and competitiveness in football.

Event-driven Graph Analytics using Neo4j and Apache Kafka
Next Thursday, 30 May @ 8:30am SF, 16:30 London
@lju, Engineer at Neo4j

We commonly want to derive insight from analytical processing of our operational data. For example, we may want to leverage the connectedness of customers to products and their networks to identify recommendation opportunities. However, running analytical workloads on an operational database is seldom a good idea, so there are usually separate databases for each task. We may also want to stream insights as and when they become available.

This in itself brings new challenges: how do we keep the data in both database instances in sync? And how do we stream results into our transactional database as and when our analysis generates them?

In this talk we will describe a scenario in which graph databases, deployed as a cluster with read replicas, serve both the operational workload and the analytical work. We will show how this architectural pattern can be combined with Kafka to stream analytical results back to the operational databases as soon as they are available, while keeping all of the databases up to date with the same data. The example uses the newly released Apache Kafka plugin for Neo4j.
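
If you'd like a feel for the pattern before the session, here is a minimal Python sketch of the "stream analytical results back" half of the architecture. The talk itself uses the Neo4j Kafka plugin for this; in the sketch the consumer is hand-rolled, and the topic name, payload fields, credentials and the RECOMMENDED relationship are all illustrative assumptions, not the actual setup from the talk.

```python
# Sketch only: consume analytical results from Kafka and write them back into the
# operational Neo4j instance. The topic name, payload fields, credentials and the
# data model are assumptions for illustration.
import json

from kafka import KafkaConsumer   # pip install kafka-python
from neo4j import GraphDatabase   # pip install neo4j

# Operational Neo4j instance (placeholder credentials).
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Analytical results arrive on a Kafka topic as JSON messages,
# e.g. {"customer": "c1", "product": "p7", "score": 0.83}.
consumer = KafkaConsumer(
    "recommendations",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    result = message.value
    with driver.session() as session:
        # Persist each streamed result as a RECOMMENDED relationship.
        session.run(
            """
            MATCH (c:Customer {id: $customer})
            MATCH (p:Product {id: $product})
            MERGE (c)-[r:RECOMMENDED]->(p)
            SET r.score = $score
            """,
            customer=result["customer"],
            product=result["product"],
            score=result["score"],
        )
```

In the plugin-based setup the talk describes, this sink logic is driven by the plugin's configuration rather than hand-written consumers; the sketch is just to show the shape of the data flow.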

Getting Started with Provenance and Neo4j
Thursday, 6 June @ 8:30am SF, 16:30 London
Stefan Bieliauskas, Software Engineer at casecheck GmbH

We all want to know "Where does our meat come from?" or "Is this reliable information, or fake news?". Questions like these are always about the provenance of information or physical objects.

In this session we'll learn how to use Neo4j to store and query provenance data.
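
For a taste of what a provenance graph can look like, here is a minimal Python/Cypher sketch loosely modelled on the W3C PROV idea of entities, activities and agents. The labels, relationship types, IDs and credentials are assumptions for illustration, not the model Stefan will present.

```python
# Sketch: store and query a tiny provenance chain in Neo4j.
# Labels, relationship types and credentials are illustrative assumptions.
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # A report was generated by an analysis, which was carried out by a person.
    session.run(
        """
        MERGE (report:Entity {id: 'report-42'})
        MERGE (analysis:Activity {id: 'analysis-2019-06-06'})
        MERGE (author:Agent {name: 'Stefan'})
        MERGE (report)-[:WAS_GENERATED_BY]->(analysis)
        MERGE (analysis)-[:WAS_ASSOCIATED_WITH]->(author)
        """
    )

    # Ask where the report came from by walking the provenance chain.
    result = session.run(
        """
        MATCH path = (:Entity {id: 'report-42'})
                     -[:WAS_GENERATED_BY|WAS_ASSOCIATED_WITH*]->(n)
        RETURN [x IN nodes(path) | coalesce(x.id, x.name)] AS chain
        """
    )
    for record in result:
        print(record["chain"])
```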
