
International Salmon Data Laboratory

The International Year of the Salmon involves a dozen countries with one overarching goal: salmon that are resilient to climate change, that survive surprise. Salmon uniquely integrate terrestrial and marine effects of global warming, so they are the canary in the coal mine.
To meet this challenge, salmon biologists need new tools for data assembly, analysis, and visualization; tools that are powerful but safe, effective but easy to use, and rewarding. That's our challenge. This project intends to:
(a) develop tools and workflows for radically better information flow and knowledge management—new best practices;
(b) mobilize and analyse data, so we actually know what we should know from the data and metadata we have collected;
(c) deliver irresistible examples, easy and safe to use, of workflows from field data to decision-support products. Without effectively communicating the knowledge obtained, what's the point?
(d) help salmon biologists leapfrog to 2022 technology by 2022.

We will do this by leveraging the power of Neo4j and new tools built on Neo4j. Your skills, and the tools you are helping to develop, can have impacts throughout the environmental sciences. "The revolution begins with salmon" -- because focus allows excellence, and excellence leads to adoption -- but will quickly spread via cogent and rewarding examples. Biologists have begun to share examples of datasets for this work. About 20 people at GraphConnect offered expert assistance. Neo4j the corporation is behind this (thank you, Jeff Morris!). We are connecting right here, right now.
On your mark, get set, Bang!

"... People don't know what they want until you show it to them." - Steve Jobs, 1997.

41 REPLIES

Woo hoo! Discovered that Jeff Altman (graph fan, Denver, CO) has PepperSlice, a data-to-decisions workflow tool. That addresses a big chunk of the underlying problem broached by ISDL: evolving best appropriate practices.
There you go: immediate value from this collaboration portal.

That makes us soooo happy!

Hey Scott - Thanks for the shout out. Sign up for a free PepperSlice account at https://pepperslice.com/

ISDL Architecture
@tom.geudens asked me the following questions (I abbreviated) ... fortunately, he explained that "I have no clue" was an acceptable answer. Phew!
(answers revised 2018-09-28)

(1) size and growth prediction for the data?
The largest datasets are tables of about 10,000 rows of 10 variables. Size is not the problem. We will have at most 20 of these in 2018, perhaps ~200 by 2022. Some are smaller but have more variables and need to be linked to preceding data. The problems are assembly (access), standardizing (an ontology is required), cleaning (synonyms, classification of practice, etc.), and finally integrating. I will post links to data as it becomes available (tumbling in as I write).
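As a concrete illustration of the cleaning step (synonyms), here is a minimal Cypher sketch, assuming the APOC plugin is installed and using made-up place names:

// Two records for the same spawning site under different names (names invented).
MATCH (a:Place {name: "Ft. Babine"}), (b:Place {name: "Fort Babine"})
// Merge the duplicates into one node, keeping the first node's property values.
CALL apoc.refactor.mergeNodes([a, b], {properties: "discard"}) YIELD node
RETURN node;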

(2) Will ISDL be public facing? If so, in what way: application, Neo4j Browser connection, something else?
Yes. Many people need to see the examples. Maybe webpages via Structr, or something simpler to start. It should be possible for people to look closely (Bloom, Browser) without being able to break anything.

(3) Will some persons have different access to the database (e.g., Bloom)? Is that a separate design requirement?
Inevitably. Developers have more permissions than known (logged-in) users, who have more permissions than the public.
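That tiering maps naturally onto Neo4j's built-in roles. A minimal sketch, assuming Neo4j Enterprise (3.x-era security procedures) and placeholder usernames/passwords:

// Public visitors: read-only.
CALL dbms.security.createUser('visitor', 'changeme', true);
CALL dbms.security.addRoleToUser('reader', 'visitor');
// Known users (login): may write data but not change the schema.
CALL dbms.security.createUser('biologist', 'changeme', true);
CALL dbms.security.addRoleToUser('publisher', 'biologist');
// Developers: full administrative rights.
CALL dbms.security.createUser('dev', 'changeme', true);
CALL dbms.security.addRoleToUser('admin', 'dev');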

(4) How do you see security?
Lax, actually. Users (login/password) will generally be previously known, and we simply boot malfeasants. The data is of value to a very limited community. Providers can hold copyright (attribution is critical).
I want people to steal the examples! To re-apply them they will, of course, need software licenses. I think 200 users would be a howling success; 2,000 would mean we fell through a wormhole.

(5) Query load ?
Scant, certainly to start. In the happy event that I am wrong, use will ramp up slowly and we will react.

(6) How do you envision uploading of data? Small transactional size datapoints, gigantic bulk sets of datapoints, or in between?
The data will arrive as .csv tables. ISDL is about tools. Other, bewildering, people want to maintain a jumble of jumbo datasets.
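For .csv arrivals, Cypher's LOAD CSV is the obvious intake path. A minimal sketch; the column names are illustrative assumptions, not the real NuSEDS headers:

// Load an escapement-style table into places, species, and observations.
LOAD CSV WITH HEADERS FROM 'file:///nuseds.csv' AS row
MERGE (p:Place {name: row.WATERBODY})
MERGE (s:Species {name: row.SPECIES})
CREATE (o:Observation {year: toInteger(row.YEAR), count: toInteger(row.MAX_ESTIMATE)})
CREATE (o)-[:AT]->(p)
CREATE (o)-[:OF]->(s);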

(7) Should data contributions be verified first?
Yes, but no. I cannot imagine a user stuffing illicit or broken data into an existing knowledge graph as a covert act. Sweet, naive child of summer that I am. However, this alerts me to the need for backups.

(8) How big will the daily updates/contributions be?
I am so enjoying this answer. Frequency: 0.1/day; Update size: microscopic; Contribution size: typically < 1M.

Small can be nasty. Shrews, OK? Try picking one up, you'll see. We don't need to wrestle Grizzlies to develop tools, workflows, and products.

Within hours of saying the largest tables had 10,000 rows, I posted a dataset with nearly 400 thousand rows. Sigh.

Offerings of Data #1
Q. What does your data look like? A. Tremendously diverse. I will post examples as they are flagged as available or passed to me for use within the ISDL project. If the data is proprietary (acknowledgement required for all products and uses), it will be released after suitable declarations of honour and goodwill.
Oops! "new users can only put 5 links in a post" ... I have to break this up.

  1. New Salmon Escapement Database System
    NuSEDS is the Pacific Region’s centralized Oracle database that holds adult salmon escapement data. About 10,000 salmon spawning sites in DFO Pacific Region have been observed 0 to 10 times per year for 6 species for nearly 100 years (poorly before 1948). This is a rich, valuable, complicated, and largely unexplored dataset. Extensive metadata are required and provided. Data is aggregated within year; the raw data is largely on paper. Caveat analysta.
    http://www.pac.dfo-mpo.gc.ca/od-ds/science/sed-des/NUSEDS_20180416.zip

format: .csv, 135.6 Mbytes (24.4 zipped). 392,790 rows of 64 variables (sparse).
metadata: https://open.canada.ca/data/en/dataset/c48669a3-045b-400d-b730-48aafe8c5ee6
source: Canadian Open Government Portal https://open.canada.ca
contact: Bruce Baxter Fisheries and Oceans Canada, Stock Assessment Division, Pacific Biological Station, 3190 Hammond Bay Road, Nanaimo, BC, V9R 5K6, Canada. bruce.baxter@dfo-mpo.gc.ca

Offerings of Data #2

  2. Pacific Region Commercial Salmon Fishery In-season Catch Estimates.
    The spatial pattern of fishing (8 areas), by species (6), gear (3), and year (2004-2017).
    https://www.pac.dfo-mpo.gc.ca/od-ds/science/species-especes/salmon-saumon/ise-ecs.zip

format: .csv in 3 files (by gear).
metadata: https://open.canada.ca/data/en/dataset/7ac5fe02-308d-4fff-b805-80194f8ddeb4
source: Canadian Open Government Portal https://open.canada.ca
contact: Bruce A. Patten, Head, Fishery and Assessment Data Section, Science Branch, Fisheries and Oceans Canada, Government of Canada. Bruce.Patten@dfo-mpo.gc.ca

Offerings of Data #3
3. North Pacific Salmon Abundance and Biomass Data
From Dr. Jim Irvine, Pacific Biological Station: "Scott, thanks for distributing these data. ... these data are relatively simple, on the other hand, if you ever wondered how many salmon there are in the Pacific Ocean, and how this has changed since 1925, this is where to find out. There are lots of tables but the graphs in our paper will give you a better feel for the data." PAPER.
"I think the [ISDL] folks would be interested in seeing differences between Alaska and the southern states in terms of salmon numbers and biomass. The various time series, if spatially linked to maps, would be relatively easy to show. The NPAFC data [see following post] have the advantage of including catches for all salmon species by area. The Ruggerone and Irvine data are only 3 species but have the advantage of separating hatchery from wild fish as well as numbers vs adult biomass vs adult plus immature biomass, by area."
This paper was featured in a magazine article with Tableau figures that allow readers to play with the data.

LINK TO DATA Please acknowledge the scientists in any use of these data.
format: .xlsx, 21 tables (multiple per sheet), 64 rows of 15 variables.
metadata: extensive, as sheets in this .xlsx file.
source: reviewed science publication, see link above
contacts: Dr. Gregory T. Ruggerone, Natural Resources Consultants Inc., Suite 404, 4039 21st Avenue West, Seattle, WA, 98199, USA. GRuggerone@nrccorp.com
Dr. James R. Irvine, Fisheries and Oceans Canada, Pacific Biological Station, 3190 Hammond Bay Road, Nanaimo, BC, V9T 6N7, Canada. James.Irvine@dfo-mpo.gc.ca

Great to see Dr. Jim Irvine join this discussion. Jim is a distinguished research scientist at the Pacific Biological Station, Fisheries and Oceans Canada. Please reference his paper, the source of Offerings of Data #3, https://doi.org/10.1002/mcf2.10023, as:
Ruggerone, G.T., and J.R. Irvine. 2018. Numbers and Biomass of Natural- and Hatchery-Origin Pink Salmon, Chum Salmon, and Sockeye Salmon in the North Pacific Ocean, 1925–2015. Marine and Coastal Fisheries: Dynamics, Management, and Ecosystem Science 10:152-168. DOI: 10.1002/mcf2.10023

Offerings of Data #4
4. NPAFC Statistics: Pacific Salmonid Catch and Hatchery Release Data
The North Pacific Anadromous Fish Commission (US, RU, JP, SK, CA) catch and hatchery release data are publicly available.
https://npafc.org/statistics/
format: .xlsx pivot tables (note hidden rows)
metadata: See that web page.
source: North Pacific Anadromous Fish Commission npafc.org
contact: Dr. James R. Irvine, Fisheries and Oceans Canada, Pacific Biological Station, 3190 Hammond Bay Road, Nanaimo, BC, V9T 6N7, Canada. James.Irvine@dfo-mpo.gc.ca

Great to see this discussion. We salmon folk need data experts to help us understand and interpret salmon data. These datasets are currently trivial in terms of size, but that is partly because of all the metadata that are not included. How can all this important ancillary information be included? That is where software platforms like Neo4j have promise.

Offerings of Data #5
Returns and Spawners for Sockeye, Pink, and Chum Salmon from British Columbia
We assembled productivity (recruits per spawner) estimates for BC sockeye, pink, and chum salmon. Annual estimates by brood year of spawner numbers, catch, and population and age composition are in a simple database. Time series were organized by species, Conservation Unit, Pacific Fisheries Management Area, or aggregates of these. Three categories of data quality, unique to each data type, were determined and used to rate annual recruit-per-spawner data annually and across all years for each time series. Our exploration of temporal changes in both field methods and data quality will assist analysts to interpret the reliability of the data and their results.

https://open.canada.ca/data/en/dataset/3d659575-4125-44b4-8d8f-c050d6624758
format: .csv
metadata: In the report.
source: Fisheries and Oceans Canada
citation: Ogden, A.D., Irvine, J.R., English, K.K., Grant, S., Hyatt, K.D., Godbout, L., and Holt, C.A. 2015. Productivity (Recruits-per-Spawner) data for Sockeye, Pink, and Chum Salmon from British Columbia. Can. Tech. Rep. Fish. Aquat. Sci. 3130: vi + 57 p.
ftp://ftp.meds-sdmm.dfo-mpo.gc.ca/pub/openData/Recruits_Spawner/Canadian_Technical_Report_3130.pdf
contact: Dr. James R. Irvine, Fisheries and Oceans Canada, Pacific Biological Station, 3190 Hammond Bay Road, Nanaimo, BC, V9T 6N7, Canada. James.Irvine@dfo-mpo.gc.ca

js1

To begin leveraging Neo4j for analyzing the salmon datasets, a core data schema will have to be established. Graph Commons provides a tool for collaborating on the schema design. Using the Darwin Core model to drive the initial design would be a good starting point. Once the core schema is generated, creating publishing pipelines into the schema from disparate datasets would be the next natural course of action. The Neo4j schema is forgiving and extensible, so the data model can evolve with deeper insights into the available datasets.
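To make that concrete, here is a minimal Cypher sketch of a Darwin Core-flavoured core schema; the labels follow Darwin Core classes, but the property names and relationship types are illustrative assumptions, not a settled design:

// Keep taxa unique (Neo4j 3.x constraint syntax).
CREATE CONSTRAINT ON (t:Taxon) ASSERT t.scientificName IS UNIQUE;

// One occurrence tied to a taxon, a location, and a sampling event.
CREATE (t:Taxon {scientificName: "Oncorhynchus nerka", vernacularName: "sockeye salmon"})
CREATE (l:Location {waterBody: "Fraser River", country: "Canada"})
CREATE (e:Event {year: 2017, samplingProtocol: "visual escapement survey"})
CREATE (o:Occurrence {individualCount: 125000})
CREATE (o)-[:OF_TAXON]->(t)
CREATE (o)-[:AT_LOCATION]->(l)
CREATE (o)-[:DURING_EVENT]->(e);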

js1

I've created a schema graph at graphcommons.com. If you search for "Salmon Data Laboratory", it will pop up the canvas as a "Work in Progress". It is a public graph. Although the tool is for visualizing real data, it can be used to visualize a data schema model.

There may be better public tools out there, but this seems to have all the features needed for collaborating on a public graph.

GraphCommons
A step forward. Thanks @js1
Alas, not visible today.
I will try to find collaborators re the schema and the objects (lists of named fields) attached to nodes and links.
Maybe we should split the problem(s) into small and specific examples.

Example #1
Biologists and DB managers at the Pacific Biological Station are thinking about expanding a small fraction of the NuSEDS dataset (Offerings #1) by adding the raw sampling data behind each annual estimate. Via graph.

ISDL Documents
Our thinking, planning, and projects should be an ongoing, shared write-up in addition to this discussion. I started that HERE. Anyone with the link can add and edit. I don't like "shoulds" but, please, that would be helpful.

Example #2
What do you think of converting the Darwin Core to a graph? As a resource to:

  • standardize data and practices
  • tag nodes and links
  • link nodes by concepts

js1

I will start defining the Darwin Core Simple Model in the Salmon Data Laboratory graph and attach edges to show how the data may be related. I'll then ingest some of the attached data into a real graph and share that out to show how the data can fit into the Darwin model. I may get some of the biological terms wrong to start, but it'll be a good learning experience and nothing that can't be remedied with a simple update. Up to this point, salmon for me was something on a menu, not something with Latin and Greek names.

js1

Darwin Core classes have been added to the Graph Commons canvas. First stab at relating the different Darwin Core classes via edges.

Thanks! Our first graph. See figure below. More to this than meets the eye: extensive properties for each "resource" (type of node). That was a lot of work.
"Come Watson! The game is afoot."

Now I/we have to figure out how to create a lot of instances of each resource, so:

  1. Created Google Sheets corresponding to this schema, HERE.
  2. Wrote an R script to create (empty) data.frames for this schema, HERE.

Darwin Core is for museum catalogues, not for organizing salmon datasets, practices, analyses, workflows, data products, etc., nor the surrounding people, projects, and so on. But:

  1. we do not need to specify every field in each resource;
  2. we must add new resources.
    Obviously: Person, Activity, Organization.
    Less so: WorkFlow, Model, DataSet, Practice (?)
  3. it would help future users if we predefine layers (ontologies, networks) such as Place-[In]-Place, Organization-[In]-Organization, Practice-[In]-Practice.

I conclude we are going to have to build our own Salmon Ontology. It will not be small.
This needs to be a group grope, but turning into a standards committee is a trap. We need "good enough to proceed." The Nazi Librarian ("no data for you!") will have her/his/its day.
Another Google Sheet for that. Stand by!

International Year of Salmon (IYS) Workshop on Salmon Status and Trends
2019-01-21/22, Vancouver BC Canada

Problem:
In recent decades, the productivity of salmon has become increasingly uncertain with many populations experiencing extremes related to timing, abundance and size with serious social, economic and conservation impacts. Much has been learned in previous workshops but a lack of consistency in approaches to categorize biological status and trends, terminology to indicate status, requirements and standards for different types of data, spatial and temporal scales for comparison and aggregation, and ways of communicating findings significantly impedes the timeliness, efficiency and effectiveness of scientific investigations. Our data systems do not match our technological capacity and social/scientific inquiry needs. Many agencies have a commitment to open data but are challenged to achieve it given the significant costs associated with bringing historic data online. Increasing variability in a rapidly changing environment demands rapid access to integrated data for comparative and mechanistic studies of the distribution and productivity of salmon across life history stages and associated eco-regions.

Solution: (extracts thereof)
The primary goal of this workshop will be to identify a series of legacy datasets and standards associated with major categories of data. These datasets will be the focus of subsequent analytical workshops; additional workshops may concentrate on communication of scientific results. Ideally these separate but linked workshops will [cycle and grow]* over the course of IYS.
* but not yet funded.

Complete announcement HERE.

A search and rescue team for salmon data
https://www.nceas.ucsb.edu/news/a-search-and-rescue-team-for-salmon-data
I wrote to Matt Jones, a database guru at the National Center for Ecological Analysis and Synthesis (NCEAS), introducing ISDL and asking for advice, specifically about compliance with Darwin Core protocols and ontology as we build knowledge graphs and analyses to share across the environmental sector. ISDL has much to learn from the NCEAS project Southern Alaska Salmon and People (SASAP).

js1

Part of the challenge will be defining the standardization of the data. Darwin Core is a good starting point, but as I have been working through the transformation of the attached data, even the simple datasets feel like fitting a square peg into a round hole. We still want to preserve the "natural" way of thinking about the data that Neo4j and a graph model encourage, to exercise their true value. What that will probably mean is that we will have to define/extend a core model to fit more naturally into the domain of analytics. Working through the NuSEDS data with its metadata really pushes the Darwin Core mapping to its limits.

I will share the data graphs on Graph Commons to show what the graph containing the data looks like in the next day or two. Working through the NuSEDS data may take longer.

We should start thinking about working inside Neo4j itself soon and load data into a fully queryable instance. We can then query through the Neo4j Desktop.
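Once the data is in a live instance, questions become short queries. A hypothetical example against the Darwin Core-flavoured sketch earlier in this thread (same assumed labels and properties):

// Total counted fish by species and year in one river system.
MATCH (o:Occurrence)-[:OF_TAXON]->(t:Taxon),
      (o)-[:AT_LOCATION]->(:Location {waterBody: "Fraser River"}),
      (o)-[:DURING_EVENT]->(e:Event)
RETURN t.vernacularName AS species, e.year AS year, sum(o.individualCount) AS total
ORDER BY year, species;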

js1

NPAFC catch data. You can search for "NPAFC" to get a list of all the available public graphs.

js1

The beauty of graphs. This is the hatchery release data for the US and Canada: NPAFC Hatchery Statistics Species Data - Canada, United States, graph 2b1841ac-73d1-4603-9492-2f281f375171.

Be careful what you ask for
Bewilderingly fast development by unseen people with strange tools in unfamiliar territory. Seriously, who would ask for that? Me. Now I am pressing frantically on this button marked “Genius on Demand” but nothing is coming out of the machine (translation: struggling to keep up).

At this point, we need to define the components of a salmon graph: a set of ~15 "resources," with subtypes. Things like:
(:Organization {label:"NMFS", name:"National Marine Fisheries Service", type:"federal government"})
Whether, and how, these nodes and the fields within nodes are compliant with Darwin Core is to be determined. A specific ISDL example would probably not involve all resources.

Knowledge: knowing how things are connected. Thus, a constrained set of links between these nodes. We do not need n(n+1)/2 types of edge (node-link-node) for n node types, nor links based on types within nodes. Some links are simple and widely used:
(:Place)-[:In]-(:Place), (:Organization)-[:In]-(:Organization).
Some are not:
(:Person)-[:HasOrganization {label:"employed by", job_description:"research biologist"}]-(:Organization)
We are likely to see a large number of optional fields, with defaults, in all nodes and links, in addition to the IDs and date-stamps created by Neo4j.
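Stitched together, those patterns are already runnable Cypher. A minimal sketch; the person and the property values are invented, though NMFS really does sit inside NOAA:

CREATE (noaa:Organization {label:"NOAA", name:"National Oceanic and Atmospheric Administration", type:"federal government"})
CREATE (nmfs:Organization {label:"NMFS", name:"National Marine Fisheries Service", type:"federal government"})
CREATE (nmfs)-[:In]->(noaa)
// Hypothetical person to illustrate the Person-Organization link:
CREATE (p:Person {name:"Jane Doe"})
CREATE (p)-[:HasOrganization {label:"employed by", job_description:"research biologist"}]->(nmfs);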

Neo4j Sandbox - Needed Immediately
ISDL is happening. We have salmon datasets recast as graphs. A database schema is emerging. We are moving toward pipelines/workflows and analyses. ISDL needs a server and stack for Neo4j.

And now a word from our sponsors!

js1

There is some affordance for this in Darwin Core with the RecordLevel class, but this class holds foreign keys or IDs for contributors, etc. It can be expanded with relationships that point to more detailed information about the contributor or the organization. I'll add those relationships and nodes to the schema.
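Something like this sketch, where the RecordLevel identifier is an assumption and the contributor details come from Offerings of Data #1:

// Expand a RecordLevel foreign key into explicit contributor nodes.
MERGE (r:RecordLevel {datasetID: "nuseds-2018"})   // assumed identifier
MERGE (p:Person {name: "Bruce Baxter"})
MERGE (org:Organization {name: "Fisheries and Oceans Canada"})
MERGE (r)-[:CONTRIBUTED_BY]->(p)
MERGE (p)-[:HasOrganization]->(org);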

js1

In terms of the Neo4j sandbox: I believe the default sandbox expires after 3 days. Will we have access to a sandbox with more longevity for the data?

js1

Inspired by Mike Bostock's population pyramid, global salmon hatchery release data plotted as an interactive pyramid. https://isdl-220916.appspot.com/release

2015 paper: Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Bio... "The future, as data will continue to grow in size and complexity, lies in online analysis and storage for collaborative work, with methods that allow high interactivity with data for analysis and visualization." They see graph databases as the way out of the box. The real message is how quickly the technology has moved.

Video from my ISDL talk at GraphConnect.
Abstract and slides.
Announcement: first IYS Salmon Data Workshop, Vancouver BC, 2019-01-23/24
Ecologists to contribute datasets and say what they need, technologists then figure out how.

Fretting about workflows that bounce between R and Neo4j.
Excited by Max De Marzi's talk at GraphConnect: decision trees and workflows WITHIN Neo4j. Big step forward, thanks!
"The code is the data" -- oh yeah, now we're getting somewhere.
Find the rest of the story at https://maxdemarzi.com/tag/decision-tree/
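The core idea: the decision tree itself lives in the database as nodes and relationships, so "the code is the data." A toy sketch (not Max De Marzi's actual schema; the rules are invented for illustration):

// A two-level decision tree stored as a graph.
CREATE (root:Rule {question: "escapement below benchmark?"})
CREATE (r2:Rule {question: "trend declining?"})
CREATE (open:Answer {decision: "open fishery"})
CREATE (close:Answer {decision: "close fishery"})
CREATE (watch:Answer {decision: "monitor"})
CREATE (root)-[:IF_FALSE]->(open)
CREATE (root)-[:IF_TRUE]->(r2)
CREATE (r2)-[:IF_TRUE]->(close)
CREATE (r2)-[:IF_FALSE]->(watch);

Evaluating a case is then just a traversal from the root Rule to an Answer leaf.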

So: on.
Confirmed: ISDL Workshop, 2019-01-25, 08:00-16:00, Pacific Salmon Commission Boardroom, 600-1155 Robson Street, Vancouver, BC, V6E 1B5.
That was the last obstacle. Oh yeah, it is on!
Attendance by Neo4j confirmed. Attendance by Neo4j experts confirmed. Attendance by DFO database managers confirmed.

Definitely an interesting location. The Pacific Salmon Commission (USA + Canada) runs perhaps the most data-intensive -- and just plain intense -- fisheries management process on the planet. Consider: DNA samples from salmon catches are processed and returned as population composition in 36 to 48 hours, informing weekly decisions about which fisheries will open in which fishing areas.

This first ISDL workshop will consider graph-based solutions to problems of data integration, processing, and analysis expressed by salmon ecologists in a preceding two-day workshop, also sponsored by the International Year of the Salmon. "Designated survivors" of the preceding ecologists' workshop will attend, and may or may not survive this one.

Salmon data integration is critical:
“We collated smolt survival and smolt-to-adult (marine) survival data for all regions of the Pacific coast of North America excluding California to examine the forces shaping salmon returns. A total of 3,055 years of annual survival estimates were available for Chinook (Oncorhynchus tshawytscha) and steelhead (O. mykiss). This dataset provides a fundamentally different perspective on west coast salmon conservation problems from the previously accepted view.”
https://www.researchgate.net/publication/329214582_The_coast-wide_collapse_in_marine_survival_of_wes...

Building toward a graph-database schema for salmon data integration, leveraging and extending the Darwin Core ontology, an international standard. Thank you, John Song.
https://www.lucidchart.com/documents/view/a9313e34-f6ca-4730-a17b-b591e0d7a725/0

How will salmon ecologists react to the need for elaborate standards for the salmon knowledge graph and its attendant glossary: barriers worth removing, or barriers to participation?
But, ...
Are you sure that my "1SW coho" is your "grilse"? They're both age 1.1, right? I mean age 2_1.

You might imagine my excitement upon discovering that the Instituut voor Natuur- en Bosonderzoek in Brussels -- Institute for Nature and Forest Research -- records fish counts back to 1753! Alas, only 3 observations of Atlantic salmon: 2 in 2012, 1 in 2017. Which seems rather odd.

WorkFlow, finally!
Extensive search revealed several promising workflow packages, but each was either (a) hopeless as a tool for anyone other than professional programmers/sysadmins, or (b) unmaintained and, frankly, old-fashioned. Australia to the rescue: thank you, CSIRO, for Workspace.
An excellent GUI for building workflows and, to my delight, nested workflows. "Nesting does make working with complex workflow arrangements significantly easier."
I have pounced on this like a Velociraptor on a tourist.


Thanks for keeping everyone in the community updated on this project's progress. This is awesome!!!! It's really cool to be able to see the process as it moves along.

Thanks for this. I am very interested in what happened with this project, whether it's still ongoing, and how I can help. @js1 @ScottAkenhead. I was reading through @js1's Google doc, where he mentions the California Academy of Sciences data begging to be a graph, and that is exactly what I'd like to do. I am new to graphs and Neo4j but very willing to do the work and learn. Any pointers in the right direction are appreciated.

This first update since 2018-12 is to (a) tell you this project has been steadily developing, (b) point to some presentations and papers about those developments, and (c) apologize for rudely ignoring the 3,600 viewers of this topic.

– The January 2019 workshop International Salmon Data Laboratory was part of the North Pacific Anadromous Fish Commission (NPAFC.org) program International Year of the Salmon. See NPAFC Technical Report #14, HERE, with presentation slides but not videos.

– A review of progress 2017-2021 is in the 19-minute video "The Salmon of Knowledge." I expected to deliver this in Hakodate, Japan in May 2020, but actually delivered it in my kitchen in May 2021 because of [expletive deleted] COVID-19. See also the 1,500-word extended abstract.

– Lastly, today we presented this project at the webinar Neo4j Connections - A Virtual Event: Accelerating Innovation with Graphs, Wed, Aug 25, 2021. If our video is posted (YouTube?) I will let you know.

The important news is that Fisheries and Oceans Canada implemented a Neo4j graph database as fully developed enterprise IT: their Azure cloud, with the DB and microservices in Docker containers. Tools include NLP and AI for dissecting, loading, and linking messy Excel spreadsheets, postal codes, GeoNames.org, First Nations locations, etc.

As part of the Fisheries and Oceans experiment with a Neo4j knowledge graph, I insisted, not entirely reasonably, on R+Shiny as the platform for a user-friendly (no-code) UI, because that will allow the salmon research and management community to expand the UI themselves, adding more tools, views, reports, etc.
Please see the silent two-minute demo showing GraphView, which allows end-users to add and edit content, and the ability to flip between TableView, MapView, and GraphView. Thank you, George Batten of Sitka Technologies in Portland, for an excellent proof of concept.

This system is a knowledge graph for fish, so we called it Graphish (and ourselves Graphishtas). So the UI is called Graphishui and, no, not connected to feng shui.

NODES 2022, Neo4j Online Education Summit

All the sessions of the conference are now available online.