Transferring database from server to local - Database stops after log 'Missing counts store...'

cuneyttyler
Ninja

I aim to transfer the database on my remote server to my local machine. The store format of the database is AF4.3.0, and the Neo4j version on my local machine is 4.4.16, so the store format is compatible with my local installation.

I copied the neo4j folder under NEO4J_HOME/data/databases to my local installation's databases folder, and similarly copied the neo4j folder under NEO4J_HOME/data/transactions to my local installation's transactions folder. When I try to start the database, it stops after the following logs:
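
For reference, the file-level copy described above amounts to something like this (a sketch; the remote host and NEO4J_HOME paths are placeholders, not from the thread):

# Stop Neo4j on both machines first so the store files are consistent
bin/neo4j stop

# Copy the store files and transaction logs from the remote server
rsync -av user@remote-server:NEO4J_HOME/data/databases/neo4j/ NEO4J_HOME/data/databases/neo4j/
rsync -av user@remote-server:NEO4J_HOME/data/transactions/neo4j/ NEO4J_HOME/data/transactions/neo4j/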

2023-01-07 19:14:45.089+0000 INFO  [o.n.k.i.i.s.GenericNativeIndexProvider] [neo4j/455170c4] Schema index cleanup job registered: descriptor=Index( id=12, name='INDEX_ENTITY_SHORT_DESCRIPTION', type='GENERAL BTREE', schema=(:Label[0] {PropertyKey[28]}), indexProvider='native-btree-1.0' ), indexFile=/media/cnytync/Yeni Birim/neo4j-community-4.4.16/data/databases/neo4j/schema/index/native-btree-1.0/12/index-12
2023-01-07 19:14:45.089+0000 INFO  [o.n.k.i.i.s.GenericNativeIndexProvider] [neo4j/455170c4] Schema index cleanup job started: descriptor=Index( id=12, name='INDEX_ENTITY_SHORT_DESCRIPTION', type='GENERAL BTREE', schema=(:Label[0] {PropertyKey[28]}), indexProvider='native-btree-1.0' ), indexFile=/media/cnytync/Yeni Birim/neo4j-community-4.4.16/data/databases/neo4j/schema/index/native-btree-1.0/12/index-12
2023-01-07 19:17:01.321+0000 INFO  [o.n.k.i.i.s.GenericNativeIndexProvider] [neo4j/455170c4] Schema index cleanup job finished: descriptor=Index( id=12, name='INDEX_ENTITY_SHORT_DESCRIPTION', type='GENERAL BTREE', schema=(:Label[0] {PropertyKey[28]}), indexProvider='native-btree-1.0' ), indexFile=/media/cnytync/Yeni Birim/neo4j-community-4.4.16/data/databases/neo4j/schema/index/native-btree-1.0/12/index-12 Number of pages visited: 273402, Number of tree nodes: 273393, Number of cleaned crashed pointers: 0, Time spent: 2m 16s 231ms
2023-01-07 19:17:01.491+0000 INFO  [o.n.k.i.i.s.GenericNativeIndexProvider] [neo4j/455170c4] Schema index cleanup job closed: descriptor=Index( id=12, name='INDEX_ENTITY_SHORT_DESCRIPTION', type='GENERAL BTREE', schema=(:Label[0] {PropertyKey[28]}), indexProvider='native-btree-1.0' ), indexFile=/media/cnytync/Yeni Birim/neo4j-community-4.4.16/data/databases/neo4j/schema/index/native-btree-1.0/12/index-12
2023-01-07 19:17:01.492+0000 INFO  [o.n.k.i.a.i.IndexingService] [neo4j/455170c4] IndexingService.init: indexes not specifically mentioned above are ONLINE
2023-01-07 19:17:01.780+0000 INFO  [o.n.k.a.DatabaseAvailabilityGuard] [neo4j/455170c4] Requirement `Database unavailable` makes database neo4j unavailable.
2023-01-07 19:17:01.780+0000 INFO  [o.n.k.a.DatabaseAvailabilityGuard] [neo4j/455170c4] DatabaseId{455170c4[neo4j]} is unavailable.
2023-01-07 19:17:04.237+0000 WARN  [o.n.k.i.s.MetaDataStore] [neo4j/455170c4] Missing counts store, rebuilding it.

From the last log line, it seems to still be running, but when I run 'ps aux | grep neo4j', no processes are returned. Before this, I moved my database from one remote server to another using the same steps, and this didn't happen. Why does Neo4j stop after this index scan procedure, and what should I do to make this failure disappear? Could it be related to high memory usage because of the large database size? I have 24 GB of RAM on my computer, with initial heap size = 8g and max heap size = 16g configured for Neo4j.
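
For context, the memory settings mentioned here live in conf/neo4j.conf and look roughly like this (the page cache line is an assumption for illustration; only the heap values are stated above). If the JVM is being killed by the kernel OOM killer, dmesg usually shows it.

# conf/neo4j.conf
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=16g
# The page cache is allocated in addition to the heap; with a 16g max heap
# on a 24 GB machine, a large page cache can exhaust physical memory
dbms.memory.pagecache.size=4g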

5 REPLIES

@cuneyttyler 

if you use https://neo4j.com/docs/operations-manual/4.4/backup-restore/offline-backup/ and

bin/neo4j-admin dump

and

bin/neo4j-admin load

as these are the recommended mechanisms for copying databases, do you encounter the same failure?
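
A sketch of the full commands this reply points at (database name and paths are assumptions, not from the thread):

# On the remote server, with Neo4j stopped
bin/neo4j-admin dump --database=neo4j --to=/backups/neo4j.dump

# On the local machine, also with Neo4j stopped; --force overwrites an
# existing database with the same name
bin/neo4j-admin load --from=/backups/neo4j.dump --database=neo4j --force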

I don't have enough storage on my local machine to hold both the dump file and the database. Can I load directly from my remote server to my local machine without copying the file?
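
One possible workaround for loading without a local copy of the dump (a sketch of a generic approach, not something confirmed in this thread; host, user, and paths are placeholders): mount the remote directory with sshfs and point neo4j-admin load at it.

# Mount the remote directory that holds the dump (requires sshfs)
mkdir -p /mnt/remote
sshfs user@remote-server:/backups /mnt/remote

# Load directly from the mounted path
bin/neo4j-admin load --from=/mnt/remote/neo4j.dump --database=neo4j

# Unmount when done
fusermount -u /mnt/remote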

I'll purchase an external hard drive and try again in a few days.

I created a dump and it's 100 GB (the database size is 500 GB). When I try to load the dump, it fails after loading 250 GB with an 'Input/Output Error' on Ubuntu. After that, I am unable to delete the database and transaction folders that were created until I reboot. And I can't resume the load where it left off, because it says 'A database with that name already exists'. Have you encountered this Input/Output error before?
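
A sketch of the cleanup-and-retry steps implied here (paths are placeholders; note that --force lets the load replace the partially created database):

# Check the kernel log for the underlying I/O error; on Ubuntu this often
# points at a failing disk or an unstable external-drive connection
dmesg | tail -n 50

# With Neo4j stopped (and after the reboot that releases the stuck files),
# remove the partially loaded store
rm -rf NEO4J_HOME/data/databases/neo4j NEO4J_HOME/data/transactions/neo4j

# Retry the load; --force overwrites a database with the same name
bin/neo4j-admin load --from=/path/to/neo4j.dump --database=neo4j --force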