
Recommended memory settings on a NUMA machine

Hello, I am trying to run the SF100 LDBC dataset with Neo4j on a dual-node NUMA machine.

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    1
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 45
Model name:            Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
Stepping:              7
CPU MHz:               1200.083
CPU max MHz:           2901.0000
CPU min MHz:           1200.0000
BogoMIPS:              5801.18
Virtualisation:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15

I want to run two different benchmarks: one with a single-node configuration (half of the resources) and another with a dual-node configuration (using all of the resources).
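For the single-node run my plan is to pin the Neo4j process to NUMA node 0 with numactl, roughly like this (the install path is just a placeholder for my setup):

# Single-node run: bind the Neo4j JVM to NUMA node 0,
# i.e. CPUs 0,2,4,...,14 and node 0's local memory only.
numactl --cpunodebind=0 --membind=0 /path/to/neo4j/bin/neo4j console

# Dual-node run: start Neo4j with no binding, so it can use
# all 16 CPUs and the memory of both NUMA nodes.
/path/to/neo4j/bin/neo4j console

I am using "neo4j console" rather than "neo4j start" so that the numactl binding applies to the JVM process itself.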

(htop screenshot)

The free -g command gives me the following:

              total        used        free      shared  buff/cache   available
Mem:            377          11         357           0           8         363
Swap:             0           0           0

I also get the following when running the ./neo4j-admin memrec --database=db_sf0100_p064_regular_utc_35ce command:

# Memory settings recommendation from neo4j-admin memrec:
#
# Assuming the system is dedicated to running Neo4j and has 386900m of memory,
# we recommend a heap size of around 31g, and a page cache of around 332700m,
# and that about 22400m is left for the operating system, and the native memory
# needed by Lucene and Netty.
#
# Tip: If the indexing storage use is high, e.g. there are many indexes or most
# data indexed, then it might be advantageous to leave more memory for the
# operating system.
#
# Tip: The more concurrent transactions your workload has and the more updates
# they do, the more heap memory you will need. However, don't allocate more
# than 31g of heap, since this will disable pointer compression, also known as
# "compressed oops", in the JVM and make less effective use of the heap.
#
# Tip: Setting the initial and the max heap size to the same value means the
# JVM will never need to change the heap size. Changing the heap size otherwise
# involves a full GC, which is desirable to avoid.
#
# Based on the above, the following memory settings are recommended:
dbms.memory.heap.initial_size=31g
dbms.memory.heap.max_size=31g
dbms.memory.pagecache.size=332700m
#
# The numbers below have been derived based on your current data volume in database and index configuration of database 'db_sf0100_p064_regular_utc_35ce'.
# They can be used as an input into more detailed memory analysis.
# Lucene indexes: 0k
# Data volume and native indexes: 178300m

According to the manual: dbms.memory.pagecache.size = 1.2 * (data volume and native indexes) = 1.2 * 178.3g ≈ 214g
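So for the dual-node (full machine) run I would end up with something like the following in neo4j.conf, using either the memrec value verbatim or the manual's 1.2x formula:

# Dual-node run (full machine, ~377g of RAM):
dbms.memory.heap.initial_size=31g
dbms.memory.heap.max_size=31g
# memrec suggests 332700m; the manual's 1.2x formula gives ~214g
dbms.memory.pagecache.size=214g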

 

I am a bit confused about what configuration I need to set for these two experiments (single node -> half of the RAM and CPUs 0,2,4,6,8,10,12,14; dual node -> all of the RAM and all CPUs).
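Concretely, for the single-node experiment my current guess is something like the sketch below. The page cache number is just a rough figure I picked so that heap + page cache + OS fit into the ~188g of RAM local to one NUMA node (half of 377g); it does not come from memrec:

# Single-node run (half the machine, ~188g local to node 0):
dbms.memory.heap.initial_size=31g
dbms.memory.heap.max_size=31g
# rough guess: 31g heap + 140g page cache leaves ~17g for the OS
dbms.memory.pagecache.size=140g

Is halving the memory settings like this the right approach, or should I size things differently for the pinned run?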

 

Any ideas?
