/proc/meminfo formatted for humans

Franck Pachot
2 min read · Feb 27, 2019

Here is a small awk script I use to format memory information on Linux:

awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print sprintf("%-60s %10d",k,v[k]/p); for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|( HugePage)[^:]*" #awk #meminfo

This reads /proc/meminfo and formats it so that the size in GB appears in the first column. Most of the statistics are in kB (gathered into the ‘v’ array in the awk script), but the HugePages counters are numbers of pages, so they must be multiplied by the Hugepagesize (gathered into the ‘h’ array). Then I sort by size and highlight the ‘HugePage’ pattern with grep.
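Since the one-liner is dense, here is the same script split across lines with comments (a sketch; the logic is unchanged):

awk '
/Hugepagesize:/   { p=$2 }      # huge page size in kB (e.g. 2048)
/ 0 /             { next }      # skip zero-valued statistics
/ kB$/            { v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2; next }  # statistics in kB
                  { h[$0]=$2 }  # remaining statistics are page counts (HugePages_*)
/HugePages_Total/ { hpt=$2 }
/HugePages_Free/  { hpf=$2 }
                  { h["HugePages Used (Total-Free)"]=hpt-hpf }
END {
  for (k in v) print sprintf("%-60s %10d",k,v[k]/p)             # kB lines, with the equivalent page count
  for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)   # page counts converted to GB
}' /proc/meminfo | sort -nr | grep --color=auto -iE "^|( HugePage)[^:]*"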

Here is an example of the output on a ‘VM.DenseIO2.24’ compute shape in the Oracle Cloud (320GB, showing 314 GB of MemTotal here).

I have allocated 102400 Huge Pages (200GB) with the following line in /etc/sysctl.conf:

vm.nr_hugepages=102400

Remember that this can be applied dynamically (sysctl -p), but be careful to leave enough small pages. Here is an example where the system cannot boot because of invalid settings: https://blog.dbi-services.com/kernel-panic-not-syncing-out-of-memory-and-no-killable-processes/
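A minimal sketch of applying and verifying the setting (note that the kernel may allocate fewer pages than requested if memory is too fragmented):

echo "vm.nr_hugepages=102400" | sudo tee -a /etc/sysctl.conf   # persist the setting
sudo sysctl -p                                                 # apply it without a reboot
grep -E "^HugePages_(Total|Free)" /proc/meminfo                # verify what was actually allocated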

In this example, 64 GB of those Huge Pages are used (136 GB free out of the 200GB total). They were allocated by two Oracle Database instances, each with a 32GB System Global Area. This is visible in the alert.log. When the first instance started, all 102400 pages were free and 16385 were allocated:

Supported system pagesize(s):
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES
        4K       Configured              12               12
     2048K           102400           16385            16385

When the second instance started, only 102400-16385=86015 were free and another 16385 were allocated:

Supported system pagesize(s):
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES
        4K       Configured              12               12
     2048K            86015           16385            16385
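As a quick sanity check on those numbers (a sketch; the extra page on top of the 32GB SGA is, I assume, for the shared memory segment header):

echo $(( 32 * 1024 / 2 + 1 ))    # a 32GB SGA on 2MB pages = 16384 pages, +1 assumed for the segment header
echo $(( 102400 - 2 * 16385 ))   # 69630 pages still free, i.e. ~136 GB of the 200GB pool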

So, this leaves 120 GB of free small pages, approximately counted by MemAvailable, but I recommend Frits Hoogland’s post for a better calculation.

This post follows the awk snippet I posted in a tweet; follow me on Twitter if you like this kind of post…

Added 3-MAR-2019

I’ve added the calculation of used Huge Pages as the difference between Total and Free, visible in the HugePages_Total and HugePages_Free handling in the code above (a standalone version follows the output below):

    32767 GB VmallocTotal:    34359738367 kB
    32510 GB VmallocChunk:    34089686416 kB
      503 GB MemTotal:          528074912 kB
      483 GB DirectMap1G:       506462208 kB
      457 GB HugePages_Total:   234000
      436 GB HugePages_Free:    223470

       39 GB CommitLimit:     41181260 kB
       33 GB MemAvailable:    35024844 kB
       31 GB Cached:          33027992 kB
       30 GB DirectMap2M:     31846400 kB
       20 GB HugePages Used (Total-Free)
       18 GB Inactive:        19130736 kB
       17 GB Inactive(file):  18231100 kB
       17 GB Active:          18713072 kB
       15 GB SwapTotal:       16759804 kB
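
For just that figure, here is a standalone sketch of the same Total-Free calculation:

awk '/HugePages_Total/{t=$2} /HugePages_Free/{f=$2} /Hugepagesize/{p=$2}
     END{printf "%d pages used = %d GB\n", t-f, (t-f)*p/1024/1024}' /proc/meminfo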

Written by Franck Pachot

Developer Advocate for YugabyteDB (Open-Source, PostgreSQL-compatible Distributed SQL Database). Oracle Certified Master and AWS Data Hero.