The OpenEdge DBA Files

Understanding UNIX Performance Metrics – Part Two

Written by Paul Koufalis | Nov 24, 2021 1:19:00 PM

In today’s missive, we are going to talk about poor, misunderstood memory. How much is used? How much is free? How exactly do you define “used” and “free” memory? And the million dollar question: why should you care?

 

A bit of background first

Processes do not access physical memory directly. Each process has its own virtual address space which, all by itself, is much larger than any conceivable amount of physical RAM, and the operating system’s memory management system translates all of these virtual addresses across all processes to the much smaller (but still quite large) number of physical locations in RAM as they are needed.
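
You can see the distinction for yourself by comparing a process’s virtual size to its resident set size (the portion actually backed by physical RAM right now). Something like the following should work on most Linux systems; substitute the PID of a database broker or client process for the placeholder:

# Virtual size (VSZ) vs. resident set size (RSS), both reported in kB.
# <pid> is a placeholder: use the PID of an _mprosrv or _progres process.
$ ps -o pid,vsz,rss,comm -p <pid>

# The same information straight from /proc:
$ grep -E 'VmSize|VmRSS' /proc/<pid>/status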

Many virtual addresses map to objects that can be shared. For example, when more than one process is running the same executable, the “code” segments can all be shared; there is no need for everyone to have their own private copy! Likewise, the OpenEdge database manages a very large shared memory pool. Every process connected to the db maps some of those pages into its own address space, but the operating system ensures that there is really only one instance of each page. The same is true when reading files from disk: when the same data is read, there is just one instance of any given page, shared by everyone. The first process to read it does the hard work and everyone else gets a free lunch! Of course, if a process modifies data, you only want it to modify its own copy, which is where “copy on write” comes into play: a new page is allocated and mapped into the process’s address space with the newly modified data in it, and the process can happily do whatever it wants with its private copy (unless this was an explicitly shared page, like the OpenEdge -B buffers, in which case you want everyone else to see your changes!).
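
On an OpenEdge server you can actually watch this sharing happen. The broker allocates shared memory segments for -B and the other shared structures, and every self-service connection attaches to those same segments. On most Unix/Linux systems something like ipcs will show them (the details vary by platform and OpenEdge version):

# List System V shared memory segments. On a database server the large
# segments typically belong to the broker, and the "nattch" column shows
# how many processes are attached to the same physical pages.
$ ipcs -m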

And that’s just the high level view. There is a lot of detail hiding in there and it is constantly changing.

Wow! This sounds great. A process can use an infinite amount of memory without any consequences?

Not quite…

While the virtual address space is quite large, and the amount of physical RAM that you can use to back it up is also very large, neither of those is actually infinite. And in the real world you will, eventually, manage to map enough unique data into the address space that some decisions will need to be made about which memory is most important.

Modern operating systems have very sophisticated ways of prioritizing memory. The algorithms are always improving, so that great Stack Overflow article from 2012 may not be as insightful as you hope. That article is also probably talking about desktop considerations rather than database servers. Many of the old-school rules of thumb, metrics, and utilities are no longer helpful.

But one thing remains true: if you try to use more memory than you actually have available, your system’s performance will degrade dramatically. Or, worse, your database will crash.
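
You can usually catch that situation before it turns into a crash: the classic symptom is sustained swap activity. Something like vmstat will show it; the si (swap-in) and so (swap-out) columns should be zero, or very nearly zero, on a healthy database server:

# Report memory and swap activity every 5 seconds, 5 times.
# Consistently non-zero "si" and "so" values mean the system is actively
# paging to and from disk, a sign that memory is over-committed.
$ vmstat 5 5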

 

Used memory?

If we are going to try to avoid using too much memory, it seems like it would be helpful to know how much we are actually using, wouldn’t it? How do we calculate “used” memory? Hmmm…this is a bit of a head scratcher.

On Linux, the old-school “free” command provides an overview of memory usage:

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           124G         17G        715M         72G        105G         33G
Swap:           34G        534M         34G

So how much memory is used? Apparently it is 17 GB? Or is it 124 GB – 715 MB = 123.3 GB (total – free)? Or maybe it’s 124 GB – 33 GB = 91 GB (total – available)? But what about shared memory? Shouldn’t that be counted? It is, after all, in use. So are the buffers and caches; they all count too, don’t they? “Used”, as it is shown here, doesn’t seem very useful.
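
If you want to play with those calculations yourself, the same numbers can be pulled apart programmatically. A rough sketch, assuming a recent procps-ng version of free where the columns are total, used, free, shared, buff/cache and available:

$ free -b | awk '/^Mem:/ {
    printf "total - free      = %.1f GB\n", ($2 - $4) / 2^30
    printf "total - available = %.1f GB\n", ($2 - $7) / 2^30
  }'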

Free memory?

Ok, maybe we can back into it. Perhaps we can think of “used” memory as “not free” memory? This all brings us to the definition of free memory. A simple definition would be the sum of free physical memory, as reported by the free command, and cached data that can be safely evicted from memory.

Try this experiment on a NON-PRODUCTION Linux server (please do not do this in production!):

$ free -h
              total        used        free      shared  buff/cache   available
Mem:            30G        4.7G        387M         16G         25G        9.2G
Swap:           34G        751M         34G

$ echo 3 > /proc/sys/vm/drop_caches

$ free -h
              total        used        free      shared  buff/cache   available
Mem:            30G        4.7G        9.4G         16G         16G        9.3G
Swap:           34G        751M         34G

The “echo 3 > /proc/sys/vm/drop_caches” command effectively empties the file system buffer caches, increasing “free” memory from 387 MB to 9.4 GB, with the difference coming out of the “buff/cache” column. This is “safe” to do because every page that is returned to the OS also exists on disk, so if someone requests it again it can simply be re-read. Of course, re-reading data from disk is slow and painful, so you don’t want to make a habit of it. (Dropping the caches is mostly useful for running benchmarks, to ensure that your initial conditions are “fair”.)
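
One small refinement if you do try it: drop_caches only releases clean pages, so it is common to run sync first to flush any dirty pages to disk before dropping the caches:

# Write dirty pages out first, then drop the page cache, dentries and inodes.
$ sync
$ echo 3 > /proc/sys/vm/drop_caches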

We found 9 GB somewhere, but 16 GB was still retained in the various buffers and caches.

Did that make it any clearer? Do we know how much we are “using” yet?

 

Available memory?

Notice in the previous experiment how free memory increased from 387 MB to 9.4 GB, but available memory stayed essentially unchanged. This is because the memory manager already knows how many pages can be quickly and easily evicted if more memory is needed. The “available” column is an estimate of how much memory new work could claim: memory that is currently free, plus memory used for low-priority purposes that can be readily reclaimed if needed.

Forget about “used” and “free” memory. You should care about available memory.

Different versions and flavors of Linux may calculate available memory somewhat differently, and it may or may not be visible as part of the “free” command. (If you are on RHEL 6 or earlier, you won’t see it.) This information is also available in /proc/meminfo:

$ cat /proc/meminfo
MemTotal: 32305564 kB
MemFree: 9837124 kB
MemAvailable: 9745440 kB
Buffers: 1604 kB
Cached: 17367464 kB
SwapCached: 15280 kB
Active: 12942836 kB
Inactive: 8937596 kB
Active(anon): 12876236 kB
Inactive(anon): 8927276 kB
Active(file): 66600 kB
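
For scripting and monitoring, you can pull out just the number you care about. A one-liner along these lines works on any kernel new enough to report MemAvailable (the values in /proc/meminfo are in kB):

$ awk '/^MemAvailable:/ {printf "%.1f GB available\n", $2 / 1024 / 1024}' /proc/meminfo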

 

Why should you care?

Three letters: OOM.

When a system is low on memory, the Out Of Memory (OOM) killer looks for heavy memory consumers and kills them in order to free up that memory. How it picks the processes to kill is outside the scope of this article, but I can tell you from experience that anything connected to a database is going to be near the top of the list, due to the (usually) very large -B pool that any process connected to a db is going to flaunt.

Yes, in an effort to keep your system running, the OOM killer will likely kill the most important processes running on the server. Yay. On the other hand, if you got to that point, you were probably already in a lot of trouble and just hadn’t noticed yet.
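
If you suspect the OOM killer has already paid you a visit, the kernel log will say so, and you can see how tempting a target any given process looks. Something along these lines should work on most modern Linux systems:

# Look for evidence of OOM kills in the kernel log:
$ dmesg -T | grep -i 'out of memory'
$ journalctl -k | grep -i 'killed process'

# Each process has an OOM "badness" score; the higher the number, the more
# likely it is to be chosen. <pid> is a placeholder for the process to check.
$ cat /proc/<pid>/oom_score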

 

What should you do?

Monitoring available memory sounds like a great first step!

This is often overlooked because modern servers typically have more than enough RAM to meet their needs. The first “free -h” output above is a prime example: a server with 124 GB of RAM and 33 GB available. Today. Six months from now, when everyone is thrilled because business has tripled due to the excellent job you have been doing tuning the database, you might see something very different! Or, worse, maybe you won’t see it until your phone starts going crazy with users wanting to know why the db just crashed.
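
A minimal sketch of such a check, suitable for running from cron, might look something like this. The 10% threshold and the alerting step are placeholders to adapt to your environment (ProTop does this, and a great deal more, out of the box):

#!/bin/bash
# Alert when available memory drops below a threshold percentage of total.
THRESHOLD_PCT=10

read -r total avail < <(awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2} END {print t, a}' /proc/meminfo)

pct=$(( avail * 100 / total ))
if [ "$pct" -lt "$THRESHOLD_PCT" ]; then
    # Replace this with your preferred alerting mechanism (mail, Slack, etc.)
    echo "WARNING: only ${pct}% of memory available on $(hostname)"
fi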

You can also keep track of which processes are consuming memory. ProTop tracks and alerts on available memory on Linux, and recent versions of ProTop include a program called “util/private_dirty.p”, which calculates per-process private memory usage and is great for detecting slow memory leaks. You can download ProTop here. For an explanation of process memory usage, stay tuned for Tom Bascom’s videos on “How Much RAM Am I (Really) Using?” Make sure to subscribe to our blog so that you don’t miss it!