Designing ELK for production use, Part 2
In part 1 of this series, I covered node responsibilities and how to configure them. In part 2, we will be discussing memory management.
Configuring JVM Heap Size
First, we must understand how Java manages its memory. Java objects are stored in an area called heap memory. When the Java Virtual Machine (JVM) starts up, it allocates system memory to the heap, and it manages allocation and deallocation through a process called garbage collection (GC). When memory usage is low, GC runs quickly and is barely noticeable. As the heap grows, GC takes longer and has more impact: during a "stop-the-world" collection pause, all other Java threads are stopped until it completes.
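If you want to see how long your GC pauses actually are, the JVM can log them for you. Here is a minimal sketch using the JDK 9+ unified logging flag in jvm.options (the log path is illustrative, and the jvm.options file Elasticsearch ships with already contains a more detailed GC logging line):
# Log all GC events, with timestamps, to a dedicated file (JDK 9+ syntax)
-Xlog:gc*:file=/var/log/elasticsearch/gc.log:time
Watching the pause times in this log is the most direct way to see GC impact grow as your heap fills up.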
Every “How to install Elastic” tutorial I have read gets it installed, but none of them finish by telling you that the out-of-the-box installation configures Elasticsearch with a 1GB heap. That’s it! Only 1GB. Usually enough for development. Rarely enough for production.
Heap size is set via the jvm.options file in the elasticsearch config folder: Xms for the minimum heap and Xmx for the maximum. Elastic has some great documentation on setting the JVM heap. Here are some quick pointers (with an example after the list):
- Minimum (Xms) and maximum (Xmx) should be set to the same value, so the heap is never resized at runtime.
- Memory needs will vary by node type. Data nodes require the lion’s share of the cluster’s memory.
- Keep the heap below 32GB. Above roughly that threshold, the JVM can no longer use compressed object pointers, and you lose more usable memory than you gain.
- Elastic recommends setting the heap to no more than 50% of total system RAM. In my experience, for small to medium workloads, going up to 75% seems okay. For example, many of my data nodes have 8GB of total RAM with the heap configured to use 6GB.
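Putting those pointers together, here is roughly what the heap settings look like in jvm.options for one of the 8GB data nodes described above (6GB is just my example value; size yours to your own workload):
# jvm.options: minimum and maximum heap set to the same value
-Xms6g
-Xmx6g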
Too Much or Too Little?
It’s easy to assume that too little heap space is going to be problematic. But what about too much? Why not just give your Elastic nodes more heap than they could ever need, so you never face memory pressure? Well, not so fast. Too much heap can cause latency spikes when GC runs, since the larger the heap, the longer a collection can take. It also takes memory away from the operating system’s filesystem cache, which Lucene relies on for fast searches. You’re probably not going to set your heap correctly at first; it will take time to tune it to your cluster’s needs.
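While you tune, it helps to watch actual heap usage. One simple check (assuming your cluster is listening on localhost:9200) is the _cat/nodes API:
curl "localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max"
If heap.percent sits consistently high, or you see frequent long pauses in the GC log, that’s your cue to revisit the settings above.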
Use MLOCKALL
When your server is under heavy memory pressure, it will begin to swap. Swapping is when memory pages are moved out of RAM into swap space on disk, and disk is orders of magnitude slower than RAM. The last thing you want is Elasticsearch’s heap swapping to disk. Fortunately, we have the mlockall configuration option (named bootstrap.memory_lock on newer Elasticsearch versions). When set, your system will not be able to swap out Elasticsearch’s memory.
In the elasticsearch.yml file on each of your nodes, add the following:
bootstrap.mlockall: true
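After a restart, you can verify that the lock actually took effect. One way (assuming default settings on localhost:9200) is to ask each node for its mlockall status:
curl "localhost:9200/_nodes?filter_path=**.mlockall"
If this reports false, the most common culprit is that the user running Elasticsearch is not allowed to lock memory (the memlock ulimit), in which case the node typically logs a warning and continues unlocked.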
File Descriptors
Unix / Linux systems limit the number of files a single process can have open at once. Usually, the default configuration is too low for what Elastic needs: Elastic recommends a file descriptor limit of 64000. If you run into this limit, you will see errors referencing “too many open files”. The per-process limit for the current user is shown by ulimit -n; to see the system-wide ceiling, run:
sysctl fs.file-max
If you need to change the system-wide value, set fs.file-max=64000 in /etc/sysctl.conf. Note that this is only the system-wide ceiling; the per-process limit for the user running Elasticsearch is typically raised with a nofile entry in /etc/security/limits.conf, as shown below.
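For illustration, raising the per-process limit in /etc/security/limits.conf might look like this (the elasticsearch user name is an assumption; match it to whatever user runs your nodes):
elasticsearch  soft  nofile  64000
elasticsearch  hard  nofile  64000
Once the node is back up, you can confirm the limit Elasticsearch actually received (again assuming localhost:9200):
curl "localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors"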