Big memory

Big memory is a term used to describe server workloads that need to run on machines with a large amount of RAM (random-access memory). Typical examples are databases, in-memory caches, and graph analytics,[1] and, more generally, data science and big data workloads.

Some database systems are designed to run mostly or entirely in memory, rarely if ever retrieving data from disk or flash memory; see the list of in-memory databases.

The performance of big-memory systems depends on how the CPUs or CPU cores access memory: through a conventional memory controller or through non-uniform memory access (NUMA). Performance also depends on the size and design of the CPU cache.
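
As an illustration (not part of the original article), the following sketch shows one way a program can control where its memory is placed on a NUMA machine, using the Linux libnuma library; the 64 MiB size and the choice of node 0 are arbitrary assumptions for the example.

    /* Minimal libnuma sketch: allocate memory backed by pages on a specific NUMA node.
       Compile with:  gcc numa_alloc.c -o numa_alloc -lnuma   (requires libnuma) */
    #include <stdio.h>
    #include <string.h>
    #include <numa.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not available on this system\n");
            return 1;
        }
        printf("Highest NUMA node: %d\n", numa_max_node());

        size_t size = 64UL * 1024 * 1024;           /* 64 MiB, arbitrary for illustration */
        void *buf = numa_alloc_onnode(size, 0);     /* place the region on node 0 */
        if (buf == NULL) {
            fprintf(stderr, "numa_alloc_onnode failed\n");
            return 1;
        }
        memset(buf, 0, size);                       /* touch the pages so they are actually allocated */
        numa_free(buf, size);
        return 0;
    }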

Performance also depends on operating system design. The "huge pages" feature in Linux can improve the efficiency of virtual memory.[2] The "transparent huge pages" feature, introduced in Linux 2.6.38, can offer better performance for some big-memory workloads.[3] The "large-page support" in Microsoft Windows enables server applications to establish memory regions backed by pages much larger than the native page size (typically 2 MB rather than 4 KB on x86-64).[4]
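
As a further illustration (not taken from the cited sources), a Linux program can request explicit huge pages for a mapping with mmap() and the MAP_HUGETLB flag; this assumes the administrator has reserved huge pages beforehand, for example via /proc/sys/vm/nr_hugepages.

    /* Minimal sketch: map a 64 MiB region backed by explicit huge pages on Linux.
       The mmap() call fails unless huge pages are reserved, e.g.:
           echo 64 > /proc/sys/vm/nr_hugepages */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t size = 64UL * 1024 * 1024;   /* a multiple of the typical 2 MiB huge page size */
        void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap with MAP_HUGETLB");
            return 1;
        }
        memset(buf, 0, size);               /* each page fault now maps a 2 MiB page instead of 4 KiB */
        munmap(buf, size);
        return 0;
    }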

References

  1. "Efficient Virtual Memory for Big Memory Servers" (PDF). Retrieved 2016-09-24.
  2. "Huge pages part 1 (Introduction)". Retrieved 2016-09-24.
  3. "Transparent huge pages in 2.6.38". Retrieved 2016-09-24.
  4. "Large-Page Support". Retrieved 2016-09-24.
