Exploring the Feasibility of 1GB Transparent Huge Pages in Linux
Introduction
In the world of Linux memory management, huge pages have long been a tool for improving performance by reducing Translation Lookaside Buffer (TLB) misses. Typically, when developers discuss huge pages, they mean PMD-level pages, which are 2 MB on x86; the exact size depends on the CPU architecture and its base page size. However, modern processors can support even larger page sizes. On x86 systems, for instance, a PUD-level huge page covers a full 1 GB. Yet, until recently, the idea of making such enormous pages available transparently to applications has been dismissed as either impractical or undesirable. That perception is now being challenged by Usama Arif, who presented a session on this very topic at the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit.
Huge Pages: A Quick Refresher
Memory is managed in fixed-size blocks called pages. The default page size on most architectures is 4 KB. While small pages offer fine-grained control, they also lead to large page tables and increased TLB misses. Huge pages mitigate this by using larger page sizes, thereby reducing the number of page table entries and improving memory access efficiency.
- PMD-level pages (Page Middle Directory): 2 MB on x86; on arm64 the size depends on the base page size (2 MB with 4 KB base pages, 512 MB with 64 KB base pages).
- PUD-level pages (Page Upper Directory): 1 GB on x86, used for very large contiguous memory regions.
Until now, transparent huge page (THP) support in Linux has focused almost exclusively on PMD-level sizes. PUD-level pages were deemed too large to be managed transparently without causing excessive memory waste or fragmentation.
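To make the page-table savings concrete, here is a back-of-the-envelope calculation (plain Python, no kernel interaction) of how many leaf page-table entries are needed to map a 1 GiB region at each page size supported on x86-64:

```python
GIB = 1 << 30

def leaf_entries(region_bytes, page_bytes):
    """Number of last-level page-table entries needed to map a region."""
    return region_bytes // page_bytes

# Mapping the same 1 GiB region at each page size:
for page_bytes, label in [(4 << 10, "4 KB"), (2 << 20, "2 MB"), (1 << 30, "1 GB")]:
    print(f"{label:>5} pages: {leaf_entries(GIB, page_bytes):>6} entries")
# 4 KB pages need 262,144 entries; 2 MB pages need 512; a single 1 GB page needs 1.
```

Fewer entries means less page-table memory to walk on a TLB miss and far fewer TLB slots consumed for the same working set.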
Why 1 GB THP Has Been Avoided
Several technical and practical obstacles have historically kept 1 GB transparent huge pages out of the mainline kernel:
- Fragmentation Risk: Allocating a contiguous 1 GB block is extremely difficult in a heavily fragmented system. Memory compaction and defragmentation would need to work at a much larger scale.
- Memory Waste: If an application only uses a fraction of a 1 GB page, the rest is wasted. For many workloads, this overhead is unacceptable.
- Page Migration Complexity: Moving a 1 GB page between NUMA nodes or during compaction is far more complex and time-consuming than it is for smaller pages.
- TLB Coverage Trade-off: While 1 GB pages reduce TLB misses, they also increase the penalty for TLB misses when they do occur, and they may reduce the effectiveness of multi-level TLBs.
Usama Arif’s Proposal at LSFMM+BPF 2026
Usama Arif, a prominent kernel developer, argued that the time has come to reevaluate these assumptions. In his session at the summit, he outlined a path toward transparent 1 GB huge pages that could be used safely and efficiently by suitable applications.
Key Ideas Presented
- Selective Enablement: Instead of enabling 1 GB THP for all processes, the system would allow opt-in via madvise() or cgroup controls. Only workloads that can benefit, such as large databases, HPC simulations, or in-memory data grids, would use them.
- Improved Memory Compaction: Enhancements to the kernel's compaction subsystem to proactively reserve and defragment memory at 1 GB granularity, possibly using zone compaction and page-block grouping.
- Fallback Mechanism: If a 1 GB page cannot be allocated, the kernel automatically falls back to 2 MB THP or regular 4 KB pages, ensuring no application breakage.
- NUMA Awareness: Carefully handle NUMA locality for such large pages to avoid performance penalties from remote memory access.
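The madvise()-based opt-in pattern already exists for PMD-level THP, and a 1 GB mode would presumably extend it. As a hedged sketch, this is what the user-space side looks like today using Python's mmap module (MADV_HUGEPAGE is exposed only on Linux builds of CPython 3.8+; any 1 GB-specific advice value or cgroup knob is still hypothetical, so this only demonstrates the existing PMD-level hint):

```python
import mmap

def hint_thp(length=4 * 1024 * 1024):
    """Create an anonymous mapping and hint it as a THP candidate.

    Returns "hinted" if MADV_HUGEPAGE was applied, or "unsupported" if
    the platform does not expose the constant (e.g. non-Linux builds).
    """
    buf = mmap.mmap(-1, length)  # anonymous, private, zero-filled mapping
    try:
        if hasattr(mmap, "MADV_HUGEPAGE"):
            buf.madvise(mmap.MADV_HUGEPAGE)  # today's PMD-level THP hint
            buf[0] = 1  # touch the range so the kernel actually faults it in
            return "hinted"
        return "unsupported"
    finally:
        buf.close()

print(hint_thp())
```

The hint is advisory either way, which is exactly the property the proposal relies on: an application opts in, and the kernel remains free to fall back when a huge page cannot be assembled.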
Potential Benefits
For applications with massive, contiguous data sets, 1 GB THP could deliver dramatic performance improvements. TLB coverage would be increased by a factor of 512 compared to 2 MB pages, effectively eliminating TLB misses for many workloads. This is particularly valuable for virtual machine monitors, database engines, and large-scale key-value stores.
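The factor-of-512 figure is simple arithmetic (1 GB / 2 MB = 512). To see what it means for TLB reach, assume a hypothetical TLB with 1,536 entries; the entry count here is illustrative, not taken from any specific CPU, and real hardware often has far fewer dedicated 1 GB entries:

```python
TLB_ENTRIES = 1536  # illustrative TLB size, not a specific CPU's

def tlb_reach_bytes(page_bytes, entries=TLB_ENTRIES):
    """Total memory the TLB can cover without a single miss."""
    return entries * page_bytes

print(f"2 MB pages: {tlb_reach_bytes(2 << 20) / (1 << 30):.0f} GiB of reach")
print(f"1 GB pages: {tlb_reach_bytes(1 << 30) / (1 << 40):.1f} TiB of reach")
print("per-entry coverage ratio:", (1 << 30) // (2 << 20))  # 512
```

With 2 MB pages the hypothetical TLB covers 3 GiB; with 1 GB pages the same number of entries covers 1.5 TiB, which is why working sets that dwarf PMD-level reach stand to gain the most.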
Implementation Considerations
Arif’s proposal did not provide a full patch set, but it outlined several implementation areas:
- MMU Notifier Integration: To handle address-space changes and page table updates for 1 GB mappings efficiently.
- Huge Page Splitting: Under memory pressure, the kernel must be able to split a 1 GB (PUD-level) page into smaller 2 MB (PMD-level) pages, then further into 4 KB pages, without causing sudden latency spikes.
- Memory Cgroup Accounting: Properly track and limit 1 GB page usage within control groups to prevent any single workload from starving others.
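The scale of the splitting problem can be quantified with a small sketch (my illustration, not kernel code): counting how many new page-table entries a full split cascade must install shows why doing it without latency spikes is hard:

```python
def split_count(page_bytes, target_bytes):
    """How many target-sized pages one larger page splits into."""
    assert page_bytes % target_bytes == 0
    return page_bytes // target_bytes

# One PUD-level page splits into PMD-level pages, then into base pages:
pmds_per_pud = split_count(1 << 30, 2 << 20)  # 512 x 2 MB pages
ptes_per_pmd = split_count(2 << 20, 4 << 10)  # 512 x 4 KB pages each
total_ptes = pmds_per_pud * ptes_per_pmd
print(f"1 GB -> {pmds_per_pud} x 2 MB -> {total_ptes} x 4 KB")
```

Fully splitting a single 1 GB page ultimately means populating 262,144 PTEs (plus 512 intermediate page tables), which is why the proposal favors a staged 1 GB to 2 MB to 4 KB cascade rather than splitting all the way down at once.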
Community Reception and Next Steps
The session at LSFMM+BPF generated lively discussion. Some attendees expressed concerns about the complexity and maintenance burden, while others saw it as a natural evolution of the THP subsystem. There was general agreement that a careful, incremental approach is needed—starting with a compile-time or runtime opt-in, extensive testing, and performance benchmarking.
Several developers volunteered to collaborate on a prototype implementation, focusing on the x86-64 architecture first, with plans to extend to ARM (which supports 64 KB base pages and 512 MB huge pages) later.
Conclusion
The push to scale transparent huge pages to the 1 GB level represents a bold step forward for Linux memory management. While challenges remain, the potential performance gains for memory-intensive applications are substantial. Usama Arif's session at the 2026 LSFMM+BPF Summit has reignited interest in this capability, and the community is now actively exploring how to make 1 GB THP a practical reality. For background, see the huge pages refresher above and the section on why 1 GB THP has been avoided.