Is MAP_HUGETLB a synonym for physically contiguous memory? (when successful)
Am I correct in assuming that memory mmap'd with MAP_HUGETLB|MAP_ANONYMOUS is actually 100% physically contiguous, at least at the huge page size (2 MB or 1 GB)?
Otherwise I don't see how it could work or perform well, since the TLB would need more entries...
Yes, it is. In fact, as you point out, if it were not, a single huge page would require multiple page table entries, which would defeat the whole purpose of having huge pages.
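For illustration, here is a minimal sketch of the mapping you describe (the 2 MB size is an assumption for the example; the call typically fails with ENOMEM unless huge pages have been reserved beforehand via /proc/sys/vm/nr_hugepages):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)  /* one 2 MB huge page (assumed default size) */

int main(void)
{
    /* Anonymous huge-page mapping: no file descriptor is needed. */
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        /* Commonly ENOMEM when no huge pages are reserved in the pool. */
        perror("mmap(MAP_HUGETLB)");
        return EXIT_FAILURE;
    }

    /* Touching the memory faults in the huge page; the whole 2 MB
       region is then backed by one physically contiguous page. */
    memset(p, 0, LEN);
    printf("huge page mapped at %p\n", p);

    munmap(p, LEN);
    return 0;
}

For 1 GB pages, where the kernel, libc and CPU support them, the page size can be selected explicitly with the MAP_HUGE_1GB flag (the generic MAP_HUGE_SHIFT encoding) in addition to MAP_HUGETLB.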
Here is an excerpt from Documentation/admin-guide/mm/hugetlbpage.rst:
The default for the allowed nodes--when the task has default memory policy--is all on-line nodes with memory. Allowed nodes with insufficient available, contiguous memory for a huge page will be silently skipped when allocating persistent huge pages. See the discussion below <mem_policy_and_hp_alloc> of the interaction of task memory policy, cpusets and per node attributes with the allocation and freeing of persistent huge pages.
The success or failure of huge page allocation depends on the amount of physically contiguous memory that is present in system at the time
of the allocation attempt. If the kernel is unable to allocate huge
pages from some nodes in a NUMA system, it will attempt to make up the
difference by allocating extra pages on other nodes with sufficient
available contiguous memory, if any.
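As a side note, the persistent pool mentioned above is sized through /proc/sys/vm/nr_hugepages. Here is a sketch of reserving pages from a program (requires root; the helper name reserve_huge_pages and the count of 16 are made up for the example). It reads the value back because, per the excerpt, the kernel may reserve fewer pages than requested when not enough contiguous memory is available:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: ask the kernel to keep `count` default-size
   huge pages in the persistent pool. Returns 0 on success. */
static int reserve_huge_pages(long count)
{
    FILE *f = fopen("/proc/sys/vm/nr_hugepages", "w");
    if (!f)
        return -1;  /* typically requires root */
    int rc = fprintf(f, "%ld\n", count) > 0 ? 0 : -1;
    if (fclose(f) != 0)
        rc = -1;
    return rc;
}

int main(void)
{
    if (reserve_huge_pages(16) != 0) {
        perror("reserve_huge_pages");
        return EXIT_FAILURE;
    }

    /* Read back the actual pool size: it may be smaller than requested
       if the kernel could not find enough contiguous memory. */
    FILE *f = fopen("/proc/sys/vm/nr_hugepages", "r");
    long actual = 0;
    if (f && fscanf(f, "%ld", &actual) == 1)
        printf("huge pages reserved: %ld\n", actual);
    if (f)
        fclose(f);
    return 0;
}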
See also: How do I allocate a DMA buffer backed by 1GB HugePages in a linux kernel module?