From: Zi Yan <ziy@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
David Hildenbrand <david@kernel.org>,
Matthew Wilcox <willy@infradead.org>
Cc: Alistair Popple <apopple@nvidia.com>,
Balbir Singh <balbirs@nvidia.com>,
Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Jens Axboe <axboe@kernel.dk>,
Zi Yan <ziy@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
Muchun Song <muchun.song@linux.dev>,
Oscar Salvador <osalvador@suse.de>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
io-uring@vger.kernel.org
Subject: [RFC PATCH 3/5] mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling
Date: Thu, 29 Jan 2026 22:48:16 -0500 [thread overview]
Message-ID: <20260130034818.472804-4-ziy@nvidia.com> (raw)
In-Reply-To: <20260130034818.472804-1-ziy@nvidia.com>
Commit f708f6970cc9 ("mm/hugetlb: fix kernel NULL pointer dereference when
migrating hugetlb folio") fixed a NULL pointer dereference that occurred when
folio_undo_large_rmappable(), now folio_unqueue_deferred_split(), was used on
a hugetlb folio to clear its deferred_list. It did so by clearing the
large_rmappable flag on hugetlb folios. Since hugetlb folios are rmappable,
clearing the large_rmappable flag is misleading. Instead, set large_rmappable
on hugetlb folios at allocation time and reject hugetlb folios in
folio_unqueue_deferred_split() to avoid the issue.

This prepares for the separation of compound page and folio code in a
follow-up commit.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/hugetlb.c | 6 +++---
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 3 ++-
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6e855a32de3d..7466c7bf41a1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1422,8 +1422,8 @@ static struct folio *alloc_gigantic_frozen_folio(int order, gfp_t gfp_mask,
if (hugetlb_cma_exclusive_alloc())
return NULL;
- folio = (struct folio *)alloc_contig_frozen_pages(1 << order, gfp_mask,
- nid, nodemask);
+ folio = page_rmappable_folio(alloc_contig_frozen_pages(1 << order, gfp_mask,
+ nid, nodemask));
return folio;
}
#else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE || !CONFIG_CONTIG_ALLOC */
@@ -1859,7 +1859,7 @@ static struct folio *alloc_buddy_frozen_folio(int order, gfp_t gfp_mask,
if (alloc_try_hard)
gfp_mask |= __GFP_RETRY_MAYFAIL;
- folio = (struct folio *)__alloc_frozen_pages(gfp_mask, order, nid, nmask);
+ folio = page_rmappable_folio(__alloc_frozen_pages(gfp_mask, order, nid, nmask));
/*
* If we did not specify __GFP_RETRY_MAYFAIL, but still got a
diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index f83ae4998990..4245b5dda4dc 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -51,7 +51,7 @@ struct folio *hugetlb_cma_alloc_frozen_folio(int order, gfp_t gfp_mask,
if (!page)
return NULL;
- folio = page_folio(page);
+ folio = page_rmappable_folio(page);
folio_set_hugetlb_cma(folio);
return folio;
}
diff --git a/mm/internal.h b/mm/internal.h
index d67e8bb75734..8bb22fb9a0e1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -835,7 +835,8 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
bool __folio_unqueue_deferred_split(struct folio *folio);
static inline bool folio_unqueue_deferred_split(struct folio *folio)
{
- if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
+ if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio) ||
+ folio_test_hugetlb(folio))
return false;
/*
--
2.51.0
Thread overview: 9+ messages
2026-01-30 3:48 [RFC PATCH 0/5] Separate compound page from folio Zi Yan
2026-01-30 3:48 ` [RFC PATCH 1/5] io_uring: allocate folio in io_mem_alloc_compound() and function rename Zi Yan
2026-01-30 3:48 ` [RFC PATCH 2/5] mm/huge_memory: use page_rmappable_folio() to convert after-split folios Zi Yan
2026-01-30 3:48 ` Zi Yan [this message]
2026-01-30 3:48 ` [RFC PATCH 4/5] mm: only use struct page in compound_nr() and compound_order() Zi Yan
2026-01-30 3:48 ` [RFC PATCH 5/5] mm: code separation for compound page and folio Zi Yan
2026-01-30 8:15 ` [syzbot ci] Re: Separate compound page from folio syzbot ci
2026-01-30 16:39 ` [syzbot ci] " Zi Yan
2026-01-30 16:41 ` syzbot ci