Rietveld Code Review Tool

Unified Diff: mm/slub.c

Issue 325780043: mm/kasan: support per-page shadow memory to reduce memory consumption
Patch Set: mm/kasan: support per-page shadow memory to reduce memory consumption (created 6 years, 10 months ago)
Index: mm/slub.c
diff --git a/mm/slub.c b/mm/slub.c
index 2378733d0fd9a0a0cc478e5fd753e1a431f313c8..85e348ee7734ff8b08b9e60a74529c0e6029e382 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1409,7 +1409,14 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 	else
 		page = __alloc_pages_node(node, flags, order);
 
+	if (kasan_slab_page_alloc(page ? page_address(page) : NULL,
+				PAGE_SIZE << order, flags)) {
+		__free_pages(page, order);
+		page = NULL;
+	}
+
 	if (page && memcg_charge_slab(page, flags, order, s)) {
+		kasan_slab_page_free(page_address(page), PAGE_SIZE << order);
 		__free_pages(page, order);
 		page = NULL;
 	}
@@ -1667,6 +1674,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
 	memcg_uncharge_slab(page, order, s);
+	kasan_slab_page_free(page_address(page), PAGE_SIZE << order);
 	__free_pages(page, order);
 }
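
The two hunks hook KASAN's per-page shadow bookkeeping into the SLUB page lifecycle: shadow is set up right after the slab page is allocated, torn down again if memcg_charge_slab() fails (so the error path must unwind the shadow before freeing the page), and released alongside the page in __free_slab(). The kasan_slab_page_alloc()/kasan_slab_page_free() implementations live elsewhere in this patch series (in the mm/kasan changes, not shown in this file). The userspace sketch below is only a model of the semantics these call sites appear to rely on, under three assumptions: the alloc hook returns 0 on success and nonzero when shadow allocation fails, a NULL address is a successful no-op (which is why the failure branch can safely call __free_pages()), and shadow is sized at the usual KASAN ratio of one shadow byte per eight bytes of memory.

/*
 * Userspace model of the hook semantics above -- NOT the kernel
 * implementation. All names and behaviors are assumptions inferred
 * from the call sites in the diff.
 */
#include <stdio.h>
#include <stdlib.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* 1 shadow byte per 8 bytes */

struct shadow_entry {
	void *addr;			/* start of the tracked range */
	void *shadow;			/* per-range shadow buffer */
	struct shadow_entry *next;
};

static struct shadow_entry *shadow_list;

/* Model of kasan_slab_page_alloc(): 0 on success, nonzero on failure. */
static int kasan_slab_page_alloc(void *addr, size_t size, unsigned int flags)
{
	struct shadow_entry *e;

	(void)flags;			/* gfp flags ignored in this model */
	if (!addr)			/* failed page alloc: nothing to shadow */
		return 0;

	e = malloc(sizeof(*e));
	if (!e)
		return 1;
	e->shadow = calloc(1, size >> KASAN_SHADOW_SCALE_SHIFT);
	if (!e->shadow) {
		free(e);
		return 1;		/* caller frees the page and bails out */
	}
	e->addr = addr;
	e->next = shadow_list;
	shadow_list = e;
	return 0;
}

/* Model of kasan_slab_page_free(): drop the shadow for [addr, addr+size). */
static void kasan_slab_page_free(void *addr, size_t size)
{
	struct shadow_entry **p = &shadow_list;

	(void)size;
	while (*p) {
		if ((*p)->addr == addr) {
			struct shadow_entry *e = *p;

			*p = e->next;
			free(e->shadow);
			free(e);
			return;
		}
		p = &(*p)->next;
	}
}

int main(void)
{
	size_t size = 4096;		/* stands in for PAGE_SIZE << order */
	void *page = malloc(size);	/* stands in for page_address(page) */

	/* Same shape as the alloc_slab_page() hunk above. */
	if (kasan_slab_page_alloc(page, size, 0)) {
		free(page);
		page = NULL;
	}

	if (page) {
		/* ... slab page in use ... */
		kasan_slab_page_free(page, size);	/* as in __free_slab() */
		free(page);
		printf("shadow tracked and released\n");
	}
	return 0;
}

Note the ordering in the first hunk: the shadow is allocated before memcg_charge_slab() runs, so when the memcg charge fails, the new kasan_slab_page_free() call is needed to release the just-allocated shadow before the page itself is freed. The model's error handling in main() mirrors only the first of these two failure paths.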