Unified Diff: mm/slab.c

Issue 325780043: mm/kasan: support per-page shadow memory to reduce memory consumption
Patch Set: mm/kasan: support per-page shadow memory to reduce memory consumption (created 6 years, 10 months ago)
Index: mm/slab.c
diff --git a/mm/slab.c b/mm/slab.c
index 2a31ee3c5814f192234385303a0d86cc77790580..77b8be6f593bf686b74ce6825a2f9fc881b4672e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1418,7 +1418,15 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 		return NULL;
 	}
 
+	if (kasan_slab_page_alloc(page_address(page),
+			PAGE_SIZE << cachep->gfporder, flags)) {
+		__free_pages(page, cachep->gfporder);
+		return NULL;
+	}
+
 	if (memcg_charge_slab(page, flags, cachep->gfporder, cachep)) {
+		kasan_slab_page_free(page_address(page),
+			PAGE_SIZE << cachep->gfporder);
 		__free_pages(page, cachep->gfporder);
 		return NULL;
 	}
@@ -1474,6 +1482,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += nr_freed;
 	memcg_uncharge_slab(page, order, cachep);
+	kasan_slab_page_free(page_address(page), PAGE_SIZE << order);
 	__free_pages(page, order);
 }
 
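The two helpers this file starts calling, kasan_slab_page_alloc() and kasan_slab_page_free(), are introduced elsewhere in this patch series; only their call sites appear in mm/slab.c. The sketch below is a guess at what those call sites imply, not the series' actual header: the parameter names, types, and the no-op !CONFIG_KASAN fallbacks are all assumptions.

#include <linux/gfp.h>
#include <linux/types.h>

/*
 * Sketch only: prototypes inferred from the call sites above; the real
 * declarations live elsewhere in the per-page shadow series and may differ.
 */
#ifdef CONFIG_KASAN
/*
 * Set up shadow memory for a newly allocated slab page range.
 * Returns 0 on success; nonzero tells the caller to undo the
 * allocation with __free_pages().
 */
int kasan_slab_page_alloc(const void *addr, size_t size, gfp_t flags);

/*
 * Tear down the shadow backing [addr, addr + size) before the slab
 * pages themselves go back to the page allocator.
 */
void kasan_slab_page_free(const void *addr, size_t size);
#else
static inline int kasan_slab_page_alloc(const void *addr, size_t size,
					gfp_t flags) { return 0; }
static inline void kasan_slab_page_free(const void *addr, size_t size) { }
#endif

Reading the first hunk with those signatures in mind: the shadow is set up as soon as the page allocation succeeds, so the later memcg_charge_slab() failure path has to call kasan_slab_page_free() before __free_pages(), and kmem_freepages() mirrors that order when the pages are released.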