author | Mel Gorman <mel@csn.ul.ie> | 2009-06-17 00:32:14 +0200 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-06-17 04:47:35 +0200 |
commit | 974709bdb2a34db378fc84140220f363f558d0d6 (patch) | |
tree | 2b63a089cc077579e3b67efba1995c71102db2e2 /mm/page_alloc.c | |
parent | page allocator: update NR_FREE_PAGES only as necessary (diff) | |
page allocator: get the pageblock migratetype without disabling interrupts
Local interrupts are disabled when freeing pages to the PCP list. Part of
that free operation checks the migratetype of the pageblock the page
belongs to, but it does this with interrupts disabled, and interrupts
should never be disabled longer than necessary. This patch checks the
pageblock type with interrupts enabled, with the impact that it is
possible a page is freed to the wrong list when a pageblock changes type.
As that block is by then already considered mixed from an
anti-fragmentation perspective, this is not of vital importance.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to '')
-rw-r--r-- | mm/page_alloc.c | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d56e377ad085..e60e41474332 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1030,6 +1030,7 @@ static void free_hot_cold_page(struct page *page, int cold)
 	kernel_map_pages(page, 1, 0);
 
 	pcp = &zone_pcp(zone, get_cpu())->pcp;
+	set_page_private(page, get_pageblock_migratetype(page));
 	local_irq_save(flags);
 	if (unlikely(clearMlocked))
 		free_page_mlock(page);
@@ -1039,7 +1040,6 @@ static void free_hot_cold_page(struct page *page, int cold)
 		list_add_tail(&page->lru, &pcp->list);
 	else
 		list_add(&page->lru, &pcp->list);
-	set_page_private(page, get_pageblock_migratetype(page));
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
 		free_pages_bulk(zone, pcp->batch, &pcp->list, 0);
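For illustration only, a self-contained userspace sketch of the ordering the hunk above applies: the pageblock type is looked up and recorded before the critical section, so only the list insertion and counter update run inside it. Everything here is a stand-in, not the kernel's API: page_stub, pcp_stub and stub_pageblock_migratetype() are hypothetical, and a pthread mutex plays the role of the interrupt-disabled region guarded by local_irq_save().

```c
/*
 * Illustrative, self-contained sketch of the ordering this patch applies,
 * not the real mm/page_alloc.c code. page_stub, pcp_stub and
 * stub_pageblock_migratetype() are stand-ins; a pthread mutex stands in
 * for the interrupt-disabled region guarded by local_irq_save().
 */
#include <pthread.h>
#include <stddef.h>

struct page_stub {
	int private_mt;			/* stands in for page_private() */
	struct page_stub *next;
};

struct pcp_stub {
	pthread_mutex_t lock;		/* stands in for local_irq_save() */
	struct page_stub *list;
	int count;
};

static int stub_pageblock_migratetype(const struct page_stub *page)
{
	(void)page;
	return 0;			/* e.g. an "unmovable" type */
}

static void free_page_sketch(struct pcp_stub *pcp, struct page_stub *page)
{
	/*
	 * Look up and record the pageblock type outside the critical
	 * section, as the patch now does before local_irq_save().
	 */
	page->private_mt = stub_pageblock_migratetype(page);

	/*
	 * Keep the protected window as short as possible: only the list
	 * insertion and counter update run under the lock.
	 */
	pthread_mutex_lock(&pcp->lock);
	page->next = pcp->list;
	pcp->list = page;
	pcp->count++;
	pthread_mutex_unlock(&pcp->lock);
}

int main(void)
{
	struct pcp_stub pcp = { PTHREAD_MUTEX_INITIALIZER, NULL, 0 };
	struct page_stub page = { 0, NULL };

	free_page_sketch(&pcp, &page);
	return pcp.count == 1 ? 0 : 1;
}
```

The point of the reordering is simply to shorten the interrupt-off (here, lock-held) window; the recorded type may be slightly stale if the pageblock changes type in the meantime, which, as the commit message notes, is acceptable because such a block is already considered mixed.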