)]}'
{
  "log": [
    {
      "commit": "6e543d5780e36ff5ee56c44d7e2e30db3457a7ed",
      "tree": "094208c4caad9d0d766137c243d0cfe97a1ce0b9",
      "parents": [
        "7a8010cd36273ff5f6fea5201ef9232f30cebbd9"
      ],
      "author": {
        "name": "Lisa Du",
        "email": "cldu@marvell.com",
        "time": "Wed Sep 11 14:22:36 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:01 2013 -0700"
      },
      "message": "mm: vmscan: fix do_try_to_free_pages() livelock\n\nThis patch is based on KOSAKI\u0027s work and I add a little more description,\nplease refer https://lkml.org/lkml/2012/6/14/74.\n\nCurrently, I found system can enter a state that there are lots of free\npages in a zone but only order-0 and order-1 pages which means the zone is\nheavily fragmented, then high order allocation could make direct reclaim\npath\u0027s long stall(ex, 60 seconds) especially in no swap and no compaciton\nenviroment.  This problem happened on v3.4, but it seems issue still lives\nin current tree, the reason is do_try_to_free_pages enter live lock:\n\nkswapd will go to sleep if the zones have been fully scanned and are still\nnot balanced.  As kswapd thinks there\u0027s little point trying all over again\nto avoid infinite loop.  Instead it changes order from high-order to\n0-order because kswapd think order-0 is the most important.  Look at\n73ce02e9 in detail.  If watermarks are ok, kswapd will go back to sleep\nand may leave zone-\u003eall_unreclaimable \u003d3D 0.  It assume high-order users\ncan still perform direct reclaim if they wish.\n\nDirect reclaim continue to reclaim for a high order which is not a\nCOSTLY_ORDER without oom-killer until kswapd turn on\nzone-\u003eall_unreclaimble\u003d .  This is because to avoid too early oom-kill.\nSo it means direct_reclaim depends on kswapd to break this loop.\n\nIn worst case, direct-reclaim may continue to page reclaim forever when\nkswapd sleeps forever until someone like watchdog detect and finally kill\nthe process.  As described in:\nhttp://thread.gmane.org/gmane.linux.kernel.mm/103737\n\nWe can\u0027t turn on zone-\u003eall_unreclaimable from direct reclaim path because\ndirect reclaim path don\u0027t take any lock and this way is racy.  
Thus this\npatch removes zone-\u003eall_unreclaimable field completely and recalculates\nzone reclaimable state every time.\n\nNote: we can\u0027t take the idea that direct-reclaim see zone-\u003epages_scanned\ndirectly and kswapd continue to use zone-\u003eall_unreclaimable.  Because, it\nis racy.  commit 929bea7c71 (vmscan: all_unreclaimable() use\nzone-\u003eall_unreclaimable as a name) describes the detail.\n\n[akpm@linux-foundation.org: uninline zone_reclaimable_pages() and zone_reclaimable()]\nCc: Aaditya Kumar \u003caaditya.kumar.30@gmail.com\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nCc: Nick Piggin \u003cnpiggin@gmail.com\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nCc: Bob Liu \u003clliubbo@gmail.com\u003e\nCc: Neil Zhang \u003czhangwm@marvell.com\u003e\nCc: Russell King - ARM Linux \u003clinux@arm.linux.org.uk\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Minchan Kim \u003cminchan@kernel.org\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Lisa Du \u003ccldu@marvell.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "81c0a2bb515fd4daae8cab64352877480792b515",
      "tree": "5ef326d226fdd14332cd0e5382e6dd2759dd08e3",
      "parents": [
        "e085dbc52fad8d79fa2245339c84bf3ef0b3a802"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "hannes@cmpxchg.org",
        "time": "Wed Sep 11 14:20:47 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:23 2013 -0700"
      },
      "message": "mm: page_alloc: fair zone allocator policy\n\nEach zone that holds userspace pages of one workload must be aged at a\nspeed proportional to the zone size.  Otherwise, the time an individual\npage gets to stay in memory depends on the zone it happened to be\nallocated in.  Asymmetry in the zone aging creates rather unpredictable\naging behavior and results in the wrong pages being reclaimed, activated\netc.\n\nBut exactly this happens right now because of the way the page allocator\nand kswapd interact.  The page allocator uses per-node lists of all zones\nin the system, ordered by preference, when allocating a new page.  When\nthe first iteration does not yield any results, kswapd is woken up and the\nallocator retries.  Due to the way kswapd reclaims zones below the high\nwatermark while a zone can be allocated from when it is above the low\nwatermark, the allocator may keep kswapd running while kswapd reclaim\nensures that the page allocator can keep allocating from the first zone in\nthe zonelist for extended periods of time.  Meanwhile the other zones\nrarely see new allocations and thus get aged much slower in comparison.\n\nThe result is that the occasional page placed in lower zones gets\nrelatively more time in memory, even gets promoted to the active list\nafter its peers have long been evicted.  Meanwhile, the bulk of the\nworking set may be thrashing on the preferred zone even though there may\nbe significant amounts of memory available in the lower zones.\n\nEven the most basic test -- repeatedly reading a file slightly bigger than\nmemory -- shows how broken the zone aging is.  
In this scenario, no single\npage should be able stay in memory long enough to get referenced twice and\nactivated, but activation happens in spades:\n\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 0\n      nr_inactive_file 0\n      nr_active_file 8\n      nr_inactive_file 1582\n      nr_active_file 11994\n  $ cat data data data data \u003e/dev/null\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 70\n      nr_inactive_file 258753\n      nr_active_file 443214\n      nr_inactive_file 149793\n      nr_active_file 12021\n\nFix this with a very simple round robin allocator.  Each zone is allowed a\nbatch of allocations that is proportional to the zone\u0027s size, after which\nit is treated as full.  The batch counters are reset when all zones have\nbeen tried and the allocator enters the slowpath and kicks off kswapd\nreclaim.  Allocation and reclaim is now fairly spread out to all\navailable/allowable zones:\n\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 0\n      nr_inactive_file 174\n      nr_active_file 4865\n      nr_inactive_file 53\n      nr_active_file 860\n  $ cat data data data data \u003e/dev/null\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 0\n      nr_inactive_file 666622\n      nr_active_file 4988\n      nr_inactive_file 190969\n      nr_active_file 937\n\nWhen zone_reclaim_mode is enabled, allocations will now spread out to all\nzones on the local node, not just the first preferred zone (which on a 4G\nnode might be a tiny Normal zone).\n\nSigned-off-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nReviewed-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: Paul Bolle \u003cpaul.bollee@gmail.com\u003e\nCc: Zlatko Calusic \u003czcalusic@bitsync.net\u003e\nTested-by: Kevin Hilman 
\u003ckhilman@linaro.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b21fbccd4b8aba805cbc231998ec7bf83616a79e",
      "tree": "d6465436da82b1cb45d4bf9b99b41b439285ba71",
      "parents": [
        "bc732f1d55cf41627ee4c64078812b2fa592b394"
      ],
      "author": {
        "name": "Zhang Yanfei",
        "email": "zhangyanfei@cn.fujitsu.com",
        "time": "Mon Jul 08 16:00:07 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 09 10:33:22 2013 -0700"
      },
      "message": "mm: remove unused functions is_{normal_idx, normal, dma32, dma}\n\nThese functions are nowhere used, so remove them.\n\nSigned-off-by: Zhang Yanfei \u003czhangyanfei@cn.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "55878e88c59221c3187e1c24ec3b15eb79c374c0",
      "tree": "f0c9a994d63b3fb64dffe6477317db53453b2b61",
      "parents": [
        "b26a3dfd4c0b888303a5909ef37febeb582e190e"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Wed Jul 03 15:04:44 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Jul 03 16:07:40 2013 -0700"
      },
      "message": "sparsemem: add BUILD_BUG_ON when sizeof mem_section is non-power-of-2\n\nInstead of leaving a hidden trap for the next person who comes along and\nwants to add something to mem_section, add a big fat warning about it\nneeding to be a power-of-2, and insert a BUILD_BUG_ON() in sparse_init()\nto catch mistakes.\n\nRight now non-power-of-2 mem_sections cause a number of WARNs at boot\n(which don\u0027t clearly point to the size of mem_section as an issue), but\nthe system limps on (temporarily, at least).\n\nThis is based upon Dave Hansen\u0027s earlier RFC where he ran into the same\nissue:\n\t\"sparsemem: fix boot when SECTIONS_PER_ROOT is not power-of-2\"\n\thttp://lkml.indiana.edu/hypermail/linux/kernel/1205.2/03077.html\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nAcked-by: Dave Hansen \u003cdave.hansen@linux.intel.com\u003e\nCc: Jiang Liu \u003cliuj97@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c3d5f5f0c2bc4eabeaf49f1a21e1aeb965246cd2",
      "tree": "ec3b59b33102150b20c04de0b43d25dcea692b68",
      "parents": [
        "7b4b2a0d6c8500350784beb83a6a55e60ea3bea3"
      ],
      "author": {
        "name": "Jiang Liu",
        "email": "liuj97@gmail.com",
        "time": "Wed Jul 03 15:03:14 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Jul 03 16:07:33 2013 -0700"
      },
      "message": "mm: use a dedicated lock to protect totalram_pages and zone-\u003emanaged_pages\n\nCurrently lock_memory_hotplug()/unlock_memory_hotplug() are used to\nprotect totalram_pages and zone-\u003emanaged_pages.  Other than the memory\nhotplug driver, totalram_pages and zone-\u003emanaged_pages may also be\nmodified at runtime by other drivers, such as Xen balloon,\nvirtio_balloon etc.  For those cases, memory hotplug lock is a little\ntoo heavy, so introduce a dedicated lock to protect totalram_pages and\nzone-\u003emanaged_pages.\n\nNow we have a simplified locking rules totalram_pages and\nzone-\u003emanaged_pages as:\n\n1) no locking for read accesses because they are unsigned long.\n2) no locking for write accesses at boot time in single-threaded context.\n3) serialize write accesses at runtime by acquiring the dedicated\n   managed_page_count_lock.\n\nAlso adjust zone-\u003emanaged_pages when freeing reserved pages into the\nbuddy system, to keep totalram_pages and zone-\u003emanaged_pages in\nconsistence.\n\n[akpm@linux-foundation.org: don\u0027t export adjust_managed_page_count to modules (for now)]\nSigned-off-by: Jiang Liu \u003cjiang.liu@huawei.com\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: \"H. Peter Anvin\" \u003chpa@zytor.com\u003e\nCc: \"Michael S. 
Tsirkin\" \u003cmst@redhat.com\u003e\nCc: \u003csworddragon2@aol.com\u003e\nCc: Arnd Bergmann \u003carnd@arndb.de\u003e\nCc: Catalin Marinas \u003ccatalin.marinas@arm.com\u003e\nCc: Chris Metcalf \u003ccmetcalf@tilera.com\u003e\nCc: David Howells \u003cdhowells@redhat.com\u003e\nCc: Geert Uytterhoeven \u003cgeert@linux-m68k.org\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: Jeremy Fitzhardinge \u003cjeremy@goop.org\u003e\nCc: Jianguo Wu \u003cwujianguo@huawei.com\u003e\nCc: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nCc: Kamezawa Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Konrad Rzeszutek Wilk \u003ckonrad.wilk@oracle.com\u003e\nCc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nCc: Tang Chen \u003ctangchen@cn.fujitsu.com\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Wen Congyang \u003cwency@cn.fujitsu.com\u003e\nCc: Will Deacon \u003cwill.deacon@arm.com\u003e\nCc: Yasuaki Ishimatsu \u003cisimatu.yasuaki@jp.fujitsu.com\u003e\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Russell King \u003crmk@arm.linux.org.uk\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "114d4b79f7e561fd76fb98eba79e18df7cdd60f0",
      "tree": "20d1e2b060b4272e9a18894538d6beb551d5b7c8",
      "parents": [
        "72c3b51bda557ab38d6c5b2af750c27cba15f828"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Wed Jul 03 15:02:09 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Jul 03 16:07:29 2013 -0700"
      },
      "message": "mmzone: note that node_size_lock should be manipulated via pgdat_resize_lock()\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "72c3b51bda557ab38d6c5b2af750c27cba15f828",
      "tree": "4a54ebf4989062ce4ea034b59070afaf4970644a",
      "parents": [
        "f919b19614f06711cba300c1bb1e3d94c9ca21b0"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Wed Jul 03 15:02:08 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Jul 03 16:07:29 2013 -0700"
      },
      "message": "mm: fix comment referring to non-existent size_seqlock, change to span_seqlock\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nAcked-by: David Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "283aba9f9e0e4882bf09bd37a2983379a6fae805",
      "tree": "8c856efae71bb2daaadae48ff565132dd6e0b06b",
      "parents": [
        "d43006d503ac921c7df4f94d13c17db6f13c9d26"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Wed Jul 03 15:01:51 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Jul 03 16:07:28 2013 -0700"
      },
      "message": "mm: vmscan: block kswapd if it is encountering pages under writeback\n\nHistorically, kswapd used to congestion_wait() at higher priorities if\nit was not making forward progress.  This made no sense as the failure\nto make progress could be completely independent of IO.  It was later\nreplaced by wait_iff_congested() and removed entirely by commit 258401a6\n(mm: don\u0027t wait on congested zones in balance_pgdat()) as it was\nduplicating logic in shrink_inactive_list().\n\nThis is problematic.  If kswapd encounters many pages under writeback\nand it continues to scan until it reaches the high watermark then it\nwill quickly skip over the pages under writeback and reclaim clean young\npages or push applications out to swap.\n\nThe use of wait_iff_congested() is not suited to kswapd as it will only\nstall if the underlying BDI is really congested or a direct reclaimer\nwas unable to write to the underlying BDI.  kswapd bypasses the BDI\ncongestion as it sets PF_SWAPWRITE but even if this was taken into\naccount then it would cause direct reclaimers to stall on writeback\nwhich is not desirable.\n\nThis patch sets a ZONE_WRITEBACK flag if direct reclaim or kswapd is\nencountering too many pages under writeback.  
If this flag is set and\nkswapd encounters a PageReclaim page under writeback then it\u0027ll assume\nthat the LRU lists are being recycled too quickly before IO can complete\nand block waiting for some IO to complete.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Jiri Slaby \u003cjslaby@suse.cz\u003e\nCc: Valdis Kletnieks \u003cValdis.Kletnieks@vt.edu\u003e\nTested-by: Zlatko Calusic \u003czcalusic@bitsync.net\u003e\nCc: dormando \u003cdormando@rydia.net\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d43006d503ac921c7df4f94d13c17db6f13c9d26",
      "tree": "3adf95869d1821cd5360c81dca790b3203608555",
      "parents": [
        "9aa41348a8d11427feec350b21dcdd4330fd20c4"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Wed Jul 03 15:01:50 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Jul 03 16:07:28 2013 -0700"
      },
      "message": "mm: vmscan: have kswapd writeback pages based on dirty pages encountered, not priority\n\nCurrently kswapd queues dirty pages for writeback if scanning at an\nelevated priority but the priority kswapd scans at is not related to the\nnumber of unqueued dirty encountered.  Since commit \"mm: vmscan: Flatten\nkswapd priority loop\", the priority is related to the size of the LRU\nand the zone watermark which is no indication as to whether kswapd\nshould write pages or not.\n\nThis patch tracks if an excessive number of unqueued dirty pages are\nbeing encountered at the end of the LRU.  If so, it indicates that dirty\npages are being recycled before flusher threads can clean them and flags\nthe zone so that kswapd will start writing pages until the zone is\nbalanced.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Jiri Slaby \u003cjslaby@suse.cz\u003e\nCc: Valdis Kletnieks \u003cValdis.Kletnieks@vt.edu\u003e\nTested-by: Zlatko Calusic \u003czcalusic@bitsync.net\u003e\nCc: dormando \u003cdormando@rydia.net\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5d434fcb255dec99189f1c58a06e4f56e12bf77d",
      "tree": "734289dc85074903d9e636a935d43414746e222c",
      "parents": [
        "5a5a1bf099d6942399ea0b34a62e5f0bc4c5c36e",
        "071361d3473ebb8142907470ff12d59c59f6be72"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Apr 30 09:36:50 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Apr 30 09:36:50 2013 -0700"
      },
      "message": "Merge branch \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial\n\nPull trivial tree updates from Jiri Kosina:\n \"Usual stuff, mostly comment fixes, typo fixes, printk fixes and small\n  code cleanups\"\n\n* \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (45 commits)\n  mm: Convert print_symbol to %pSR\n  gfs2: Convert print_symbol to %pSR\n  m32r: Convert print_symbol to %pSR\n  iostats.txt: add easy-to-find description for field 6\n  x86 cmpxchg.h: fix wrong comment\n  treewide: Fix typo in printk and comments\n  doc: devicetree: Fix various typos\n  docbook: fix 8250 naming in device-drivers\n  pata_pdc2027x: Fix compiler warning\n  treewide: Fix typo in printks\n  mei: Fix comments in drivers/misc/mei\n  treewide: Fix typos in kernel messages\n  pm44xx: Fix comment for \"CONFIG_CPU_IDLE\"\n  doc: Fix typo \"CONFIG_CGROUP_CGROUP_MEMCG_SWAP\"\n  mmzone: correct \"pags\" to \"pages\" in comment.\n  kernel-parameters: remove outdated \u0027noresidual\u0027 parameter\n  Remove spurious _H suffixes from ifdef comments\n  sound: Remove stray pluses from Kconfig file\n  radio-shark: Fix printk \"CONFIG_LED_CLASS\"\n  doc: put proper reference to CONFIG_MODULE_SIG_ENFORCE\n  ...\n"
    },
    {
      "commit": "8761e31c227f9751327196f170eba2b519eab48f",
      "tree": "a4c8221977c6026677dc122324b21a569823459c",
      "parents": [
        "e1ca3c7ac9caad278d05f0b30f4e1a03e4a65b7f"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Tue Mar 26 10:30:44 2013 -0700"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Wed Mar 27 14:12:14 2013 +0100"
      },
      "message": "mmzone: correct \"pags\" to \"pages\" in comment.\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nSigned-off-by: Jiri Kosina \u003cjkosina@suse.cz\u003e\n"
    },
    {
      "commit": "f9228b204f789493117e458d2fefae937edb7272",
      "tree": "b073cb200a073a6ce2767012174ccd39accf4e48",
      "parents": [
        "2ca067efd82939dfd87827d29d36a265823a4c2f"
      ],
      "author": {
        "name": "Russ Anderson",
        "email": "rja@sgi.com",
        "time": "Fri Mar 22 15:04:43 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri Mar 22 16:41:20 2013 -0700"
      },
      "message": "mm: zone_end_pfn is too small\n\nBooting with 32 TBytes memory hits BUG at mm/page_alloc.c:552! (output\nbelow).\n\nThe key hint is \"page 4294967296 outside zone\".\n4294967296 \u003d 0x100000000 (bit 32 is set).\n\nThe problem is in include/linux/mmzone.h:\n\n  530 static inline unsigned zone_end_pfn(const struct zone *zone)\n  531 {\n  532         return zone-\u003ezone_start_pfn + zone-\u003espanned_pages;\n  533 }\n\nzone_end_pfn is \"unsigned\" (32 bits).  Changing it to \"unsigned long\"\n(64 bits) fixes the problem.\n\nzone_end_pfn() was added recently in commit 108bcc96ef70 (\"mm: add \u0026 use\nzone_end_pfn() and zone_spans_pfn()\")\n\nOutput from the failure.\n\n  No AGP bridge found\n  page 4294967296 outside zone [ 4294967296 - 4327469056 ]\n  ------------[ cut here ]------------\n  kernel BUG at mm/page_alloc.c:552!\n  invalid opcode: 0000 [#1] SMP\n  Modules linked in:\n  CPU 0\n  Pid: 0, comm: swapper Not tainted 3.9.0-rc2.dtp+ #10\n  RIP: free_one_page+0x382/0x430\n  Process swapper (pid: 0, threadinfo ffffffff81942000, task ffffffff81955420)\n  Call Trace:\n    __free_pages_ok+0x96/0xb0\n    __free_pages+0x25/0x50\n    __free_pages_bootmem+0x8a/0x8c\n    __free_memory_core+0xea/0x131\n    free_low_memory_core_early+0x4a/0x98\n    free_all_bootmem+0x45/0x47\n    mem_init+0x7b/0x14c\n    start_kernel+0x216/0x433\n    x86_64_start_reservations+0x2a/0x2c\n    x86_64_start_kernel+0x144/0x153\n  Code: 89 f1 ba 01 00 00 00 31 f6 d3 e2 4c 89 ef e8 66 a4 01 00 e9 2c fe ff ff 0f 0b eb fe 0f 0b 66 66 2e 0f 1f 84 00 00 00 00 00 eb f3 \u003c0f\u003e 0b eb fe 0f 0b 0f 1f 84 00 00 00 00 00 eb f6 0f 0b eb fe 49\n\nSigned-off-by: Russ Anderson \u003crja@sgi.com\u003e\nReported-by: George Beshers \u003cgbeshers@sgi.com\u003e\nAcked-by: Hedi Berriche \u003chedi@sgi.com\u003e\nCc: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nSigned-off-by: Andrew Morton 
\u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "da3649e133948d8b7d8c57b05a33faf62ac2cc7e",
      "tree": "d03e6ce3b4ee6a5b08739839809c88ae8ec53629",
      "parents": [
        "d29bb9782d22063892e28716cdb76a87d2876ddb"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Fri Feb 22 16:35:27 2013 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sat Feb 23 17:50:20 2013 -0800"
      },
      "message": "mmzone: add pgdat_{end_pfn,is_empty}() helpers \u0026 consolidate.\n\nAdd pgdat_end_pfn() and pgdat_is_empty() helpers which match the similar\nzone_*() functions.\n\nChange node_end_pfn() to be a wrapper of pgdat_end_pfn().\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nCc: David Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: Catalin Marinas \u003ccatalin.marinas@arm.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "2a6e3ebee2edcade56f836390a5f0c7b76ff5f9e",
      "tree": "53005ce7c42e9fad98d9a0dec7a0035f5bb5143c",
      "parents": [
        "108bcc96ef7047c02cad4d229f04da38186a3f3f"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Fri Feb 22 16:35:24 2013 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sat Feb 23 17:50:20 2013 -0800"
      },
      "message": "mm: add zone_is_empty() and zone_is_initialized()\n\nFactoring out these 2 checks makes it more clear what we are actually\nchecking for.\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nCc: David Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: Catalin Marinas \u003ccatalin.marinas@arm.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "108bcc96ef7047c02cad4d229f04da38186a3f3f",
      "tree": "e11d82074cae54dcf0fa8eea12750c661a16b02d",
      "parents": [
        "9127ab4ff92f0ecd7b4671efa9d0edb21c691e9f"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Fri Feb 22 16:35:23 2013 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sat Feb 23 17:50:20 2013 -0800"
      },
      "message": "mm: add \u0026 use zone_end_pfn() and zone_spans_pfn()\n\nAdd 2 helpers (zone_end_pfn() and zone_spans_pfn()) to reduce code\nduplication.\n\nThis also switches to using them in compaction (where an additional\nvariable needed to be renamed), page_alloc, vmstat, memory_hotplug, and\nkmemleak.\n\nNote that in compaction.c I avoid calling zone_end_pfn() repeatedly\nbecause I expect at some point the sycronization issues with start_pfn \u0026\nspanned_pages will need fixing, either by actually using the seqlock or\nclever memory barrier usage.\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nCc: David Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: Catalin Marinas \u003ccatalin.marinas@arm.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "bbeae5b05ef6e40bf54db05ceb8635824153b9e2",
      "tree": "293d8b4e4bfc06367908df1915460905a5f8408b",
      "parents": [
        "3c0ff4689630b280704666833e9539d84cddc373"
      ],
      "author": {
        "name": "Peter Zijlstra",
        "email": "a.p.zijlstra@chello.nl",
        "time": "Fri Feb 22 16:34:30 2013 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sat Feb 23 17:50:17 2013 -0800"
      },
      "message": "mm: move page flags layout to separate header\n\nThis is a preparation patch for moving page-\u003e_last_nid into page-\u003eflags\nthat moves page flag layout information to a separate header.  This\npatch is necessary because otherwise there would be a circular\ndependency between mm_types.h and mm.h.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nCc: Simon Jeons \u003csimon.jeons@gmail.com\u003e\nCc: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "194159fbcc0d6ac1351837d3cd7a27a4af0219a6",
      "tree": "a5a960a4c8698001db50a987015131bb5d256c6e",
      "parents": [
        "c60514b6314137a9505c60966fda2094b22a2fda"
      ],
      "author": {
        "name": "Minchan Kim",
        "email": "minchan@kernel.org",
        "time": "Fri Feb 22 16:33:58 2013 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sat Feb 23 17:50:15 2013 -0800"
      },
      "message": "mm: remove MIGRATE_ISOLATE check in hotpath\n\nSeveral functions test MIGRATE_ISOLATE and some of those are hotpath but\nMIGRATE_ISOLATE is used only if we enable CONFIG_MEMORY_ISOLATION(ie,\nCMA, memory-hotplug and memory-failure) which are not common config\noption.  So let\u0027s not add unnecessary overhead and code when we don\u0027t\nenable CONFIG_MEMORY_ISOLATION.\n\nSigned-off-by: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a458431e176ddb27e8ef8b98c2a681b217337393",
      "tree": "466ec91a25ebbe30870d12486071bb08a8c7cd5a",
      "parents": [
        "358e419f826b552c9d795bcd3820597217692461"
      ],
      "author": {
        "name": "Bartlomiej Zolnierkiewicz",
        "email": "b.zolnierkie@samsung.com",
        "time": "Fri Jan 04 15:35:08 2013 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri Jan 04 16:11:46 2013 -0800"
      },
      "message": "mm: fix zone_watermark_ok_safe() accounting of isolated pages\n\nCommit 702d1a6e0766 (\"memory-hotplug: fix kswapd looping forever\nproblem\") added an isolated pageblocks counter (nr_pageblock_isolate in\nstruct zone) and used it to adjust free pages counter in\nzone_watermark_ok_safe() to prevent kswapd looping forever problem.\n\nThen later, commit 2139cbe627b8 (\"cma: fix counting of isolated pages\")\nfixed accounting of isolated pages in global free pages counter.  It\nmade the previous zone_watermark_ok_safe() fix unnecessary and\npotentially harmful (cause now isolated pages may be accounted twice\nmaking free pages counter incorrect).\n\nThis patch removes the special isolated pageblocks counter altogether\nwhich fixes zone_watermark_ok_safe() free pages check.\n\nReported-by: Tomasz Stanislawski \u003ct.stanislaws@samsung.com\u003e\nSigned-off-by: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Aaditya Kumar \u003caaditya.kumar.30@gmail.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "3d59eebc5e137bd89c6351e4c70e90ba1d0dc234",
      "tree": "b4ddfd0b057454a7437a3b4e3074a3b8b4b03817",
      "parents": [
        "11520e5e7c1855fc3bf202bb3be35a39d9efa034",
        "4fc3f1d66b1ef0d7b8dc11f4ff1cc510f78b37d6"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sun Dec 16 14:33:25 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sun Dec 16 15:18:08 2012 -0800"
      },
      "message": "Merge tag \u0027balancenuma-v11\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux-balancenuma\n\nPull Automatic NUMA Balancing bare-bones from Mel Gorman:\n \"There are three implementations for NUMA balancing, this tree\n  (balancenuma), numacore which has been developed in tip/master and\n  autonuma which is in aa.git.\n\n  In almost all respects balancenuma is the dumbest of the three because\n  its main impact is on the VM side with no attempt to be smart about\n  scheduling.  In the interest of getting the ball rolling, it would be\n  desirable to see this much merged for 3.8 with the view to building\n  scheduler smarts on top and adapting the VM where required for 3.9.\n\n  The most recent set of comparisons available from different people are\n\n    mel:    https://lkml.org/lkml/2012/12/9/108\n    mingo:  https://lkml.org/lkml/2012/12/7/331\n    tglx:   https://lkml.org/lkml/2012/12/10/437\n    srikar: https://lkml.org/lkml/2012/12/10/397\n\n  The results are a mixed bag.  In my own tests, balancenuma does\n  reasonably well.  It\u0027s dumb as rocks and does not regress against\n  mainline.  On the other hand, Ingo\u0027s tests shows that balancenuma is\n  incapable of converging for this workloads driven by perf which is bad\n  but is potentially explained by the lack of scheduler smarts.  Thomas\u0027\n  results show balancenuma improves on mainline but falls far short of\n  numacore or autonuma.  Srikar\u0027s results indicate we all suffer on a\n  large machine with imbalanced node sizes.\n\n  My own testing showed that recent numacore results have improved\n  dramatically, particularly in the last week but not universally.\n  We\u0027ve butted heads heavily on system CPU usage and high levels of\n  migration even when it shows that overall performance is better.\n  There are also cases where it regresses.  
Of interest is that for\n  specjbb in some configurations it will regress for lower numbers of\n  warehouses and show gains for higher numbers which is not reported by\n  the tool by default and sometimes missed in reports.  Recently I\n  reported for numacore that the JVM was crashing with\n  NullPointerExceptions but currently it\u0027s unclear what the source of\n  this problem is.  Initially I thought it was in how numacore batch\n  handles PTEs but I no longer think this is the case.  It\u0027s possible\n  numacore is just able to trigger it due to higher rates of migration.\n\n  These reports were quite late in the cycle so I/we would like to start\n  with this tree as it contains much of the code we can agree on and has\n  not changed significantly over the last 2-3 weeks.\"\n\n* tag \u0027balancenuma-v11\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux-balancenuma: (50 commits)\n  mm/rmap, migration: Make rmap_walk_anon() and try_to_unmap_anon() more scalable\n  mm/rmap: Convert the struct anon_vma::mutex to an rwsem\n  mm: migrate: Account a transhuge page properly when rate limiting\n  mm: numa: Account for failed allocations and isolations as migration failures\n  mm: numa: Add THP migration for the NUMA working set scanning fault case build fix\n  mm: numa: Add THP migration for the NUMA working set scanning fault case.\n  mm: sched: numa: Delay PTE scanning until a task is scheduled on a new node\n  mm: sched: numa: Control enabling and disabling of NUMA balancing if !SCHED_DEBUG\n  mm: sched: numa: Control enabling and disabling of NUMA balancing\n  mm: sched: Adapt the scanning rate if a NUMA hinting fault does not migrate\n  mm: numa: Use a two-stage filter to restrict pages being migrated for unlikely task\u003c-\u003enode relationships\n  mm: numa: migrate: Set last_nid on newly allocated page\n  mm: numa: split_huge_page: Transfer last_nid on tail page\n  mm: numa: Introduce last_nid to the page frame\n  sched: numa: 
Slowly increase the scanning period as NUMA faults are handled\n  mm: numa: Rate limit setting of pte_numa if node is saturated\n  mm: numa: Rate limit the amount of memory that is migrated between nodes\n  mm: numa: Structures for Migrate On Fault per NUMA migration rate limiting\n  mm: numa: Migrate pages handled during a pmd_numa hinting fault\n  mm: numa: Migrate on reference policy\n  ...\n"
    },
    {
      "commit": "9feedc9d831e18ae6d0d15aa562e5e46ba53647b",
      "tree": "cb26ff54b0f02c4905772288b27f99b8b384ad6d",
      "parents": [
        "c2d23f919bafcbc2259f5257d9a7d729802f0e3a"
      ],
      "author": {
        "name": "Jiang Liu",
        "email": "liuj97@gmail.com",
        "time": "Wed Dec 12 13:52:12 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Dec 12 17:38:34 2012 -0800"
      },
      "message": "mm: introduce new field \"managed_pages\" to struct zone\n\nCurrently a zone\u0027s present_pages is calcuated as below, which is\ninaccurate and may cause trouble to memory hotplug.\n\n\tspanned_pages - absent_pages - memmap_pages - dma_reserve.\n\nDuring fixing bugs caused by inaccurate zone-\u003epresent_pages, we found\nzone-\u003epresent_pages has been abused.  The field zone-\u003epresent_pages may\nhave different meanings in different contexts:\n\n1) pages existing in a zone.\n2) pages managed by the buddy system.\n\nFor more discussions about the issue, please refer to:\n  http://lkml.org/lkml/2012/11/5/866\n  https://patchwork.kernel.org/patch/1346751/\n\nThis patchset tries to introduce a new field named \"managed_pages\" to\nstruct zone, which counts \"pages managed by the buddy system\".  And revert\nzone-\u003epresent_pages to count \"physical pages existing in a zone\", which\nalso keep in consistence with pgdat-\u003enode_present_pages.\n\nWe will set an initial value for zone-\u003emanaged_pages in function\nfree_area_init_core() and will adjust it later if the initial value is\ninaccurate.\n\nFor DMA/normal zones, the initial value is set to:\n\n\t(spanned_pages - absent_pages - memmap_pages - dma_reserve)\n\nLater zone-\u003emanaged_pages will be adjusted to the accurate value when the\nbootmem allocator frees all free pages to the buddy system in function\nfree_all_bootmem_node() and free_all_bootmem().\n\nThe bootmem allocator doesn\u0027t touch highmem pages, so highmem zones\u0027\nmanaged_pages is set to the accurate value \"spanned_pages - absent_pages\"\nin function free_area_init_core() and won\u0027t be updated anymore.\n\nThis patch also adds a new field \"managed_pages\" to /proc/zoneinfo\nand sysrq showmem.\n\n[akpm@linux-foundation.org: small comment tweaks]\nSigned-off-by: Jiang Liu \u003cjiang.liu@huawei.com\u003e\nCc: Wen Congyang \u003cwency@cn.fujitsu.com\u003e\nCc: David Rientjes 
\u003crientjes@google.com\u003e\nCc: Maciej Rutecki \u003cmaciej.rutecki@gmail.com\u003e\nTested-by: Chris Clayton \u003cchris2553@googlemail.com\u003e\nCc: \"Rafael J . Wysocki\" \u003crjw@sisk.pl\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Jianguo Wu \u003cwujianguo@huawei.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "bc357f431c836c6631751e3ef7dfe7882394ad67",
      "tree": "b67904e354a30c9ecc0a53b8288a3a74c37b9bc2",
      "parents": [
        "2e30abd1730751d58463d88bc0844ab8fd7112a9"
      ],
      "author": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Tue Dec 11 16:02:59 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Dec 11 17:22:27 2012 -0800"
      },
      "message": "mm: cma: remove watermark hacks\n\nCommits 2139cbe627b8 (\"cma: fix counting of isolated pages\") and\nd95ea5d18e69 (\"cma: fix watermark checking\") introduced a reliable\nmethod of free page accounting when memory is being allocated from CMA\nregions, so the workaround introduced earlier by commit 49f223a9cd96\n(\"mm: trigger page reclaim in alloc_contig_range() to stabilise\nwatermarks\") can be finally removed.\n\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Arnd Bergmann \u003carnd@arndb.de\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "8177a420ed7c16c171ed3c3aec5b0676db38c247",
      "tree": "9d188c65d68af5a8f77913043ab8689c5530508b",
      "parents": [
        "9532fec118d485ea37ab6e3ea372d68cd8b4cd0d"
      ],
      "author": {
        "name": "Andrea Arcangeli",
        "email": "aarcange@redhat.com",
        "time": "Fri Mar 23 20:56:34 2012 +0100"
      },
      "committer": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Tue Dec 11 14:42:50 2012 +0000"
      },
      "message": "mm: numa: Structures for Migrate On Fault per NUMA migration rate limiting\n\nThis defines the per-node data used by Migrate On Fault in order to\nrate limit the migration. The rate limiting is applied independently\nto each destination node.\n\nSigned-off-by: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\n"
    },
    {
      "commit": "bea8c150a7efbc0f204e709b7274fe273f55e0d3",
      "tree": "fab695d7d18c5bbc9a42bccd793456826e392754",
      "parents": [
        "18f694271b86ee279e88208550cc49fee206b544"
      ],
      "author": {
        "name": "Hugh Dickins",
        "email": "hughd@google.com",
        "time": "Fri Nov 16 14:14:54 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri Nov 16 14:33:04 2012 -0800"
      },
      "message": "memcg: fix hotplugged memory zone oops\n\nWhen MEMCG is configured on (even when it\u0027s disabled by boot option),\nwhen adding or removing a page to/from its lru list, the zone pointer\nused for stats updates is nowadays taken from the struct lruvec.  (On\nmany configurations, calculating zone from page is slower.)\n\nBut we have no code to update all the lruvecs (per zone, per memcg) when\na memory node is hotadded.  Here\u0027s an extract from the oops which\nresults when running numactl to bind a program to a newly onlined node:\n\n  BUG: unable to handle kernel NULL pointer dereference at 0000000000000f60\n  IP:  __mod_zone_page_state+0x9/0x60\n  Pid: 1219, comm: numactl Not tainted 3.6.0-rc5+ #180 Bochs Bochs\n  Process numactl (pid: 1219, threadinfo ffff880039abc000, task ffff8800383c4ce0)\n  Call Trace:\n    __pagevec_lru_add_fn+0xdf/0x140\n    pagevec_lru_move_fn+0xb1/0x100\n    __pagevec_lru_add+0x1c/0x30\n    lru_add_drain_cpu+0xa3/0x130\n    lru_add_drain+0x2f/0x40\n   ...\n\nThe natural solution might be to use a memcg callback whenever memory is\nhotadded; but that solution has not been scoped out, and it happens that\nwe do have an easy location at which to update lruvec-\u003ezone.  The lruvec\npointer is discovered either by mem_cgroup_zone_lruvec() or by\nmem_cgroup_page_lruvec(), and both of those do know the right zone.\n\nSo check and set lruvec-\u003ezone in those; and remove the inadequate\nattempt to set lruvec-\u003ezone from lruvec_init(), which is called before\nNODE_DATA(node) has been allocated in such cases.\n\nAh, there was one exceptionr.  For no particularly good reason,\nmem_cgroup_force_empty_list() has its own code for deciding lruvec.\nChange it to use the standard mem_cgroup_zone_lruvec() and\nmem_cgroup_get_lru_size() too.  
In fact it was already safe against such\nan oops (the lru lists in danger could only be empty), but we\u0027re better\nproofed against future changes this way.\n\nI\u0027ve marked this for stable (3.6) since we introduced the problem in 3.5\n(now closed to stable); but I have no idea if this is the only fix\nneeded to get memory hotadd working with memcg in 3.6, and received no\nanswer when I enquired twice before.\n\nReported-by: Tang Chen \u003ctangchen@cn.fujitsu.com\u003e\nSigned-off-by: Hugh Dickins \u003chughd@google.com\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nCc: Wen Congyang \u003cwency@cn.fujitsu.com\u003e\nCc: \u003cstable@vger.kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e46a28790e594c0876d1a84270926abf75460f61",
      "tree": "febfaa6c20dab69490308190729f1d898e4df930",
      "parents": [
        "7a71932d5676b7410ab64d149bad8bde6b0d8632"
      ],
      "author": {
        "name": "Minchan Kim",
        "email": "minchan@kernel.org",
        "time": "Mon Oct 08 16:33:48 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 09 16:23:00 2012 +0900"
      },
      "message": "CMA: migrate mlocked pages\n\nPresently CMA cannot migrate mlocked pages so it ends up failing to allocate\ncontiguous memory space.\n\nThis patch makes mlocked pages be migrated out.  Of course, it can affect\nrealtime processes but in CMA usecase, contiguous memory allocation failing\nis far worse than access latency to an mlocked page being variable while\nCMA is running.  If someone wants to make the system realtime, he shouldn\u0027t\nenable CMA because stalls can still happen at random times.\n\n[akpm@linux-foundation.org: tweak comment text, per Mel]\nSigned-off-by: Minchan Kim \u003cminchan@kernel.org\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nCc: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nCc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "957f822a0ab95e88b146638bad6209bbc315bedd",
      "tree": "2e1336ddc1c574f54d582c6b74dcc1d1230482f8",
      "parents": [
        "a0c5e813f087dffc0d9b173d2e7d3328b1482fd5"
      ],
      "author": {
        "name": "David Rientjes",
        "email": "rientjes@google.com",
        "time": "Mon Oct 08 16:33:24 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 09 16:22:56 2012 +0900"
      },
      "message": "mm, numa: reclaim from all nodes within reclaim distance\n\nRECLAIM_DISTANCE represents the distance between nodes at which it is\ndeemed too costly to allocate from; it\u0027s preferred to try to reclaim from\na local zone before falling back to allocating on a remote node with such\na distance.\n\nTo do this, zone_reclaim_mode is set if the distance between any two\nnodes on the system is greather than this distance.  This, however, ends\nup causing the page allocator to reclaim from every zone regardless of\nits affinity.\n\nWhat we really want is to reclaim only from zones that are closer than\nRECLAIM_DISTANCE.  This patch adds a nodemask to each node that\nrepresents the set of nodes that are within this distance.  During the\nzone iteration, if the bit for a zone\u0027s node is set for the local node,\nthen reclaim is attempted; otherwise, the zone is skipped.\n\n[akpm@linux-foundation.org: fix CONFIG_NUMA\u003dn build]\nSigned-off-by: David Rientjes \u003crientjes@google.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "62997027ca5b3d4618198ed8b1aba40b61b1137b",
      "tree": "cf26352e091ae10f7201d98ca774a8c0e5f8cdfd",
      "parents": [
        "c89511ab2f8fe2b47585e60da8af7fd213ec877e"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Oct 08 16:32:47 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 09 16:22:51 2012 +0900"
      },
      "message": "mm: compaction: clear PG_migrate_skip based on compaction and reclaim activity\n\nCompaction caches if a pageblock was scanned and no pages were isolated so\nthat the pageblocks can be skipped in the future to reduce scanning.  This\ninformation is not cleared by the page allocator based on activity due to\nthe impact it would have to the page allocator fast paths.  Hence there is\na requirement that something clear the cache or pageblocks will be skipped\nforever.  Currently the cache is cleared if there were a number of recent\nallocation failures and it has not been cleared within the last 5 seconds.\nTime-based decisions like this are terrible as they have no relationship\nto VM activity and is basically a big hammer.\n\nUnfortunately, accurate heuristics would add cost to some hot paths so\nthis patch implements a rough heuristic.  There are two cases where the\ncache is cleared.\n\n1. If a !kswapd process completes a compaction cycle (migrate and free\n   scanner meet), the zone is marked compact_blockskip_flush. When kswapd\n   goes to sleep, it will clear the cache. This is expected to be the\n   common case where the cache is cleared. It does not really matter if\n   kswapd happens to be asleep or going to sleep when the flag is set as\n   it will be woken on the next allocation request.\n\n2. If there have been multiple failures recently and compaction just\n   finished being deferred then a process will clear the cache and start a\n   full scan.  This situation happens if there are multiple high-order\n   allocation requests under heavy memory pressure.\n\nThe clearing of the PG_migrate_skip bits and other scans is inherently\nracy but the race is harmless.  For allocations that can fail such as THP,\nthey will simply fail.  For requests that cannot fail, they will retry the\nallocation.  
Tests indicated that scanning rates were roughly similar to\nwhen the time-based heuristic was used and the allocation success rates\nwere similar.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Richard Davies \u003crichard@arachsys.com\u003e\nCc: Shaohua Li \u003cshli@kernel.org\u003e\nCc: Avi Kivity \u003cavi@redhat.com\u003e\nCc: Rafael Aquini \u003caquini@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c89511ab2f8fe2b47585e60da8af7fd213ec877e",
      "tree": "c6b04cb5335957e8409edc77ca23ef012d9d326d",
      "parents": [
        "bb13ffeb9f6bfeb301443994dfbf29f91117dfb3"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Oct 08 16:32:45 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 09 16:22:50 2012 +0900"
      },
      "message": "mm: compaction: Restart compaction from near where it left off\n\nThis is almost entirely based on Rik\u0027s previous patches and discussions\nwith him about how this might be implemented.\n\nOrder \u003e 0 compaction stops when enough free pages of the correct page\norder have been coalesced.  When doing subsequent higher order\nallocations, it is possible for compaction to be invoked many times.\n\nHowever, the compaction code always starts out looking for things to\ncompact at the start of the zone, and for free pages to compact things to\nat the end of the zone.\n\nThis can cause quadratic behaviour, with isolate_freepages starting at the\nend of the zone each time, even though previous invocations of the\ncompaction code already filled up all free memory on that end of the zone.\n This can cause isolate_freepages to take enormous amounts of CPU with\ncertain workloads on larger memory systems.\n\nThis patch caches where the migration and free scanner should start from\non subsequent compaction invocations using the pageblock-skip information.\n When compaction starts it begins from the cached restart points and will\nupdate the cached restart points until a page is isolated or a pageblock\nis skipped that would have been scanned by synchronous compaction.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Richard Davies \u003crichard@arachsys.com\u003e\nCc: Shaohua Li \u003cshli@kernel.org\u003e\nCc: Avi Kivity \u003cavi@redhat.com\u003e\nAcked-by: Rafael Aquini \u003caquini@redhat.com\u003e\nCc: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "bb13ffeb9f6bfeb301443994dfbf29f91117dfb3",
      "tree": "45e0e6574c0165da9cdc993b3401fe3263e4761c",
      "parents": [
        "753341a4b85ff337487b9959c71c529f522004f4"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Oct 08 16:32:41 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 09 16:22:50 2012 +0900"
      },
      "message": "mm: compaction: cache if a pageblock was scanned and no pages were isolated\n\nWhen compaction was implemented it was known that scanning could\npotentially be excessive.  The ideal was that a counter be maintained for\neach pageblock but maintaining this information would incur a severe\npenalty due to a shared writable cache line.  It has reached the point\nwhere the scanning costs are a serious problem, particularly on\nlong-lived systems where a large process starts and allocates a large\nnumber of THPs at the same time.\n\nInstead of using a shared counter, this patch adds another bit to the\npageblock flags called PG_migrate_skip.  If a pageblock is scanned by\neither migrate or free scanner and 0 pages were isolated, the pageblock is\nmarked to be skipped in the future.  When scanning, this bit is checked\nbefore any scanning takes place and the block skipped if set.\n\nThe main difficulty with a patch like this is \"when to ignore the cached\ninformation?\" If it\u0027s ignored too often, the scanning rates will still be\nexcessive.  If the information is too stale then allocations will fail\nthat might have otherwise succeeded.  In this patch\n\no CMA always ignores the information\no If the migrate and free scanner meet then the cached information will\n  be discarded if it\u0027s at least 5 seconds since the last time the cache\n  was discarded\no If there are a large number of allocation failures, discard the cache.\n\nThe time-based heuristic is very clumsy but there are few choices for a\nbetter event.  Depending solely on multiple allocation failures still\nallows excessive scanning when THP allocations are failing in quick\nsuccession due to memory pressure.  Waiting until memory pressure is\nrelieved would cause compaction to continually fail instead of using\nreclaim/compaction to try allocate the page.  
The time-based mechanism is\nclumsy but a better option is not obvious.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Richard Davies \u003crichard@arachsys.com\u003e\nCc: Shaohua Li \u003cshli@kernel.org\u003e\nCc: Avi Kivity \u003cavi@redhat.com\u003e\nAcked-by: Rafael Aquini \u003caquini@redhat.com\u003e\nCc: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nCc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nCc: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nCc: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Mark Brown \u003cbroonie@opensource.wolfsonmicro.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "753341a4b85ff337487b9959c71c529f522004f4",
      "tree": "6a705fd73dd599e7eeb58cb06e84c86c07c03a64",
      "parents": [
        "f40d1e42bb988d2a26e8e111ea4c4c7bac819b7e"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Oct 08 16:32:40 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 09 16:22:50 2012 +0900"
      },
      "message": "revert \"mm: have order \u003e 0 compaction start off where it left\"\n\nThis reverts commit 7db8889ab05b (\"mm: have order \u003e 0 compaction start\noff where it left\") and commit de74f1cc (\"mm: have order \u003e 0 compaction\nstart near a pageblock with free pages\").  These patches were a good\nidea and tests confirmed that they massively reduced the amount of\nscanning but the implementation is complex and tricky to understand.  A\nlater patch will cache what pageblocks should be skipped and\nreimplements the concept of compact_cached_free_pfn on top for both\nmigration and free scanners.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Richard Davies \u003crichard@arachsys.com\u003e\nCc: Shaohua Li \u003cshli@kernel.org\u003e\nCc: Avi Kivity \u003cavi@redhat.com\u003e\nAcked-by: Rafael Aquini \u003caquini@redhat.com\u003e\nAcked-by: Minchan Kim \u003cminchan@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d1ce749a0db12202b711d1aba1d29e823034648d",
      "tree": "b9b1f0e1d4fcda9ab900575f42f5ddc155d28648",
      "parents": [
        "2139cbe627b8910ded55148f87ee10f7485408ed"
      ],
      "author": {
        "name": "Bartlomiej Zolnierkiewicz",
        "email": "b.zolnierkie@samsung.com",
        "time": "Mon Oct 08 16:32:02 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 09 16:22:44 2012 +0900"
      },
      "message": "cma: count free CMA pages\n\nAdd NR_FREE_CMA_PAGES counter to be later used for checking watermark in\n__zone_watermark_ok().  For simplicity and to avoid #ifdef hell make this\ncounter always available (not only when CONFIG_CMA\u003dy).\n\n[akpm@linux-foundation.org: use conventional migratetype naming]\nSigned-off-by: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5515061d22f0f9976ae7815864bfd22042d36848",
      "tree": "13b53a29166f19eb864e96b3b58539a207e5fa2f",
      "parents": [
        "7f338fe4540b1d0600b02314c7d885fd358e9eca"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Tue Jul 31 16:44:35 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 31 18:42:46 2012 -0700"
      },
      "message": "mm: throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage\n\nIf swap is backed by network storage such as NBD, there is a risk that a\nlarge number of reclaimers can hang the system by consuming all\nPF_MEMALLOC reserves.  To avoid these hangs, the administrator must tune\nmin_free_kbytes in advance which is a bit fragile.\n\nThis patch throttles direct reclaimers if half the PF_MEMALLOC reserves\nare in use.  If the system is routinely getting throttled the system\nadministrator can increase min_free_kbytes so degradation is smoother but\nthe system will keep running.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: David Miller \u003cdavem@davemloft.net\u003e\nCc: Neil Brown \u003cneilb@suse.de\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Mike Christie \u003cmichaelc@cs.wisc.edu\u003e\nCc: Eric B Munson \u003cemunson@mgebm.net\u003e\nCc: Eric Dumazet \u003ceric.dumazet@gmail.com\u003e\nCc: Sebastian Andrzej Siewior \u003csebastian@breakpoint.cc\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "702d1a6e0766d45642c934444fd41f658d251305",
      "tree": "6c9144521b03f11f7ea2e709f066b90a9b9f38d5",
      "parents": [
        "2cfed0752808625d30aca7fc9f383af386fd8a13"
      ],
      "author": {
        "name": "Minchan Kim",
        "email": "minchan@kernel.org",
        "time": "Tue Jul 31 16:43:56 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 31 18:42:45 2012 -0700"
      },
      "message": "memory-hotplug: fix kswapd looping forever problem\n\nWhen hotplug offlining happens on zone A, it starts to mark freed page as\nMIGRATE_ISOLATE type in buddy for preventing further allocation.\n(MIGRATE_ISOLATE is very irony type because it\u0027s apparently on buddy but\nwe can\u0027t allocate them).\n\nWhen the memory shortage happens during hotplug offlining, current task\nstarts to reclaim, then wake up kswapd.  Kswapd checks watermark, then go\nsleep because current zone_watermark_ok_safe doesn\u0027t consider\nMIGRATE_ISOLATE freed page count.  Current task continue to reclaim in\ndirect reclaim path without kswapd\u0027s helping.  The problem is that\nzone-\u003eall_unreclaimable is set by only kswapd so that current task would\nbe looping forever like below.\n\n__alloc_pages_slowpath\nrestart:\n\twake_all_kswapd\nrebalance:\n\t__alloc_pages_direct_reclaim\n\t\tdo_try_to_free_pages\n\t\t\tif global_reclaim \u0026\u0026 !all_unreclaimable\n\t\t\t\treturn 1; /* It means we did did_some_progress */\n\tskip __alloc_pages_may_oom\n\tshould_alloc_retry\n\t\tgoto rebalance;\n\nIf we apply KOSAKI\u0027s patch[1] which doesn\u0027t depends on kswapd about\nsetting zone-\u003eall_unreclaimable, we can solve this problem by killing some\ntask in direct reclaim path.  But it doesn\u0027t wake up kswapd, still.  It\ncould be a problem still if other subsystem needs GFP_ATOMIC request.  So\nkswapd should consider MIGRATE_ISOLATE when it calculate free pages BEFORE\ngoing sleep.\n\nThis patch counts the number of MIGRATE_ISOLATE page block and\nzone_watermark_ok_safe will consider it if the system has such blocks\n(fortunately, it\u0027s very rare so no problem in POV overhead and kswapd is\nnever hotpath).\n\nCopy/modify from Mel\u0027s quote\n\"\nIdeal solution would be \"allocating\" the pageblock.\nIt would keep the free space accounting as it is but historically,\nmemory hotplug didn\u0027t allocate pages because it would be difficult to\ndetect if a pageblock was isolated or if part of some balloon.\nAllocating just full pageblocks would work around this, However,\nit would play very badly with CMA.\n\"\n\n[1] http://lkml.org/lkml/2012/6/14/74\n\n[akpm@linux-foundation.org: simplify nr_zone_isolate_freepages(), rework zone_watermark_ok_safe() comment, simplify set_pageblock_isolate() and restore_pageblock_isolate()]\n[akpm@linux-foundation.org: fix CONFIG_MEMORY_ISOLATION\u003dn build]\nSigned-off-by: Minchan Kim \u003cminchan@kernel.org\u003e\nSuggested-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nTested-by: Aaditya Kumar \u003caaditya.kumar.30@gmail.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9adb62a5df9c0fbef7b4665919329f73a34651ed",
      "tree": "8372c9c1202adac889714ea99319346279107f33",
      "parents": [
        "da92c47d069890106484cb6605df701a54d24499"
      ],
      "author": {
        "name": "Jiang Liu",
        "email": "jiang.liu@huawei.com",
        "time": "Tue Jul 31 16:43:28 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 31 18:42:44 2012 -0700"
      },
      "message": "mm/hotplug: correctly setup fallback zonelists when creating new pgdat\n\nWhen hotadd_new_pgdat() is called to create new pgdat for a new node, a\nfallback zonelist should be created for the new node.  There\u0027s code to try\nto achieve that in hotadd_new_pgdat() as below:\n\n\t/*\n\t * The node we allocated has no zone fallback lists. For avoiding\n\t * to access not-initialized zonelist, build here.\n\t */\n\tmutex_lock(\u0026zonelists_mutex);\n\tbuild_all_zonelists(pgdat, NULL);\n\tmutex_unlock(\u0026zonelists_mutex);\n\nBut it doesn\u0027t work as expected.  When hotadd_new_pgdat() is called, the\nnew node is still in offline state because node_set_online(nid) hasn\u0027t\nbeen called yet.  And build_all_zonelists() only builds zonelists for\nonline nodes as:\n\n        for_each_online_node(nid) {\n                pg_data_t *pgdat \u003d NODE_DATA(nid);\n\n                build_zonelists(pgdat);\n                build_zonelist_cache(pgdat);\n        }\n\nThough we hope to create zonelist for the new pgdat, but it doesn\u0027t.  So\nadd a new parameter \"pgdat\" the build_all_zonelists() to build pgdat for\nthe new pgdat too.\n\nSigned-off-by: Jiang Liu \u003cliuj97@gmail.com\u003e\nSigned-off-by: Xishi Qiu \u003cqiuxishi@huawei.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Tony Luck \u003ctony.luck@intel.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Keping Chen \u003cchenkeping@huawei.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "fe03025db3f4ade1f231b174938e0fe224722759",
      "tree": "8414153a22fbf0859ba3b9c07cb9d11ba0fe4214",
      "parents": [
        "7db8889ab05b57200158432755af318fb68854a2"
      ],
      "author": {
        "name": "Rabin Vincent",
        "email": "rabin@rab.in",
        "time": "Tue Jul 31 16:43:14 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 31 18:42:43 2012 -0700"
      },
      "message": "mm: CONFIG_HAVE_MEMBLOCK_NODE -\u003e CONFIG_HAVE_MEMBLOCK_NODE_MAP\n\n0ee332c14518699 (\"memblock: Kill early_node_map[]\") wanted to replace\nCONFIG_ARCH_POPULATES_NODE_MAP with CONFIG_HAVE_MEMBLOCK_NODE_MAP but\nended up replacing one occurence with a reference to the non-existent\nsymbol CONFIG_HAVE_MEMBLOCK_NODE.\n\nThe resulting omission of code would probably have been causing problems\nto 32-bit machines with memory hotplug.\n\nSigned-off-by: Rabin Vincent \u003crabin@rab.in\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7db8889ab05b57200158432755af318fb68854a2",
      "tree": "dfce0ce79909bc102465d871dc7b949fa9525e85",
      "parents": [
        "ab2158848775c7918288f2c423d3e4dbbc7d34eb"
      ],
      "author": {
        "name": "Rik van Riel",
        "email": "riel@redhat.com",
        "time": "Tue Jul 31 16:43:12 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 31 18:42:43 2012 -0700"
      },
      "message": "mm: have order \u003e 0 compaction start off where it left\n\nOrder \u003e 0 compaction stops when enough free pages of the correct page\norder have been coalesced.  When doing subsequent higher order\nallocations, it is possible for compaction to be invoked many times.\n\nHowever, the compaction code always starts out looking for things to\ncompact at the start of the zone, and for free pages to compact things to\nat the end of the zone.\n\nThis can cause quadratic behaviour, with isolate_freepages starting at the\nend of the zone each time, even though previous invocations of the\ncompaction code already filled up all free memory on that end of the zone.\n\nThis can cause isolate_freepages to take enormous amounts of CPU with\ncertain workloads on larger memory systems.\n\nThe obvious solution is to have isolate_freepages remember where it left\noff last time, and continue at that point the next time it gets invoked\nfor an order \u003e 0 compaction.  This could cause compaction to fail if\ncc-\u003efree_pfn and cc-\u003emigrate_pfn are close together initially, in that\ncase we restart from the end of the zone and try once more.\n\nForced full (order \u003d\u003d -1) compactions are left alone.\n\n[akpm@linux-foundation.org: checkpatch fixes]\n[akpm@linux-foundation.org: s/laste/last/, use 80 cols]\nSigned-off-by: Rik van Riel \u003criel@redhat.com\u003e\nReported-by: Jim Schutt \u003cjaschut@sandia.gov\u003e\nTested-by: Jim Schutt \u003cjaschut@sandia.gov\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ca28ddc908fcfef0e5c1b6e5df632db7fc26de10",
      "tree": "403e763684ae57afa007047a3ba3843d099a7cca",
      "parents": [
        "c255a458055e459f65eb7b7f51dc5dbdd0caf1d8"
      ],
      "author": {
        "name": "Wanpeng Li",
        "email": "liwp@linux.vnet.ibm.com",
        "time": "Tue Jul 31 16:43:04 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 31 18:42:43 2012 -0700"
      },
      "message": "mm: remove unused LRU_ALL_EVICTABLE\n\nSigned-off-by: Wanpeng Li \u003cliwp.linux@gmail.com\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c255a458055e459f65eb7b7f51dc5dbdd0caf1d8",
      "tree": "b143b1914eeb6f27f53e30f9f0275d0f1ca5480b",
      "parents": [
        "80934513b230bfcf70265f2ef0fdae89fb391633"
      ],
      "author": {
        "name": "Andrew Morton",
        "email": "akpm@linux-foundation.org",
        "time": "Tue Jul 31 16:43:02 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 31 18:42:43 2012 -0700"
      },
      "message": "memcg: rename config variables\n\nSanity:\n\nCONFIG_CGROUP_MEM_RES_CTLR -\u003e CONFIG_MEMCG\nCONFIG_CGROUP_MEM_RES_CTLR_SWAP -\u003e CONFIG_MEMCG_SWAP\nCONFIG_CGROUP_MEM_RES_CTLR_SWAP_ENABLED -\u003e CONFIG_MEMCG_SWAP_ENABLED\nCONFIG_CGROUP_MEM_RES_CTLR_KMEM -\u003e CONFIG_MEMCG_KMEM\n\n[mhocko@suse.cz: fix missed bits]\nCc: Glauber Costa \u003cglommer@parallels.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Aneesh Kumar K.V \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d14b7a419a664cd7c1c585c9e7fffee9e9051d53",
      "tree": "42a1d5b61b58fa0a75252b082c4c6cef6fa9fd8d",
      "parents": [
        "e8ff13b0bf88b5e696323a1eec877783d965b3c6",
        "a58b3a4aba2fd5c445d9deccc73192bff48b591d"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 24 13:34:56 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 24 13:34:56 2012 -0700"
      },
      "message": "Merge branch \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial\n\nPull trivial tree from Jiri Kosina:\n \"Trivial updates all over the place as usual.\"\n\n* \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (29 commits)\n  Fix typo in include/linux/clk.h .\n  pci: hotplug: Fix typo in pci\n  iommu: Fix typo in iommu\n  video: Fix typo in drivers/video\n  Documentation: Add newline at end-of-file to files lacking one\n  arm,unicore32: Remove obsolete \"select MISC_DEVICES\"\n  module.c: spelling s/postition/position/g\n  cpufreq: Fix typo in cpufreq driver\n  trivial: typo in comment in mksysmap\n  mach-omap2: Fix typo in debug message and comment\n  scsi: aha152x: Fix sparse warning and make printing pointer address more portable.\n  Change email address for Steve Glendinning\n  Btrfs: fix typo in convert_extent_bit\n  via: Remove bogus if check\n  netprio_cgroup.c: fix comment typo\n  backlight: fix memory leak on obscure error path\n  Documentation: asus-laptop.txt references an obsolete Kconfig item\n  Documentation: ManagementStyle: fixed typo\n  mm/vmscan: cleanup comment error in balance_pgdat\n  mm: cleanup on the comments of zone_reclaim_stat\n  ...\n"
    },
    {
      "commit": "d8adde17e5f858427504725218c56aef90e90fc7",
      "tree": "7703f24a69478ebcf9522a92113fc336e1953a82",
      "parents": [
        "bd0a521e88aa7a06ae7aabaed7ae196ed4ad867a"
      ],
      "author": {
        "name": "Jiang Liu",
        "email": "jiang.liu@huawei.com",
        "time": "Wed Jul 11 14:01:52 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Jul 11 16:04:41 2012 -0700"
      },
      "message": "memory hotplug: fix invalid memory access caused by stale kswapd pointer\n\nkswapd_stop() is called to destroy the kswapd work thread when all memory\nof a NUMA node has been offlined.  But kswapd_stop() only terminates the\nwork thread without resetting NODE_DATA(nid)-\u003ekswapd to NULL.  The stale\npointer will prevent kswapd_run() from creating a new work thread when\nadding memory to the memory-less NUMA node again.  Eventually the stale\npointer may cause invalid memory access.\n\nAn example stack dump as below. It\u0027s reproduced with 2.6.32, but latest\nkernel has the same issue.\n\n  BUG: unable to handle kernel NULL pointer dereference at (null)\n  IP: [\u003cffffffff81051a94\u003e] exit_creds+0x12/0x78\n  PGD 0\n  Oops: 0000 [#1] SMP\n  last sysfs file: /sys/devices/system/memory/memory391/state\n  CPU 11\n  Modules linked in: cpufreq_conservative cpufreq_userspace cpufreq_powersave acpi_cpufreq microcode fuse loop dm_mod tpm_tis rtc_cmos i2c_i801 rtc_core tpm serio_raw pcspkr sg tpm_bios igb i2c_core iTCO_wdt rtc_lib mptctl iTCO_vendor_support button dca bnx2 usbhid hid uhci_hcd ehci_hcd usbcore sd_mod crc_t10dif edd ext3 mbcache jbd fan ide_pci_generic ide_core ata_generic ata_piix libata thermal processor thermal_sys hwmon mptsas mptscsih mptbase scsi_transport_sas scsi_mod\n  Pid: 7949, comm: sh Not tainted 2.6.32.12-qiuxishi-5-default #92 Tecal RH2285\n  RIP: 0010:exit_creds+0x12/0x78\n  RSP: 0018:ffff8806044f1d78  EFLAGS: 00010202\n  RAX: 0000000000000000 RBX: ffff880604f22140 RCX: 0000000000019502\n  RDX: 0000000000000000 RSI: 0000000000000202 RDI: 0000000000000000\n  RBP: ffff880604f22150 R08: 0000000000000000 R09: ffffffff81a4dc10\n  R10: 00000000000032a0 R11: ffff880006202500 R12: 0000000000000000\n  R13: 0000000000c40000 R14: 0000000000008000 R15: 0000000000000001\n  FS:  00007fbc03d066f0(0000) GS:ffff8800282e0000(0000) knlGS:0000000000000000\n  CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b\n  CR2: 0000000000000000 CR3: 000000060f029000 CR4: 00000000000006e0\n  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000\n  DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400\n  Process sh (pid: 7949, threadinfo ffff8806044f0000, task ffff880603d7c600)\n  Stack:\n   ffff880604f22140 ffffffff8103aac5 ffff880604f22140 ffffffff8104d21e\n   ffff880006202500 0000000000008000 0000000000c38000 ffffffff810bd5b1\n   0000000000000000 ffff880603d7c600 00000000ffffdd29 0000000000000003\n  Call Trace:\n    __put_task_struct+0x5d/0x97\n    kthread_stop+0x50/0x58\n    offline_pages+0x324/0x3da\n    memory_block_change_state+0x179/0x1db\n    store_mem_state+0x9e/0xbb\n    sysfs_write_file+0xd0/0x107\n    vfs_write+0xad/0x169\n    sys_write+0x45/0x6e\n    system_call_fastpath+0x16/0x1b\n  Code: ff 4d 00 0f 94 c0 84 c0 74 08 48 89 ef e8 1f fd ff ff 5b 5d 31 c0 41 5c c3 53 48 8b 87 20 06 00 00 48 89 fb 48 8b bf 18 06 00 00 \u003c8b\u003e 00 48 c7 83 18 06 00 00 00 00 00 00 f0 ff 0f 0f 94 c0 84 c0\n  RIP  exit_creds+0x12/0x78\n   RSP \u003cffff8806044f1d78\u003e\n  CR2: 0000000000000000\n\n[akpm@linux-foundation.org: add pglist_data.kswapd locking comments]\nSigned-off-by: Xishi Qiu \u003cqiuxishi@huawei.com\u003e\nSigned-off-by: Jiang Liu \u003cjiang.liu@huawei.com\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: David Rientjes \u003crientjes@google.com\u003e\nReviewed-by: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: \u003cstable@vger.kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "59f91e5dd0504dc0ebfaa0b6f3a55e6931f96266",
      "tree": "b913718405d44a921905ac71044fbde410256865",
      "parents": [
        "57bdfdd80077addf518a9b90c4a66890efc4f70e",
        "89abfab133ef1f5902abafb744df72793213ac19"
      ],
      "author": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Fri Jun 29 14:45:58 2012 +0200"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Fri Jun 29 14:45:58 2012 +0200"
      },
      "message": "Merge branch \u0027master\u0027 into for-next\n\nConflicts:\n\tinclude/linux/mmzone.h\n\nSynced with Linus\u0027 tree so that trivial patch can be applied\non top of up-to-date code properly.\n\nReported-by: Stephen Rothwell \u003csfr@canb.auug.org.au\u003e\n"
    },
    {
      "commit": "46028e6d10cbf9ccd5fb49aa0c23a430f314144c",
      "tree": "610ff4319244c09bb7f905321f76cc3093506709",
      "parents": [
        "be7bd59db71dfb6dc011c9880fec5a659430003a"
      ],
      "author": {
        "name": "Wanpeng Li",
        "email": "liwp@linux.vnet.ibm.com",
        "time": "Fri Jun 15 16:52:29 2012 +0800"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Thu Jun 28 11:56:12 2012 +0200"
      },
      "message": "mm: cleanup on the comments of zone_reclaim_stat\n\nSigned-off-by: Wanpeng Li \u003cliwp.linux@gmail.com\u003e\nAcked-by: Minchan Kim \u003cminchan@kernel.org\u003e\nSigned-off-by: Jiri Kosina \u003cjkosina@suse.cz\u003e\n"
    },
    {
      "commit": "7f5e86c2ccc1480946d2c869d7f7d5278e828092",
      "tree": "704612422963868042c9d240b4a395bd7bce8469",
      "parents": [
        "9e3b2f8cd340e13353a44c9a34caef2848131ed7"
      ],
      "author": {
        "name": "Konstantin Khlebnikov",
        "email": "khlebnikov@openvz.org",
        "time": "Tue May 29 15:06:58 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 29 16:22:26 2012 -0700"
      },
      "message": "mm: add link from struct lruvec to struct zone\n\nThis is the first stage of struct mem_cgroup_zone removal.  Further\npatches replace struct mem_cgroup_zone with a pointer to struct lruvec.\n\nIf CONFIG_CGROUP_MEM_RES_CTLR\u003dn lruvec_zone() is just container_of().\n\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "89abfab133ef1f5902abafb744df72793213ac19",
      "tree": "29df29e2a34a0af3649417d2e430480c7e7e5fa1",
      "parents": [
        "c3c787e8c38557ccf44c670d73aebe630a2b1479"
      ],
      "author": {
        "name": "Hugh Dickins",
        "email": "hughd@google.com",
        "time": "Tue May 29 15:06:53 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 29 16:22:25 2012 -0700"
      },
      "message": "mm/memcg: move reclaim_stat into lruvec\n\nWith mem_cgroup_disabled() now explicit, it becomes clear that the\nzone_reclaim_stat structure actually belongs in lruvec, per-zone when\nmemcg is disabled but per-memcg per-zone when it\u0027s enabled.\n\nWe can delete mem_cgroup_get_reclaim_stat(), and change\nupdate_page_reclaim_stat() to update just the one set of stats, the one\nwhich get_scan_count() will actually use.\n\nSigned-off-by: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Minchan Kim \u003cminchan@kernel.org\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Glauber Costa \u003cglommer@parallels.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f3fd4a61928a5edf5b033a417e761b488b43e203",
      "tree": "f56c5b6f4a4c732c9167e4cacb3e9c25ced0d000",
      "parents": [
        "014483bcccc5edbf861d89dc1a6f7cdc02f9f4c0"
      ],
      "author": {
        "name": "Konstantin Khlebnikov",
        "email": "khlebnikov@openvz.org",
        "time": "Tue May 29 15:06:54 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 29 16:22:25 2012 -0700"
      },
      "message": "mm: remove lru type checks from __isolate_lru_page()\n\nAfter patch \"mm: forbid lumpy-reclaim in shrink_active_list()\" we can\ncompletely remove anon/file and active/inactive lru type filters from\n__isolate_lru_page(), because isolation for 0-order reclaim always\nisolates pages from right lru list.  And pages-isolation for lumpy\nshrink_inactive_list() or memory-compaction anyway allowed to isolate\npages from all evictable lru lists.\n\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Glauber Costa \u003cglommer@parallels.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d484864dd96e1830e7689510597707c1df8cd681",
      "tree": "51551708ba3f26d05575fa91daaf0c0d970a77c3",
      "parents": [
        "be87cfb47c5c740f7b17929bcd7c480b228513e0",
        "0f51596bd39a5c928307ffcffc9ba07f90f42a8b"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri May 25 09:18:59 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri May 25 09:18:59 2012 -0700"
      },
      "message": "Merge branch \u0027for-linus\u0027 of git://git.linaro.org/people/mszyprowski/linux-dma-mapping\n\nPull CMA and ARM DMA-mapping updates from Marek Szyprowski:\n \"These patches contain two major updates for DMA mapping subsystem\n  (mainly for ARM architecture).  First one is Contiguous Memory\n  Allocator (CMA) which makes it possible for device drivers to allocate\n  big contiguous chunks of memory after the system has booted.\n\n  The main difference from the similar frameworks is the fact that CMA\n  allows to transparently reuse the memory region reserved for the big\n  chunk allocation as a system memory, so no memory is wasted when no\n  big chunk is allocated.  Once the alloc request is issued, the\n  framework migrates system pages to create space for the required big\n  chunk of physically contiguous memory.\n\n  For more information one can refer to nice LWN articles:\n\n   - \u0027A reworked contiguous memory allocator\u0027:\n\t\thttp://lwn.net/Articles/447405/\n\n   - \u0027CMA and ARM\u0027:\n\t\thttp://lwn.net/Articles/450286/\n\n   - \u0027A deep dive into CMA\u0027:\n\t\thttp://lwn.net/Articles/486301/\n\n   - and the following thread with the patches and links to all previous\n     versions:\n\t\thttps://lkml.org/lkml/2012/4/3/204\n\n  The main client for this new framework is ARM DMA-mapping subsystem.\n\n  The second part provides a complete redesign in ARM DMA-mapping\n  subsystem.  The core implementation has been changed to use common\n  struct dma_map_ops based infrastructure with the recent updates for\n  new dma attributes merged in v3.4-rc2.  This allows to use more than\n  one implementation of dma-mapping calls and change/select them on the\n  struct device basis.  The first client of this new infractructure is\n  dmabounce implementation which has been completely cut out of the\n  core, common code.\n\n  The last patch of this redesign update introduces a new, experimental\n  implementation of dma-mapping calls on top of generic IOMMU framework.\n  This lets ARM sub-platform to transparently use IOMMU for DMA-mapping\n  calls if one provides required IOMMU hardware.\n\n  For more information please refer to the following thread:\n\t\thttp://www.spinics.net/lists/arm-kernel/msg175729.html\n\n  The last patch merges changes from both updates and provides a\n  resolution for the conflicts which cannot be avoided when patches have\n  been applied on the same files (mainly arch/arm/mm/dma-mapping.c).\"\n\nAcked by Andrew Morton \u003cakpm@linux-foundation.org\u003e:\n \"Yup, this one please.  It\u0027s had much work, plenty of review and I\n  think even Russell is happy with it.\"\n\n* \u0027for-linus\u0027 of git://git.linaro.org/people/mszyprowski/linux-dma-mapping: (28 commits)\n  ARM: dma-mapping: use PMD size for section unmap\n  cma: fix migration mode\n  ARM: integrate CMA with DMA-mapping subsystem\n  X86: integrate CMA with DMA-mapping subsystem\n  drivers: add Contiguous Memory Allocator\n  mm: trigger page reclaim in alloc_contig_range() to stabilise watermarks\n  mm: extract reclaim code from __alloc_pages_direct_reclaim()\n  mm: Serialize access to min_free_kbytes\n  mm: page_isolation: MIGRATE_CMA isolation functions added\n  mm: mmzone: MIGRATE_CMA migration type added\n  mm: page_alloc: change fallbacks array handling\n  mm: page_alloc: introduce alloc_contig_range()\n  mm: compaction: export some of the functions\n  mm: compaction: introduce isolate_freepages_range()\n  mm: compaction: introduce map_pages()\n  mm: compaction: introduce isolate_migratepages_range()\n  mm: page_alloc: remove trailing whitespace\n  ARM: dma-mapping: add support for IOMMU mapper\n  ARM: dma-mapping: use alloc, mmap, free from dma_ops\n  ARM: dma-mapping: remove redundant code and do the cleanup\n  ...\n\nConflicts:\n\tarch/x86/include/asm/dma-mapping.h\n"
    },
    {
      "commit": "49f223a9cd96c7293d7258ff88c2bdf83065f69c",
      "tree": "4a141cbe4132ab2a5edfbc44165d091bb2289c75",
      "parents": [
        "bba9071087108d3de70bea274e35064cc480487b"
      ],
      "author": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Wed Jan 25 12:49:24 2012 +0100"
      },
      "committer": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Mon May 21 15:09:36 2012 +0200"
      },
      "message": "mm: trigger page reclaim in alloc_contig_range() to stabilise watermarks\n\nalloc_contig_range() performs memory allocation so it also should keep\ntrack on keeping the correct level of memory watermarks. This commit adds\na call to *_slowpath style reclaim to grab enough pages to make sure that\nthe final collection of contiguous pages from freelists will not starve\nthe system.\n\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCC: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\n"
    },
    {
      "commit": "47118af076f64844b4f423bc2f545b2da9dab50d",
      "tree": "00df88cf2f60a2a3efc1a6c46ad88d128aee2071",
      "parents": [
        "6d4a49160de2c684fb59fa627bce80e200224331"
      ],
      "author": {
        "name": "Michal Nazarewicz",
        "email": "mina86@mina86.com",
        "time": "Thu Dec 29 13:09:50 2011 +0100"
      },
      "committer": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Mon May 21 15:09:32 2012 +0200"
      },
      "message": "mm: mmzone: MIGRATE_CMA migration type added\n\nThe MIGRATE_CMA migration type has two main characteristics:\n(i) only movable pages can be allocated from MIGRATE_CMA\npageblocks and (ii) page allocator will never change migration\ntype of MIGRATE_CMA pageblocks.\n\nThis guarantees (to some degree) that page in a MIGRATE_CMA page\nblock can always be migrated somewhere else (unless there\u0027s no\nmemory left in the system).\n\nIt is designed to be used for allocating big chunks (eg. 10MiB)\nof physically contiguous memory.  Once driver requests\ncontiguous memory, pages from MIGRATE_CMA pageblocks may be\nmigrated away to create a contiguous block.\n\nTo minimise number of migrations, MIGRATE_CMA migration type\nis the last type tried when page allocator falls back to other\nmigration types when requested.\n\nSigned-off-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\n"
    },
    {
      "commit": "35fca53e15a696adbea300a981df4bbfb09a76d6",
      "tree": "0be0420f08b9c0fd2772448c448d4ba3b99db8ff",
      "parents": [
        "b3aa1584e9f3449b0669ab2beb9b9bf99874e1d6"
      ],
      "author": {
        "name": "Wang YanQing",
        "email": "udknight@gmail.com",
        "time": "Sun Apr 15 20:42:28 2012 +0800"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Sun Apr 15 16:57:23 2012 +0200"
      },
      "message": "mmzone: fix comment typo coelesce -\u003e coalesce\n\nSigned-off-by: Wang YanQing \u003cudknight@gmail.com\u003e\nSigned-off-by: Jiri Kosina \u003cjkosina@suse.cz\u003e\n"
    },
    {
      "commit": "aff622495c9a0b56148192e53bdec539f5e147f2",
      "tree": "78f6400d8b6bec3279483006a0e9543e47aa833e",
      "parents": [
        "7be62de99adcab4449d416977b4274985c5fe023"
      ],
      "author": {
        "name": "Rik van Riel",
        "email": "riel@redhat.com",
        "time": "Wed Mar 21 16:33:52 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 21 17:54:56 2012 -0700"
      },
      "message": "vmscan: only defer compaction for failed order and higher\n\nCurrently a failed order-9 (transparent hugepage) compaction can lead to\nmemory compaction being temporarily disabled for a memory zone.  Even if\nwe only need compaction for an order 2 allocation, eg.  for jumbo frames\nnetworking.\n\nThe fix is relatively straightforward: keep track of the highest order at\nwhich compaction is succeeding, and only defer compaction for orders at\nwhich compaction is failing.\n\nSigned-off-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Hillf Danton \u003cdhillf@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4111304dab198c687bc60f2e235a9f7ee92c47c8",
      "tree": "c98fbae214f73f8475bcdc54c8116dea82cd7d14",
      "parents": [
        "4d06f382c733f99ec67df006255e87525ac1efd3"
      ],
      "author": {
        "name": "Hugh Dickins",
        "email": "hughd@google.com",
        "time": "Thu Jan 12 17:20:01 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 12 20:13:10 2012 -0800"
      },
      "message": "mm: enum lru_list lru\n\nMostly we use \"enum lru_list lru\": change those few \"l\"s to \"lru\"s.\n\nSigned-off-by: Hugh Dickins \u003chughd@google.com\u003e\nReviewed-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c82449352854ff09e43062246af86bdeb628f0c3",
      "tree": "9cb8052e425c8cdab24ac41e83bbb672832ce54e",
      "parents": [
        "b969c4ab9f182a6e1b2a0848be349f99714947b0"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Thu Jan 12 17:19:38 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 12 20:13:09 2012 -0800"
      },
      "message": "mm: compaction: make isolate_lru_page() filter-aware again\n\nCommit 39deaf85 (\"mm: compaction: make isolate_lru_page() filter-aware\")\nnoted that compaction does not migrate dirty or writeback pages and that\nis was meaningless to pick the page and re-add it to the LRU list.  This\nhad to be partially reverted because some dirty pages can be migrated by\ncompaction without blocking.\n\nThis patch updates \"mm: compaction: make isolate_lru_page\" by skipping\nover pages that migration has no possibility of migrating to minimise LRU\ndisruption.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nReviewed-by: Rik van Riel\u003criel@redhat.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nReviewed-by: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Dave Jones \u003cdavej@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Andy Isaacson \u003cadi@hexapodia.org\u003e\nCc: Nai Xia \u003cnai.xia@gmail.com\u003e\nCc: Johannes Weiner \u003cjweiner@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6290df545814990ca2663baf6e894669132d5f73",
      "tree": "c62472270ba81a7146bed0854be74e2e2338c629",
      "parents": [
        "b95a2f2d486d0d768a92879c023a03757b9c7e58"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "jweiner@redhat.com",
        "time": "Thu Jan 12 17:18:10 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 12 20:13:05 2012 -0800"
      },
      "message": "mm: collect LRU list heads into struct lruvec\n\nHaving a unified structure with a LRU list set for both global zones and\nper-memcg zones allows to keep that code simple which deals with LRU\nlists and does not care about the container itself.\n\nOnce the per-memcg LRU lists directly link struct pages, the isolation\nfunction and all other list manipulations are shared between the memcg\ncase and the global LRU case.\n\nSigned-off-by: Johannes Weiner \u003cjweiner@redhat.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Kirill A. Shutemov \u003ckirill@shutemov.name\u003e\nCc: Daisuke Nishimura \u003cnishimura@mxp.nes.nec.co.jp\u003e\nCc: Balbir Singh \u003cbsingharora@gmail.com\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ab8fabd46f811d5153d8a0cd2fac9a0d41fb593d",
      "tree": "0a6f7dcca59d22abe07973e3fafc41719ff3ad9d",
      "parents": [
        "25bd91bd27820d5971258cecd1c0e64b0e485144"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "jweiner@redhat.com",
        "time": "Tue Jan 10 15:07:42 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:43 2012 -0800"
      },
      "message": "mm: exclude reserved pages from dirtyable memory\n\nPer-zone dirty limits try to distribute page cache pages allocated for\nwriting across zones in proportion to the individual zone sizes, to reduce\nthe likelihood of reclaim having to write back individual pages from the\nLRU lists in order to make progress.\n\nThis patch:\n\nThe amount of dirtyable pages should not include the full number of free\npages: there is a number of reserved pages that the page allocator and\nkswapd always try to keep free.\n\nThe closer (reclaimable pages - dirty pages) is to the number of reserved\npages, the more likely it becomes for reclaim to run into dirty pages:\n\n       +----------+ ---\n       |   anon   |  |\n       +----------+  |\n       |          |  |\n       |          |  -- dirty limit new    -- flusher new\n       |   file   |  |                     |\n       |          |  |                     |\n       |          |  -- dirty limit old    -- flusher old\n       |          |                        |\n       +----------+                       --- reclaim\n       | reserved |\n       +----------+\n       |  kernel  |\n       +----------+\n\nThis patch introduces a per-zone dirty reserve that takes both the lowmem\nreserve as well as the high watermark of the zone into account, and a\nglobal sum of those per-zone values that is subtracted from the global\namount of dirtyable pages.  The lowmem reserve is unavailable to page\ncache allocations and kswapd tries to keep the high watermark free.  We\ndon\u0027t want to end up in a situation where reclaim has to clean pages in\norder to balance zones.\n\nNot treating reserved pages as dirtyable on a global level is only a\nconceptual fix.  
In reality, dirty pages are not distributed equally\nacross zones and reclaim runs into dirty pages on a regular basis.\n\nBut it is important to get this right before tackling the problem on a\nper-zone level, where the distance between reclaim and the dirty pages is\nmostly much smaller in absolute numbers.\n\n[akpm@linux-foundation.org: fix highmem build]\nSigned-off-by: Johannes Weiner \u003cjweiner@redhat.com\u003e\nReviewed-by: Rik van Riel \u003criel@redhat.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Shaohua Li \u003cshaohua.li@intel.com\u003e\nCc: Chris Mason \u003cchris.mason@oracle.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "0ee332c1451869963626bf9cac88f165a90990e1",
      "tree": "a40e6c9c6cfe39ecbca37a08019be3c9e56a4a9b",
      "parents": [
        "a2bf79e7dcc97b4e9654f273453f9264f49e41ff"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Dec 08 10:22:09 2011 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Dec 08 10:22:09 2011 -0800"
      },
      "message": "memblock: Kill early_node_map[]\n\nNow all ARCH_POPULATES_NODE_MAP archs select HAVE_MEBLOCK_NODE_MAP -\nthere\u0027s no user of early_node_map[] left.  Kill early_node_map[] and\nreplace ARCH_POPULATES_NODE_MAP with HAVE_MEMBLOCK_NODE_MAP.  Also,\nrelocate for_each_mem_pfn_range() and helper from mm.h to memblock.h\nas page_alloc.c would no longer host an alternative implementation.\n\nThis change is ultimately one to one mapping and shouldn\u0027t cause any\nobservable difference; however, after the recent changes, there are\nsome functions which now would fit memblock.c better than page_alloc.c\nand dependency on HAVE_MEMBLOCK_NODE_MAP instead of HAVE_MEMBLOCK\ndoesn\u0027t make much sense on some of them.  Further cleanups for\nfunctions inside HAVE_MEMBLOCK_NODE_MAP in mm.h would be nice.\n\n-v2: Fix compile bug introduced by mis-spelling\n CONFIG_HAVE_MEMBLOCK_NODE_MAP to CONFIG_MEMBLOCK_HAVE_NODE_MAP in\n mmzone.h.  Reported by Stephen Rothwell.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Stephen Rothwell \u003csfr@canb.auug.org.au\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Tony Luck \u003ctony.luck@intel.com\u003e\nCc: Ralf Baechle \u003cralf@linux-mips.org\u003e\nCc: Martin Schwidefsky \u003cschwidefsky@de.ibm.com\u003e\nCc: Chen Liqin \u003cliqin.chen@sunplusct.com\u003e\nCc: Paul Mundt \u003clethal@linux-sh.org\u003e\nCc: \"David S. Miller\" \u003cdavem@davemloft.net\u003e\nCc: \"H. Peter Anvin\" \u003chpa@zytor.com\u003e\n"
    },
    {
      "commit": "49ea7eb65e7c5060807fb9312b1ad4c3eab82e2c",
      "tree": "88eaa206cdcac1190817820a0eb56bca2585f9ea",
      "parents": [
        "92df3a723f84cdf8133560bbff950a7a99e92bc9"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Oct 31 17:07:59 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Oct 31 17:30:47 2011 -0700"
      },
      "message": "mm: vmscan: immediately reclaim end-of-LRU dirty pages when writeback completes\n\nWhen direct reclaim encounters a dirty page, it gets recycled around the\nLRU for another cycle.  This patch marks the page PageReclaim similar to\ndeactivate_page() so that the page gets reclaimed almost immediately after\nthe page gets cleaned.  This is to avoid reclaiming clean pages that are\nyounger than a dirty page encountered at the end of the LRU that might\nhave been something like a use-once page.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Johannes Weiner \u003cjweiner@redhat.com\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Alex Elder \u003caelder@sgi.com\u003e\nCc: Theodore Ts\u0027o \u003ctytso@mit.edu\u003e\nCc: Chris Mason \u003cchris.mason@oracle.com\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ee72886d8ed5d9de3fa0ed3b99a7ca7702576a96",
      "tree": "d9596005d3ea38541c5dfe1c2a0b7d5a4d73488f",
      "parents": [
        "e10d59f2c3decaf22cc5d3de7040eba202bc2df3"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Mon Oct 31 17:07:38 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Oct 31 17:30:46 2011 -0700"
      },
      "message": "mm: vmscan: do not writeback filesystem pages in direct reclaim\n\nTesting from the XFS folk revealed that there is still too much I/O from\nthe end of the LRU in kswapd.  Previously it was considered acceptable by\nVM people for a small number of pages to be written back from reclaim with\ntesting generally showing about 0.3% of pages reclaimed were written back\n(higher if memory was low).  That writing back a small number of pages is\nok has been heavily disputed for quite some time and Dave Chinner\nexplained it well;\n\n\tIt doesn\u0027t have to be a very high number to be a problem. IO\n\tis orders of magnitude slower than the CPU time it takes to\n\tflush a page, so the cost of making a bad flush decision is\n\tvery high. And single page writeback from the LRU is almost\n\talways a bad flush decision.\n\nTo complicate matters, filesystems respond very differently to requests\nfrom reclaim according to Christoph Hellwig;\n\n\txfs tries to write it back if the requester is kswapd\n\text4 ignores the request if it\u0027s a delayed allocation\n\tbtrfs ignores the request\n\nAs a result, each filesystem has different performance characteristics\nwhen under memory pressure and there are many pages being dirtied.  
In\nsome cases, the request is ignored entirely so the VM cannot depend on the\nIO being dispatched.\n\nThe objective of this series is to reduce writing of filesystem-backed\npages from reclaim, play nicely with writeback that is already in progress\nand throttle reclaim appropriately when writeback pages are encountered.\nThe assumption is that the flushers will always write pages faster than if\nreclaim issues the IO.\n\nA secondary goal is to avoid the problem whereby direct reclaim splices\ntwo potentially deep call stacks together.\n\nThere is a potential new problem as reclaim has less control over how long\nbefore a page in a particularly zone or container is cleaned and direct\nreclaimers depend on kswapd or flusher threads to do the necessary work.\nHowever, as filesystems sometimes ignore direct reclaim requests already,\nit is not expected to be a serious issue.\n\nPatch 1 disables writeback of filesystem pages from direct reclaim\n\tentirely. Anonymous pages are still written.\n\nPatch 2 removes dead code in lumpy reclaim as it is no longer able\n\tto synchronously write pages. This hurts lumpy reclaim but\n\tthere is an expectation that compaction is used for hugepage\n\tallocations these days and lumpy reclaim\u0027s days are numbered.\n\nPatches 3-4 add warnings to XFS and ext4 if called from\n\tdirect reclaim. 
With patch 1, this \"never happens\" and is\n\tintended to catch regressions in this logic in the future.\n\nPatch 5 disables writeback of filesystem pages from kswapd unless\n\tthe priority is raised to the point where kswapd is considered\n\tto be in trouble.\n\nPatch 6 throttles reclaimers if too many dirty pages are being\n\tencountered and the zones or backing devices are congested.\n\nPatch 7 invalidates dirty pages found at the end of the LRU so they\n\tare reclaimed quickly after being written back rather than\n\twaiting for a reclaimer to find them\n\nI consider this series to be orthogonal to the writeback work but it is\nworth noting that the writeback work affects the viability of patch 8 in\nparticular.\n\nI tested this on ext4 and xfs using fs_mark, a simple writeback test based\non dd and a micro benchmark that does a streaming write to a large mapping\n(exercises use-once LRU logic) followed by streaming writes to a mix of\nanonymous and file-backed mappings.  The command line for fs_mark when\nbotted with 512M looked something like\n\n./fs_mark -d  /tmp/fsmark-2676  -D  100  -N  150  -n  150  -L  25  -t  1  -S0  -s  10485760\n\nThe number of files was adjusted depending on the amount of available\nmemory so that the files created was about 3xRAM.  For multiple threads,\nthe -d switch is specified multiple times.\n\nThe test machine is x86-64 with an older generation of AMD processor with\n4 cores.  The underlying storage was 4 disks configured as RAID-0 as this\nwas the best configuration of storage I had available.  Swap is on a\nseparate disk.  
Dirty ratio was tuned to 40% instead of the default of\n20%.\n\nTesting was run with and without monitors to both verify that the patches\nwere operating as expected and that any performance gain was real and not\ndue to interference from monitors.\n\nHere is a summary of results based on testing XFS.\n\n512M1P-xfs           Files/s  mean                 32.69 ( 0.00%)     34.44 ( 5.08%)\n512M1P-xfs           Elapsed Time fsmark                    51.41     48.29\n512M1P-xfs           Elapsed Time simple-wb                114.09    108.61\n512M1P-xfs           Elapsed Time mmap-strm                113.46    109.34\n512M1P-xfs           Kswapd efficiency fsmark                 62%       63%\n512M1P-xfs           Kswapd efficiency simple-wb              56%       61%\n512M1P-xfs           Kswapd efficiency mmap-strm              44%       42%\n512M-xfs             Files/s  mean                 30.78 ( 0.00%)     35.94 (14.36%)\n512M-xfs             Elapsed Time fsmark                    56.08     48.90\n512M-xfs             Elapsed Time simple-wb                112.22     98.13\n512M-xfs             Elapsed Time mmap-strm                219.15    196.67\n512M-xfs             Kswapd efficiency fsmark                 54%       56%\n512M-xfs             Kswapd efficiency simple-wb              54%       55%\n512M-xfs             Kswapd efficiency mmap-strm              45%       44%\n512M-4X-xfs          Files/s  mean                 30.31 ( 0.00%)     33.33 ( 9.06%)\n512M-4X-xfs          Elapsed Time fsmark                    63.26     55.88\n512M-4X-xfs          Elapsed Time simple-wb                100.90     90.25\n512M-4X-xfs          Elapsed Time mmap-strm                261.73    255.38\n512M-4X-xfs          Kswapd efficiency fsmark                 49%       50%\n512M-4X-xfs          Kswapd efficiency simple-wb              54%       56%\n512M-4X-xfs          Kswapd efficiency mmap-strm              37%       36%\n512M-16X-xfs         Files/s  mean                
 60.89 ( 0.00%)     65.22 ( 6.64%)\n512M-16X-xfs         Elapsed Time fsmark                    67.47     58.25\n512M-16X-xfs         Elapsed Time simple-wb                103.22     90.89\n512M-16X-xfs         Elapsed Time mmap-strm                237.09    198.82\n512M-16X-xfs         Kswapd efficiency fsmark                 45%       46%\n512M-16X-xfs         Kswapd efficiency simple-wb              53%       55%\n512M-16X-xfs         Kswapd efficiency mmap-strm              33%       33%\n\nUp until 512-4X, the FSmark improvements were statistically significant.\nFor the 4X and 16X tests the results were within standard deviations but\njust barely.  The time to completion for all tests is improved which is an\nimportant result.  In general, kswapd efficiency is not affected by\nskipping dirty pages.\n\n1024M1P-xfs          Files/s  mean                 39.09 ( 0.00%)     41.15 ( 5.01%)\n1024M1P-xfs          Elapsed Time fsmark                    84.14     80.41\n1024M1P-xfs          Elapsed Time simple-wb                210.77    184.78\n1024M1P-xfs          Elapsed Time mmap-strm                162.00    160.34\n1024M1P-xfs          Kswapd efficiency fsmark                 69%       75%\n1024M1P-xfs          Kswapd efficiency simple-wb              71%       77%\n1024M1P-xfs          Kswapd efficiency mmap-strm              43%       44%\n1024M-xfs            Files/s  mean                 35.45 ( 0.00%)     37.00 ( 4.19%)\n1024M-xfs            Elapsed Time fsmark                    94.59     91.00\n1024M-xfs            Elapsed Time simple-wb                229.84    195.08\n1024M-xfs            Elapsed Time mmap-strm                405.38    440.29\n1024M-xfs            Kswapd efficiency fsmark                 79%       71%\n1024M-xfs            Kswapd efficiency simple-wb              74%       74%\n1024M-xfs            Kswapd efficiency mmap-strm              39%       42%\n1024M-4X-xfs         Files/s  mean                 32.63 ( 0.00%)     35.05 ( 
6.90%)\n1024M-4X-xfs         Elapsed Time fsmark                   103.33     97.74\n1024M-4X-xfs         Elapsed Time simple-wb                204.48    178.57\n1024M-4X-xfs         Elapsed Time mmap-strm                528.38    511.88\n1024M-4X-xfs         Kswapd efficiency fsmark                 81%       70%\n1024M-4X-xfs         Kswapd efficiency simple-wb              73%       72%\n1024M-4X-xfs         Kswapd efficiency mmap-strm              39%       38%\n1024M-16X-xfs        Files/s  mean                 42.65 ( 0.00%)     42.97 ( 0.74%)\n1024M-16X-xfs        Elapsed Time fsmark                   103.11     99.11\n1024M-16X-xfs        Elapsed Time simple-wb                200.83    178.24\n1024M-16X-xfs        Elapsed Time mmap-strm                397.35    459.82\n1024M-16X-xfs        Kswapd efficiency fsmark                 84%       69%\n1024M-16X-xfs        Kswapd efficiency simple-wb              74%       73%\n1024M-16X-xfs        Kswapd efficiency mmap-strm              39%       40%\n\nAll FSMark tests up to 16X had statistically significant improvements.\nFor the most part, tests are completing faster with the exception of the\nstreaming writes to a mixture of anonymous and file-backed mappings which\nwere slower in two cases\n\nIn the cases where the mmap-strm tests were slower, there was more\nswapping due to dirty pages being skipped.  The number of additional pages\nswapped is almost identical to the fewer number of pages written from\nreclaim.  In other words, roughly the same number of pages were reclaimed\nbut swapping was slower.  
As the test is a bit unrealistic and stresses\nmemory heavily, the small shift is acceptable.\n\n4608M1P-xfs          Files/s  mean                 29.75 ( 0.00%)     30.96 ( 3.91%)\n4608M1P-xfs          Elapsed Time fsmark                   512.01    492.15\n4608M1P-xfs          Elapsed Time simple-wb                618.18    566.24\n4608M1P-xfs          Elapsed Time mmap-strm                488.05    465.07\n4608M1P-xfs          Kswapd efficiency fsmark                 93%       86%\n4608M1P-xfs          Kswapd efficiency simple-wb              88%       84%\n4608M1P-xfs          Kswapd efficiency mmap-strm              46%       45%\n4608M-xfs            Files/s  mean                 27.60 ( 0.00%)     28.85 ( 4.33%)\n4608M-xfs            Elapsed Time fsmark                   555.96    532.34\n4608M-xfs            Elapsed Time simple-wb                659.72    571.85\n4608M-xfs            Elapsed Time mmap-strm               1082.57   1146.38\n4608M-xfs            Kswapd efficiency fsmark                 89%       91%\n4608M-xfs            Kswapd efficiency simple-wb              88%       82%\n4608M-xfs            Kswapd efficiency mmap-strm              48%       46%\n4608M-4X-xfs         Files/s  mean                 26.00 ( 0.00%)     27.47 ( 5.35%)\n4608M-4X-xfs         Elapsed Time fsmark                   592.91    564.00\n4608M-4X-xfs         Elapsed Time simple-wb                616.65    575.07\n4608M-4X-xfs         Elapsed Time mmap-strm               1773.02   1631.53\n4608M-4X-xfs         Kswapd efficiency fsmark                 90%       94%\n4608M-4X-xfs         Kswapd efficiency simple-wb              87%       82%\n4608M-4X-xfs         Kswapd efficiency mmap-strm              43%       43%\n4608M-16X-xfs        Files/s  mean                 26.07 ( 0.00%)     26.42 ( 1.32%)\n4608M-16X-xfs        Elapsed Time fsmark                   602.69    585.78\n4608M-16X-xfs        Elapsed Time simple-wb                606.60    573.81\n4608M-16X-xfs      
  Elapsed Time mmap-strm               1549.75   1441.86\n4608M-16X-xfs        Kswapd efficiency fsmark                 98%       98%\n4608M-16X-xfs        Kswapd efficiency simple-wb              88%       82%\n4608M-16X-xfs        Kswapd efficiency mmap-strm              44%       42%\n\nUnlike the other tests, the fsmark results are not statistically\nsignificant but the min and max times are both improved and for the most\npart, tests completed faster.\n\nThere are other indications that this is an improvement as well.  For\nexample, in the vast majority of cases, there were fewer pages scanned by\ndirect reclaim implying in many cases that stalls due to direct reclaim\nare reduced.  KSwapd is scanning more due to skipping dirty pages which is\nunfortunate but the CPU usage is still acceptable\n\nIn an earlier set of tests, I used blktrace and in almost all cases\nthroughput throughout the entire test was higher.  However, I ended up\ndiscarding those results as recording blktrace data was too heavy for my\nliking.\n\nOn a laptop, I plugged in a USB stick and ran a similar tests of tests\nusing it as backing storage.  
A desktop environment was running and for\nthe entire duration of the tests, firefox and gnome terminal were\nlaunching and exiting to vaguely simulate a user.\n\n1024M-xfs            Files/s  mean               0.41 ( 0.00%)        0.44 ( 6.82%)\n1024M-xfs            Elapsed Time fsmark               2053.52   1641.03\n1024M-xfs            Elapsed Time simple-wb            1229.53    768.05\n1024M-xfs            Elapsed Time mmap-strm            4126.44   4597.03\n1024M-xfs            Kswapd efficiency fsmark              84%       85%\n1024M-xfs            Kswapd efficiency simple-wb           92%       81%\n1024M-xfs            Kswapd efficiency mmap-strm           60%       51%\n1024M-xfs            Avg wait ms fsmark                5404.53     4473.87\n1024M-xfs            Avg wait ms simple-wb             2541.35     1453.54\n1024M-xfs            Avg wait ms mmap-strm             3400.25     3852.53\n\nThe mmap-strm results were hurt because firefox launching had a tendency\nto push the test out of memory.  On the postive side, firefox launched\nmarginally faster with the patches applied.  Time to completion for many\ntests was faster but more importantly - the \"Avg wait\" time as measured by\niostat was far lower implying the system would be more responsive.  It was\nalso the case that \"Avg wait ms\" on the root filesystem was lower.  I\ntested it manually and while the system felt slightly more responsive\nwhile copying data to a USB stick, it was marginal enough that it could be\nmy imagination.\n\nThis patch: do not writeback filesystem pages in direct reclaim.\n\nWhen kswapd is failing to keep zones above the min watermark, a process\nwill enter direct reclaim in the same manner kswapd does.  If a dirty page\nis encountered during the scan, this page is written to backing storage\nusing mapping-\u003ewritepage.\n\nThis causes two problems.  First, it can result in very deep call stacks,\nparticularly if the target storage or filesystem are complex.  
Some\nfilesystems ignore write requests from direct reclaim as a result.  The\nsecond is that a single-page flush is inefficient in terms of IO.  While\nthere is an expectation that the elevator will merge requests, this does\nnot always happen.  Quoting Christoph Hellwig;\n\n\tThe elevator has a relatively small window it can operate on,\n\tand can never fix up a bad large scale writeback pattern.\n\nThis patch prevents direct reclaim writing back filesystem pages by\nchecking if current is kswapd.  Anonymous pages are still written to swap\nas there is not the equivalent of a flusher thread for anonymous pages.\nIf the dirty pages cannot be written back, they are placed back on the LRU\nlists.  There is now a direct dependency on dirty page balancing to\nprevent too many pages in the system being dirtied which would prevent\nreclaim making forward progress.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: Johannes Weiner \u003cjweiner@redhat.com\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Alex Elder \u003caelder@sgi.com\u003e\nCc: Theodore Ts\u0027o \u003ctytso@mit.edu\u003e\nCc: Chris Mason \u003cchris.mason@oracle.com\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f80c0673610e36ae29d63e3297175e22f70dde5f",
      "tree": "0a6aab3b637fa75961224e9261eb544156672c34",
      "parents": [
        "39deaf8585152f1a35c1676d3d7dc6ae0fb65967"
      ],
      "author": {
        "name": "Minchan Kim",
        "email": "minchan.kim@gmail.com",
        "time": "Mon Oct 31 17:06:55 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Oct 31 17:30:44 2011 -0700"
      },
      "message": "mm: zone_reclaim: make isolate_lru_page() filter-aware\n\nIn __zone_reclaim case, we don\u0027t want to shrink mapped page.  Nonetheless,\nwe have isolated mapped page and re-add it into LRU\u0027s head.  It\u0027s\nunnecessary CPU overhead and makes LRU churning.\n\nOf course, when we isolate the page, the page might be mapped but when we\ntry to migrate the page, the page would be not mapped.  So it could be\nmigrated.  But race is rare and although it happens, it\u0027s no big deal.\n\nSigned-off-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nReviewed-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "39deaf8585152f1a35c1676d3d7dc6ae0fb65967",
      "tree": "a7509ea61c2f1028ed7ef961aa1abd16d50905f9",
      "parents": [
        "4356f21d09283dc6d39a6f7287a65ddab61e2808"
      ],
      "author": {
        "name": "Minchan Kim",
        "email": "minchan.kim@gmail.com",
        "time": "Mon Oct 31 17:06:51 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Oct 31 17:30:44 2011 -0700"
      },
      "message": "mm: compaction: make isolate_lru_page() filter-aware\n\nIn async mode, compaction doesn\u0027t migrate dirty or writeback pages, so\nit\u0027s pointless to pick such a page and re-add it to the LRU list.\n\nOf course, a page that is dirty or under writeback when compaction\nisolates it might no longer be by the time we try to migrate it, so it\ncould still be migrated.  But that is very unlikely, as the isolate and\nmigrate cycle is much faster than writeout.\n\nSo, this patch reduces CPU overhead and prevents unnecessary LRU churning.\n\nSigned-off-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nReviewed-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4356f21d09283dc6d39a6f7287a65ddab61e2808",
      "tree": "34822a1662ea83291455834556a4fb5bf98ecd72",
      "parents": [
        "b9e84ac1536d35aee03b2601f19694949f0bd506"
      ],
      "author": {
        "name": "Minchan Kim",
        "email": "minchan.kim@gmail.com",
        "time": "Mon Oct 31 17:06:47 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Oct 31 17:30:44 2011 -0700"
      },
      "message": "mm: change isolate mode from #define to bitwise type\n\nReplace the ISOLATE_XXX macros with a bitwise isolate_mode_t type.\nMacros are generally not recommended for this: they are type-unsafe and\nmake debugging harder, since the symbol cannot be passed through to the\ndebugger.\n\nQuote from Johannes:\n\" Hmm, it would probably be cleaner to fully convert the isolation mode\ninto independent flags.  INACTIVE, ACTIVE, BOTH is currently a\ntri-state among flags, which is a bit ugly.\"\n\nThis patch moves the isolate mode from swap.h to mmzone.h via memcontrol.h.\n\nSigned-off-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "60063497a95e716c9a689af3be2687d261f115b4",
      "tree": "6ce0d68db76982c53df46aee5f29f944ebf2c320",
      "parents": [
        "148817ba092f9f6edd35bad3c6c6b8e8f90fe2ed"
      ],
      "author": {
        "name": "Arun Sharma",
        "email": "asharma@fb.com",
        "time": "Tue Jul 26 16:09:06 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 26 16:49:47 2011 -0700"
      },
      "message": "atomic: use \u003clinux/atomic.h\u003e\n\nThis allows us to move duplicated code in \u003casm/atomic.h\u003e\n(atomic_inc_not_zero() for now) to \u003clinux/atomic.h\u003e\n\nSigned-off-by: Arun Sharma \u003casharma@fb.com\u003e\nReviewed-by: Eric Dumazet \u003ceric.dumazet@gmail.com\u003e\nCc: Ingo Molnar \u003cmingo@elte.hu\u003e\nCc: David Miller \u003cdavem@davemloft.net\u003e\nCc: Eric Dumazet \u003ceric.dumazet@gmail.com\u003e\nAcked-by: Mike Frysinger \u003cvapier@gentoo.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "bb2a0de92c891b8feeedc0178acb3ae009d899a8",
      "tree": "c2c0b3ad66c8da0e48c021927b2d747fb08b7ef3",
      "parents": [
        "1f4c025b5a5520fd2571244196b1b01ad96d18f6"
      ],
      "author": {
        "name": "KAMEZAWA Hiroyuki",
        "email": "kamezawa.hiroyu@jp.fujitsu.com",
        "time": "Tue Jul 26 16:08:22 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 26 16:49:42 2011 -0700"
      },
      "message": "memcg: consolidate memory cgroup lru stat functions\n\nIn mm/memcontrol.c, there are many lru stat functions:\n\n  mem_cgroup_zone_nr_lru_pages\n  mem_cgroup_node_nr_file_lru_pages\n  mem_cgroup_nr_file_lru_pages\n  mem_cgroup_node_nr_anon_lru_pages\n  mem_cgroup_nr_anon_lru_pages\n  mem_cgroup_node_nr_unevictable_lru_pages\n  mem_cgroup_nr_unevictable_lru_pages\n  mem_cgroup_node_nr_lru_pages\n  mem_cgroup_nr_lru_pages\n  mem_cgroup_get_local_zonestat\n\nSome of them are under #ifdef MAX_NUMNODES \u003e1 and others are not.\nThis seems bad.  This patch consolidates all of them into\n\n  mem_cgroup_zone_nr_lru_pages()\n  mem_cgroup_node_nr_lru_pages()\n  mem_cgroup_nr_lru_pages()\n\nFor these functions, the \"which LRU?\" information is passed by a mask.\n\nExample:\n  mem_cgroup_nr_lru_pages(mem, BIT(LRU_ACTIVE_ANON))\n\nI also added some macros such as ALL_LRU, ALL_LRU_FILE and ALL_LRU_ANON.\n\nExample:\n  mem_cgroup_nr_lru_pages(mem, ALL_LRU)\n\nConsidering the NUMA placement of the counters, this patch also seems\nbetter for cache behavior.\n\nBefore the patch, when we gather all LRU information, we scan in the\nfollowing order:\n    for_each_lru -\u003e for_each_node -\u003e for_each_zone.\n\nThis means we touch cache lines on different nodes in turn.\n\nAfter the patch, we scan\n    for_each_node -\u003e for_each_zone -\u003e for_each_lru(mask)\n\nso we gather the information from the same cache line at once.\n\n[akpm@linux-foundation.org: fix warnings, build error]\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Daisuke Nishimura \u003cnishimura@mxp.nes.nec.co.jp\u003e\nCc: Balbir Singh \u003cbsingharora@gmail.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c6830c22603aaecf65405af23f6da2d55892f9cb",
      "tree": "19458ebc7c32bef8a4ed59630cabb5785b1bdc11",
      "parents": [
        "af4087e0e682df12bdffec5cfafc2fec9208716e"
      ],
      "author": {
        "name": "KAMEZAWA Hiroyuki",
        "email": "kamezawa.hiroyu@jp.fujitsu.com",
        "time": "Thu Jun 16 17:28:07 2011 +0900"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Jun 27 14:13:09 2011 -0700"
      },
      "message": "Fix node_start/end_pfn() definition for mm/page_cgroup.c\n\ncommit 21a3c96 uses node_start/end_pfn(nid) to detect the start/end\nof nodes.  But these are not defined in linux/mmzone.h; they are defined in\n/arch/???/include/mmzone.h, which is included only under\nCONFIG_NEED_MULTIPLE_NODES\u003dy.\n\nThen, we see\n  mm/page_cgroup.c: In function \u0027page_cgroup_init\u0027:\n  mm/page_cgroup.c:308: error: implicit declaration of function \u0027node_start_pfn\u0027\n  mm/page_cgroup.c:309: error: implicit declaration of function \u0027node_end_pfn\u0027\n\nSo, fixing page_cgroup.c is one option...\n\nBut node_start_pfn()/node_end_pfn() are very generic macros and\nshould be implemented in the same manner for all archs.\n(m32r has a different implementation...)\n\nThis patch removes the definitions of node_start/end_pfn() in each arch\nand defines a unified one in linux/mmzone.h.  It is no longer under\nCONFIG_NEED_MULTIPLE_NODES.\n\nA result of macro expansion is here (mm/page_cgroup.c):\n\nfor !NUMA\n start_pfn \u003d ((\u0026contig_page_data)-\u003enode_start_pfn);\n  end_pfn \u003d ({ pg_data_t *__pgdat \u003d (\u0026contig_page_data); __pgdat-\u003enode_start_pfn + __pgdat-\u003enode_spanned_pages;});\n\nfor NUMA (x86-64)\n  start_pfn \u003d ((node_data[nid])-\u003enode_start_pfn);\n  end_pfn \u003d ({ pg_data_t *__pgdat \u003d (node_data[nid]); __pgdat-\u003enode_start_pfn + __pgdat-\u003enode_spanned_pages;});\n\nChangelog:\n - fixed to avoid using \"nid\" twice in the node_end_pfn() macro.\n\nReported-and-acked-by: Randy Dunlap \u003crandy.dunlap@oracle.com\u003e\nReported-and-tested-by: Ingo Molnar \u003cmingo@elte.hu\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "2a56d2220284b0e4dd8569fa475d7053f1c40a63",
      "tree": "96f959486a2f31db599e5f97167074bd1ecb3dc6",
      "parents": [
        "46f2cc80514e389bacfb642a32a4181fa1f1d20b",
        "239df0fd5ee25588f8a5ba7f7ee646940cc403f4"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri May 27 19:51:32 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri May 27 19:51:32 2011 -0700"
      },
      "message": "Merge branch \u0027for-linus\u0027 of master.kernel.org:/home/rmk/linux-2.6-arm\n\n* \u0027for-linus\u0027 of master.kernel.org:/home/rmk/linux-2.6-arm: (45 commits)\n  ARM: 6945/1: Add unwinding support for division functions\n  ARM: kill pmd_off()\n  ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks\n  ARM: 6943/1: mm: use TTBR1 instead of reserved context ID\n  ARM: 6942/1: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7\n  ARM: 6941/1: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area\n  ARM: add sendmmsg syscall\n  ARM: 6863/1: allow hotplug on msm\n  ARM: 6832/1: mmci: support for ST-Ericsson db8500v2\n  ARM: 6830/1: mach-ux500: force PrimeCell revisions\n  ARM: 6829/1: amba: make hardcoded periphid override hardware\n  ARM: 6828/1: mach-ux500: delete SSP PrimeCell ID\n  ARM: 6827/1: mach-netx: delete hardcoded periphid\n  ARM: 6940/1: fiq: Briefly document driver responsibilities for suspend/resume\n  ARM: 6938/1: fiq: Refactor {get,set}_fiq_regs() for Thumb-2\n  ARM: 6914/1: sparsemem: fix highmem detection when using SPARSEMEM\n  ARM: 6913/1: sparsemem: allow pfn_valid to be overridden when using SPARSEMEM\n  at91: drop at572d940hf support\n  at91rm9200: introduce at91rm9200_set_type to specficy cpu package\n  at91: drop boot_params and PLAT_PHYS_OFFSET\n  ...\n"
    },
    {
      "commit": "246e87a9393448c20873bc5dee64be68ed559e24",
      "tree": "a17016142b267fcba2e3be9908f8138c8dcb3f3a",
      "parents": [
        "889976dbcb1218119fdd950fb7819084e37d7d37"
      ],
      "author": {
        "name": "KAMEZAWA Hiroyuki",
        "email": "kamezawa.hiroyu@jp.fujitsu.com",
        "time": "Thu May 26 16:25:34 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu May 26 17:12:35 2011 -0700"
      },
      "message": "memcg: fix get_scan_count() for small targets\n\nDuring memory reclaim we determine the number of pages to be scanned per\nzone as\n\n\t(anon + file) \u003e\u003e priority.\nAssume\n\tscan \u003d (anon + file) \u003e\u003e priority.\n\nIf scan \u003c SWAP_CLUSTER_MAX, the scan will be skipped this time and\npriority gets higher.  This has some problems.\n\n  1. This increases priority by 1 without any scan.\n     To do a scan at this priority, the amount of pages should be larger than 512M.\n     If pages\u003e\u003epriority \u003c SWAP_CLUSTER_MAX, it\u0027s recorded and the scan will be\n     batched later.  (But we lose 1 priority.)\n     If the memory size is below 16M, pages \u003e\u003e priority is 0 and there is no scan at\n     DEF_PRIORITY forever.\n\n  2. If zone-\u003eall_unreclaimable\u003d\u003dtrue, it\u0027s scanned only when priority\u003d\u003d0.\n     So, x86\u0027s ZONE_DMA will never be recovered until the user of the pages\n     frees memory by itself.\n\n  3. With memcg, the limit of memory can be small.  When using a small memcg,\n     it gets priority \u003c DEF_PRIORITY-2 very easily and needs to call\n     wait_iff_congested().\n     To do any scan before priority\u003d9, 64MB of memory would need to be used.\n\nSo, this patch tries to forcibly scan SWAP_CLUSTER_MAX pages when\n\n  1. the target is small enough.\n  2. it\u0027s kswapd or memcg reclaim.\n\nThen we can avoid a rapid priority drop and may be able to recover\nall_unreclaimable in small zones.  
This patch also removes nr_saved_scan.\nThis will allow scanning at this priority even when pages \u003e\u003e priority is\nvery small.\n\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: Ying Han \u003cyinghan@google.com\u003e\nCc: Balbir Singh \u003cbalbir@in.ibm.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Daisuke Nishimura \u003cnishimura@mxp.nes.nec.co.jp\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7b7bf499f79de3f6c85a340c8453a78789523f85",
      "tree": "1d0bf7ae8d5befe135fb7e7cfc455656a0ec7b34",
      "parents": [
        "4db70f73e56961b9bcdfd0c36c62847a18b7dbb5"
      ],
      "author": {
        "name": "Will Deacon",
        "email": "will.deacon@arm.com",
        "time": "Thu May 19 13:21:14 2011 +0100"
      },
      "committer": {
        "name": "Russell King",
        "email": "rmk+kernel@arm.linux.org.uk",
        "time": "Thu May 26 10:23:24 2011 +0100"
      },
      "message": "ARM: 6913/1: sparsemem: allow pfn_valid to be overridden when using SPARSEMEM\n\nIn commit eb33575c (\"[ARM] Double check memmap is actually valid with a\nmemmap has unexpected holes V2\"), a new function, memmap_valid_within,\nwas introduced to mmzone.h so that holes in the memmap which pass\npfn_valid in SPARSEMEM configurations can be detected and avoided.\n\nThe fix to this problem checks that the pfn \u003c-\u003e page linkages are\ncorrect by calculating the page for the pfn and then checking that\npage_to_pfn on that page returns the original pfn. Unfortunately, in\nSPARSEMEM configurations, this results in reading from the page flags to\ndetermine the correct section. Since the memmap here has been freed,\njunk is read from memory and the check is no longer robust.\n\nIn the best case, reading from /proc/pagetypeinfo will give you the\nwrong answer. In the worst case, you get SEGVs, Kernel OOPses and hung\nCPUs. Furthermore, ioremap implementations that use pfn_valid to\ndisallow the remapping of normal memory will break.\n\nThis patch allows architectures to provide their own pfn_valid function\ninstead of using the default implementation used by sparsemem. The\narchitecture-specific version is aware of the memmap state and will\nreturn false when passed a pfn for a freed page within a valid section.\n\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Catalin Marinas \u003ccatalin.marinas@arm.com\u003e\nTested-by: H Hartley Sweeten \u003chsweeten@visionengravers.com\u003e\nSigned-off-by: Will Deacon \u003cwill.deacon@arm.com\u003e\nSigned-off-by: Russell King \u003crmk+kernel@arm.linux.org.uk\u003e\n"
    },
    {
      "commit": "a539f3533b78e39a22723d6d3e1e11b6c14454d9",
      "tree": "59c62d883a2f38e79a5e37d114c4560443728426",
      "parents": [
        "a2c8990aed5ab000491732b07c8c4465d1b389b8"
      ],
      "author": {
        "name": "Daniel Kiper",
        "email": "dkiper@net-space.pl",
        "time": "Tue May 24 17:12:51 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed May 25 08:39:36 2011 -0700"
      },
      "message": "mm: add SECTION_ALIGN_UP() and SECTION_ALIGN_DOWN() macro\n\nAdd SECTION_ALIGN_UP() and SECTION_ALIGN_DOWN() macro which aligns given\npfn to upper section and lower section boundary accordingly.\n\nRequired for the latest memory hotplug support for the Xen balloon driver.\n\nSigned-off-by: Daniel Kiper \u003cdkiper@net-space.pl\u003e\nReviewed-by: Konrad Rzeszutek Wilk \u003ckonrad.wilk@oracle.com\u003e\nDavid Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e3c40f379a144f35e53864a2cd970e238071afd7",
      "tree": "6636214fe729d2ee08780a44ab92bed66c0074db",
      "parents": [
        "bf4e8902ee5080f5d2c810b639e7e778c8082b52"
      ],
      "author": {
        "name": "Daniel Kiper",
        "email": "dkiper@net-space.pl",
        "time": "Tue May 24 17:12:33 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed May 25 08:39:29 2011 -0700"
      },
      "message": "mm: pfn_to_section_nr()/section_nr_to_pfn() is valid only in CONFIG_SPARSEMEM context\n\npfn_to_section_nr()/section_nr_to_pfn() is valid only in CONFIG_SPARSEMEM\ncontext.  Move it to proper place.\n\nSigned-off-by: Daniel Kiper \u003cdkiper@net-space.pl\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "0a9d59a2461477bd9ed143c01af9df3f8f00fa81",
      "tree": "df997d1cfb0786427a0df1fbd6f0640fa4248cf4",
      "parents": [
        "a23ce6da9677d245aa0aadc99f4197030350ab54",
        "795abaf1e4e188c4171e3cd3dbb11a9fcacaf505"
      ],
      "author": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Tue Feb 15 10:24:31 2011 +0100"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Tue Feb 15 10:24:31 2011 +0100"
      },
      "message": "Merge branch \u0027master\u0027 into for-next\n"
    },
    {
      "commit": "25a64ec1e7d0cfe172832d06a31215d458dfea7f",
      "tree": "d2ed524b05bcf76e3f87bef8e6f78aed486ee30f",
      "parents": [
        "8e572bab39c484cdf512715f98626337f25cfc32"
      ],
      "author": {
        "name": "Pete Zaitcev",
        "email": "zaitcev@kotori.zaitcev.us",
        "time": "Thu Feb 03 22:43:48 2011 -0700"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Fri Feb 04 10:55:44 2011 +0100"
      },
      "message": "fix comment spelling becausse \u003d\u003e because\n\nSigned-off-by: Pete Zaitcev \u003czaitcev@redhat.com\u003e\nSigned-off-by: Jiri Kosina \u003cjkosina@suse.cz\u003e\n"
    },
    {
      "commit": "79134171df238171daa4c024a42b77b401ccb00b",
      "tree": "af7872d5851e371d09b9fe7eb80f4809713c79fb",
      "parents": [
        "b9bbfbe30ae088cc88a4b2ba7732baeebd1a0162"
      ],
      "author": {
        "name": "Andrea Arcangeli",
        "email": "aarcange@redhat.com",
        "time": "Thu Jan 13 15:46:58 2011 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 13 17:32:43 2011 -0800"
      },
      "message": "thp: transparent hugepage vmstat\n\nAdd hugepage stat information to /proc/vmstat and /proc/meminfo.\n\nSigned-off-by: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9950474883e027e6e728cbcff25f7f2bf0c96530",
      "tree": "ecfdd3e68a25f1ef7822428c44f8375efbe9bc0c",
      "parents": [
        "c585a2678d83ba8fb02fa6b197de0ac7d67377f1"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Thu Jan 13 15:46:20 2011 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 13 17:32:37 2011 -0800"
      },
      "message": "mm: kswapd: stop high-order balancing when any suitable zone is balanced\n\nSimon Kirby reported the following problem:\n\n   We\u0027re seeing cases on a number of servers where cache never fully\n   grows to use all available memory.  Sometimes we see servers with 4 GB\n   of memory that never seem to have less than 1.5 GB free, even with a\n   constantly-active VM.  In some cases, these servers also swap out while\n   this happens, even though they are constantly reading the working set\n   into memory.  We have been seeing this happening for a long time; I\n   don\u0027t think it\u0027s anything recent, and it still happens on 2.6.36.\n\nAfter some debugging work by Simon, Dave Hansen and others, the prevailing\ntheory became that kswapd is too aggressive about reclaiming the order-3\npages requested by SLUB.\n\nThere are two apparent problems here.  On the target machine, there is a\nsmall Normal zone in comparison to DMA32.  As kswapd tries to balance all\nzones, it would continually try reclaiming for Normal even though DMA32\nwas balanced enough for callers.  The second problem is that\nsleeping_prematurely() does not use the same logic as balance_pgdat() when\ndeciding whether to sleep or not.  This keeps kswapd artificially awake.\n\nA number of tests were run and the figures from previous postings will\nlook very different for a few reasons.  One, the old figures were forcing\nmy network card to use GFP_ATOMIC in an attempt to replicate Simon\u0027s problem.\n Second, I previously specified slub_min_order\u003d3 again in an attempt to\nreproduce Simon\u0027s problem.  In this posting, I\u0027m depending on Simon to say\nwhether his problem is fixed or not and these figures are to show the\nimpact on the ordinary cases.  Finally, the \"vmscan\" figures are taken\nfrom /proc/vmstat instead of the tracepoints.  
There is less information\nbut recording is less disruptive.\n\nThe first test of relevance was postmark with a process running in the\nbackground reading a large amount of anonymous memory in blocks.  The\nobjective was to vaguely simulate what was happening on Simon\u0027s machine\nand it\u0027s memory intensive enough to have kswapd awake.\n\nPOSTMARK\n                                            traceonly          kanyzone\nTransactions per second:              156.00 ( 0.00%)   153.00 (-1.96%)\nData megabytes read per second:        21.51 ( 0.00%)    21.52 ( 0.05%)\nData megabytes written per second:     29.28 ( 0.00%)    29.11 (-0.58%)\nFiles created alone per second:       250.00 ( 0.00%)   416.00 (39.90%)\nFiles create/transact per second:      79.00 ( 0.00%)    76.00 (-3.95%)\nFiles deleted alone per second:       520.00 ( 0.00%)   420.00 (-23.81%)\nFiles delete/transact per second:      79.00 ( 0.00%)    76.00 (-3.95%)\n\nMMTests Statistics: duration\nUser/Sys Time Running Test (seconds)         16.58      17.4\nTotal Elapsed Time (seconds)                218.48    222.47\n\nVMstat Reclaim Statistics: vmscan\nDirect reclaims                                  0          4\nDirect reclaim pages scanned                     0        203\nDirect reclaim pages reclaimed                   0        184\nKswapd pages scanned                        326631     322018\nKswapd pages reclaimed                      312632     309784\nKswapd low wmark quickly                         1          4\nKswapd high wmark quickly                      122        475\nKswapd skip congestion_wait                      1          0\nPages activated                             700040     705317\nPages deactivated                           212113     203922\nPages written                                 9875       6363\n\nTotal pages scanned                         326631    322221\nTotal pages reclaimed                       312632    309968\n%age total pages scanned/reclaimed          
95.71%    96.20%\n%age total pages scanned/written             3.02%     1.97%\n\nproc vmstat: Faults\nMajor Faults                                   300       254\nMinor Faults                                645183    660284\nPage ins                                    493588    486704\nPage outs                                  4960088   4986704\nSwap ins                                      1230       661\nSwap outs                                     9869      6355\n\nPerformance is mildly affected because kswapd is no longer doing as much\nwork and the background memory consumer process is getting in the way.\nNote that kswapd scanned and reclaimed fewer pages as it\u0027s less aggressive\nand overall fewer pages were scanned and reclaimed.  Swap in/out is\nparticularly reduced again reflecting kswapd throwing out fewer pages.\n\nThe slight performance impact is unfortunate here but it looks like a\ndirect result of kswapd being less aggressive.  As the bug report is about\ntoo many pages being freed by kswapd, it may have to be accepted for now.\n\nThe second test is a streaming IO benchmark that was previously used by\nJohannes to show regressions in page reclaim.\n\nMICRO\n\t\t\t\t\t traceonly  kanyzone\nUser/Sys Time Running Test (seconds)         29.29     28.87\nTotal Elapsed Time (seconds)                492.18    488.79\n\nVMstat Reclaim Statistics: vmscan\nDirect reclaims                               2128       1460\nDirect reclaim pages scanned               2284822    1496067\nDirect reclaim pages reclaimed              148919     110937\nKswapd pages scanned                      15450014   16202876\nKswapd pages reclaimed                     8503697    8537897\nKswapd low wmark quickly                      3100       3397\nKswapd high wmark quickly                     1860       7243\nKswapd skip congestion_wait                    708        801\nPages activated                               9635       9573\nPages deactivated                       
      1432       1271\nPages written                                  223       1130\n\nTotal pages scanned                       17734836  17698943\nTotal pages reclaimed                      8652616   8648834\n%age total pages scanned/reclaimed          48.79%    48.87%\n%age total pages scanned/written             0.00%     0.01%\n\nproc vmstat: Faults\nMajor Faults                                   165       221\nMinor Faults                               9655785   9656506\nPage ins                                      3880      7228\nPage outs                                 37692940  37480076\nSwap ins                                         0        69\nSwap outs                                       19        15\n\nAgain fewer pages are scanned and reclaimed as expected and this time the\ntest completed faster.  Note that kswapd is hitting its watermarks faster\n(low and high wmark quickly) which I expect is due to kswapd reclaiming\nfewer pages.\n\nI also ran fs-mark, iozone and sysbench but there is nothing interesting\nto report in the figures.  Performance is not significantly changed and\nthe reclaim statistics look reasonable.\n\nThis patch:\n\nWhen the allocator enters its slow path, kswapd is woken up to balance the\nnode.  It continues working until all zones within the node are balanced.\nFor order-0 allocations, this makes perfect sense but for higher orders it\ncan have unintended side-effects.  If the zone sizes are imbalanced,\nkswapd may reclaim heavily within a smaller zone discarding an excessive\nnumber of pages.  The user-visible behaviour is that kswapd is awake and\nreclaiming even though plenty of pages are free from a suitable zone.\n\nThis patch alters the \"balance\" logic for high-order reclaim allowing\nkswapd to stop if any suitable zone becomes balanced to reduce the number\nof pages it reclaims from other zones.  
kswapd still tries to ensure that\norder-0 watermarks for all zones are met before sleeping.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nReviewed-by: Eric B Munson \u003cemunson@mgebm.net\u003e\nCc: Simon Kirby \u003csim@hostway.ca\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Shaohua Li \u003cshaohua.li@intel.com\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "88f5acf88ae6a9778f6d25d0d5d7ec2d57764a97",
      "tree": "6f39beef8cf918eb2ca9f64ae1bcd1ea79ca487a",
      "parents": [
        "43bb40c9e3aa51a3b038c9df2c9afb4d4685614d"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Thu Jan 13 15:45:41 2011 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 13 17:32:31 2011 -0800"
      },
      "message": "mm: page allocator: adjust the per-cpu counter threshold when memory is low\n\nCommit aa45484 (\"calculate a better estimate of NR_FREE_PAGES when memory\nis low\") noted that watermarks were based on the vmstat NR_FREE_PAGES.  To\navoid synchronization overhead, these counters are maintained on a per-cpu\nbasis and drained both periodically and when the delta is above a\nthreshold.  On large CPU systems, the difference between the estimate and\nthe real value of NR_FREE_PAGES can be very high.  The system can get into a\ncase where pages are allocated far below the min watermark, potentially\ncausing livelock issues.  The commit solved the problem by taking a better\nreading of NR_FREE_PAGES when memory was low.\n\nUnfortunately, as reported by Shaohua Li, this accurate reading can consume a\nlarge amount of CPU time on systems with many sockets due to cache line\nbouncing.  This patch takes a different approach.  For large machines\nwhere counter drift might be unsafe and while kswapd is awake, the per-cpu\nthresholds for the target pgdat are reduced to limit the level of drift to\nwhat should be a safe level.  This incurs a performance penalty under heavy\nmemory pressure by a factor that depends on the workload and the machine,\nbut the machine should function correctly without accidentally exhausting\nall memory on a node.  There is an additional cost when kswapd wakes and\nsleeps, but the event is not expected to be frequent - in Shaohua\u0027s test\ncase, there was at least one recorded sleep and wake event.\n\nTo ensure that kswapd wakes up, a safe version of zone_watermark_ok() is\nintroduced that takes a more accurate reading of NR_FREE_PAGES when called\nfrom wakeup_kswapd, when deciding whether it is really safe to go back to\nsleep in sleeping_prematurely() and when deciding if a zone is really\nbalanced or not in balance_pgdat().  
We are still using an expensive\nfunction but limiting how often it is called.\n\nWhen the test case is reproduced, the time spent in the watermark\nfunctions is reduced.  The following report is on the percentage of time\nspent cumulatively spent in the functions zone_nr_free_pages(),\nzone_watermark_ok(), __zone_watermark_ok(), zone_watermark_ok_safe(),\nzone_page_state_snapshot(), zone_page_state().\n\nvanilla                      11.6615%\ndisable-threshold            0.2584%\n\nDavid said:\n\n: We had to pull aa454840 \"mm: page allocator: calculate a better estimate\n: of NR_FREE_PAGES when memory is low and kswapd is awake\" from 2.6.36\n: internally because tests showed that it would cause the machine to stall\n: as the result of heavy kswapd activity.  I merged it back with this fix as\n: it is pending in the -mm tree and it solves the issue we were seeing, so I\n: definitely think this should be pushed to -stable (and I would seriously\n: consider it for 2.6.37 inclusion even at this late date).\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReported-by: Shaohua Li \u003cshaohua.li@intel.com\u003e\nReviewed-by: Christoph Lameter \u003ccl@linux.com\u003e\nTested-by: Nicolas Bareil \u003cnico@chdir.org\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Kyle McMartin \u003ckyle@mcmartin.ca\u003e\nCc: \u003cstable@kernel.org\u003e\t\t[2.6.37.1, 2.6.36.x]\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "0e093d99763eb4cea09f8ca4f1d01f34e121d10b",
      "tree": "fad38f9c3651c81db298521141a79d9468f71986",
      "parents": [
        "08fc468f4eaf6683bae5bdb94743a09d8630cb80"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 26 14:21:45 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 26 16:52:07 2010 -0700"
      },
      "message": "writeback: do not sleep on the congestion queue if there are no congested BDIs or if significant congestion is not being encountered in the current zone\n\nIf congestion_wait() is called with no BDI congested, the caller will\nsleep for the full timeout and this may be an unnecessary sleep.  This\npatch adds a wait_iff_congested() that checks congestion and only sleeps\nif a BDI is congested else, it calls cond_resched() to ensure the caller\nis not hogging the CPU longer than its quota but otherwise will not sleep.\n\nThis is aimed at reducing some of the major desktop stalls reported during\nIO.  For example, while kswapd is operating, it calls congestion_wait()\nbut it could just have been reclaiming clean page cache pages with no\ncongestion.  Without this patch, it would sleep for a full timeout but\nafter this patch, it\u0027ll just call schedule() if it has been on the CPU too\nlong.  Similar logic applies to direct reclaimers that are not making\nenough progress.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Jens Axboe \u003caxboe@kernel.dk\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ea941f0e2a8c02ae876cd73deb4e1557248f258c",
      "tree": "d2006c10cce4f134dc83f7f5aaa1d0096902cc1a",
      "parents": [
        "f629d1c9bd0dbc44a6c4f9a4a67d1646c42bfc6f"
      ],
      "author": {
        "name": "Michael Rubin",
        "email": "mrubin@google.com",
        "time": "Tue Oct 26 14:21:35 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 26 16:52:06 2010 -0700"
      },
      "message": "writeback: add nr_dirtied and nr_written to /proc/vmstat\n\nTo help developers and applications gain visibility into writeback\nbehaviour adding two entries to vm_stat_items and /proc/vmstat.  This will\nallow us to track the \"written\" and \"dirtied\" counts.\n\n   # grep nr_dirtied /proc/vmstat\n   nr_dirtied 3747\n   # grep nr_written /proc/vmstat\n   nr_written 3618\n\nSigned-off-by: Michael Rubin \u003cmrubin@google.com\u003e\nReviewed-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Jens Axboe \u003caxboe@kernel.dk\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "aa45484031ddee09b06350ab8528bfe5b2c76d1c",
      "tree": "6758072232db9a54453022ec3e6cede35d52001c",
      "parents": [
        "72853e2991a2702ae93aaf889ac7db743a415dd3"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "cl@linux.com",
        "time": "Thu Sep 09 16:38:17 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 09 18:57:25 2010 -0700"
      },
      "message": "mm: page allocator: calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake\n\nOrdinarily watermark checks are based on the vmstat NR_FREE_PAGES as it is\ncheaper than scanning a number of lists.  To avoid synchronization\noverhead, counter deltas are maintained on a per-cpu basis and drained\nboth periodically and when the delta is above a threshold.  On large CPU\nsystems, the difference between the estimated and real value of\nNR_FREE_PAGES can be very high.  If NR_FREE_PAGES is much higher than\nnumber of real free page in buddy, the VM can allocate pages below min\nwatermark, at worst reducing the real number of pages to zero.  Even if\nthe OOM killer kills some victim for freeing memory, it may not free\nmemory if the exit path requires a new page resulting in livelock.\n\nThis patch introduces a zone_page_state_snapshot() function (courtesy of\nChristoph) that takes a slightly more accurate view of an arbitrary vmstat\ncounter.  It is used to read NR_FREE_PAGES while kswapd is awake to avoid\nthe watermark being accidentally broken.  The estimate is not perfect and\nmay result in cache line bounces but is expected to be lighter than the\nIPI calls necessary to continually drain the per-cpu counters while kswapd\nis awake.\n\nSigned-off-by: Christoph Lameter \u003ccl@linux.com\u003e\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "25edde0332916ae706ccf83de688be57bcc844b7",
      "tree": "35a5b0e651f9cdb48d9a55a748970339c4f681bc",
      "parents": [
        "b898cc70019ce1835bbf6c47bdf978adc36faa42"
      ],
      "author": {
        "name": "KOSAKI Motohiro",
        "email": "kosaki.motohiro@jp.fujitsu.com",
        "time": "Mon Aug 09 17:19:27 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Aug 09 20:45:00 2010 -0700"
      },
      "message": "vmscan: kill prev_priority completely\n\nSince 2.6.28 zone-\u003eprev_priority is unused. Then it can be removed\nsafely. It reduce stack usage slightly.\n\nNow I have to say that I\u0027m sorry. 2 years ago, I thought prev_priority\ncan be integrate again, it\u0027s useful. but four (or more) times trying\nhaven\u0027t got good performance number. Thus I give up such approach.\n\nThe rest of this changelog is notes on prev_priority and why it existed in\nthe first place and why it might be not necessary any more. This information\nis based heavily on discussions between Andrew Morton, Rik van Riel and\nKosaki Motohiro who is heavily quotes from.\n\nHistorically prev_priority was important because it determined when the VM\nwould start unmapping PTE pages. i.e. there are no balances of note within\nthe VM, Anon vs File and Mapped vs Unmapped. Without prev_priority, there\nis a potential risk of unnecessarily increasing minor faults as a large\namount of read activity of use-once pages could push mapped pages to the\nend of the LRU and get unmapped.\n\nThere is no proof this is still a problem but currently it is not considered\nto be. Active files are not deactivated if the active file list is smaller\nthan the inactive list reducing the liklihood that file-mapped pages are\nbeing pushed off the LRU and referenced executable pages are kept on the\nactive list to avoid them getting pushed out by read activity.\n\nEven if it is a problem, prev_priority prev_priority wouldn\u0027t works\nnowadays. First of all, current vmscan still a lot of UP centric code. it\nexpose some weakness on some dozens CPUs machine. I think we need more and\nmore improvement.\n\nThe problem is, current vmscan mix up per-system-pressure, per-zone-pressure\nand per-task-pressure a bit. example, prev_priority try to boost priority to\nother concurrent priority. 
but if the another task have mempolicy restriction,\nit is unnecessary, but also makes wrong big latency and exceeding reclaim.\nper-task based priority + prev_priority adjustment make the emulation of\nper-system pressure. but it have two issue 1) too rough and brutal emulation\n2) we need per-zone pressure, not per-system.\n\nAnother example, currently DEF_PRIORITY is 12. it mean the lru rotate about\n2 cycle (1/4096 + 1/2048 + 1/1024 + .. + 1) before invoking OOM-Killer.\nbut if 10,0000 thrreads enter DEF_PRIORITY reclaim at the same time, the\nsystem have higher memory pressure than priority\u003d\u003d0 (1/4096*10,000 \u003e 2).\nprev_priority can\u0027t solve such multithreads workload issue. In other word,\nprev_priority concept assume the sysmtem don\u0027t have lots threads.\"\n\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nReviewed-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Chris Mason \u003cchris.mason@oracle.com\u003e\nCc: Nick Piggin \u003cnpiggin@suse.de\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: Michael Rubin \u003cmrubin@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b645bd1286f2fbcd2eb4ab3bed5884f63c42e363",
      "tree": "7649eb3fbe4afeb01e9403e71b0546a37406a33e",
      "parents": [
        "31f961a89bd1cb9baaf32af4bd8b571ace3447b1"
      ],
      "author": {
        "name": "Alexander Nevenchannyy",
        "email": "a.nevenchannyy@gmail.com",
        "time": "Mon Aug 09 17:19:00 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Aug 09 20:44:57 2010 -0700"
      },
      "message": "mmzone.h: remove dead prototype\n\nget_zone_counts() was dropped from kernel tree, see:\nhttp://www.mail-archive.com/mm-commits@vger.kernel.org/msg07313.html but\nits prototype remains.\n\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7aac789885512388a66d47280d7e7777ffba1e59",
      "tree": "af4ac98260268889a422dd264102d2f15d5c1983",
      "parents": [
        "3bccd996276b108c138e8176793a26ecef54d573"
      ],
      "author": {
        "name": "Lee Schermerhorn",
        "email": "lee.schermerhorn@hp.com",
        "time": "Wed May 26 14:45:00 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu May 27 09:12:57 2010 -0700"
      },
      "message": "numa: introduce numa_mem_id()- effective local memory node id\n\nIntroduce numa_mem_id(), based on generic percpu variable infrastructure\nto track \"nearest node with memory\" for archs that support memoryless\nnodes.\n\nDefine API in \u003clinux/topology.h\u003e when CONFIG_HAVE_MEMORYLESS_NODES\ndefined, else stubs.  Architectures will define HAVE_MEMORYLESS_NODES\nif/when they support them.\n\nArchs can override definitions of:\n\nnuma_mem_id() - returns node number of \"local memory\" node\nset_numa_mem() - initialize [this cpus\u0027] per cpu variable \u0027numa_mem\u0027\ncpu_to_mem()  - return numa_mem for specified cpu; may be used as lvalue\n\nGeneric initialization of \u0027numa_mem\u0027 occurs in __build_all_zonelists().\nThis will initialize the boot cpu at boot time, and all cpus on change of\nnuma_zonelist_order, or when node or memory hot-plug requires zonelist\nrebuild.  Archs that support memoryless nodes will need to initialize\n\u0027numa_mem\u0027 for secondary cpus as they\u0027re brought on-line.\n\n[akpm@linux-foundation.org: fix build]\nSigned-off-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nSigned-off-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Nick Piggin \u003cnpiggin@suse.de\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Eric Whitney \u003ceric.whitney@hp.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Ingo Molnar \u003cmingo@elte.hu\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: \"H. 
Peter Anvin\" \u003chpa@zytor.com\u003e\nCc: \"Luck, Tony\" \u003ctony.luck@intel.com\u003e\nCc: Pekka Enberg \u003cpenberg@cs.helsinki.fi\u003e\nCc: \u003clinux-arch@vger.kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4eaf3f64397c3db3c5785eee508270d62a9fabd9",
      "tree": "bfd986a7e974876755ea6fe0de394199c68e2e36",
      "parents": [
        "1f522509c77a5dea8dc384b735314f03908a6415"
      ],
      "author": {
        "name": "Haicheng Li",
        "email": "haicheng.li@linux.intel.com",
        "time": "Mon May 24 14:32:52 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 25 08:07:02 2010 -0700"
      },
      "message": "mem-hotplug: fix potential race while building zonelist for new populated zone\n\nAdd global mutex zonelists_mutex to fix the possible race:\n\n     CPU0                                  CPU1                    CPU2\n(1) zone-\u003epresent_pages +\u003d online_pages;\n(2)                                       build_all_zonelists();\n(3)                                                               alloc_page();\n(4)                                                               free_page();\n(5) build_all_zonelists();\n(6)   __build_all_zonelists();\n(7)     zone-\u003epageset \u003d alloc_percpu();\n\nIn step (3,4), zone-\u003epageset still points to boot_pageset, so bad\nthings may happen if 2+ nodes are in this state. Even if only 1 node\nis accessing the boot_pageset, (3) may still consume too much memory\nto fail the memory allocations in step (7).\n\nBesides, atomic operation ensures alloc_percpu() in step (7) will never fail\nsince there is a new fresh memory block added in step(6).\n\n[haicheng.li@linux.intel.com: hold zonelists_mutex when build_all_zonelists]\nSigned-off-by: Haicheng Li \u003chaicheng.li@linux.intel.com\u003e\nSigned-off-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nReviewed-by: Andi Kleen \u003candi.kleen@intel.com\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "1f522509c77a5dea8dc384b735314f03908a6415",
      "tree": "4b848527b90877a8a64c46e8e2d76723405c319d",
      "parents": [
        "319774e25fa4b7641bdc3b0a464dd84e62103347"
      ],
      "author": {
        "name": "Haicheng Li",
        "email": "haicheng.li@linux.intel.com",
        "time": "Mon May 24 14:32:51 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 25 08:07:01 2010 -0700"
      },
      "message": "mem-hotplug: avoid multiple zones sharing same boot strapping boot_pageset\n\nFor each new populated zone of hotadded node, need to update its pagesets\nwith dynamically allocated per_cpu_pageset struct for all possible CPUs:\n\n    1) Detach zone-\u003epageset from the shared boot_pageset\n       at end of __build_all_zonelists().\n\n    2) Use mutex to protect zone-\u003epageset when it\u0027s still\n       shared in onlined_pages()\n\nOtherwises, multiple zones of different nodes would share same boot strapping\nboot_pageset for same CPU, which will finally cause below kernel panic:\n\n  ------------[ cut here ]------------\n  kernel BUG at mm/page_alloc.c:1239!\n  invalid opcode: 0000 [#1] SMP\n  ...\n  Call Trace:\n   [\u003cffffffff811300c1\u003e] __alloc_pages_nodemask+0x131/0x7b0\n   [\u003cffffffff81162e67\u003e] alloc_pages_current+0x87/0xd0\n   [\u003cffffffff81128407\u003e] __page_cache_alloc+0x67/0x70\n   [\u003cffffffff811325f0\u003e] __do_page_cache_readahead+0x120/0x260\n   [\u003cffffffff81132751\u003e] ra_submit+0x21/0x30\n   [\u003cffffffff811329c6\u003e] ondemand_readahead+0x166/0x2c0\n   [\u003cffffffff81132ba0\u003e] page_cache_async_readahead+0x80/0xa0\n   [\u003cffffffff8112a0e4\u003e] generic_file_aio_read+0x364/0x670\n   [\u003cffffffff81266cfa\u003e] nfs_file_read+0xca/0x130\n   [\u003cffffffff8117b20a\u003e] do_sync_read+0xfa/0x140\n   [\u003cffffffff8117bf75\u003e] vfs_read+0xb5/0x1a0\n   [\u003cffffffff8117c151\u003e] sys_read+0x51/0x80\n   [\u003cffffffff8103c032\u003e] system_call_fastpath+0x16/0x1b\n  RIP  [\u003cffffffff8112ff13\u003e] get_page_from_freelist+0x883/0x900\n   RSP \u003cffff88000d1e78a8\u003e\n  ---[ end trace 4bda28328b9990db ]\n\n[akpm@linux-foundation.org: merge fix]\nSigned-off-by: Haicheng Li \u003chaicheng.li@linux.intel.com\u003e\nSigned-off-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nReviewed-by: Andi Kleen \u003candi.kleen@intel.com\u003e\nReviewed-by: Christoph Lameter 
\u003ccl@linux-foundation.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "0faa56389c793cda7f967117415717bbab24fe4e",
      "tree": "b0d5f12579a4448adff2b6e462488f3cc6d75326",
      "parents": [
        "ff3d58c22b6827039983911d3460cf0c1657f8cc"
      ],
      "author": {
        "name": "Marcelo Roberto Jimenez",
        "email": "mroberto@cpti.cetuc.puc-rio.br",
        "time": "Mon May 24 14:32:47 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 25 08:07:01 2010 -0700"
      },
      "message": "mm: fix NR_SECTION_ROOTS \u003d\u003d 0 when using using sparsemem extreme.\n\nGot this while compiling for ARM/SA1100:\n\nmm/sparse.c: In function \u0027__section_nr\u0027:\nmm/sparse.c:135: warning: \u0027root\u0027 is used uninitialized in this function\n\nThis patch follows Russell King\u0027s suggestion for a new calculation for\nNR_SECTION_ROOTS.  Thanks also to Sergei Shtylyov for pointing out the\nexistence of the macro DIV_ROUND_UP.\n\nAtsushi Nemoto observed:\n: This fix doesn\u0027t just silence the warning - it fixes a real problem.\n:\n: Without this fix, mem_section[] might have 0 size so mem_section[0]\n: will share other variable area.  For example, I got:\n:\n: c030c700 b __warned.16478\n: c030c700 B mem_section\n: c030c701 b __warned.16483\n:\n: This might cause very strange behavior.  Your patch actually fixes it.\n\nSigned-off-by: Marcelo Roberto Jimenez \u003cmroberto@cpti.cetuc.puc-rio.br\u003e\nCc: Atsushi Nemoto \u003canemo@mba.ocn.ne.jp\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Sergei Shtylyov \u003csshtylyov@mvista.com\u003e\nCc: Russell King \u003crmk@arm.linux.org.uk\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4f92e2586b43a2402e116055d4edda704f911b5b",
      "tree": "6a765ebeba951c02a7878bcea52a4769ad2e45c2",
      "parents": [
        "5e7719058079a1423ccce56148b0aaa56b2df821"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Mon May 24 14:32:32 2010 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 25 08:07:00 2010 -0700"
      },
      "message": "mm: compaction: defer compaction using an exponential backoff when compaction fails\n\nThe fragmentation index may indicate that a failure is due to external\nfragmentation but after a compaction run completes, it is still possible\nfor an allocation to fail.  There are two obvious reasons as to why\n\n  o Page migration cannot move all pages so fragmentation remains\n  o A suitable page may exist but watermarks are not met\n\nIn the event of compaction followed by an allocation failure, this patch\ndefers further compaction in the zone (1 \u003c\u003c compact_defer_shift) times.\nIf the next compaction attempt also fails, compact_defer_shift is\nincreased up to a maximum of 6.  If compaction succeeds, the defer\ncounters are reset again.\n\nThe zone that is deferred is the first zone in the zonelist - i.e.  the\npreferred zone.  To defer compaction in the other zones, the information\nwould need to be stored in the zonelist or implemented similar to the\nzonelist_cache.  This would impact the fast-paths and is not justified at\nthis time.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "318ae2edc3b29216abd8a2510f3f80b764f06858",
      "tree": "ce595adde342f57f379d277b25e4dd206988a052",
      "parents": [
        "25cf84cf377c0aae5dbcf937ea89bc7893db5176",
        "3e58974027b04e84f68b964ef368a6cd758e2f84"
      ],
      "author": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Mon Mar 08 16:55:37 2010 +0100"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Mon Mar 08 16:55:37 2010 +0100"
      },
      "message": "Merge branch \u0027for-next\u0027 into for-linus\n\nConflicts:\n\tDocumentation/filesystems/proc.txt\n\tarch/arm/mach-u300/include/mach/debug-macro.S\n\tdrivers/net/qlge/qlge_ethtool.c\n\tdrivers/net/qlge/qlge_main.c\n\tdrivers/net/typhoon.c\n"
    },
    {
      "commit": "93e4a89a8c987189b168a530a331ef6d0fcf07a7",
      "tree": "deb08017c0e4874539549d3ea9bf2d7b447a43be",
      "parents": [
        "fc91668eaf9e7ba61e867fc2218b7e9fb67faa4f"
      ],
      "author": {
        "name": "KOSAKI Motohiro",
        "email": "kosaki.motohiro@jp.fujitsu.com",
        "time": "Fri Mar 05 13:41:55 2010 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sat Mar 06 11:26:25 2010 -0800"
      },
      "message": "mm: restore zone-\u003eall_unreclaimable to independence word\n\ncommit e815af95 (\"change all_unreclaimable zone member to flags\") changed\nall_unreclaimable member to bit flag.  But it had an undesireble side\neffect.  free_one_page() is one of most hot path in linux kernel and\nincreasing atomic ops in it can reduce kernel performance a bit.\n\nThus, this patch revert such commit partially. at least\nall_unreclaimable shouldn\u0027t share memory word with other zone flags.\n\n[akpm@linux-foundation.org: fix patch interaction]\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Huang Shijie \u003cshijie8@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a626b46e17d0762d664ce471d40bc506b6e721ab",
      "tree": "445f6ac655ea9247d2e27529f23ba02d0991fec0",
      "parents": [
        "c1dcb4bb1e3e16e9baee578d9bb040e5fba1063e",
        "dce46a04d55d6358d2d4ab44a4946a19f9425fe2"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 03 08:15:05 2010 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 03 08:15:05 2010 -0800"
      },
      "message": "Merge branch \u0027x86-bootmem-for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip\n\n* \u0027x86-bootmem-for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (30 commits)\n  early_res: Need to save the allocation name in drop_range_partial()\n  sparsemem: Fix compilation on PowerPC\n  early_res: Add free_early_partial()\n  x86: Fix non-bootmem compilation on PowerPC\n  core: Move early_res from arch/x86 to kernel/\n  x86: Add find_fw_memmap_area\n  Move round_up/down to kernel.h\n  x86: Make 32bit support NO_BOOTMEM\n  early_res: Enhance check_and_double_early_res\n  x86: Move back find_e820_area to e820.c\n  x86: Add find_early_area_size\n  x86: Separate early_res related code from e820.c\n  x86: Move bios page reserve early to head32/64.c\n  sparsemem: Put mem map for one node together.\n  sparsemem: Put usemap for one node together\n  x86: Make 64 bit use early_res instead of bootmem before slab\n  x86: Only call dma32_reserve_bootmem 64bit !CONFIG_NUMA\n  x86: Make early_node_mem get mem \u003e 4 GB if possible\n  x86: Dynamically increase early_res array size\n  x86: Introduce max_early_res and early_res_count\n  ...\n"
    },
    {
      "commit": "43cf38eb5cea91245502df3fcee4dbfc1c74dd1c",
      "tree": "a58ea87af1f07b8aed4941db074f44103f321f6e",
      "parents": [
        "ab386128f20c44c458a90039ab1bdc265ac474c9"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Feb 02 14:38:57 2010 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 17 11:17:38 2010 +0900"
      },
      "message": "percpu: add __percpu sparse annotations to core kernel subsystems\n\nAdd __percpu sparse annotations to core subsystems.\n\nThese annotations are to make sparse consider percpu variables to be\nin a different address space and warn if accessed without going\nthrough percpu accessors.  This patch doesn\u0027t affect normal builds.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nAcked-by: Paul E. McKenney \u003cpaulmck@linux.vnet.ibm.com\u003e\nCc: Jens Axboe \u003caxboe@kernel.dk\u003e\nCc: linux-mm@kvack.org\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nCc: Dipankar Sarma \u003cdipankar@in.ibm.com\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nCc: Eric Biederman \u003cebiederm@xmission.com\u003e\n"
    },
    {
      "commit": "08677214e318297f228237be0042aac754f48f1d",
      "tree": "6d03424f7e287fcf66136b44512328afb1aeee49",
      "parents": [
        "c252a5bb1f57afb1e336d68085217727ca7b2134"
      ],
      "author": {
        "name": "Yinghai Lu",
        "email": "yinghai@kernel.org",
        "time": "Wed Feb 10 01:20:20 2010 -0800"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@zytor.com",
        "time": "Fri Feb 12 09:41:59 2010 -0800"
      },
      "message": "x86: Make 64 bit use early_res instead of bootmem before slab\n\nFinally we can use early_res to replace bootmem for x86_64 now.\n\nStill can use CONFIG_NO_BOOTMEM to enable it or not.\n\n-v2: fix 32bit compiling about MAX_DMA32_PFN\n-v3: folded bug fix from LKML message below\n\nSigned-off-by: Yinghai Lu \u003cyinghai@kernel.org\u003e\nLKML-Reference: \u003c4B747239.4070907@kernel.org\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@zytor.com\u003e\n"
    },
    {
      "commit": "2a61aa401638529cd4231f6106980d307fba98fa",
      "tree": "a3d7565570c5996d0b3ae5fdf0126e065e750431",
      "parents": [
        "c41b20e721ea4f6f20f66a66e7f0c3c97a2ca9c2"
      ],
      "author": {
        "name": "Adam Buchbinder",
        "email": "adam.buchbinder@gmail.com",
        "time": "Fri Dec 11 16:35:40 2009 -0500"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Thu Feb 04 11:55:45 2010 +0100"
      },
      "message": "Fix misspellings of \"invocation\" in comments.\n\nSome comments misspell \"invocation\"; this fixes them. No code\nchanges.\n\nSigned-off-by: Adam Buchbinder \u003cadam.buchbinder@gmail.com\u003e\nSigned-off-by: Jiri Kosina \u003cjkosina@suse.cz\u003e\n"
    },
    {
      "commit": "99dcc3e5a94ed491fbef402831d8c0bbb267f995",
      "tree": "dd4d2b9e10ab0d4502e4b2a22dfc0a02a3300d7e",
      "parents": [
        "5917dae83cb02dfe74c9167b79e86e6d65183fa3"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "cl@linux-foundation.org",
        "time": "Tue Jan 05 15:34:51 2010 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jan 05 15:34:51 2010 +0900"
      },
      "message": "this_cpu: Page allocator conversion\n\nUse the per cpu allocator functionality to avoid per cpu arrays in struct zone.\n\nThis drastically reduces the size of struct zone for systems with large\namounts of processors and allows placement of critical variables of struct\nzone in one cacheline even on very large systems.\n\nAnother effect is that the pagesets of one processor are placed near one\nanother. If multiple pagesets from different zones fit into one cacheline\nthen additional cacheline fetches can be avoided on the hot paths when\nallocating memory from multiple zones.\n\nBootstrap becomes simpler if we use the same scheme for UP, SMP, NUMA. #ifdefs\nare reduced and we can drop the zone_pcp macro.\n\nHotplug handling is also simplified since cpu alloc can bring up and\nshut down cpu areas for a specific cpu as a whole. So there is no need to\nallocate or free individual pagesets.\n\nV7-V8:\n- Explain chicken egg dilemmna with percpu allocator.\n\nV4-V5:\n- Fix up cases where per_cpu_ptr is called before irq disable\n- Integrate the bootstrap logic that was separate before.\n\ntj: Build failure in pageset_cpuup_callback() due to missing ret\n    variable fixed.\n\nReviewed-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "01fc0ac198eabcbf460e1ed058860a935b6c2c9a",
      "tree": "f980b4c770298bf9491dcfe3f02359fa94b89d04",
      "parents": [
        "9367858dd08caf4e6ebd511abd2fca0a2d87b648"
      ],
      "author": {
        "name": "Sam Ravnborg",
        "email": "sam@ravnborg.org",
        "time": "Sun Apr 19 21:57:19 2009 +0200"
      },
      "committer": {
        "name": "Michal Marek",
        "email": "mmarek@suse.cz",
        "time": "Sat Dec 12 13:08:14 2009 +0100"
      },
      "message": "kbuild: move bounds.h to include/generated\n\nSigned-off-by: Sam Ravnborg \u003csam@ravnborg.org\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nSigned-off-by: Michal Marek \u003cmmarek@suse.cz\u003e\n"
    },
    {
      "commit": "8d65af789f3e2cf4cfbdbf71a0f7a61ebcd41d38",
      "tree": "121df3bfffc7853ac6d2c514ad514d4a748a0933",
      "parents": [
        "c0d0787b6d47d9f4d5e8bd321921104e854a9135"
      ],
      "author": {
        "name": "Alexey Dobriyan",
        "email": "adobriyan@gmail.com",
        "time": "Wed Sep 23 15:57:19 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 24 07:21:04 2009 -0700"
      },
      "message": "sysctl: remove \"struct file *\" argument of -\u003eproc_handler\n\nIt\u0027s unused.\n\nIt isn\u0027t needed -- read or write flag is already passed and sysctl\nshouldn\u0027t care about the rest.\n\nIt _was_ used in two places at arch/frv for some reason.\n\nSigned-off-by: Alexey Dobriyan \u003cadobriyan@gmail.com\u003e\nCc: David Howells \u003cdhowells@redhat.com\u003e\nCc: \"Eric W. Biederman\" \u003cebiederm@xmission.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Ralf Baechle \u003cralf@linux-mips.org\u003e\nCc: Martin Schwidefsky \u003cschwidefsky@de.ibm.com\u003e\nCc: Ingo Molnar \u003cmingo@elte.hu\u003e\nCc: \"David S. Miller\" \u003cdavem@davemloft.net\u003e\nCc: James Morris \u003cjmorris@namei.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5f8dcc21211a3d4e3a7a5ca366b469fb88117f61",
      "tree": "4bbb1b55c7787462fe313c7c003e77823c032422",
      "parents": [
        "5d863b89688e5811cd9e5bd0082cb38abe03adf3"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Mon Sep 21 17:03:19 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 22 07:17:39 2009 -0700"
      },
      "message": "page-allocator: split per-cpu list into one-list-per-migrate-type\n\nThe following two patches remove searching in the page allocator fast-path\nby maintaining multiple free-lists in the per-cpu structure.  At the time\nthe search was introduced, increasing the per-cpu structures would waste a\nlot of memory as per-cpu structures were statically allocated at\ncompile-time.  This is no longer the case.\n\nThe patches are as follows. They are based on mmotm-2009-08-27.\n\nPatch 1 adds multiple lists to struct per_cpu_pages, one per\n\tmigratetype that can be stored on the PCP lists.\n\nPatch 2 notes that the pcpu drain path check empty lists multiple times. The\n\tpatch reduces the number of checks by maintaining a count of free\n\tlists encountered. Lists containing pages will then free multiple\n\tpages in batch\n\nThe patches were tested with kernbench, netperf udp/tcp, hackbench and\nsysbench.  The netperf tests were not bound to any CPU in particular and\nwere run such that the results should be 99% confidence that the reported\nresults are within 1% of the estimated mean.  sysbench was run with a\npostgres background and read-only tests.  Similar to netperf, it was run\nmultiple times so that it\u0027s 99% confidence results are within 1%.  
The\npatches were tested on x86, x86-64 and ppc64 as\n\nx86:\tIntel Pentium D 3GHz with 8G RAM (no-brand machine)\n\tkernbench\t- No significant difference, variance well within noise\n\tnetperf-udp\t- 1.34% to 2.28% gain\n\tnetperf-tcp\t- 0.45% to 1.22% gain\n\thackbench\t- Small variances, very close to noise\n\tsysbench\t- Very small gains\n\nx86-64:\tAMD Phenom 9950 1.3GHz with 8G RAM (no-brand machine)\n\tkernbench\t- No significant difference, variance well within noise\n\tnetperf-udp\t- 1.83% to 10.42% gains\n\tnetperf-tcp\t- No conclusive until buffer \u003e\u003d PAGE_SIZE\n\t\t\t\t4096\t+15.83%\n\t\t\t\t8192\t+ 0.34% (not significant)\n\t\t\t\t16384\t+ 1%\n\thackbench\t- Small gains, very close to noise\n\tsysbench\t- 0.79% to 1.6% gain\n\nppc64:\tPPC970MP 2.5GHz with 10GB RAM (it\u0027s a terrasoft powerstation)\n\tkernbench\t- No significant difference, variance well within noise\n\tnetperf-udp\t- 2-3% gain for almost all buffer sizes tested\n\tnetperf-tcp\t- losses on small buffers, gains on larger buffers\n\t\t\t  possibly indicates some bad caching effect.\n\thackbench\t- No significant difference\n\tsysbench\t- 2-4% gain\n\nThis patch:\n\nCurrently the per-cpu page allocator searches the PCP list for pages of\nthe correct migrate-type to reduce the possibility of pages being\ninappropriate placed from a fragmentation perspective.  This search is\npotentially expensive in a fast-path and undesirable.  Splitting the\nper-cpu list into multiple lists increases the size of a per-cpu structure\nand this was potentially a major problem at the time the search was\nintroduced.  These problem has been mitigated as now only the necessary\nnumber of structures is allocated for the running system.\n\nThis patch replaces a list search in the per-cpu allocator with one list\nper migrate type.  The potential snag with this approach is when bulk\nfreeing pages.  
We round-robin free pages based on migrate type which has\nlittle bearing on the cache hotness of the page and potentially checks\nempty lists repeatedly in the event the majority of PCP pages are of one\ntype.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Nick Piggin \u003cnpiggin@suse.de\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Pekka Enberg \u003cpenberg@cs.helsinki.fi\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f86296317434b21585e229f6c49a33cb9ebab4d3",
      "tree": "d4fb05d4aee1a8e373ec053e7316dc9847b2c417",
      "parents": [
        "1a8670a29b5277cbe601f74ab63d2c5211fb3005"
      ],
      "author": {
        "name": "Wu Fengguang",
        "email": "fengguang.wu@intel.com",
        "time": "Mon Sep 21 17:03:11 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 22 07:17:39 2009 -0700"
      },
      "message": "mm: do batched scans for mem_cgroup\n\nFor mem_cgroup, shrink_zone() may call shrink_list() with nr_to_scan\u003d1, in\nwhich case shrink_list() _still_ calls isolate_pages() with the much\nlarger SWAP_CLUSTER_MAX.  It effectively scales up the inactive list scan\nrate by up to 32 times.\n\nFor example, with 16k inactive pages and DEF_PRIORITY\u003d12, (16k \u003e\u003e 12)\u003d4.\nSo when shrink_zone() expects to scan 4 pages in the active/inactive list,\nthe active list will be scanned 4 pages, while the inactive list will be\n(over) scanned SWAP_CLUSTER_MAX\u003d32 pages in effect.  And that could break\nthe balance between the two lists.\n\nIt can further impact the scan of anon active list, due to the anon\nactive/inactive ratio rebalance logic in balance_pgdat()/shrink_zone():\n\ninactive anon list over scanned \u003d\u003e inactive_anon_is_low() \u003d\u003d TRUE\n                                \u003d\u003e shrink_active_list()\n                                \u003d\u003e active anon list over scanned\n\nSo the end result may be\n\n- anon inactive  \u003d\u003e over scanned\n- anon active    \u003d\u003e over scanned (maybe not as much)\n- file inactive  \u003d\u003e over scanned\n- file active    \u003d\u003e under scanned (relatively)\n\nThe accesses to nr_saved_scan are not lock protected and so not 100%\naccurate, however we can tolerate small errors and the resulted small\nimbalanced scan rates between zones.\n\nCc: Rik van Riel \u003criel@redhat.com\u003e\nReviewed-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Balbir Singh \u003cbalbir@linux.vnet.ibm.com\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a731286de62294b63d8ceb3c5914ac52cc17e690",
      "tree": "c321e14500ec264e37fd103ffa71c7b133088010",
      "parents": [
        "b35ea17b7bbf5dea35faa0de11030acc620c3197"
      ],
      "author": {
        "name": "KOSAKI Motohiro",
        "email": "kosaki.motohiro@jp.fujitsu.com",
        "time": "Mon Sep 21 17:01:37 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 22 07:17:29 2009 -0700"
      },
      "message": "mm: vmstat: add isolate pages\n\nIf the system is running a heavy load of processes then concurrent reclaim\ncan isolate a large number of pages from the LRU. /proc/vmstat and the\noutput generated for an OOM do not show how many pages were isolated.\n\nThis has been observed during process fork bomb testing (mstctl11 in LTP).\n\nThis patch shows the information about isolated pages.\n\nReproduced via:\n\n-----------------------\n% ./hackbench 140 process 1000\n   \u003d\u003e OOM occur\n\nactive_anon:146 inactive_anon:0 isolated_anon:49245\n active_file:79 inactive_file:18 isolated_file:113\n unevictable:0 dirty:0 writeback:0 unstable:0 buffer:39\n free:370 slab_reclaimable:309 slab_unreclaimable:5492\n mapped:53 shmem:15 pagetables:28140 bounce:0\n\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nAcked-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Hugh Dickins \u003chugh.dickins@tiscali.co.uk\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4b02108ac1b3354a22b0d83c684797692efdc395",
      "tree": "9f65d6e8e35ddce940e7b9da6305cf5a19e5904e",
      "parents": [
        "c6a7f5728a1db45d30df55a01adc130b4ab0327c"
      ],
      "author": {
        "name": "KOSAKI Motohiro",
        "email": "kosaki.motohiro@jp.fujitsu.com",
        "time": "Mon Sep 21 17:01:33 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 22 07:17:27 2009 -0700"
      },
      "message": "mm: oom analysis: add shmem vmstat\n\nRecently we encountered OOM problems due to memory use of the GEM cache.\nGenerally a large amuont of Shmem/Tmpfs pages tend to create a memory\nshortage problem.\n\nWe often use the following calculation to determine the amount of shmem\npages:\n\nshmem \u003d NR_ACTIVE_ANON + NR_INACTIVE_ANON - NR_ANON_PAGES\n\nhowever the expression does not consider isolated and mlocked pages.\n\nThis patch adds explicit accounting for pages used by shmem and tmpfs.\n\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nReviewed-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nAcked-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Hugh Dickins \u003chugh.dickins@tiscali.co.uk\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c6a7f5728a1db45d30df55a01adc130b4ab0327c",
      "tree": "36649bc6ebb959841a5097c699968722cfd99c4d",
      "parents": [
        "71de1ccbe1fb40203edd3beb473f8580d917d2ca"
      ],
      "author": {
        "name": "KOSAKI Motohiro",
        "email": "kosaki.motohiro@jp.fujitsu.com",
        "time": "Mon Sep 21 17:01:32 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 22 07:17:27 2009 -0700"
      },
      "message": "mm: oom analysis: Show kernel stack usage in /proc/meminfo and OOM log output\n\nThe amount of memory allocated to kernel stacks can become significant and\ncause OOM conditions.  However, we do not display the amount of memory\nconsumed by stacks.\n\nAdd code to display the amount of memory used for stacks in /proc/meminfo.\n\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nReviewed-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nReviewed-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6837765963f1723e80ca97b1fae660f3a60d77df",
      "tree": "a9a6ed4b7e3bf188966da78b04bf39298f24375a",
      "parents": [
        "bce7394a3ef82b8477952fbab838e4a6e8cb47d2"
      ],
      "author": {
        "name": "KOSAKI Motohiro",
        "email": "kosaki.motohiro@jp.fujitsu.com",
        "time": "Tue Jun 16 15:32:51 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jun 16 19:47:42 2009 -0700"
      },
      "message": "mm: remove CONFIG_UNEVICTABLE_LRU config option\n\nCurrently, nobody wants to turn UNEVICTABLE_LRU off.  Thus this\nconfigurability is unnecessary.\n\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Andi Kleen \u003candi@firstfloor.org\u003e\nAcked-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: David Woodhouse \u003cdwmw2@infradead.org\u003e\nCc: Matt Mackall \u003cmpm@selenic.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6e08a369ee10b361ac1cdcdf4fabd420fd08beb3",
      "tree": "9dbf870cad025b64781d9051b6680a8a23927e5a",
      "parents": [
        "56e49d218890f49b0057710a4b6fef31f5ffbfec"
      ],
      "author": {
        "name": "Wu Fengguang",
        "email": "fengguang.wu@intel.com",
        "time": "Tue Jun 16 15:32:29 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jun 16 19:47:39 2009 -0700"
      },
      "message": "vmscan: cleanup the scan batching code\n\nThe vmscan batching logic is twisting.  Move it into a standalone function\nnr_scan_try_batch() and document it.  No behavior change.\n\nSigned-off-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Nick Piggin \u003cnpiggin@suse.de\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nAcked-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nAcked-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "418589663d6011de9006425b6c5721e1544fb47a",
      "tree": "ef37fb026d3e38191d6b5c99bc95c190fa98d0fb",
      "parents": [
        "a3af9c389a7f3e675313f442fdd8c247c1cdb66b"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Jun 16 15:32:12 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jun 16 19:47:35 2009 -0700"
      },
      "message": "page allocator: use allocation flags as an index to the zone watermark\n\nALLOC_WMARK_MIN, ALLOC_WMARK_LOW and ALLOC_WMARK_HIGH determin whether\npages_min, pages_low or pages_high is used as the zone watermark when\nallocating the pages.  Two branches in the allocator hotpath determine\nwhich watermark to use.\n\nThis patch uses the flags as an array index into a watermark array that is\nindexed with WMARK_* defines accessed via helpers.  All call sites that\nuse zone-\u003epages_* are updated to use the helpers for accessing the values\nand the array offsets for setting.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Pekka Enberg \u003cpenberg@cs.helsinki.fi\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: Lee Schermerhorn \u003cLee.Schermerhorn@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "49255c619fbd482d704289b5eb2795f8e3b7ff2e",
      "tree": "b1f36ca46bda7767fce12bc4a70360a68f7255ab",
      "parents": [
        "11e33f6a55ed7847d9c8ffe185ef87faf7806abe"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Jun 16 15:31:58 2009 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jun 16 19:47:33 2009 -0700"
      },
      "message": "page allocator: move check for disabled anti-fragmentation out of fastpath\n\nOn low-memory systems, anti-fragmentation gets disabled as there is\nnothing it can do and it would just incur overhead shuffling pages between\nlists constantly.  Currently the check is made in the free page fast path\nfor every page.  This patch moves it to a slow path.  On machines with low\nmemory, there will be small amount of additional overhead as pages get\nshuffled between lists but it should quickly settle.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nReviewed-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Pekka Enberg \u003cpenberg@cs.helsinki.fi\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: Lee Schermerhorn \u003cLee.Schermerhorn@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    }
  ],
  "next": "eb33575cf67d3f35fa2510210ef92631266e2465"
}
