)]}'
{
  "log": [
    {
      "commit": "2a38ada0f1ab9f894eea4428731ebc811b51c3f3",
      "tree": "759c765808a23a3a35e4ba10d8306c847c0205b7",
      "parents": [
        "19218e895cefdd389c96af12c93c89e7276bbaad",
        "44d19f5a04ae4e433548ba2f25e4d2ccfcac765e"
      ],
      "author": {
        "name": "Ethan Chen",
        "email": "intervigil@gmail.com",
        "time": "Sun Dec 08 12:50:38 2013 -0800"
      },
      "committer": {
        "name": "Ethan Chen",
        "email": "intervigil@gmail.com",
        "time": "Sun Dec 08 12:50:38 2013 -0800"
      },
      "message": "Merge tag \u0027v3.4.72\u0027 into tmp\n\nThis is the 3.4.72 stable release\n\nConflicts:\n\tarch/arm/Kconfig\n\tarch/arm/include/asm/mutex.h\n\tarch/arm/kernel/perf_event.c\n\tarch/arm/kernel/traps.c\n\tarch/arm/mm/dma-mapping.c\n\tdrivers/base/power/main.c\n\tdrivers/bluetooth/ath3k.c\n\tdrivers/bluetooth/btusb.c\n\tdrivers/gpu/drm/radeon/radeon_mode.h\n\tdrivers/mmc/card/block.c\n\tdrivers/mmc/host/sdhci.c\n\tdrivers/usb/core/message.c\n\tdrivers/usb/host/xhci-plat.c\n\tdrivers/usb/host/xhci.h\n\tdrivers/virtio/virtio_ring.c\n\tfs/ubifs/dir.c\n\tinclude/linux/freezer.h\n\tinclude/linux/virtio.h\n\tinclude/media/v4l2-ctrls.h\n\tinclude/net/bluetooth/hci_core.h\n\tinclude/net/bluetooth/mgmt.h\n\tkernel/cgroup.c\n\tkernel/futex.c\n\tkernel/signal.c\n\tnet/bluetooth/hci_conn.c\n\tnet/bluetooth/hci_core.c\n\tnet/bluetooth/hci_event.c\n\tnet/bluetooth/l2cap_core.c\n\tnet/bluetooth/mgmt.c\n\tnet/bluetooth/rfcomm/sock.c\n\tnet/bluetooth/smp.c\n\nChange-Id: I4fb0d5de74ca76f933d95d98e1a9c2c859402f34\n"
    },
    {
      "commit": "36abcfd971000b6e589ff65c3456d45b98757f89",
      "tree": "2306000f20fd9f8bbedc8b8339a4585e0f57ae24",
      "parents": [
        "d1c2fbe849e5669959be5a1db1b6d65ca43a19e7"
      ],
      "author": {
        "name": "Lisa Du",
        "email": "cldu@marvell.com",
        "time": "Wed Sep 11 14:22:36 2013 -0700"
      },
      "committer": {
        "name": "Swetha Chikkaboraiah",
        "email": "schikk@codeaurora.org",
        "time": "Mon Dec 02 18:32:37 2013 +0530"
      },
      "message": "mm: vmscan: fix do_try_to_free_pages() livelock\n\nThis patch is based on KOSAKI\u0027s work and I add a little more description,\nplease refer https://lkml.org/lkml/2012/6/14/74.\n\nCurrently, I found system can enter a state that there are lots of free\npages in a zone but only order-0 and order-1 pages which means the zone is\nheavily fragmented, then high order allocation could make direct reclaim\npath\u0027s long stall(ex, 60 seconds) especially in no swap and no compaciton\nenviroment.  This problem happened on v3.4, but it seems issue still lives\nin current tree, the reason is do_try_to_free_pages enter live lock:\n\nkswapd will go to sleep if the zones have been fully scanned and are still\nnot balanced.  As kswapd thinks there\u0027s little point trying all over again\nto avoid infinite loop.  Instead it changes order from high-order to\n0-order because kswapd think order-0 is the most important.  Look at\n73ce02e9 in detail.  If watermarks are ok, kswapd will go back to sleep\nand may leave zone-\u003eall_unreclaimable \u003d 0.  It assume high-order users\ncan still perform direct reclaim if they wish.\n\nDirect reclaim continue to reclaim for a high order which is not a\nCOSTLY_ORDER without oom-killer until kswapd turn on\nzone-\u003eall_unreclaimble\u003d 1.  This is because to avoid too early oom-kill.\nSo it means direct_reclaim depends on kswapd to break this loop.\n\nIn worst case, direct-reclaim may continue to page reclaim forever when\nkswapd sleeps forever until someone like watchdog detect and finally kill\nthe process.  As described in:\nhttp://thread.gmane.org/gmane.linux.kernel.mm/103737\n\nWe can\u0027t turn on zone-\u003eall_unreclaimable from direct reclaim path because\ndirect reclaim path don\u0027t take any lock and this way is racy.  Thus this\npatch removes zone-\u003eall_unreclaimable field completely and recalculates\nzone reclaimable state every time.\n\nNote: we can\u0027t take the idea that direct-reclaim see zone-\u003epages_scanned\ndirectly and kswapd continue to use zone-\u003eall_unreclaimable.  Because, it\nis racy.  commit 929bea7c71 (vmscan: all_unreclaimable() use\nzone-\u003eall_unreclaimable as a name) describes the detail.\n\nCRs-fixed: 573027\nChange-Id: I49970a0fa751cf33af293fd1ee784e36422785b1\n[akpm@linux-foundation.org: uninline zone_reclaimable_pages() and zone_reclaimable()]\nCc: Aaditya Kumar \u003caaditya.kumar.30@gmail.com\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nCc: Nick Piggin \u003cnpiggin@gmail.com\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nCc: Bob Liu \u003clliubbo@gmail.com\u003e\nCc: Neil Zhang \u003czhangwm@marvell.com\u003e\nCc: Russell King - ARM Linux \u003clinux@arm.linux.org.uk\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Minchan Kim \u003cminchan@kernel.org\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Lisa Du \u003ccldu@marvell.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\nGit-commit: 6e543d5780e36ff5ee56c44d7e2e30db3457a7ed\n[lauraa@codeaurora.org: Some context fixups and variable name changes due\nto backporting. Dropped parts that don\u0027t apply to older kernels]\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nSigned-off-by: Swetha Chikkaboraiah \u003cschikk@codeaurora.org\u003e\n"
    },
    {
      "commit": "022a41db8aa1bc0b4ff4c013f889292324a1c465",
      "tree": "11f00ef8d0aa584b956a194de5ee4c3be5fc5120",
      "parents": [
        "9712612a91e92824349ce9fece31dba6d2fbde70"
      ],
      "author": {
        "name": "David Rientjes",
        "email": "rientjes@google.com",
        "time": "Mon Apr 29 15:06:11 2013 -0700"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@linuxfoundation.org",
        "time": "Sun Oct 13 15:42:49 2013 -0700"
      },
      "message": "mm, show_mem: suppress page counts in non-blockable contexts\n\ncommit 4b59e6c4730978679b414a8da61514a2518da512 upstream.\n\nOn large systems with a lot of memory, walking all RAM to determine page\ntypes may take a half second or even more.\n\nIn non-blockable contexts, the page allocator will emit a page allocation\nfailure warning unless __GFP_NOWARN is specified.  In such contexts, irqs\nare typically disabled and such a lengthy delay may even result in NMI\nwatchdog timeouts.\n\nTo fix this, suppress the page walk in such contexts when printing the\npage allocation failure warning.\n\nSigned-off-by: David Rientjes \u003crientjes@google.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Xishi Qiu \u003cqiuxishi@huawei.com\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n\n"
    },
    {
      "commit": "e9223e5a30bd5930092ae22692e2cda76c9afd0c",
      "tree": "f45dbdc1ec3a0acd3ba4c3c9995315cdfc8cae00",
      "parents": [
        "0841f631e5cecf7fb08f6ae6c89e47b79dca83cd"
      ],
      "author": {
        "name": "Tomasz Stanislawski",
        "email": "t.stanislaws@samsung.com",
        "time": "Wed Jun 12 21:05:02 2013 +0000"
      },
      "committer": {
        "name": "Srinivasarao P",
        "email": "spathi@codeaurora.org",
        "time": "Fri Sep 27 11:02:59 2013 +0530"
      },
      "message": "mm/page_alloc.c: fix watermark check in __zone_watermark_ok()\n\nThe watermark check consists of two sub-checks.  The first one is:\n\n\tif (free_pages \u003c\u003d min + lowmem_reserve)\n\t\treturn false;\n\nThe check assures that there is minimal amount of RAM in the zone.  If\nCMA is used then the free_pages is reduced by the number of free pages\nin CMA prior to the over-mentioned check.\n\n\tif (!(alloc_flags \u0026 ALLOC_CMA))\n\t\tfree_pages -\u003d zone_page_state(z, NR_FREE_CMA_PAGES);\n\nThis prevents the zone from being drained from pages available for\nnon-movable allocations.\n\nThe second check prevents the zone from getting too fragmented.\n\n\tfor (o \u003d 0; o \u003c order; o++) {\n\t\tfree_pages -\u003d z-\u003efree_area[o].nr_free \u003c\u003c o;\n\t\tmin \u003e\u003e\u003d 1;\n\t\tif (free_pages \u003c\u003d min)\n\t\t\treturn false;\n\t}\n\nThe field z-\u003efree_area[o].nr_free is equal to the number of free pages\nincluding free CMA pages.  Therefore the CMA pages are subtracted twice.\nThis may cause a false positive fail of __zone_watermark_ok() if the CMA\narea gets strongly fragmented.  In such a case there are many 0-order\nfree pages located in CMA.  Those pages are subtracted twice therefore\nthey will quickly drain free_pages during the check against\nfragmentation.  The test fails even though there are many free non-cma\npages in the zone.\n\nThis patch fixes this issue by subtracting CMA pages only for a purpose of\n(free_pages \u003c\u003d min + lowmem_reserve) check.\n\nLaura said:\n\n  We were observing allocation failures of higher order pages (order 5 \u003d\n  128K typically) under tight memory conditions resulting in driver\n  failure.  The output from the page allocation failure showed plenty of\n  free pages of the appropriate order/type/zone and mostly CMA pages in\n  the lower orders.\n\n  For full disclosure, we still observed some page allocation failures\n  even after applying the patch but the number was drastically reduced and\n  those failures were attributed to fragmentation/other system issues.\n\nChange-Id: Ic2c0c233993c41c630e24d71df5e12aa614588e5\nCRs-Fixed:549847\nSigned-off-by: Tomasz Stanislawski \u003ct.stanislaws@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nTested-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nCc: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nAcked-by: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nTested-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: \u003cstable@vger.kernel.org\u003e\t[3.7+]\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nGit-commit: 026b08147923142e925a7d0aaa39038055ae0156\nGit-repo: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git\nSigned-off-by: Srinivasarao P \u003cspathi@codeaurora.org\u003e\n"
    },
    {
      "commit": "19d22ea89933ca48bcb10fc7919ed7bbefd52362",
      "tree": "b47da3af5d3566935e77ee3975e3bd93d515a926",
      "parents": [
        "40a017c96a98a29c9d39bf0ca34651288984e9ce"
      ],
      "author": {
        "name": "Wanpeng Li",
        "email": "liwanp@linux.vnet.ibm.com",
        "time": "Wed Jul 03 15:02:40 2013 -0700"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@linuxfoundation.org",
        "time": "Sun Aug 04 16:26:07 2013 +0800"
      },
      "message": "mm/memory-hotplug: fix lowmem count overflow when offline pages\n\ncommit cea27eb2a202959783f81254c48c250ddd80e129 upstream.\n\nThe logic for the memory-remove code fails to correctly account the\nTotal High Memory when a memory block which contains High Memory is\nofflined as shown in the example below.  The following patch fixes it.\n\nBefore logic memory remove:\n\nMemTotal:        7603740 kB\nMemFree:         6329612 kB\nBuffers:           94352 kB\nCached:           872008 kB\nSwapCached:            0 kB\nActive:           626932 kB\nInactive:         519216 kB\nActive(anon):     180776 kB\nInactive(anon):   222944 kB\nActive(file):     446156 kB\nInactive(file):   296272 kB\nUnevictable:           0 kB\nMlocked:               0 kB\nHighTotal:       7294672 kB\nHighFree:        5704696 kB\nLowTotal:         309068 kB\nLowFree:          624916 kB\n\nAfter logic memory remove:\n\nMemTotal:        7079452 kB\nMemFree:         5805976 kB\nBuffers:           94372 kB\nCached:           872000 kB\nSwapCached:            0 kB\nActive:           626936 kB\nInactive:         519236 kB\nActive(anon):     180780 kB\nInactive(anon):   222944 kB\nActive(file):     446156 kB\nInactive(file):   296292 kB\nUnevictable:           0 kB\nMlocked:               0 kB\nHighTotal:       7294672 kB\nHighFree:        5181024 kB\nLowTotal:       4294752076 kB\nLowFree:          624952 kB\n\n[mhocko@suse.cz: fix CONFIG_HIGHMEM\u003dn build]\nSigned-off-by: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: \u003cstable@vger.kernel.org\u003e\t[2.6.24+]\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Zhouping Liu \u003czliu@redhat.com\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n\n"
    },
    {
      "commit": "6ddc967f09f9e6008b095f039c963e5ab74be383",
      "tree": "b6dc90465e3a243b63bb887f2f5e603005d16ff1",
      "parents": [
        "ec005c0b2a1870b17795e5d6ae97a78a76f1febe"
      ],
      "author": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Tue Feb 12 13:46:24 2013 -0800"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Thu May 02 21:07:30 2013 -0700"
      },
      "message": "mm: cma: fix accounting of CMA pages placed in high memory\n\nThe total number of low memory pages is determined as totalram_pages -\ntotalhigh_pages, so without this patch all CMA pageblocks placed in\nhighmem were accounted to low memory.\n\nChange-Id: I10b78fa6a710828520a487b2fc2419b4f7521a6f\nCRs-Fixed: 480377\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nAcked-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\nGit-commit: 41a7973447b0b8717f0a214d4328dc31ec2291d7\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "ec005c0b2a1870b17795e5d6ae97a78a76f1febe",
      "tree": "7b2884cb0e5b25a8eb4cc3262875457133594e43",
      "parents": [
        "5e6e44d63c5ccdfffa2709b4924256ccdc937209"
      ],
      "author": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Tue Dec 11 16:02:59 2012 -0800"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Thu May 02 21:07:20 2013 -0700"
      },
      "message": "mm: cma: remove watermark hacks\n\nCommits 2139cbe627b8 (\"cma: fix counting of isolated pages\") and\nd95ea5d18e69 (\"cma: fix watermark checking\") introduced a reliable\nmethod of free page accounting when memory is being allocated from CMA\nregions, so the workaround introduced earlier by commit 49f223a9cd96\n(\"mm: trigger page reclaim in alloc_contig_range() to stabilise\nwatermarks\") can be finally removed.\n\nChange-Id: Iae17de8185eeabffd46752dbaf819591e6585869\nCRs-Fixed: 480377\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Arnd Bergmann \u003carnd@arndb.de\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nGit-commit: bc357f431c836c6631751e3ef7dfe7882394ad67\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\n[lauraa@codeaurora.org: Context fixup in mmzone.h, keep zone definition in\nalloc_contig_range for other purposes]\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "15962c2598f71e0fed921ae441fede5df2bb9e1c",
      "tree": "a8ce44efcbe6d4bd71e2913b8e46be7697cd7caa",
      "parents": [
        "366e0f00106958cf80e0c4a4c664c02562568525"
      ],
      "author": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Fri Apr 05 12:39:18 2013 -0700"
      },
      "committer": {
        "name": "Gerrit - the friendly Code Review server",
        "email": "code-review@localhost",
        "time": "Mon Apr 08 14:49:42 2013 -0700"
      },
      "message": "mm: Retry original migrate type if CMA failed\n\nCurrently, __rmqueue_cma will disregard the original migrate type\nand only try MIGRATE_CMA for allocations. If the MIGRATE_CMA\nallocation fails, the fallback types of the original migrate type\nare used. Note that in this current path we never try to actually\nallocate from the original migrate type. If the only pages left\nin the system are the original migrate type, we will fail the\nallocation since we never actually try the original migrate type.\nThis may lead to infinite looping since the system still (correctly)\ncalculates there are pages available for allocation and will keep\ntrying to allocate pages. Fix this degenerate case by allocating\nfrom the original migrate type if the MIGRATE_CMA allocation fails.\n\nChange-Id: I62ab293dc694955eaf88e790131a8565395ba8cb\nCRs-Fixed: 470615\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "17c2f96bd1de5999900303b3eb923b8cdf6ee2ba",
      "tree": "f6ba7eb0b52708858ab17225f83b5d738bebba5e",
      "parents": [
        "5c444ede11d516377ec6a3230d450441f46cfb4f"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Feb 18 09:58:02 2013 -0800"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@linuxfoundation.org",
        "time": "Thu Feb 28 06:58:58 2013 -0800"
      },
      "message": "mm: fix pageblock bitmap allocation\n\ncommit 7c45512df987c5619db041b5c9b80d281e26d3db upstream.\n\nCommit c060f943d092 (\"mm: use aligned zone start for pfn_to_bitidx\ncalculation\") fixed out calculation of the index into the pageblock\nbitmap when a !SPARSEMEM zome was not aligned to pageblock_nr_pages.\n\nHowever, the _allocation_ of that bitmap had never taken this alignment\nrequirement into accout, so depending on the exact size and alignment of\nthe zone, the use of that index could then access past the allocation,\nresulting in some very subtle memory corruption.\n\nThis was reported (and bisected) by Ingo Molnar: one of his random\nconfig builds would hang with certain very specific kernel command line\noptions.\n\nIn the meantime, commit c060f943d092 has been marked for stable, so this\nfix needs to be back-ported to the stable kernels that backported the\ncommit to use the right alignment.\n\nBisected-and-tested-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n\n"
    },
    {
      "commit": "48f37419dd547478aa46e9898dba65f8a6cfe0a1",
      "tree": "8bd5eb745415feb736c4f5525a8cee682671cbff",
      "parents": [
        "55eb56c75771a81d99e6a70af688f77cb38ff729"
      ],
      "author": {
        "name": "Heesub Shin",
        "email": "heesub.shin@samsung.com",
        "time": "Mon Jan 07 11:10:13 2013 +0900"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Wed Feb 20 14:40:26 2013 -0800"
      },
      "message": "cma: redirect page allocation to CMA\n\nCMA pages are designed to be used as fallback for movable allocations\nand cannot be used for non-movable allocations. If CMA pages are\nutilized poorly, non-movable allocations may end up getting starved if\nall regular movable pages are allocated and the only pages left are\nCMA. Always using CMA pages first creates unacceptable performance\nproblems. As a midway alternative, use CMA pages for certain\nuserspace allocations. The userspace pages can be migrated or dropped\nquickly which giving decent utilization.\n\nChange-Id: I6165dda01b705309eebabc6dfa67146b7a95c174\nCRs-Fixed: 452508\n[lauraa@codeaurora.org: Missing CONFIG_CMA guards, add commit text]\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "55eb56c75771a81d99e6a70af688f77cb38ff729",
      "tree": "ef6fc63d0d0fed345b0f8f93f19312fce6ac19e4",
      "parents": [
        "1ace3155a4baf5a959ce5c372c99a286d7ec2de2"
      ],
      "author": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Mon Feb 18 07:17:06 2013 -0800"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Wed Feb 20 14:40:08 2013 -0800"
      },
      "message": "Revert \"mm: cma: on movable allocations try MIGRATE_CMA first\"\n\nThis reverts commit b5662d64fa5ee483b985b351dec993402422fee3.\n\nUsing CMA pages first creates good utilization but has some\nunfortunate side effects. Many movable allocations come from\nthe filesystem layer which can hold on to pages for long periods\nof time which causes high allocation times (~200ms) and high\nrates of failure. Revert this patch and use alternate allocation\nstrategies to get better utilization.\n\nChange-Id: I917e137d5fb292c9f8282506f71a799a6451ccfa\nCRs-Fixed: 452508\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "1ace3155a4baf5a959ce5c372c99a286d7ec2de2",
      "tree": "dd39d97b1f226c4fadd71e7271f9fe50036cd90d",
      "parents": [
        "4777a5ac5cfe6001027c3781808bfcc10c431d15"
      ],
      "author": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Tue Feb 12 13:30:04 2013 -0800"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Wed Feb 20 14:39:41 2013 -0800"
      },
      "message": "mm: Don\u0027t put CMA pages on per cpu lists\n\nCMA allocations rely on being able to migrate pages out\nquickly to fulfill the allocations. Most use cases for\nmovable allocations meet this requirement. File system\nallocations may take an unaccpetably long time to\nmigrate, which creates delays from CMA. Prevent CMA\npages from ending up on the per-cpu lists to avoid\ncode paths grabbing CMA pages on the fast path. CMA\npages can still be allocated as a fallback under tight\nmemory pressure.\n\nCRs-Fixed: 452508\nChange-Id: I79a28f697275a2a1870caabae53c8ea345b4b47d\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "850bf199320393f313bcdeda21eab85b3777efb9",
      "tree": "d5e52f60cc70eea92fe841c92543531578ed49b3",
      "parents": [
        "7ea4328ba50cffa9e8c4452f0318f22f34444177"
      ],
      "author": {
        "name": "Bartlomiej Zolnierkiewicz",
        "email": "b.zolnierkie@samsung.com",
        "time": "Mon Oct 08 16:32:05 2012 -0700"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Sun Feb 03 08:14:15 2013 -0800"
      },
      "message": "cma: fix watermark checking\n\n* Add ALLOC_CMA alloc flag and pass it to [__]zone_watermark_ok()\n  (from Minchan Kim).\n\n* During watermark check decrease available free pages number by\n  free CMA pages number if necessary (unmovable allocations cannot\n  use pages from CMA areas).\n\nCRs-Fixed: 446321\nChange-Id: Ibd069b028eb80b70701c1b81cb28a503d8265be0\nSigned-off-by: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n[lauraa@codeaurora.org: context fixups]\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "b70fa754f45acaf4ccf453978e415f5d0ae46d22",
      "tree": "70159ad1c99004fae119d536ff83df067b8c5307",
      "parents": [
        "b9c24b258289287f8c3f649d299d48920757dcc5"
      ],
      "author": {
        "name": "Rabin Vincent",
        "email": "rabin.vincent@stericsson.com",
        "time": "Tue Dec 11 16:00:24 2012 -0800"
      },
      "committer": {
        "name": "Sudhir Sharma",
        "email": "sudsha@codeaurora.org",
        "time": "Fri Jan 18 01:07:25 2013 -0800"
      },
      "message": "mm: show migration types in show_mem\n\nThis is useful to diagnose the reason for page allocation failure for\ncases where there appear to be several free pages.\n\nExample, with this alloc_pages(GFP_ATOMIC) failure:\n\n swapper/0: page allocation failure: order:0, mode:0x0\n ...\n Mem-info:\n Normal per-cpu:\n CPU    0: hi:   90, btch:  15 usd:  48\n CPU    1: hi:   90, btch:  15 usd:  21\n active_anon:0 inactive_anon:0 isolated_anon:0\n  active_file:0 inactive_file:84 isolated_file:0\n  unevictable:0 dirty:0 writeback:0 unstable:0\n  free:4026 slab_reclaimable:75 slab_unreclaimable:484\n  mapped:0 shmem:0 pagetables:0 bounce:0\n Normal free:16104kB min:2296kB low:2868kB high:3444kB active_anon:0kB\n inactive_anon:0kB active_file:0kB inactive_file:336kB unevictable:0kB\n isolated(anon):0kB isolated(file):0kB present:331776kB mlocked:0kB\n dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:300kB\n slab_unreclaimable:1936kB kernel_stack:328kB pagetables:0kB unstable:0kB\n bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no\n lowmem_reserve[]: 0 0\n\nBefore the patch, it\u0027s hard (for me, at least) to say why all these free\nchunks weren\u0027t considered for allocation:\n\n Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 1*256kB 1*512kB\n 1*1024kB 1*2048kB 3*4096kB \u003d 16128kB\n\nAfter the patch, it\u0027s obvious that the reason is that all of these are\nin the MIGRATE_CMA (C) freelist:\n\n Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 1*256kB (C) 1*512kB\n (C) 1*1024kB (C) 1*2048kB (C) 3*4096kB (C) \u003d 16128kB\n\nChange-Id: Ic5fe77d762e0c03715bfb917774e7c4f03ac43f5\nSigned-off-by: Rabin Vincent \u003crabin.vincent@stericsson.com\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "b9c24b258289287f8c3f649d299d48920757dcc5",
      "tree": "7d0d3f3ab08b5fc6cbee8d416592a9e4df0f4295",
      "parents": [
        "13f0fb1b9b6ec96b03ede07393d7efa19de53527"
      ],
      "author": {
        "name": "Michal Nazarewicz",
        "email": "mina86@mina86.com",
        "time": "Tue Nov 20 16:37:50 2012 +0100"
      },
      "committer": {
        "name": "Sudhir Sharma",
        "email": "sudsha@codeaurora.org",
        "time": "Fri Jan 18 01:06:32 2013 -0800"
      },
      "message": "mm: cma: on movable allocations try MIGRATE_CMA first\n\nIt has been observed that system tends to keep a lot of CMA free pages\neven in very high memory pressure use cases.  The CMA fallback for\nmovable pages is used very rarely, only when system is completely\npruned from MOVABLE pages.  This means that the out-of-memory is\ntriggered for unmovable allocations even when there are many CMA pages\navailable.  This problem was not observed previously since movable\npages were used as a fallback for unmovable allocations.\n\nTo avoid such situation this commit changes the allocation order so\nthat on movable allocations the MIGRATE_CMA pageblocks are used first.\n\nThis change means that the MIGRATE_CMA can be removed from fallback\npath of the MIGRATE_MOVABLE type.  This means that the\n__rmqueue_fallback() function will never deal with CMA pages and thus\nall the checks around MIGRATE_CMA can be removed from that function.\n\nChange-Id: Ie13312d62a6af12d7aa78b4283ed25535a6d49fd\nCRs-Fixed: 435287\nSigned-off-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nReported-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nCc: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "ce19422296fb5fb5997d81e1b0043d771def7996",
      "tree": "92b8292798661ad6d3d4e4e4a8649949b30872cf",
      "parents": [
        "c0b96525363543a1ba6a277546ebc26ad9a53aa1"
      ],
      "author": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Fri Jan 11 14:31:51 2013 -0800"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@linuxfoundation.org",
        "time": "Thu Jan 17 08:50:43 2013 -0800"
      },
      "message": "mm: use aligned zone start for pfn_to_bitidx calculation\n\ncommit c060f943d0929f3e429c5d9522290584f6281d6e upstream.\n\nThe current calculation in pfn_to_bitidx assumes that (pfn -\nzone-\u003ezone_start_pfn) \u003e\u003e pageblock_order will return the same bit for\nall pfn in a pageblock.  If zone_start_pfn is not aligned to\npageblock_nr_pages, this may not always be correct.\n\nConsider the following with pageblock order \u003d 10, zone start 2MB:\n\n  pfn     | pfn - zone start | (pfn - zone start) \u003e\u003e page block order\n  ----------------------------------------------------------------\n  0x26000 | 0x25e00\t   |  0x97\n  0x26100 | 0x25f00\t   |  0x97\n  0x26200 | 0x26000\t   |  0x98\n  0x26300 | 0x26100\t   |  0x98\n\nThis means that calling {get,set}_pageblock_migratetype on a single page\nwill not set the migratetype for the full block.  Fix this by rounding\ndown zone_start_pfn when doing the bitidx calculation.\n\nFor our use case, the effects of this bug were mostly tied to the fact\nthat CMA allocations would either take a long time or fail to happen.\nDepending on the driver using CMA, this could result in anything from\nvisual glitches to application failures.\n\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n\n"
    },
    {
      "commit": "e8eebe90fb591b76950eff0238bf9b15717e02cc",
      "tree": "e10e9ef030b746dbf2f505685fa9bb71336e5494",
      "parents": [
        "a88184417fb27c2f5482eb98d37c7f3664508d0c"
      ],
      "author": {
        "name": "Liam Mark",
        "email": "lmark@codeaurora.org",
        "time": "Fri Jan 04 09:40:11 2013 -0800"
      },
      "committer": {
        "name": "Liam Mark",
        "email": "lmark@codeaurora.org",
        "time": "Fri Jan 04 10:09:06 2013 -0800"
      },
      "message": "android/lowmemorykiller: Selectively count free CMA pages\n\nIn certain memory configurations there can be a large number of\nCMA pages which are not suitable to satisfy certain memory\nrequests.\n\nThis large number of unsuitable pages can cause the\nlowmemorykiller to not kill any tasks because the\nlowmemorykiller counts all free pages.\nIn order to ensure the lowmemorykiller properly evaluates the\nfree memory only count the free CMA pages if they are suitable\nfor satisfying the memory request.\n\nChange-Id: I7f06d53e2d8cfe7439e5561fe6e5209ce73b1c90\nCRs-fixed: 437016\nSigned-off-by: Liam Mark \u003clmark@codeaurora.org\u003e\n"
    },
    {
      "commit": "922b8422857516bf67a760a398327a96fbbfa6a9",
      "tree": "693ead65bf39d2c97f122257b390a570869c3da2",
      "parents": [
        "13afc1a418378a97ba455122c6f6f79e1048c5cb"
      ],
      "author": {
        "name": "Larry Bassel",
        "email": "lbassel@codeaurora.org",
        "time": "Fri Dec 14 14:21:05 2012 -0800"
      },
      "committer": {
        "name": "Neha Pandey",
        "email": "nehap@codeaurora.org",
        "time": "Thu Dec 27 14:11:13 2012 -0800"
      },
      "message": "mm: make counts of CMA free pages correct\n\nBoth patches needed, second patch (among other things) fixes\na bug in the first.\n\ncommit 2139cbe627b8910ded55148f87ee10f7485408ed\nAuthor: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nDate:   Mon Oct 8 16:32:00 2012 -0700\n\n    cma: fix counting of isolated pages\n\n    Isolated free pages shouldn\u0027t be accounted to NR_FREE_PAGES counter.  Fix\n    it by properly decreasing/increasing NR_FREE_PAGES counter in\n    set_migratetype_isolate()/unset_migratetype_isolate() and removing counter\n    adjustment for isolated pages from free_one_page() and split_free_page().\n\n    Signed-off-by: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\n    Signed-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\n    Cc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\n    Cc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\n    Cc: Minchan Kim \u003cminchan@kernel.org\u003e\n    Cc: Mel Gorman \u003cmgorman@suse.de\u003e\n    Cc: Hugh Dickins \u003chughd@google.com\u003e\n    Signed-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n    Signed-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n    [lbassel@codeaurora.org: backport from 3.7, small changes needed]\n    Signed-off-by: Larry Bassel \u003clbassel@codeaurora.org\u003e\n\ncommit d1ce749a0db12202b711d1aba1d29e823034648d\nAuthor: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\nDate:   Mon Oct 8 16:32:02 2012 -0700\n\n    cma: count free CMA pages\n\n    Add NR_FREE_CMA_PAGES counter to be later used for checking watermark in\n    __zone_watermark_ok().  For simplicity and to avoid #ifdef hell make this\n    counter always available (not only when CONFIG_CMA\u003dy).\n\n    [akpm@linux-foundation.org: use conventional migratetype naming]\n    Signed-off-by: Bartlomiej Zolnierkiewicz \u003cb.zolnierkie@samsung.com\u003e\n    Signed-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\n    Cc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\n    Cc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\n    Cc: Minchan Kim \u003cminchan@kernel.org\u003e\n    Cc: Mel Gorman \u003cmgorman@suse.de\u003e\n    Cc: Hugh Dickins \u003chughd@google.com\u003e\n    Signed-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n    Signed-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n    [lbassel@codeaurora.org: backport from 3.7, small changes needed]\n    Signed-off-by: Larry Bassel \u003clbassel@codeaurora.org\u003e\n\nChange-Id: I7d4f5fe0b6931192706337e0b730f43e7cccd031\nSigned-off-by: Larry Bassel \u003clbassel@codeaurora.org\u003e\nSigned-off-by: Neha Pandey \u003cnehap@codeaurora.org\u003e\n"
    },
    {
      "commit": "70b7280939672e434a6e2536a9a36f4dafc17934",
      "tree": "8e1632a6ea5252d7ce556de3be828f2606c4d345",
      "parents": [
        "afb2a33921982599d247a46b2306c75a4b69064b"
      ],
      "author": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Fri Nov 30 14:07:01 2012 -0800"
      },
      "committer": {
        "name": "Mitchel Humpherys",
        "email": "mitchelh@codeaurora.org",
        "time": "Tue Dec 11 21:45:17 2012 -0800"
      },
      "message": "mm: Use aligned zone start for pfn_to_bitidx calculation\n\nThe current calculation in pfn_to_bitidx assumes that\n(pfn - zone-\u003ezone_start_pfn) \u003e\u003e pageblock_order will return the\nsame bit for all pfn in a pageblock. If zone_start_pfn is not\naligned to pageblock_nr_pages, this may not always be correct.\n\nConsider the following with pageblock order \u003d 10, zone start 2MB:\n\npfn     | pfn - zone start | (pfn - zone start) \u003e\u003e page block order\n----------------------------------------------------------------\n0x26000 | 0x25e00\t   |  0x97\n0x26100 | 0x25f00\t   |  0x97\n0x26200 | 0x26000\t   |  0x98\n0x26300 | 0x26100\t   |  0x98\n\nThis means that calling {get,set}_pageblock_migratetype on a single\npage will not set the migratetype for the full block. Fix this by\nrounding down zone_start_pfn when doing the bitidx calculation.\n\nChange-Id: I13e2f53f50db294f38ec86138c17c6fe29f0ee82\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nSigned-off-by: Mitchel Humpherys \u003cmitchelh@codeaurora.org\u003e\n"
    },
    {
      "commit": "d3c11452319e8e958cc7485fa78d7737429323dd",
      "tree": "4f25f8d653217a3f61532e99187f4e8543c5f1ff",
      "parents": [
        "c21a803d13f8f03d10a4b70e80408b99645d9efa"
      ],
      "author": {
        "name": "Minchan Kim",
        "email": "minchan.kim@gmail.com",
        "time": "Fri May 11 09:37:13 2012 +0200"
      },
      "committer": {
        "name": "Mitchel Humpherys",
        "email": "mitchelh@codeaurora.org",
        "time": "Tue Dec 11 21:45:15 2012 -0800"
      },
      "message": "cma: fix migration mode\n\n__alloc_contig_migrate_range calls migrate_pages with wrong argument\nfor migrate_mode. Fix it.\n\nChange-Id: I84697cf7c6aef6253e9ee7e5b3028c946b95e253\nCc: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Minchan Kim \u003cminchan@kernel.org\u003e\nAcked-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nSigned-off-by: Mitchel Humpherys \u003cmitchelh@codeaurora.org\u003e\n"
    },
    {
      "commit": "c21a803d13f8f03d10a4b70e80408b99645d9efa",
      "tree": "a2d7da6a22e64724de6eb6ccb526ff23eb83c052",
      "parents": [
        "926c5240503f037ba0d5f45b6673487a4ec6bd18"
      ],
      "author": {
        "name": "woojoong.lee",
        "email": "woojoong.lee@samsung.com",
        "time": "Mon Dec 03 17:58:43 2012 -0800"
      },
      "committer": {
        "name": "Mitchel Humpherys",
        "email": "mitchelh@codeaurora.org",
        "time": "Tue Dec 11 21:45:15 2012 -0800"
      },
      "message": "cma : use migrate_prep() instead of migrate_prep_local()\n\n__alloc_contig_migrate_range() should use all possible ways to get all the\npages migrated from the given memory range, so pruning per-cpu lru lists\nfor all CPUs is required, regadless the cost of such operation. Otherwise\nsome pages which got stuck at per-cpu lru list might get missed by\nmigration procedure causing the contiguous allocation to fail.\n\nChange-Id: I70cc0864c57dd49e89f57797122a3fd0f300647a\nSigned-off-by: woojoong.lee \u003cwoojoong.lee@samsung.com\u003e\nReviewed-on: http://165.213.202.130:8080/43063\nTested-by: System S/W SCM \u003cscm.systemsw@samsung.com\u003e\nReviewed-by: daeho jeong \u003cdaeho.jeong@samsung.com\u003e\nReviewed-by: Jeong-Ho Kim \u003cjammer@samsung.com\u003e\nTested-by: Jeong-Ho Kim \u003cjammer@samsung.com\u003e\n[lauraa@codeaurora.org: Applied to correct file]\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nSigned-off-by: Mitchel Humpherys \u003cmitchelh@codeaurora.org\u003e\n"
    },
    {
      "commit": "926c5240503f037ba0d5f45b6673487a4ec6bd18",
      "tree": "a05055556ef818570aec12318cc677e145d485ca",
      "parents": [
        "2dc518e5ebceb877016e7189b27e5506cb67c2ed"
      ],
      "author": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Tue Nov 27 10:17:24 2012 -0800"
      },
      "committer": {
        "name": "Mitchel Humpherys",
        "email": "mitchelh@codeaurora.org",
        "time": "Tue Dec 11 21:45:14 2012 -0800"
      },
      "message": "mm: Add is_cma_pageblock definition\n\nBring back the is_cma_pageblock definition for determining if a\npage is CMA or not.\n\nChange-Id: I39fd546e22e240b752244832c79514f109c8e84b\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nSigned-off-by: Mitchel Humpherys \u003cmitchelh@codeaurora.org\u003e\n"
    },
    {
      "commit": "2dc518e5ebceb877016e7189b27e5506cb67c2ed",
      "tree": "16cbe592362825a2c1586a8db4ae324e7dcaf4f1",
      "parents": [
        "039f2980c4f82d1b1c3e174ceaa56d514caffe35"
      ],
      "author": {
        "name": "Liam Mark",
        "email": "lmark@codeaurora.org",
        "time": "Tue Nov 27 18:49:58 2012 -0800"
      },
      "committer": {
        "name": "Mitchel Humpherys",
        "email": "mitchelh@codeaurora.org",
        "time": "Tue Dec 11 21:45:13 2012 -0800"
      },
      "message": "mm: split_free_page ignore memory watermarks for CMA\n\nMemory watermarks were sometimes preventing CMA allocations\nin low memory.\n\nChange-Id: I550ec987cbd6bc6dadd72b4a764df20cd0758479\nSigned-off-by: Liam Mark \u003clmark@codeaurora.org\u003e\nSigned-off-by: Mitchel Humpherys \u003cmitchelh@codeaurora.org\u003e\n"
    },
    {
      "commit": "aa7994f281a5e705b5f9cb13b3219fc346263872",
      "tree": "bcb197b1091bfd854890418b2aebd92c077b9a2f",
      "parents": [
        "a4dd7e6c27a37237f09d437a515a3330093d4f70"
      ],
      "author": {
        "name": "Li Haifeng",
        "email": "omycle@gmail.com",
        "time": "Mon Sep 17 14:09:21 2012 -0700"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@linuxfoundation.org",
        "time": "Tue Oct 02 10:30:05 2012 -0700"
      },
      "message": "mm/page_alloc: fix the page address of higher page\u0027s buddy calculation\n\ncommit 0ba8f2d59304dfe69b59c034de723ad80f7ab9ac upstream.\n\nThe heuristic method for buddy has been introduced since commit\n43506fad21ca (\"mm/page_alloc.c: simplify calculation of combined index\nof adjacent buddy lists\").  But the page address of higher page\u0027s buddy\nwas wrongly calculated, which will lead page_is_buddy to fail for ever.\nIOW, the heuristic method would be disabled with the wrong page address\nof higher page\u0027s buddy.\n\nCalculating the page address of higher page\u0027s buddy should be based\nhigher_page with the offset between index of higher page and index of\nhigher page\u0027s buddy.\n\nSigned-off-by: Haifeng Li \u003comycle@gmail.com\u003e\nSigned-off-by: Gavin Shan \u003cshangw@linux.vnet.ibm.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: KyongHo Cho \u003cpullip.cho@samsung.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Johannes Weiner \u003cjweiner@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n\n"
    },
    {
      "commit": "fbfcd6fa25e08b96f28527da696a7b3250588df2",
      "tree": "8f2bf0cf48ef4ebe9fd1c81835fdd2103c73c0e3",
      "parents": [
        "2e781cd43f310a482c7d651a9d38de3459967fba"
      ],
      "author": {
        "name": "Rabin Vincent",
        "email": "rabin@rab.in",
        "time": "Thu Jul 05 15:52:23 2012 +0530"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Mon Jul 23 14:48:17 2012 -0700"
      },
      "message": "mm: cma: don\u0027t replace lowmem pages with highmem\n\nThe filesystem layer expects pages in the block device\u0027s mapping to not\nbe in highmem (the mapping\u0027s gfp mask is set in bdget()), but CMA can\ncurrently replace lowmem pages with highmem pages, leading to crashes in\nfilesystem code such as the one below:\n\n  Unable to handle kernel NULL pointer dereference at virtual address 00000400\n  pgd \u003d c0c98000\n  [00000400] *pgd\u003d00c91831, *pte\u003d00000000, *ppte\u003d00000000\n  Internal error: Oops: 817 [#1] PREEMPT SMP ARM\n  CPU: 0    Not tainted  (3.5.0-rc5+ #80)\n  PC is at __memzero+0x24/0x80\n  ...\n  Process fsstress (pid: 323, stack limit \u003d 0xc0cbc2f0)\n  Backtrace:\n  [\u003cc010e3f0\u003e] (ext4_getblk+0x0/0x180) from [\u003cc010e58c\u003e] (ext4_bread+0x1c/0x98)\n  [\u003cc010e570\u003e] (ext4_bread+0x0/0x98) from [\u003cc0117944\u003e] (ext4_mkdir+0x160/0x3bc)\n   r4:c15337f0\n  [\u003cc01177e4\u003e] (ext4_mkdir+0x0/0x3bc) from [\u003cc00c29e0\u003e] (vfs_mkdir+0x8c/0x98)\n  [\u003cc00c2954\u003e] (vfs_mkdir+0x0/0x98) from [\u003cc00c2a60\u003e] (sys_mkdirat+0x74/0xac)\n   r6:00000000 r5:c152eb40 r4:000001ff r3:c14b43f0\n  [\u003cc00c29ec\u003e] (sys_mkdirat+0x0/0xac) from [\u003cc00c2ab8\u003e] (sys_mkdir+0x20/0x24)\n   r6:beccdcf0 r5:00074000 r4:beccdbbc\n  [\u003cc00c2a98\u003e] (sys_mkdir+0x0/0x24) from [\u003cc000e3c0\u003e] (ret_fast_syscall+0x0/0x30)\n\nFix this by replacing only highmem pages with highmem.\n\nChange-Id: I6af2d509af48b5a586037be14bd3593b3f269d95\nReported-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\nSigned-off-by: Rabin Vincent \u003crabin@rab.in\u003e\nAcked-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "f1f63887812a871f4a37842008bab4bdb7560011",
      "tree": "add9bb8720eeeace0bf747cb5efe06d7ed0b82d4",
      "parents": [
        "1ed36511162ddf23ccf3a834721c4fcfde8c9ff1"
      ],
      "author": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Wed Jan 25 12:49:24 2012 +0100"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Fri Jun 29 16:06:50 2012 -0700"
      },
      "message": "mm: trigger page reclaim in alloc_contig_range() to stabilise watermarks\n\nalloc_contig_range() performs memory allocation so it also should keep\ntrack on keeping the correct level of memory watermarks. This commit adds\na call to *_slowpath style reclaim to grab enough pages to make sure that\nthe final collection of contiguous pages from freelists will not starve\nthe system.\n\nChange-Id: I2d68d9ac2cfcd32ca6f515fc7e44e8d9d850dff1\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCC: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "1ed36511162ddf23ccf3a834721c4fcfde8c9ff1",
      "tree": "3bc88974f90f8a7319d9b15fe3684327293e6886",
      "parents": [
        "e12aade2adf3c56764123c51af35ea74b221c4f1"
      ],
      "author": {
        "name": "Marek Szyprowski",
        "email": "m.szyprowski@samsung.com",
        "time": "Wed Jan 25 12:09:52 2012 +0100"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Fri Jun 29 16:06:49 2012 -0700"
      },
      "message": "mm: extract reclaim code from __alloc_pages_direct_reclaim()\n\nThis patch extracts common reclaim code from __alloc_pages_direct_reclaim()\nfunction to separate function: __perform_reclaim() which can be later used\nby alloc_contig_range().\n\nChange-Id: Ia9d8b82018d91dc669488955b20f69f1cba43147\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nCc: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "e12aade2adf3c56764123c51af35ea74b221c4f1",
      "tree": "7dabac1a6fc0b947e77c8d6a03f9d28fa7f2d32e",
      "parents": [
        "c80cd92bb7b152912b8d4487c18d040e1b4266b0"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Apr 25 21:36:42 2011 +0000"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Fri Jun 29 16:06:49 2012 -0700"
      },
      "message": "mm: Serialize access to min_free_kbytes\n\nThere is a race between the min_free_kbytes sysctl, memory hotplug\nand transparent hugepage support enablement.  Memory hotplug uses a\nzonelists_mutex to avoid a race when building zonelists. Reuse it to\nserialise watermark updates.\n\nChange-Id: I31786592a8cc03e579ee01d99d7eba76e926263f\n[a.p.zijlstra@chello.nl: Older patch fixed the race with spinlock]\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "c80cd92bb7b152912b8d4487c18d040e1b4266b0",
      "tree": "1ec5588773c691c7d65c73bc8f8f946470cc3a2e",
      "parents": [
        "1d22dfa0b050d09fc589af90b8d35f32a5dc3056"
      ],
      "author": {
        "name": "Michal Nazarewicz",
        "email": "mina86@mina86.com",
        "time": "Tue Apr 03 15:06:15 2012 +0200"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Fri Jun 29 16:06:48 2012 -0700"
      },
      "message": "mm: page_isolation: MIGRATE_CMA isolation functions added\n\nThis commit changes various functions that change pages and\npageblocks migrate type between MIGRATE_ISOLATE and\nMIGRATE_MOVABLE in such a way as to allow to work with\nMIGRATE_CMA migrate type.\n\nChange-Id: Ib3a0b04cae49396b206a39bfced470e218ab1f90\nSigned-off-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "d4158d253a1993af1faf103575045350313caeda",
      "tree": "531d46fbfc6a9d06c3cdd7f36ddc6dd9620efd93",
      "parents": [
        "c6114bf2a0c067ee77ce240e82694c88a4f31a7e"
      ],
      "author": {
        "name": "Michal Nazarewicz",
        "email": "mina86@mina86.com",
        "time": "Thu Dec 29 13:09:50 2011 +0100"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Tue Jun 26 10:32:20 2012 -0700"
      },
      "message": "mm: mmzone: MIGRATE_CMA migration type added\n\nThe MIGRATE_CMA migration type has two main characteristics:\n(i) only movable pages can be allocated from MIGRATE_CMA\npageblocks and (ii) page allocator will never change migration\ntype of MIGRATE_CMA pageblocks.\n\nThis guarantees (to some degree) that page in a MIGRATE_CMA page\nblock can always be migrated somewhere else (unless there\u0027s no\nmemory left in the system).\n\nIt is designed to be used for allocating big chunks (eg. 10MiB)\nof physically contiguous memory.  Once driver requests\ncontiguous memory, pages from MIGRATE_CMA pageblocks may be\nmigrated away to create a contiguous block.\n\nTo minimise number of migrations, MIGRATE_CMA migration type\nis the last type tried when page allocator falls back to other\nmigration types when requested.\n\nChange-Id: I2bb0954de8be4f212b03dea0e5a508048684bda2\nSigned-off-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nSigned-off-by: Kyungmin Park \u003ckyungmin.park@samsung.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "c6114bf2a0c067ee77ce240e82694c88a4f31a7e",
      "tree": "c889aaa7a6dc15362538105501e22a751b125a72",
      "parents": [
        "4c1ff37656a55618a00d3462bd1631f485a3ee8c"
      ],
      "author": {
        "name": "Michal Nazarewicz",
        "email": "mina86@mina86.com",
        "time": "Wed Jan 11 15:31:33 2012 +0100"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Tue Jun 26 10:32:20 2012 -0700"
      },
      "message": "mm: page_alloc: change fallbacks array handling\n\nThis commit adds a row for MIGRATE_ISOLATE type to the fallbacks array\nwhich was missing from it.  It also, changes the array traversal logic\na little making MIGRATE_RESERVE an end marker.  The letter change,\nremoves the implicit MIGRATE_UNMOVABLE from the end of each row which\nwas read by __rmqueue_fallback() function.\n\nChange-Id: Icdbbebb9eece2468c0b963964be9a4c579cbc775\nSigned-off-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "4c1ff37656a55618a00d3462bd1631f485a3ee8c",
      "tree": "3fa7d31768f8a104bf4a1e0ae376e0a992961524",
      "parents": [
        "02ff1deb01f25ce3cdedb52d852974d0d2afe406"
      ],
      "author": {
        "name": "Michal Nazarewicz",
        "email": "mina86@mina86.com",
        "time": "Thu Dec 29 13:09:50 2011 +0100"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Tue Jun 26 10:32:20 2012 -0700"
      },
      "message": "mm: page_alloc: introduce alloc_contig_range()\n\nThis commit adds the alloc_contig_range() function which tries\nto allocate given range of pages.  It tries to migrate all\nalready allocated pages that fall in the range thus freeing them.\nOnce all pages in the range are freed they are removed from the\nbuddy system thus allocated for the caller to use.\n\nChange-Id: I659b133b1c9991568bfb6bd09c7792e15f2a2bfb\nSigned-off-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nTested-by: Rob Clark \u003crob.clark@linaro.org\u003e\nTested-by: Ohad Ben-Cohen \u003cohad@wizery.com\u003e\nTested-by: Benjamin Gaignard \u003cbenjamin.gaignard@linaro.org\u003e\nTested-by: Robert Nelson \u003crobertcnelson@gmail.com\u003e\nTested-by: Barry Song \u003cBaohua.Song@csr.com\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "adc95fd67db3a93510c858f558f2e52d7571e1db",
      "tree": "99764741d97c21f1d13d7df4fe46d2efbef5aa63",
      "parents": [
        "63e6e20a39c7636d81cd1912ccb114ed9830eeaa"
      ],
      "author": {
        "name": "Michal Nazarewicz",
        "email": "mina86@mina86.com",
        "time": "Wed Jan 11 15:16:11 2012 +0100"
      },
      "committer": {
        "name": "Laura Abbott",
        "email": "lauraa@codeaurora.org",
        "time": "Tue Jun 26 10:32:19 2012 -0700"
      },
      "message": "mm: page_alloc: remove trailing whitespace\n\nChange-Id: I1f112fa3be958d1f9d24ebd076ef4ddcf91fe868\nSigned-off-by: Michal Nazarewicz \u003cmina86@mina86.com\u003e\nSigned-off-by: Marek Szyprowski \u003cm.szyprowski@samsung.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Laura Abbott \u003clauraa@codeaurora.org\u003e\n"
    },
    {
      "commit": "5310349dc7cd8a900bb33934df103e86c65baea0",
      "tree": "171a74fa7f29fafd235751cb77b3d21c6874b3cc",
      "parents": [
        "26d45460e9699e041a399713f1524ceecf3df200"
      ],
      "author": {
        "name": "Shashank Mittal",
        "email": "mittals@codeaurora.org",
        "time": "Tue Jun 19 19:45:35 2012 -0700"
      },
      "committer": {
        "name": "Shashank Mittal",
        "email": "mittals@codeaurora.org",
        "time": "Tue Jun 19 19:51:08 2012 -0700"
      },
      "message": "mm: Fix a compiler warning.\n\nFix compiler warning for a variable not initialized.\n\nChange-Id: Ieedeb1cfb5a22eb5f671e6bfd1361315347a49af\nSigned-off-by: Shashank Mittal \u003cmittals@codeaurora.org\u003e\n"
    },
    {
      "commit": "f132c6cf77251e011e1dad0ec88c0b1fda16d5aa",
      "tree": "f04b469a3547a19b7bdbe110adc571eb71c93328",
      "parents": [
        "23016defd7db701a01dc49f972ad6b1bae9651c2",
        "3f6240f3e4e2608caf1a70d614ada658cbcbe7be"
      ],
      "author": {
        "name": "Steve Muckle",
        "email": "smuckle@codeaurora.org",
        "time": "Wed Jun 06 18:30:57 2012 -0700"
      },
      "committer": {
        "name": "Steve Muckle",
        "email": "smuckle@codeaurora.org",
        "time": "Wed Jun 06 18:45:28 2012 -0700"
      },
      "message": "Merge commit \u0027AU_LINUX_ANDROID_ICS.04.00.04.00.126\u0027 into msm-3.4\n\nAU_LINUX_ANDROID_ICS.04.00.04.00.126 from msm-3.0.\nFirst parent is from google/android-3.4.\n\n* commit \u0027AU_LINUX_ANDROID_ICS.04.00.04.00.126\u0027: (8712 commits)\n  PRNG: Device tree entry for qrng device.\n  vidc:1080p: Set video core timeout value for Thumbnail mode\n  msm: sps: improve the debugging support in SPS driver\n  board-8064 msm: Overlap secure and non secure video firmware heaps.\n  msm: clock: Add handoff ops for 7x30 and copper XO clocks\n  msm_fb: display: Wait for external vsync before DTV IOMMU unmap\n  msm: Fix ciruclar dependency in debug UART settings\n  msm: gdsc: Add GDSC regulator driver for msm-copper\n  defconfig: Enable Mobicore Driver.\n  mobicore: Add mobicore driver.\n  mobicore: rename variable to lower case.\n  mobicore: rename folder.\n  mobicore: add makefiles\n  mobicore: initial import of kernel driver\n  ASoC: msm: Add SLIMBUS_2_RX CPU DAI\n  board-8064-gpio: Update FUNC for EPM SPI CS\n  msm_fb: display: Remove chicken bit config during video playback\n  mmc: msm_sdcc: enable the sanitize capability\n  msm-fb: display: lm2 writeback support on mpq platfroms\n  msm_fb: display: Disable LVDS phy \u0026 pll during panel off\n  ...\n\nSigned-off-by: Steve Muckle \u003csmuckle@codeaurora.org\u003e\n"
    },
    {
      "commit": "ec0b571c19ac62ab0bb80d373a3d4922a48b4b75",
      "tree": "10c597f5227c969c3f2b909fbeb29725a0c5c6e8",
      "parents": [
        "7bb8b65407a519d3a90dd8cecdd1ccd10ee0c6cc",
        "36be50515fe2aef61533b516fa2576a2c7fe7664"
      ],
      "author": {
        "name": "Colin Cross",
        "email": "ccross@android.com",
        "time": "Mon May 14 16:41:02 2012 -0700"
      },
      "committer": {
        "name": "Colin Cross",
        "email": "ccross@android.com",
        "time": "Mon May 14 16:41:02 2012 -0700"
      },
      "message": "Merge commit \u0027v3.4-rc7\u0027 into android-3.4\n"
    },
    {
      "commit": "1b76b02f15c70d5f392ee2e231fbd20a26063a77",
      "tree": "5c7bee2e8a5333e9f99b64287d587a026386459e",
      "parents": [
        "d60b9c16d7bae49b75255520abd7dfd2e94627bc"
      ],
      "author": {
        "name": "Hugh Dickins",
        "email": "hughd@google.com",
        "time": "Fri May 11 01:00:07 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri May 11 09:23:39 2012 -0700"
      },
      "message": "mm: raise MemFree by reverting percpu_pagelist_fraction to 0\n\nWhy is there less MemFree than there used to be?  It perturbed a test,\nso I\u0027ve just been bisecting linux-next, and now find the offender went\nupstream yesterday.\n\nCommit 93278814d359 \"mm: fix division by 0 in percpu_pagelist_fraction()\"\nmistakenly initialized percpu_pagelist_fraction to the sysctl\u0027s minimum 8,\nwhich leaves 1/8th of memory on percpu lists (on each cpu??); but most of\nus expect it to be left unset at 0 (and it\u0027s not then used as a divisor).\n\n  MemTotal: 8061476kB  8061476kB  8061476kB  8061476kB  8061476kB  8061476kB\n  Repetitive test with percpu_pagelist_fraction 8:\n  MemFree:  6948420kB  6237172kB  6949696kB  6840692kB  6949048kB  6862984kB\n  Same test with percpu_pagelist_fraction back to 0:\n  MemFree:  7945000kB  7944908kB  7948568kB  7949060kB  7948796kB  7948812kB\n\nSigned-off-by: Hugh Dickins \u003chughd@google.com\u003e\n[ We really should fix the crazy sysctl interface too, but that\u0027s a\n  separate thing - Linus ]\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "93278814d3590eba0ee360b8d69a35c7f2203ea8",
      "tree": "17784192015e71464f1064af2b071c8cd7fe7f13",
      "parents": [
        "16fbdce62d9c89b794e303f4a232e4749b77e9ac"
      ],
      "author": {
        "name": "Sasha Levin",
        "email": "levinsasha928@gmail.com",
        "time": "Thu May 10 13:01:44 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu May 10 15:06:44 2012 -0700"
      },
      "message": "mm: fix division by 0 in percpu_pagelist_fraction()\n\npercpu_pagelist_fraction_sysctl_handler() has only considered -EINVAL as\na possible error from proc_dointvec_minmax().\n\nIf any other error is returned, it would proceed to divide by zero since\npercpu_pagelist_fraction wasn\u0027t getting initialized at any point.  For\nexample, writing 0 bytes into the proc file would trigger the issue.\n\nSigned-off-by: Sasha Levin \u003clevinsasha928@gmail.com\u003e\nReviewed-by: Minchan Kim \u003cminchan@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "cde8949795629dc5e9e3781fad26afc8dd8d767b",
      "tree": "f9fda0a83fb6111bd50dbe8fb1d4663319fe8400",
      "parents": [
        "ed275aa26f3bcde6da3356755ae5ad8eeab7444f"
      ],
      "author": {
        "name": "Arve Hjønnevåg",
        "email": "arve@android.com",
        "time": "Tue Feb 17 14:51:02 2009 -0800"
      },
      "committer": {
        "name": "Colin Cross",
        "email": "ccross@android.com",
        "time": "Mon Apr 09 13:53:12 2012 -0700"
      },
      "message": "mm: Add min_free_order_shift tunable.\n\nBy default the kernel tries to keep half as much memory free at each\norder as it does for one order below. This can be too agressive when\nrunning without swap.\n\nChange-Id: I5efc1a0b50f41ff3ac71e92d2efd175dedd54ead\nSigned-off-by: Arve Hjønnevåg \u003carve@android.com\u003e\n"
    },
    {
      "commit": "a76e99abc558aed633ba28ff61c5328116292bf3",
      "tree": "60f7677f0baafa00825accc1214839246b3e78dd",
      "parents": [
        "a54734678ff9cb97938b9f7648547174f3b118e4",
        "1d05f993784973189395051cc711fdd6dd5eb389"
      ],
      "author": {
        "name": "Rohit Vaswani",
        "email": "rvaswani@codeaurora.org",
        "time": "Fri Mar 30 00:09:34 2012 -0700"
      },
      "committer": {
        "name": "Rohit Vaswani",
        "email": "rvaswani@codeaurora.org",
        "time": "Fri Mar 30 00:09:34 2012 -0700"
      },
      "message": "Merge branch \u0027Linux 3.0.21\u0027 into msm-3.0\n\nMerge Upstream\u0027s stable 3.0.21 branch into msm-3.0\nThis consists 814 commits and some merge conflicts.\n\nThe merge conflicts are because of some local changes to\nmsm-3.0 as well as some conflicts between google\u0027s tree and\nthe upstream tree.\n\nConflicts:\n\tarch/arm/kernel/head.S\n\tdrivers/bluetooth/ath3k.c\n\tdrivers/bluetooth/btusb.c\n\tdrivers/mmc/core/core.c\n\tdrivers/tty/serial/serial_core.c\n\tdrivers/usb/host/ehci-hub.c\n\tdrivers/usb/serial/qcserial.c\n\tfs/namespace.c\n\tfs/proc/base.c\n\nChange-Id: I62e2edbe213f84915e27f8cd6e4f6ce23db22a21\nSigned-off-by: Rohit Vaswani \u003crvaswani@codeaurora.org\u003e\n"
    },
    {
      "commit": "74046494ea68676d29ef6501a4bd950f08112a2c",
      "tree": "4fb862c2ebeba25b089ed64d5cc36437ad9e3df2",
      "parents": [
        "42be35d0390b966253136a285f507f5ad00fd9e8"
      ],
      "author": {
        "name": "Gilad Ben-Yossef",
        "email": "gilad@benyossef.com",
        "time": "Wed Mar 28 14:42:45 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 28 17:14:35 2012 -0700"
      },
      "message": "mm: only IPI CPUs to drain local pages if they exist\n\nCalculate a cpumask of CPUs with per-cpu pages in any zone and only send\nan IPI requesting CPUs to drain these pages to the buddy allocator if they\nactually have pages when asked to flush.\n\nThis patch saves 85%+ of IPIs asking to drain per-cpu pages in case of\nsevere memory pressure that leads to OOM since in these cases multiple,\npossibly concurrent, allocation requests end up in the direct reclaim code\npath so when the per-cpu pages end up reclaimed on first allocation\nfailure for most of the proceeding allocation attempts until the memory\npressure is off (possibly via the OOM killer) there are no per-cpu pages\non most CPUs (and there can easily be hundreds of them).\n\nThis also has the side effect of shortening the average latency of direct\nreclaim by 1 or more order of magnitude since waiting for all the CPUs to\nACK the IPI takes a long time.\n\nTested by running \"hackbench 400\" on a 8 CPU x86 VM and observing the\ndifference between the number of direct reclaim attempts that end up in\ndrain_all_pages() and those were more then 1/2 of the online CPU had any\nper-cpu page in them, using the vmstat counters introduced in the next\npatch in the series and using proc/interrupts.\n\nIn the test sceanrio, this was seen to save around 3600 global\nIPIs after trigerring an OOM on a concurrent workload:\n\n$ cat /proc/vmstat | tail -n 2\npcp_global_drain 0\npcp_global_ipi_saved 0\n\n$ cat /proc/interrupts | grep CAL\nCAL:          1          2          1          2\n          2          2          2          2   Function call interrupts\n\n$ hackbench 400\n[OOM messages snipped]\n\n$ cat /proc/vmstat | tail -n 2\npcp_global_drain 3647\npcp_global_ipi_saved 3642\n\n$ cat /proc/interrupts | grep CAL\nCAL:          6         13          6          3\n          3          3         1 2          7   Function call interrupts\n\nPlease note that if the global drain is removed from the direct reclaim\npath as a patch from Mel Gorman currently suggests this should be replaced\nwith an on_each_cpu_cond invocation.\n\nSigned-off-by: Gilad Ben-Yossef \u003cgilad@benyossef.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Christoph Lameter \u003ccl@linux.com\u003e\nAcked-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Pekka Enberg \u003cpenberg@kernel.org\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Andi Kleen \u003candi@firstfloor.org\u003e\nAcked-by: Michal Nazarewicz \u003cmina86@mina86.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "29fd66d289f2981e11c550f8b411a6d3d38be0cf",
      "tree": "cc9931d0ee891ccfe9114bfd977da47f47175dbe",
      "parents": [
        "45f83cefe3a5f0476ac3f96382ebfdc3fe4caab2"
      ],
      "author": {
        "name": "David Rientjes",
        "email": "rientjes@google.com",
        "time": "Wed Mar 28 14:42:41 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 28 17:14:35 2012 -0700"
      },
      "message": "mm, coredump: fail allocations when coredumping instead of oom killing\n\nThe size of coredump files is limited by RLIMIT_CORE, however, allocating\nlarge amounts of memory results in three negative consequences:\n\n - the coredumping process may be chosen for oom kill and quickly deplete\n   all memory reserves in oom conditions preventing further progress from\n   being made or tasks from exiting,\n\n - the coredumping process may cause other processes to be oom killed\n   without fault of their own as the result of a SIGSEGV, for example, in\n   the coredumping process, or\n\n - the coredumping process may result in a livelock while writing to the\n   dump file if it needs memory to allocate while other threads are in\n   the exit path waiting on the coredumper to complete.\n\nThis is fixed by implying __GFP_NORETRY in the page allocator for\ncoredumping processes when reclaim has failed so the allocations fail and\nthe process continues to exit.\n\nSigned-off-by: David Rientjes \u003crientjes@google.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Oleg Nesterov \u003coleg@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b224ef856b1a5b949daff5937a9e187fe622b8f5",
      "tree": "7d119425e8734eba198f83dec781d8b9c6859923",
      "parents": [
        "8d13bddd11c10db40e2c81b4b224c11126691fc0"
      ],
      "author": {
        "name": "Kautuk Consul",
        "email": "consul.kautuk@gmail.com",
        "time": "Wed Mar 21 16:34:15 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 21 17:55:00 2012 -0700"
      },
      "message": "page_alloc: remove unused find_zone_movable_pfns_for_nodes() argument\n\nfind_zone_movable_pfns_for_nodes() does not use its argument.\n\nSigned-off-by: Kautuk Consul \u003cconsul.kautuk@gmail.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "8d13bddd11c10db40e2c81b4b224c11126691fc0",
      "tree": "bec31e54cbc43bf6b49c391d81893203b903b5ec",
      "parents": [
        "d1d5e05ffdc110021ae7937802e88ae0d223dcdc"
      ],
      "author": {
        "name": "Kautuk Consul",
        "email": "consul.kautuk@gmail.com",
        "time": "Wed Mar 21 16:34:15 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 21 17:55:00 2012 -0700"
      },
      "message": "page_alloc.c: remove add_from_early_node_map()\n\nadd_from_early_node_map() is unused.\n\nSigned-off-by: Kautuk Consul \u003cconsul.kautuk@gmail.com\u003e\nAcked-by: David Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "cc9a6c8776615f9c194ccf0b63a0aa5628235545",
      "tree": "0cbbf118e86541f8eb2fc7b717a0e08eaced986d",
      "parents": [
        "e845e199362cc5712ba0e7eedc14eed70e144258"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Wed Mar 21 16:34:11 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 21 17:54:59 2012 -0700"
      },
      "message": "cpuset: mm: reduce large amounts of memory barrier related damage v3\n\nCommit c0ff7453bb5c (\"cpuset,mm: fix no node to alloc memory when\nchanging cpuset\u0027s mems\") wins a super prize for the largest number of\nmemory barriers entered into fast paths for one commit.\n\n[get|put]_mems_allowed is incredibly heavy with pairs of full memory\nbarriers inserted into a number of hot paths.  This was detected while\ninvestigating at large page allocator slowdown introduced some time\nafter 2.6.32.  The largest portion of this overhead was shown by\noprofile to be at an mfence introduced by this commit into the page\nallocator hot path.\n\nFor extra style points, the commit introduced the use of yield() in an\nimplementation of what looks like a spinning mutex.\n\nThis patch replaces the full memory barriers on both read and write\nsides with a sequence counter with just read barriers on the fast path\nside.  This is much cheaper on some architectures, including x86.  The\nmain bulk of the patch is the retry logic if the nodemask changes in a\nmanner that can cause a false failure.\n\nWhile updating the nodemask, a check is made to see if a false failure\nis a risk.  If it is, the sequence number gets bumped and parallel\nallocators will briefly stall while the nodemask update takes place.\n\nIn a page fault test microbenchmark, oprofile samples from\n__alloc_pages_nodemask went from 4.53% of all samples to 1.15%.  The\nactual results were\n\n                             3.3.0-rc3          3.3.0-rc3\n                             rc3-vanilla        nobarrier-v2r1\n    Clients   1 UserTime       0.07 (  0.00%)   0.08 (-14.19%)\n    Clients   2 UserTime       0.07 (  0.00%)   0.07 (  2.72%)\n    Clients   4 UserTime       0.08 (  0.00%)   0.07 (  3.29%)\n    Clients   1 SysTime        0.70 (  0.00%)   0.65 (  6.65%)\n    Clients   2 SysTime        0.85 (  0.00%)   0.82 (  3.65%)\n    Clients   4 SysTime        1.41 (  0.00%)   1.41 (  0.32%)\n    Clients   1 WallTime       0.77 (  0.00%)   0.74 (  4.19%)\n    Clients   2 WallTime       0.47 (  0.00%)   0.45 (  3.73%)\n    Clients   4 WallTime       0.38 (  0.00%)   0.37 (  1.58%)\n    Clients   1 Flt/sec/cpu  497620.28 (  0.00%) 520294.53 (  4.56%)\n    Clients   2 Flt/sec/cpu  414639.05 (  0.00%) 429882.01 (  3.68%)\n    Clients   4 Flt/sec/cpu  257959.16 (  0.00%) 258761.48 (  0.31%)\n    Clients   1 Flt/sec      495161.39 (  0.00%) 517292.87 (  4.47%)\n    Clients   2 Flt/sec      820325.95 (  0.00%) 850289.77 (  3.65%)\n    Clients   4 Flt/sec      1020068.93 (  0.00%) 1022674.06 (  0.26%)\n    MMTests Statistics: duration\n    Sys Time Running Test (seconds)             135.68    132.17\n    User+Sys Time Running Test (seconds)         164.2    160.13\n    Total Elapsed Time (seconds)                123.46    120.87\n\nThe overall improvement is small but the System CPU time is much\nimproved and roughly in correlation to what oprofile reported (these\nperformance figures are without profiling so skew is expected).  The\nactual number of page faults is noticeably improved.\n\nFor benchmarks like kernel builds, the overall benefit is marginal but\nthe system CPU time is slightly reduced.\n\nTo test the actual bug the commit fixed I opened two terminals.  The\nfirst ran within a cpuset and continually ran a small program that\nfaulted 100M of anonymous data.  In a second window, the nodemask of the\ncpuset was continually randomised in a loop.\n\nWithout the commit, the program would fail every so often (usually\nwithin 10 seconds) and obviously with the commit everything worked fine.\nWith this patch applied, it also worked fine so the fix should be\nfunctionally equivalent.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Miao Xie \u003cmiaox@cn.fujitsu.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f0cb3c76ae1ced85f9034480b1b24cd96530ec78",
      "tree": "997ae67621f76b3b6bf588604f85738a8c97cbef",
      "parents": [
        "3268c63eded4612a3d07b56d1e02ce7731e6608e"
      ],
      "author": {
        "name": "Konstantin Khlebnikov",
        "email": "khlebnikov@openvz.org",
        "time": "Wed Mar 21 16:34:06 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 21 17:54:58 2012 -0700"
      },
      "message": "mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug\n\nThis cpu hotplug hook was accidentally removed in commit 00a62ce91e55\n(\"mm: fix Committed_AS underflow on large NR_CPUS environment\")\n\nThe visible effect of this accident: some pages are borrowed in per-cpu\npage-vectors.  Truncate can deal with it, but these pages cannot be\nreused while this cpu is offline.  So this is like a temporary memory\nleak.\n\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Eric B Munson \u003cebmunson@us.ibm.com\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "08ab9b10d43aca091fdff58b69fc1ec89c5b8a83",
      "tree": "73abfd3a257f3feadc0fa28c3117aaa9d95af596",
      "parents": [
        "b76437579d1344b612cf1851ae610c636cec7db0"
      ],
      "author": {
        "name": "David Rientjes",
        "email": "rientjes@google.com",
        "time": "Wed Mar 21 16:34:04 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 21 17:54:58 2012 -0700"
      },
      "message": "mm, oom: force oom kill on sysrq+f\n\nThe oom killer chooses not to kill a thread if:\n\n - an eligible thread has already been oom killed and has yet to exit,\n   and\n\n - an eligible thread is exiting but has yet to free all its memory and\n   is not the thread attempting to currently allocate memory.\n\nSysRq+F manually invokes the global oom killer to kill a memory-hogging\ntask.  This is normally done as a last resort to free memory when no\nprogress is being made or to test the oom killer itself.\n\nFor both uses, we always want to kill a thread and never defer.  This\npatch causes SysRq+F to always kill an eligible thread and can be used to\nforce a kill even if another oom killed thread has failed to exit.\n\nSigned-off-by: David Rientjes \u003crientjes@google.com\u003e\nAcked-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Pekka Enberg \u003cpenberg@kernel.org\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "aff622495c9a0b56148192e53bdec539f5e147f2",
      "tree": "78f6400d8b6bec3279483006a0e9543e47aa833e",
      "parents": [
        "7be62de99adcab4449d416977b4274985c5fe023"
      ],
      "author": {
        "name": "Rik van Riel",
        "email": "riel@redhat.com",
        "time": "Wed Mar 21 16:33:52 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Mar 21 17:54:56 2012 -0700"
      },
      "message": "vmscan: only defer compaction for failed order and higher\n\nCurrently a failed order-9 (transparent hugepage) compaction can lead to\nmemory compaction being temporarily disabled for a memory zone.  Even if\nwe only need compaction for an order 2 allocation, eg.  for jumbo frames\nnetworking.\n\nThe fix is relatively straightforward: keep track of the highest order at\nwhich compaction is succeeding, and only defer compaction for orders at\nwhich compaction is failing.\n\nSigned-off-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Hillf Danton \u003cdhillf@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "074b85175a43a23fdbde60f55feea636e0bf0f85",
      "tree": "e6f6fdd82854b2bf25ea5b404cee010806a8fced",
      "parents": [
        "1d6f2097865e64963e90cce04980dce2f9fc023f"
      ],
      "author": {
        "name": "Dimitri Sivanich",
        "email": "sivanich@sgi.com",
        "time": "Wed Feb 08 12:39:07 2012 -0800"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Mon Feb 13 20:45:38 2012 -0500"
      },
      "message": "vfs: fix panic in __d_lookup() with high dentry hashtable counts\n\nWhen the number of dentry cache hash table entries gets too high\n(2147483648 entries), as happens by default on a 16TB system, use of a\nsigned integer in the dcache_init() initialization loop prevents the\ndentry_hashtable from getting initialized, causing a panic in\n__d_lookup().  Fix this in dcache_init() and similar areas.\n\nSigned-off-by: Dimitri Sivanich \u003csivanich@sgi.com\u003e\nAcked-by: David S. Miller \u003cdavem@davemloft.net\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "bcbef18db94dfb991bd3025069b3413cbc52f8b6",
      "tree": "a6c432dc65b182b84b058ba9408e40000478d57f",
      "parents": [
        "c2c9f543718e15227a4aa0135e793480b94c4d97"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Fri Jan 20 14:33:55 2012 -0800"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@suse.de",
        "time": "Wed Jan 25 17:25:05 2012 -0800"
      },
      "message": "mm: fix NULL ptr dereference in __count_immobile_pages\n\ncommit 687875fb7de4a95223af20ee024282fa9099f860 upstream.\n\nFix the following NULL ptr dereference caused by\n\n  cat /sys/devices/system/memory/memory0/removable\n\nPid: 13979, comm: sed Not tainted 3.0.13-0.5-default #1 IBM BladeCenter LS21 -[7971PAM]-/Server Blade\nRIP: __count_immobile_pages+0x4/0x100\nProcess sed (pid: 13979, threadinfo ffff880221c36000, task ffff88022e788480)\nCall Trace:\n  is_pageblock_removable_nolock+0x34/0x40\n  is_mem_section_removable+0x74/0xf0\n  show_mem_removable+0x41/0x70\n  sysfs_read_file+0xfe/0x1c0\n  vfs_read+0xc7/0x130\n  sys_read+0x53/0xa0\n  system_call_fastpath+0x16/0x1b\n\nWe are crashing because we are trying to dereference NULL zone which\ncame from pfn\u003d0 (struct page ffffea0000000000). According to the boot\nlog this page is marked reserved:\ne820 update range: 0000000000000000 - 0000000000010000 (usable) \u003d\u003d\u003e (reserved)\n\nand early_node_map confirms that:\nearly_node_map[3] active PFN ranges\n    1: 0x00000010 -\u003e 0x0000009c\n    1: 0x00000100 -\u003e 0x000bffa3\n    1: 0x00100000 -\u003e 0x00240000\n\nThe problem is that memory_present works in PAGE_SECTION_MASK aligned\nblocks so the reserved range sneaks into the the section as well.  This\nalso means that free_area_init_node will not take care of those reserved\npages and they stay uninitialized.\n\nWhen we try to read the removable status we walk through all available\nsections and hope that the zone is valid for all pages in the section.\nBut this is not true in this case as the zone and nid are not initialized.\n\nWe have only one node in this particular case and it is marked as node\u003d1\n(rather than 0) and that made the problem visible because page_to_nid will\nreturn 0 and there are no zones on the node.\n\nLet\u0027s check that the zone is valid and that the given pfn falls into its\nboundaries and mark the section not removable.  This might cause some\nfalse positives, probably, but we do not have any sane way to find out\nwhether the page is reserved by the platform or it is just not used for\nwhatever other reasons.\n\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@suse.de\u003e\n\n"
    },
    {
      "commit": "656a070629adfe23c12768e35ddf91feab469ff7",
      "tree": "42c001d079ec926186873b91d7d84bf66c54bcad",
      "parents": [
        "687875fb7de4a95223af20ee024282fa9099f860"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Fri Jan 20 14:33:58 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Jan 23 08:38:47 2012 -0800"
      },
      "message": "mm: __count_immobile_pages(): make sure the node is online\n\npage_zone() requires an online node otherwise we are accessing NULL\nNODE_DATA.  This is not an issue at the moment because node_zones are\nlocated at the structure beginning but this might change in the future\nso better be careful about that.\n\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "687875fb7de4a95223af20ee024282fa9099f860",
      "tree": "1c50d5ac1e31afac82cb8b0a6dd4b1f7bd07eecd",
      "parents": [
        "6536e3123e5d3371a6f52e32a3d0694bcc987702"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Fri Jan 20 14:33:55 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Jan 23 08:38:47 2012 -0800"
      },
      "message": "mm: fix NULL ptr dereference in __count_immobile_pages\n\nFix the following NULL ptr dereference caused by\n\n  cat /sys/devices/system/memory/memory0/removable\n\nPid: 13979, comm: sed Not tainted 3.0.13-0.5-default #1 IBM BladeCenter LS21 -[7971PAM]-/Server Blade\nRIP: __count_immobile_pages+0x4/0x100\nProcess sed (pid: 13979, threadinfo ffff880221c36000, task ffff88022e788480)\nCall Trace:\n  is_pageblock_removable_nolock+0x34/0x40\n  is_mem_section_removable+0x74/0xf0\n  show_mem_removable+0x41/0x70\n  sysfs_read_file+0xfe/0x1c0\n  vfs_read+0xc7/0x130\n  sys_read+0x53/0xa0\n  system_call_fastpath+0x16/0x1b\n\nWe are crashing because we are trying to dereference NULL zone which\ncame from pfn\u003d0 (struct page ffffea0000000000). According to the boot\nlog this page is marked reserved:\ne820 update range: 0000000000000000 - 0000000000010000 (usable) \u003d\u003d\u003e (reserved)\n\nand early_node_map confirms that:\nearly_node_map[3] active PFN ranges\n    1: 0x00000010 -\u003e 0x0000009c\n    1: 0x00000100 -\u003e 0x000bffa3\n    1: 0x00100000 -\u003e 0x00240000\n\nThe problem is that memory_present works in PAGE_SECTION_MASK aligned\nblocks so the reserved range sneaks into the the section as well.  This\nalso means that free_area_init_node will not take care of those reserved\npages and they stay uninitialized.\n\nWhen we try to read the removable status we walk through all available\nsections and hope that the zone is valid for all pages in the section.\nBut this is not true in this case as the zone and nid are not initialized.\n\nWe have only one node in this particular case and it is marked as node\u003d1\n(rather than 0) and that made the problem visible because page_to_nid will\nreturn 0 and there are no zones on the node.\n\nLet\u0027s check that the zone is valid and that the given pfn falls into its\nboundaries and mark the section not removable.  This might cause some\nfalse positives, probably, but we do not have any sane way to find out\nwhether the page is reserved by the platform or it is just not used for\nwhatever other reasons.\n\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: \u003cstable@vger.kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4111304dab198c687bc60f2e235a9f7ee92c47c8",
      "tree": "c98fbae214f73f8475bcdc54c8116dea82cd7d14",
      "parents": [
        "4d06f382c733f99ec67df006255e87525ac1efd3"
      ],
      "author": {
        "name": "Hugh Dickins",
        "email": "hughd@google.com",
        "time": "Thu Jan 12 17:20:01 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 12 20:13:10 2012 -0800"
      },
      "message": "mm: enum lru_list lru\n\nMostly we use \"enum lru_list lru\": change those few \"l\"s to \"lru\"s.\n\nSigned-off-by: Hugh Dickins \u003chughd@google.com\u003e\nReviewed-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "66199712e9eef5aede09dbcd9dfff87798a66917",
      "tree": "9994be003d97d9596fadb5e4c38c271ed3e79333",
      "parents": [
        "c82449352854ff09e43062246af86bdeb628f0c3"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Thu Jan 12 17:19:41 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 12 20:13:09 2012 -0800"
      },
      "message": "mm: page allocator: do not call direct reclaim for THP allocations while compaction is deferred\n\nIf compaction is deferred, direct reclaim is used to try to free enough\npages for the allocation to succeed.  For small high-orders, this has a\nreasonable chance of success.  However, if the caller has specified\n__GFP_NO_KSWAPD to limit the disruption to the system, it makes more sense\nto fail the allocation rather than stall the caller in direct reclaim.\nThis patch skips direct reclaim if compaction is deferred and the caller\nspecifies __GFP_NO_KSWAPD.\n\nAsync compaction only considers a subset of pages so it is possible for\ncompaction to be deferred prematurely and not enter direct reclaim even in\ncases where it should.  To compensate for this, this patch also defers\ncompaction only if sync compaction failed.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nReviewed-by: Rik van Riel\u003criel@redhat.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: Dave Jones \u003cdavej@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Andy Isaacson \u003cadi@hexapodia.org\u003e\nCc: Nai Xia \u003cnai.xia@gmail.com\u003e\nCc: Johannes Weiner \u003cjweiner@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d0048b0e59c1218d62bb4d014f34bbd7e7c0a214",
      "tree": "f6a69889bcb60b253ab37029616ebe6fa2bd24f9",
      "parents": [
        "3ed28fa1080c73747ce17f2025b28b062fb5aa7f"
      ],
      "author": {
        "name": "Bob Liu",
        "email": "lliubbo@gmail.com",
        "time": "Thu Jan 12 17:19:07 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 12 20:13:07 2012 -0800"
      },
      "message": "page_alloc: break early in check_for_regular_memory()\n\nIf there is a zone below ZONE_NORMAL has present_pages, we can set node\nstate to N_NORMAL_MEMORY, no need to loop to end.\n\nSigned-off-by: Bob Liu \u003clliubbo@gmail.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6290df545814990ca2663baf6e894669132d5f73",
      "tree": "c62472270ba81a7146bed0854be74e2e2338c629",
      "parents": [
        "b95a2f2d486d0d768a92879c023a03757b9c7e58"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "jweiner@redhat.com",
        "time": "Thu Jan 12 17:18:10 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Jan 12 20:13:05 2012 -0800"
      },
      "message": "mm: collect LRU list heads into struct lruvec\n\nHaving a unified structure with a LRU list set for both global zones and\nper-memcg zones allows to keep that code simple which deals with LRU\nlists and does not care about the container itself.\n\nOnce the per-memcg LRU lists directly link struct pages, the isolation\nfunction and all other list manipulations are shared between the memcg\ncase and the global LRU case.\n\nSigned-off-by: Johannes Weiner \u003cjweiner@redhat.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Kirill A. Shutemov \u003ckirill@shutemov.name\u003e\nCc: Daisuke Nishimura \u003cnishimura@mxp.nes.nec.co.jp\u003e\nCc: Balbir Singh \u003cbsingharora@gmail.com\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c3993076f842de3754360e5b998d6657a9d30303",
      "tree": "78c1ca3d031483932e2f236706b20064742c0b0c",
      "parents": [
        "43d2b113241d6797b890318767e0af78e313414b"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "hannes@cmpxchg.org",
        "time": "Tue Jan 10 15:08:10 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:44 2012 -0800"
      },
      "message": "mm: page_alloc: generalize order handling in __free_pages_bootmem()\n\n__free_pages_bootmem() used to special-case higher-order frees to save\nindividual page checking with free_pages_bulk().\n\nNowadays, both zero order and non-zero order frees use free_pages(), which\nchecks each individual page anyway, and so there is little point in making\nthe distinction anymore.  The higher-order loop will work just fine for\nzero order pages.\n\nSigned-off-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Uwe Kleine-König \u003cu.kleine-koenig@pengutronix.de\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "df0a6daa01fa3856c08f4274d4f21a8092caa480",
      "tree": "089c112e98c87c4326443c21711bf9410c1989ce",
      "parents": [
        "9571a982903bf9dcbca2479fd3e7dafd2211ecf9"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Tue Jan 10 15:08:02 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:44 2012 -0800"
      },
      "message": "mm: fix off-by-two in __zone_watermark_ok()\n\nCommit 88f5acf88ae6 (\"mm: page allocator: adjust the per-cpu counter\nthreshold when memory is low\") changed the form how free_pages is\ncalculated but it forgot that we used to do free_pages - ((1 \u003c\u003c order) -\n1) so we ended up with off-by-two when calculating free_pages.\n\nReported-by: Wang Sheng-Hui \u003cshhuiw@gmail.com\u003e\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a756cf5908530e8b40bdf569eb48b40139e8d7fd",
      "tree": "ba9df151d5468098c7eae563ce09faea6a539fc0",
      "parents": [
        "ccafa2879fb8d13b8031337a8743eac4189e5d6e"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "jweiner@redhat.com",
        "time": "Tue Jan 10 15:07:49 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:43 2012 -0800"
      },
      "message": "mm: try to distribute dirty pages fairly across zones\n\nThe maximum number of dirty pages that exist in the system at any time is\ndetermined by a number of pages considered dirtyable and a user-configured\npercentage of those, or an absolute number in bytes.\n\nThis number of dirtyable pages is the sum of memory provided by all the\nzones in the system minus their lowmem reserves and high watermarks, so\nthat the system can retain a healthy number of free pages without having\nto reclaim dirty pages.\n\nBut there is a flaw in that we have a zoned page allocator which does not\ncare about the global state but rather the state of individual memory\nzones.  And right now there is nothing that prevents one zone from filling\nup with dirty pages while other zones are spared, which frequently leads\nto situations where kswapd, in order to restore the watermark of free\npages, does indeed have to write pages from that zone\u0027s LRU list.  This\ncan interfere so badly with IO from the flusher threads that major\nfilesystems (btrfs, xfs, ext4) mostly ignore write requests from reclaim\nalready, taking away the VM\u0027s only possibility to keep such a zone\nbalanced, aside from hoping the flushers will soon clean pages from that\nzone.\n\nEnter per-zone dirty limits.  They are to a zone\u0027s dirtyable memory what\nthe global limit is to the global amount of dirtyable memory, and try to\nmake sure that no single zone receives more than its fair share of the\nglobally allowed dirty pages in the first place.  
As the number of pages\nconsidered dirtyable excludes the zones\u0027 lowmem reserves and high\nwatermarks, the maximum number of dirty pages in a zone is such that the\nzone can always be balanced without requiring page cleaning.\n\nAs this is a placement decision in the page allocator and pages are\ndirtied only after the allocation, this patch allows allocators to pass\n__GFP_WRITE when they know in advance that the page will be written to and\nbecome dirty soon.  The page allocator will then attempt to allocate from\nthe first zone of the zonelist - which on NUMA is determined by the task\u0027s\nNUMA memory policy - that has not exceeded its dirty limit.\n\nAt first glance, it would appear that the diversion to lower zones can\nincrease pressure on them, but this is not the case.  With a full high\nzone, allocations will be diverted to lower zones eventually, so it is\nmore of a shift in timing of the lower zone allocations.  Workloads that\npreviously could fit their dirty pages completely in the higher zone may\nbe forced to allocate from lower zones, but the amount of pages that\n\"spill over\" are limited themselves by the lower zones\u0027 dirty constraints,\nand thus unlikely to become a problem.\n\nFor now, the problem of unfair dirty page distribution remains for NUMA\nconfigurations where the zones allowed for allocation are in sum not big\nenough to trigger the global dirty limits, wake up the flusher threads and\nremedy the situation.  
Because of this, an allocation that could not\nsucceed on any of the considered zones is allowed to ignore the dirty\nlimits before going into direct reclaim or even failing the allocation,\nuntil a future patch changes the global dirty throttling and flusher\nthread activation so that they take individual zone states into account.\n\n\t\t\tTest results\n\n15M DMA + 3246M DMA32 + 504 Normal \u003d 3765M memory\n40% dirty ratio\n16G USB thumb drive\n10 runs of dd if\u003d/dev/zero of\u003ddisk/zeroes bs\u003d32k count\u003d$((10 \u003c\u003c 15))\n\n\t\tseconds\t\t\tnr_vmscan_write\n\t\t        (stddev)\t       min|     median|        max\nxfs\nvanilla:\t 549.747( 3.492)\t     0.000|      0.000|      0.000\npatched:\t 550.996( 3.802)\t     0.000|      0.000|      0.000\n\nfuse-ntfs\nvanilla:\t1183.094(53.178)\t 54349.000|  59341.000|  65163.000\npatched:\t 558.049(17.914)\t     0.000|      0.000|     43.000\n\nbtrfs\nvanilla:\t 573.679(14.015)\t156657.000| 460178.000| 606926.000\npatched:\t 563.365(11.368)\t     0.000|      0.000|   1362.000\n\next4\nvanilla:\t 561.197(15.782)\t     0.000|2725438.000|4143837.000\npatched:\t 568.806(17.496)\t     0.000|      0.000|      0.000\n\nSigned-off-by: Johannes Weiner \u003cjweiner@redhat.com\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nTested-by: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Shaohua Li \u003cshaohua.li@intel.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Chris Mason \u003cchris.mason@oracle.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ab8fabd46f811d5153d8a0cd2fac9a0d41fb593d",
      "tree": "0a6f7dcca59d22abe07973e3fafc41719ff3ad9d",
      "parents": [
        "25bd91bd27820d5971258cecd1c0e64b0e485144"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "jweiner@redhat.com",
        "time": "Tue Jan 10 15:07:42 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:43 2012 -0800"
      },
      "message": "mm: exclude reserved pages from dirtyable memory\n\nPer-zone dirty limits try to distribute page cache pages allocated for\nwriting across zones in proportion to the individual zone sizes, to reduce\nthe likelihood of reclaim having to write back individual pages from the\nLRU lists in order to make progress.\n\nThis patch:\n\nThe amount of dirtyable pages should not include the full number of free\npages: there is a number of reserved pages that the page allocator and\nkswapd always try to keep free.\n\nThe closer (reclaimable pages - dirty pages) is to the number of reserved\npages, the more likely it becomes for reclaim to run into dirty pages:\n\n       +----------+ ---\n       |   anon   |  |\n       +----------+  |\n       |          |  |\n       |          |  -- dirty limit new    -- flusher new\n       |   file   |  |                     |\n       |          |  |                     |\n       |          |  -- dirty limit old    -- flusher old\n       |          |                        |\n       +----------+                       --- reclaim\n       | reserved |\n       +----------+\n       |  kernel  |\n       +----------+\n\nThis patch introduces a per-zone dirty reserve that takes both the lowmem\nreserve as well as the high watermark of the zone into account, and a\nglobal sum of those per-zone values that is subtracted from the global\namount of dirtyable pages.  The lowmem reserve is unavailable to page\ncache allocations and kswapd tries to keep the high watermark free.  We\ndon\u0027t want to end up in a situation where reclaim has to clean pages in\norder to balance zones.\n\nNot treating reserved pages as dirtyable on a global level is only a\nconceptual fix.  
In reality, dirty pages are not distributed equally\nacross zones and reclaim runs into dirty pages on a regular basis.\n\nBut it is important to get this right before tackling the problem on a\nper-zone level, where the distance between reclaim and the dirty pages is\nmostly much smaller in absolute numbers.\n\n[akpm@linux-foundation.org: fix highmem build]\nSigned-off-by: Johannes Weiner \u003cjweiner@redhat.com\u003e\nReviewed-by: Rik van Riel \u003criel@redhat.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Christoph Hellwig \u003chch@infradead.org\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Shaohua Li \u003cshaohua.li@intel.com\u003e\nCc: Chris Mason \u003cchris.mason@oracle.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c0a32fc5a2e470d0b02597b23ad79a317735253e",
      "tree": "2d164edae0062918ca2088772c00b0615781353b",
      "parents": [
        "1399ff86f2a2bbacbbe68fa00c5f8c752b344723"
      ],
      "author": {
        "name": "Stanislaw Gruszka",
        "email": "sgruszka@redhat.com",
        "time": "Tue Jan 10 15:07:28 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:42 2012 -0800"
      },
      "message": "mm: more intensive memory corruption debugging\n\nWith CONFIG_DEBUG_PAGEALLOC configured, the CPU will generate an exception\non access (read,write) to an unallocated page, which permits us to catch\ncode which corrupts memory.  However the kernel is trying to maximise\nmemory usage, hence there are usually few free pages in the system and\nbuggy code usually corrupts some crucial data.\n\nThis patch changes the buddy allocator to keep more free/protected pages\nand to interlace free/protected and allocated pages to increase the\nprobability of catching corruption.\n\nWhen the kernel is compiled with CONFIG_DEBUG_PAGEALLOC,\ndebug_guardpage_minorder defines the minimum order used by the page\nallocator to grant a request.  The requested size will be returned with\nthe remaining pages used as guard pages.\n\nThe default value of debug_guardpage_minorder is zero: no change from\ncurrent behaviour.\n\n[akpm@linux-foundation.org: tweak documentation, s/flg/flag/]\nSigned-off-by: Stanislaw Gruszka \u003csgruszka@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Pekka Enberg \u003cpenberg@cs.helsinki.fi\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f90ac3982a78d36f894824636beeef13361d7c59",
      "tree": "64bbe3b415bdfc151dc44f6b4c216c65351eb53c",
      "parents": [
        "938929f14cb595f43cd1a4e63e22d36cab1e4a1f"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Tue Jan 10 15:07:15 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:42 2012 -0800"
      },
      "message": "mm: avoid livelock on !__GFP_FS allocations\n\nColin Cross reported;\n\n  Under the following conditions, __alloc_pages_slowpath can loop forever:\n  gfp_mask \u0026 __GFP_WAIT is true\n  gfp_mask \u0026 __GFP_FS is false\n  reclaim and compaction make no progress\n  order \u003c\u003d PAGE_ALLOC_COSTLY_ORDER\n\n  These conditions happen very often during suspend and resume,\n  when pm_restrict_gfp_mask() effectively converts all GFP_KERNEL\n  allocations into __GFP_WAIT.\n\n  The oom killer is not run because gfp_mask \u0026 __GFP_FS is false,\n  but should_alloc_retry will always return true when order is less\n  than PAGE_ALLOC_COSTLY_ORDER.\n\nIn his fix, he avoided retrying the allocation if reclaim made no progress\nand __GFP_FS was not set.  The problem is that this would result in\nGFP_NOIO allocations failing that previously succeeded which would be very\nunfortunate.\n\nThe big difference between GFP_NOIO and suspend converting GFP_KERNEL to\nbehave like GFP_NOIO is that normally flushers will be cleaning pages and\nkswapd reclaims pages allowing GFP_NOIO to succeed after a short delay.\nThe same does not necessarily apply during suspend as the storage device\nmay be suspended.\n\nThis patch special cases the suspend case to fail the page allocation if\nreclaim cannot make progress and adds some documentation on how\ngfp_allowed_mask is currently used.  
Failing allocations like this may\ncause suspend to abort but that is better than a livelock.\n\n[mgorman@suse.de: Rework fix to be suspend specific]\n[rientjes@google.com: Move suspended device check to should_alloc_retry]\nReported-by: Colin Cross \u003cccross@android.com\u003e\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nAcked-by: David Rientjes \u003crientjes@google.com\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Pekka Enberg \u003cpenberg@cs.helsinki.fi\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "938929f14cb595f43cd1a4e63e22d36cab1e4a1f",
      "tree": "54c23d02494c05d13cc6a6ffb327cc1fb03e72fd",
      "parents": [
        "937a94c9db30a818baa5e2c09dbf4589251355c3"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Tue Jan 10 15:07:14 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:42 2012 -0800"
      },
      "message": "mm: reduce the amount of work done when updating min_free_kbytes\n\nWhen min_free_kbytes is updated, some pageblocks are marked\nMIGRATE_RESERVE.  Ordinarily, this work is unnoticeable as it happens early\nin boot but on large machines with 1TB of memory, this has been reported\nto delay boot times, probably due to the NUMA distances involved.\n\nThe bulk of the work is due to calling pageblock_is_reserved() an\nunnecessary number of times and accessing far more struct page metadata\nthan is necessary.  This patch significantly reduces the amount of work\ndone by setup_zone_migrate_reserve() improving boot times on 1TB machines.\n\n[akpm@linux-foundation.org: coding-style fixes]\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b413d48aa70605701c0b395b2e350ca15f5d643a",
      "tree": "6aa777c589eedfb9dc498f375d553e561e203506",
      "parents": [
        "da066ad3570b88e7dee82e76a06ee9a7adffcf0d"
      ],
      "author": {
        "name": "Konstantin Khlebnikov",
        "email": "khlebnikov@openvz.org",
        "time": "Tue Jan 10 15:07:09 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:41 2012 -0800"
      },
      "message": "mm-tracepoint: rename page-free events\n\nRename mm_page_free_direct into mm_page_free and mm_pagevec_free into\nmm_page_free_batched\n\nSince v2.6.33-5426-gc475dab the kernel triggers mm_page_free_direct for\nall freed pages, not only for directly freed.  So, let\u0027s name it properly.\n For pages freed via page-list we also trigger mm_page_free_batched event.\n\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "da066ad3570b88e7dee82e76a06ee9a7adffcf0d",
      "tree": "0587cac700f316f9d658e350c0ddf4b4331413a5",
      "parents": [
        "cc59850ef940e4ee6a765d28b439b9bafe07cf63"
      ],
      "author": {
        "name": "Konstantin Khlebnikov",
        "email": "khlebnikov@openvz.org",
        "time": "Tue Jan 10 15:07:06 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:41 2012 -0800"
      },
      "message": "mm: remove unused pagevec_free\n\nIt is not exported and now nobody uses it.\n\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nReviewed-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nAcked-by: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "cc59850ef940e4ee6a765d28b439b9bafe07cf63",
      "tree": "03b666986e9cc7dfc113a14721c44aa9e149f871",
      "parents": [
        "c909e99364c8b6ca07864d752950b6b4ecf6bef4"
      ],
      "author": {
        "name": "Konstantin Khlebnikov",
        "email": "khlebnikov@openvz.org",
        "time": "Tue Jan 10 15:07:04 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jan 10 16:30:41 2012 -0800"
      },
      "message": "mm: add free_hot_cold_page_list() helper\n\nThis patch adds helper free_hot_cold_page_list() to free list of 0-order\npages.  It frees pages directly from list without temporary page-vector.\nIt also calls trace_mm_pagevec_free() to simulate pagevec_free()\nbehaviour.\n\nbloat-o-meter:\n\nadd/remove: 1/1 grow/shrink: 1/3 up/down: 267/-295 (-28)\nfunction                                     old     new   delta\nfree_hot_cold_page_list                        -     264    +264\nget_page_from_freelist                      2129    2132      +3\n__pagevec_free                               243     239      -4\nsplit_free_page                              380     373      -7\nrelease_pages                                606     510     -96\nfree_page_list                               188       -    -188\n\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nAcked-by: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nAcked-by: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "98793265b429a3f0b3f1750e74d67cd4d740d162",
      "tree": "b0bd717673f0c21845cf053f3fb6b75d42530af5",
      "parents": [
        "b4a133da2eaccb844a7beaef16ffd9c76a0d21d3",
        "bd1b2a555952d959f47169056fca05acf7eff81f"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sun Jan 08 13:21:22 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sun Jan 08 13:21:22 2012 -0800"
      },
      "message": "Merge branch \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial\n\n* \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (53 commits)\n  Kconfig: acpi: Fix typo in comment.\n  misc latin1 to utf8 conversions\n  devres: Fix a typo in devm_kfree comment\n  btrfs: free-space-cache.c: remove extra semicolon.\n  fat: Spelling s/obsolate/obsolete/g\n  SCSI, pmcraid: Fix spelling error in a pmcraid_err() call\n  tools/power turbostat: update fields in manpage\n  mac80211: drop spelling fix\n  types.h: fix comment spelling for \u0027architectures\u0027\n  typo fixes: aera -\u003e area, exntension -\u003e extension\n  devices.txt: Fix typo of \u0027VMware\u0027.\n  sis900: Fix enum typo \u0027sis900_rx_bufer_status\u0027\n  decompress_bunzip2: remove invalid vi modeline\n  treewide: Fix comment and string typo \u0027bufer\u0027\n  hyper-v: Update MAINTAINERS\n  treewide: Fix typos in various parts of the kernel, and fix some comments.\n  clockevents: drop unknown Kconfig symbol GENERIC_CLOCKEVENTS_MIGR\n  gpio: Kconfig: drop unknown symbol \u0027CS5535_GPIO\u0027\n  leds: Kconfig: Fix typo \u0027D2NET_V2\u0027\n  sound: Kconfig: drop unknown symbol ARCH_CLPS7500\n  ...\n\nFix up trivial conflicts in arch/powerpc/platforms/40x/Kconfig (some new\nkconfig additions, close to removed commented-out old ones)\n"
    },
    {
      "commit": "972b2c719990f91eb3b2310d44ef8a2d38955a14",
      "tree": "b25a250ec5bec4b7b6355d214642d8b57c5cab32",
      "parents": [
        "02550d61f49266930e674286379d3601006b2893",
        "c3aa077648e147783a7a53b409578234647db853"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sun Jan 08 12:19:57 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Sun Jan 08 12:19:57 2012 -0800"
      },
      "message": "Merge branch \u0027for-linus2\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs\n\n* \u0027for-linus2\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (165 commits)\n  reiserfs: Properly display mount options in /proc/mounts\n  vfs: prevent remount read-only if pending removes\n  vfs: count unlinked inodes\n  vfs: protect remounting superblock read-only\n  vfs: keep list of mounts for each superblock\n  vfs: switch -\u003eshow_options() to struct dentry *\n  vfs: switch -\u003eshow_path() to struct dentry *\n  vfs: switch -\u003eshow_devname() to struct dentry *\n  vfs: switch -\u003eshow_stats to struct dentry *\n  switch security_path_chmod() to struct path *\n  vfs: prefer -\u003edentry-\u003ed_sb to -\u003emnt-\u003emnt_sb\n  vfs: trim includes a bit\n  switch mnt_namespace -\u003eroot to struct mount\n  vfs: take /proc/*/mounts and friends to fs/proc_namespace.c\n  vfs: opencode mntget() mnt_set_mountpoint()\n  vfs: spread struct mount - remaining argument of next_mnt()\n  vfs: move fsnotify junk to struct mount\n  vfs: move mnt_devname\n  vfs: move mnt_list to struct mount\n  vfs: switch pnode.h macros to struct mount *\n  ...\n"
    },
    {
      "commit": "f4ae40a6a50a98ac23d4b285f739455e926a473e",
      "tree": "c84d7393700bd85e5285a194f8c22d4d00e36b28",
      "parents": [
        "48176a973d65572e61d0ce95495e5072887e6fb6"
      ],
      "author": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Sun Jul 24 04:33:43 2011 -0400"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Jan 03 22:54:56 2012 -0500"
      },
      "message": "switch debugfs to umode_t\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "21830d75b1b3752a6418157c0abae8a11938d356",
      "tree": "53fdbee23b1512706d1ef4be2950c4bab64bd8d2",
      "parents": [
        "5b4993336fe6b14627ab85efde17aea094d333e8"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Thu Dec 08 14:34:27 2011 -0800"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@suse.de",
        "time": "Wed Dec 21 12:57:36 2011 -0800"
      },
      "message": "mm: Ensure that pfn_valid() is called once per pageblock when reserving pageblocks\n\ncommit d021563888312018ca65681096f62e36c20e63cc upstream.\n\nsetup_zone_migrate_reserve() expects that zone-\u003estart_pfn starts at\npageblock_nr_pages aligned pfn otherwise we could access beyond an\nexisting memblock resulting in the following panic if\nCONFIG_HOLES_IN_ZONE is not configured and we do not check pfn_valid:\n\n  IP: [\u003cc02d331d\u003e] setup_zone_migrate_reserve+0xcd/0x180\n  *pdpt \u003d 0000000000000000 *pde \u003d f000ff53f000ff53\n  Oops: 0000 [#1] SMP\n  Pid: 1, comm: swapper Not tainted 3.0.7-0.7-pae #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform\n  EIP: 0060:[\u003cc02d331d\u003e] EFLAGS: 00010006 CPU: 0\n  EIP is at setup_zone_migrate_reserve+0xcd/0x180\n  EAX: 000c0000 EBX: f5801fc0 ECX: 000c0000 EDX: 00000000\n  ESI: 000c01fe EDI: 000c01fe EBP: 00140000 ESP: f2475f58\n  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068\n  Process swapper (pid: 1, ti\u003df2474000 task\u003df2472cd0 task.ti\u003df2474000)\n  Call Trace:\n  [\u003cc02d389c\u003e] __setup_per_zone_wmarks+0xec/0x160\n  [\u003cc02d3a1f\u003e] setup_per_zone_wmarks+0xf/0x20\n  [\u003cc08a771c\u003e] init_per_zone_wmark_min+0x27/0x86\n  [\u003cc020111b\u003e] do_one_initcall+0x2b/0x160\n  [\u003cc086639d\u003e] kernel_init+0xbe/0x157\n  [\u003cc05cae26\u003e] kernel_thread_helper+0x6/0xd\n  Code: a5 39 f5 89 f7 0f 46 fd 39 cf 76 40 8b 03 f6 c4 08 74 32 eb 91 90 89 c8 c1 e8 0e 0f be 80 80 2f 86 c0 8b 14 85 60 2f 86 c0 89 c8 \u003c2b\u003e 82 b4 12 00 00 c1 e0 05 03 82 ac 12 00 00 8b 00 f6 c4 08 0f\n  EIP: [\u003cc02d331d\u003e] setup_zone_migrate_reserve+0xcd/0x180 SS:ESP 0068:f2475f58\n  CR2: 00000000000012b4\n\nWe crashed in pageblock_is_reserved() when accessing pfn 0xc0000 because\nhighstart_pfn \u003d 0x36ffe.\n\nThe issue was introduced in 3.0-rc1 by 6d3163ce (\"mm: check if any page\nin a pageblock is reserved before marking it 
MIGRATE_RESERVE\").\n\nMake sure that start_pfn is always aligned to pageblock_nr_pages to\nensure that pfn_valid is always called at the start of each pageblock.\nArchitectures with holes in pageblocks will be correctly handled by\npfn_valid_within in pageblock_is_reserved.\n\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nTested-by: Dang Bo \u003cbdang@vmware.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@suse.de\u003e\n\n"
    },
    {
      "commit": "b492a377ac1507a67091c5232afd5ebd1c7c6e25",
      "tree": "1e8540841a9b4d3895b1cf852324d5914a439137",
      "parents": [
        "cd989fe1fe5da572e45468c6dcb361a7d3c63e5c"
      ],
      "author": {
        "name": "Youquan Song",
        "email": "youquan.song@intel.com",
        "time": "Thu Dec 08 14:34:18 2011 -0800"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@suse.de",
        "time": "Wed Dec 21 12:57:35 2011 -0800"
      },
      "message": "thp: set compound tail page _count to zero\n\ncommit 58a84aa92723d1ac3e1cc4e3b0ff49291663f7e1 upstream.\n\nCommit 70b50f94f1644 (\"mm: thp: tail page refcounting fix\") keeps all\npage_tail-\u003e_count zero at all times.  But the current kernel does not\nset page_tail-\u003e_count to zero if a 1GB page is utilized.  So when an\nIOMMU 1GB page is used by KVM, it will result in a kernel oops because a\ntail page\u0027s _count does not equal zero.\n\n  kernel BUG at include/linux/mm.h:386!\n  invalid opcode: 0000 [#1] SMP\n  Call Trace:\n    gup_pud_range+0xb8/0x19d\n    get_user_pages_fast+0xcb/0x192\n    ? trace_hardirqs_off+0xd/0xf\n    hva_to_pfn+0x119/0x2f2\n    gfn_to_pfn_memslot+0x2c/0x2e\n    kvm_iommu_map_pages+0xfd/0x1c1\n    kvm_iommu_map_memslots+0x7c/0xbd\n    kvm_iommu_map_guest+0xaa/0xbf\n    kvm_vm_ioctl_assigned_device+0x2ef/0xa47\n    kvm_vm_ioctl+0x36c/0x3a2\n    do_vfs_ioctl+0x49e/0x4e4\n    sys_ioctl+0x5a/0x7c\n    system_call_fastpath+0x16/0x1b\n  RIP  gup_huge_pud+0xf2/0x159\n\nSigned-off-by: Youquan Song \u003cyouquan.song@intel.com\u003e\nReviewed-by: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@suse.de\u003e\n\n"
    },
    {
      "commit": "45aa0663cc408617b79a2b53f0a5f50e94688a48",
      "tree": "0a53931c317c3c72a3555bd2fbb70a881ee870f2",
      "parents": [
        "511585a28e5b5fd1cac61e601e42efc4c5dd64b5",
        "7bd0b0f0da3b1ec11cbcc798eb0ef747a1184077"
      ],
      "author": {
        "name": "Ingo Molnar",
        "email": "mingo@elte.hu",
        "time": "Tue Dec 20 12:14:26 2011 +0100"
      },
      "committer": {
        "name": "Ingo Molnar",
        "email": "mingo@elte.hu",
        "time": "Tue Dec 20 12:14:26 2011 +0100"
      },
      "message": "Merge branch \u0027memblock-kill-early_node_map\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc into core/memblock\n"
    },
    {
      "commit": "d021563888312018ca65681096f62e36c20e63cc",
      "tree": "cf403dfc0fd5a3775735815031add2f97b6efd40",
      "parents": [
        "09761333ed47e899cc1482c13090b95f3f711971"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Thu Dec 08 14:34:27 2011 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri Dec 09 07:50:28 2011 -0800"
      },
      "message": "mm: Ensure that pfn_valid() is called once per pageblock when reserving pageblocks\n\nsetup_zone_migrate_reserve() expects that zone-\u003estart_pfn starts at\npageblock_nr_pages aligned pfn otherwise we could access beyond an\nexisting memblock resulting in the following panic if\nCONFIG_HOLES_IN_ZONE is not configured and we do not check pfn_valid:\n\n  IP: [\u003cc02d331d\u003e] setup_zone_migrate_reserve+0xcd/0x180\n  *pdpt \u003d 0000000000000000 *pde \u003d f000ff53f000ff53\n  Oops: 0000 [#1] SMP\n  Pid: 1, comm: swapper Not tainted 3.0.7-0.7-pae #1 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform\n  EIP: 0060:[\u003cc02d331d\u003e] EFLAGS: 00010006 CPU: 0\n  EIP is at setup_zone_migrate_reserve+0xcd/0x180\n  EAX: 000c0000 EBX: f5801fc0 ECX: 000c0000 EDX: 00000000\n  ESI: 000c01fe EDI: 000c01fe EBP: 00140000 ESP: f2475f58\n  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068\n  Process swapper (pid: 1, ti\u003df2474000 task\u003df2472cd0 task.ti\u003df2474000)\n  Call Trace:\n  [\u003cc02d389c\u003e] __setup_per_zone_wmarks+0xec/0x160\n  [\u003cc02d3a1f\u003e] setup_per_zone_wmarks+0xf/0x20\n  [\u003cc08a771c\u003e] init_per_zone_wmark_min+0x27/0x86\n  [\u003cc020111b\u003e] do_one_initcall+0x2b/0x160\n  [\u003cc086639d\u003e] kernel_init+0xbe/0x157\n  [\u003cc05cae26\u003e] kernel_thread_helper+0x6/0xd\n  Code: a5 39 f5 89 f7 0f 46 fd 39 cf 76 40 8b 03 f6 c4 08 74 32 eb 91 90 89 c8 c1 e8 0e 0f be 80 80 2f 86 c0 8b 14 85 60 2f 86 c0 89 c8 \u003c2b\u003e 82 b4 12 00 00 c1 e0 05 03 82 ac 12 00 00 8b 00 f6 c4 08 0f\n  EIP: [\u003cc02d331d\u003e] setup_zone_migrate_reserve+0xcd/0x180 SS:ESP 0068:f2475f58\n  CR2: 00000000000012b4\n\nWe crashed in pageblock_is_reserved() when accessing pfn 0xc0000 because\nhighstart_pfn \u003d 0x36ffe.\n\nThe issue was introduced in 3.0-rc1 by 6d3163ce (\"mm: check if any page\nin a pageblock is reserved before marking it MIGRATE_RESERVE\").\n\nMake sure that start_pfn is always aligned to 
pageblock_nr_pages to\nensure that pfn_valid is always called at the start of each pageblock.\nArchitectures with holes in pageblocks will be correctly handled by\npfn_valid_within in pageblock_is_reserved.\n\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nTested-by: Dang Bo \u003cbdang@vmware.com\u003e\nReviewed-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: Dave Hansen \u003cdave@linux.vnet.ibm.com\u003e\nCc: \u003cstable@vger.kernel.org\u003e\t[3.0+]\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "58a84aa92723d1ac3e1cc4e3b0ff49291663f7e1",
      "tree": "bdfad6b0f38590318da0dee67ff84718b60a8ca5",
      "parents": [
        "b6999b19120931ede364fa3b685e698a61fed31d"
      ],
      "author": {
        "name": "Youquan Song",
        "email": "youquan.song@intel.com",
        "time": "Thu Dec 08 14:34:18 2011 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Fri Dec 09 07:50:28 2011 -0800"
      },
      "message": "thp: set compound tail page _count to zero\n\nCommit 70b50f94f1644 (\"mm: thp: tail page refcounting fix\") keeps all\npage_tail-\u003e_count zero at all times.  But the current kernel does not\nset page_tail-\u003e_count to zero if a 1GB page is utilized.  So when an\nIOMMU 1GB page is used by KVM, it will result in a kernel oops because a\ntail page\u0027s _count does not equal zero.\n\n  kernel BUG at include/linux/mm.h:386!\n  invalid opcode: 0000 [#1] SMP\n  Call Trace:\n    gup_pud_range+0xb8/0x19d\n    get_user_pages_fast+0xcb/0x192\n    ? trace_hardirqs_off+0xd/0xf\n    hva_to_pfn+0x119/0x2f2\n    gfn_to_pfn_memslot+0x2c/0x2e\n    kvm_iommu_map_pages+0xfd/0x1c1\n    kvm_iommu_map_memslots+0x7c/0xbd\n    kvm_iommu_map_guest+0xaa/0xbf\n    kvm_vm_ioctl_assigned_device+0x2ef/0xa47\n    kvm_vm_ioctl+0x36c/0x3a2\n    do_vfs_ioctl+0x49e/0x4e4\n    sys_ioctl+0x5a/0x7c\n    system_call_fastpath+0x16/0x1b\n  RIP  gup_huge_pud+0xf2/0x159\n\nSigned-off-by: Youquan Song \u003cyouquan.song@intel.com\u003e\nReviewed-by: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: \u003cstable@vger.kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "0ee332c1451869963626bf9cac88f165a90990e1",
      "tree": "a40e6c9c6cfe39ecbca37a08019be3c9e56a4a9b",
      "parents": [
        "a2bf79e7dcc97b4e9654f273453f9264f49e41ff"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Dec 08 10:22:09 2011 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Dec 08 10:22:09 2011 -0800"
      },
      "message": "memblock: Kill early_node_map[]\n\nNow all ARCH_POPULATES_NODE_MAP archs select HAVE_MEMBLOCK_NODE_MAP -\nthere\u0027s no user of early_node_map[] left.  Kill early_node_map[] and\nreplace ARCH_POPULATES_NODE_MAP with HAVE_MEMBLOCK_NODE_MAP.  Also,\nrelocate for_each_mem_pfn_range() and helper from mm.h to memblock.h\nas page_alloc.c would no longer host an alternative implementation.\n\nThis change is ultimately one to one mapping and shouldn\u0027t cause any\nobservable difference; however, after the recent changes, there are\nsome functions which now would fit memblock.c better than page_alloc.c\nand dependency on HAVE_MEMBLOCK_NODE_MAP instead of HAVE_MEMBLOCK\ndoesn\u0027t make much sense on some of them.  Further cleanups for\nfunctions inside HAVE_MEMBLOCK_NODE_MAP in mm.h would be nice.\n\n-v2: Fix compile bug introduced by mis-spelling\n CONFIG_HAVE_MEMBLOCK_NODE_MAP to CONFIG_MEMBLOCK_HAVE_NODE_MAP in\n mmzone.h.  Reported by Stephen Rothwell.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Stephen Rothwell \u003csfr@canb.auug.org.au\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Tony Luck \u003ctony.luck@intel.com\u003e\nCc: Ralf Baechle \u003cralf@linux-mips.org\u003e\nCc: Martin Schwidefsky \u003cschwidefsky@de.ibm.com\u003e\nCc: Chen Liqin \u003cliqin.chen@sunplusct.com\u003e\nCc: Paul Mundt \u003clethal@linux-sh.org\u003e\nCc: \"David S. Miller\" \u003cdavem@davemloft.net\u003e\nCc: \"H. Peter Anvin\" \u003chpa@zytor.com\u003e\n"
    },
    {
      "commit": "59f9f1c9ae463a3d4499cd9353619f8b1993371b",
      "tree": "c091bf4c74c696d412024d10ccb87e84b4c07d5e",
      "parents": [
        "c3a5a8cb8a7d082777d213cb6ff737450c5718a5"
      ],
      "author": {
        "name": "Jack Cheung",
        "email": "jackc@codeaurora.org",
        "time": "Tue Nov 29 16:52:49 2011 -0800"
      },
      "committer": {
        "name": "Jack Cheung",
        "email": "jackc@codeaurora.org",
        "time": "Tue Dec 06 15:00:36 2011 -0800"
      },
      "message": "mm: Add total_unmovable_pages global variable\n\nVmalloc will exit if the amount it needs to allocate is\ngreater than totalram_pages. Vmalloc cannot allocate\nfrom the movable zone, so pages in the movable zone should\nnot be counted.\n\nThis change adds a new global variable: total_unmovable_pages.\nIt is calculated in init.c, based on totalram_pages minus\nthe pages in the movable zone. Vmalloc now looks at this new\nglobal instead of totalram_pages.\n\ntotal_unmovable_pages can be modified during memory_hotplug.\nIf the zone you are offlining/onlining is unmovable, then\nyou modify it similar to totalram_pages.  If the zone is\nmovable, then no change is needed.\n\nChange-Id: Ie55c41051e9ad4b921eb04ecbb4798a8bd2344d6\nSigned-off-by: Jack Cheung \u003cjackc@codeaurora.org\u003e\n"
    },
    {
      "commit": "9f41da81017657a194a4e145bab337f13a4d7fd9",
      "tree": "b620a8cd103d0d10f6fec9692b0b58367306ac0d",
      "parents": [
        "2c60124cfd1fca38f961a88fbfcf2a0a96961e04"
      ],
      "author": {
        "name": "Jack Cheung",
        "email": "jackc@codeaurora.org",
        "time": "Mon Nov 28 16:41:28 2011 -0800"
      },
      "committer": {
        "name": "Jack Cheung",
        "email": "jackc@codeaurora.org",
        "time": "Tue Nov 29 18:28:05 2011 -0800"
      },
      "message": "mm: Cast lowmem_reserve to long\n\nz-\u003elowmem_reserve[classzone_idx] is an unsigned long but\nfree_pages and min are longs. If free_pages is\nnegative, the function will incorrectly return true\nbecause it will treat the negative long as a large,\npositive unsigned long.\n\nThis change casts z-\u003elowmem_reserve to a long and\nfixes a typo in the comment.\n\nChange-Id: Icada1fa5ca650fbcdb0656f637adbb98f223eec5\nSigned-off-by: Jack Cheung \u003cjackc@codeaurora.org\u003e\n"
    },
    {
      "commit": "d4bbf7e7759afc172e2bfbc5c416324590049cdd",
      "tree": "7eab5ee5481cd3dcf1162329fec827177640018a",
      "parents": [
        "a150439c4a97db379f0ed6faa46fbbb6e7bf3cb2",
        "401d0069cb344f401bc9d264c31db55876ff78c0"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Nov 28 09:46:22 2011 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Nov 28 09:46:22 2011 -0800"
      },
      "message": "Merge branch \u0027master\u0027 into x86/memblock\n\nConflicts \u0026 resolutions:\n\n* arch/x86/xen/setup.c\n\n\tdc91c728fd \"xen: allow extra memory to be in multiple regions\"\n\t24aa07882b \"memblock, x86: Replace memblock_x86_reserve/free...\"\n\n\tconflicted on xen_add_extra_mem() updates.  The resolution is\n\ttrivial as the latter just wants to replace\n\tmemblock_x86_reserve_range() with memblock_reserve().\n\n* drivers/pci/intel-iommu.c\n\n\t166e9278a3f \"x86/ia64: intel-iommu: move to drivers/iommu/\"\n\t5dfe8660a3d \"bootmem: Replace work_with_active_regions() with...\"\n\n\tconflicted as the former moved the file under drivers/iommu/.\n\tResolved by applying the changes from the latter on the moved\n\tfile.\n\n* mm/Kconfig\n\n\t6661672053a \"memblock: add NO_BOOTMEM config symbol\"\n\tc378ddd53f9 \"memblock, x86: Make ARCH_DISCARD_MEMBLOCK a config option\"\n\n\tconflicted trivially.  Both added config options.  Just\n\tletting both add their own options resolves the conflict.\n\n* mm/memblock.c\n\n\td1f0ece6cdc \"mm/memblock.c: small function definition fixes\"\n\ted7b56a799c \"memblock: Remove memblock_memory_can_coalesce()\"\n\n\tconflicted.  The former updates a function removed by the\n\tlatter.  Resolution is trivial.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "6416b9fa43537c01098f8faa5bcbebb4a275297d",
      "tree": "c970bac752f8ea0e0faa4273b459b2ae67d05c4d",
      "parents": [
        "7433f2b78cb35cacf1799faa3b068255a6ef5f1f"
      ],
      "author": {
        "name": "Wang Sheng-Hui",
        "email": "shhuiw@gmail.com",
        "time": "Thu Nov 17 10:53:50 2011 +0100"
      },
      "committer": {
        "name": "Jiri Kosina",
        "email": "jkosina@suse.cz",
        "time": "Thu Nov 17 10:53:50 2011 +0100"
      },
      "message": "mm: cleanup the comment for head/tail pages of compound pages in mm/page_alloc.c\n\nOnly tail pages point at the head page using their -\u003efirst_page fields.\n\nSigned-off-by: Wang Sheng-Hui \u003cshhuiw@gmail.com\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nSigned-off-by: Jiri Kosina \u003cjkosina@suse.cz\u003e\n"
    },
    {
      "commit": "d074fa2796bdbc42c4f918c78d6711bafc80b1c8",
      "tree": "033929706a0aae95f65c134a8fc09cec3fb3e75d",
      "parents": [
        "53ae1740b250e4d02dd7a6ca82075355ad99dc23",
        "9ab6a29787b1221a697f85835645549668258bdc"
      ],
      "author": {
        "name": "Bryan Huntsman",
        "email": "bryanh@codeaurora.org",
        "time": "Wed Nov 16 13:52:50 2011 -0800"
      },
      "committer": {
        "name": "Bryan Huntsman",
        "email": "bryanh@codeaurora.org",
        "time": "Wed Nov 16 13:52:50 2011 -0800"
      },
      "message": "Merge remote-tracking branch \u0027common/android-3.0\u0027 into msm-3.0\n\n* common/android-3.0: (570 commits)\n  misc: remove kernel debugger core\n  ARM: common: fiq_debugger: dump sysrq directly to console if enabled\n  ARM: common: fiq_debugger: add irq context debug functions\n  net: wireless: bcmdhd: Call init_ioctl() only if was started properly for WEXT\n  net: wireless: bcmdhd: Call init_ioctl() only if was started properly\n  net: wireless: bcmdhd: Fix possible memory leak in escan/iscan\n  cpufreq: interactive governor: default 20ms timer\n  cpufreq: interactive governor: go to intermediate hi speed before max\n  cpufreq: interactive governor: scale to max only if at min speed\n  cpufreq: interactive governor: apply intermediate load on current speed\n  ARM: idle: update idle ticks before call idle end notifier\n  input: gpio_input: don\u0027t print debounce message unless flag is set\n  net: wireless: bcm4329: Skip dhd_bus_stop() if bus is already down\n  net: wireless: bcmdhd: Skip dhd_bus_stop() if bus is already down\n  net: wireless: bcmdhd: Improve suspend/resume processing\n  net: wireless: bcmdhd: Check if FW is Ok for internal FW call\n  tcp: Don\u0027t nuke connections for the wrong protocol\n  ARM: common: fiq_debugger: make uart irq be no_suspend\n  net: wireless: Skip connect warning for CONFIG_CFG80211_ALLOW_RECONNECT\n  mm: avoid livelock on !__GFP_FS allocations\n  ...\n\nConflicts:\n\tarch/arm/mm/cache-l2x0.c\n\tarch/arm/vfp/vfpmodule.c\n\tdrivers/mmc/core/host.c\n\tkernel/power/wakelock.c\n\tnet/bluetooth/hci_event.c\n\nSigned-off-by: Bryan Huntsman \u003cbryanh@codeaurora.org\u003e\n"
    },
    {
      "commit": "3ee9a4f086716d792219c021e8509f91165a4128",
      "tree": "f85162b8e024624f07909eaba4e85b89df924ebb",
      "parents": [
        "06d5e032adcbc7d50c606a1396f00e2474e4213e"
      ],
      "author": {
        "name": "Joe Perches",
        "email": "joe@perches.com",
        "time": "Mon Oct 31 17:08:35 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Oct 31 17:30:48 2011 -0700"
      },
      "message": "mm: neaten warn_alloc_failed\n\nAdd __attribute__((format (printf...) to the function to validate format\nand arguments.  Use vsprintf extension %pV to avoid any possible message\ninterleaving.  Coalesce format string.  Convert printks/pr_warning to\npr_warn.\n\n[akpm@linux-foundation.org: use the __printf() macro]\nSigned-off-by: Joe Perches \u003cjoe@perches.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4f31888c104687078f8d88c2f11eca1080c88464",
      "tree": "453bcfffe1955f6087156916eb102af1575206ee",
      "parents": [
        "f5fc870da2f8798edb5481cd2137a3b2d5bd1b19"
      ],
      "author": {
        "name": "Dave Jones",
        "email": "davej@redhat.com",
        "time": "Mon Oct 31 17:07:24 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Oct 31 17:30:45 2011 -0700"
      },
      "message": "mm: output a list of loaded modules when we hit bad_page()\n\nWhen we get a bad_page bug report, it\u0027s useful to see what modules the\nuser had loaded.\n\nSigned-off-by: Dave Jones \u003cdavej@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e6436864f939e77226c8e971610539649ed5f869",
      "tree": "328e285f17460eb3287730e520648c624c7ca717",
      "parents": [
        "f973fab692184e457095957336468146439f0a6e"
      ],
      "author": {
        "name": "Larry Bassel",
        "email": "lbassel@codeaurora.org",
        "time": "Fri Oct 14 10:59:07 2011 -0700"
      },
      "committer": {
        "name": "Larry Bassel",
        "email": "lbassel@codeaurora.org",
        "time": "Sun Oct 30 09:14:51 2011 -0700"
      },
      "message": "mm: use required fixed size of movable zone if FIX_MOVABLE_ZONE\n\nIf FIX_MOVABLE_ZONE is enabled, we want a specific size and\nlocation of ZONE_MOVABLE.\n\nChange-Id: I0b858c7310cd328e1118abc9d5fe6f364bb4ffad\nSigned-off-by: Larry Bassel \u003clbassel@codeaurora.org\u003e\n"
    },
    {
      "commit": "2bb3e310159b65c88caf0c67a20ed257568be267",
      "tree": "e4ad01c06a9e27939781c5dd9d0cb92e6fcd54d5",
      "parents": [
        "2f53cb72c1574d3880d9e88e254b756565fe2f6d",
        "97596c34030ed28657ccafddb67e17a03890b90a"
      ],
      "author": {
        "name": "Colin Cross",
        "email": "ccross@android.com",
        "time": "Thu Oct 27 15:01:19 2011 -0700"
      },
      "committer": {
        "name": "Colin Cross",
        "email": "ccross@android.com",
        "time": "Thu Oct 27 15:01:19 2011 -0700"
      },
      "message": "Merge commit \u0027v3.0.8\u0027 into android-3.0\n"
    },
    {
      "commit": "2f53cb72c1574d3880d9e88e254b756565fe2f6d",
      "tree": "1defa097001d4eab5cd7cefe4e9e62c7b14c52ca",
      "parents": [
        "f41047365480510bfb12260d9f4fc7a8b95a734e"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Oct 24 16:33:42 2011 -0700"
      },
      "committer": {
        "name": "Colin Cross",
        "email": "ccross@android.com",
        "time": "Wed Oct 26 21:49:46 2011 -0700"
      },
      "message": "mm: avoid livelock on !__GFP_FS allocations\n\nUnder the following conditions, __alloc_pages_slowpath can loop\nforever:\ngfp_mask \u0026 __GFP_WAIT is true\ngfp_mask \u0026 __GFP_FS is false\nreclaim and compaction make no progress\norder \u003c\u003d PAGE_ALLOC_COSTLY_ORDER\n\nThe gfp conditions are normally invalid, because !__GFP_FS\ndisables most of the reclaim methods that __GFP_WAIT would\nwait for.  However, these conditions happen very often during\nsuspend and resume, when pm_restrict_gfp_mask() effectively\nconverts all GFP_KERNEL allocations into __GFP_WAIT.\n\nThe oom killer is not run because gfp_mask \u0026 __GFP_FS is false,\nbut should_alloc_retry will always return true when order is less\nthan PAGE_ALLOC_COSTLY_ORDER.  __alloc_pages_slowpath will\nloop forever between the rebalance label and should_alloc_retry,\nunless another thread happens to release enough pages to satisfy\nthe allocation.\n\nAdd a check to detect when PM has disabled __GFP_FS, and do not\nretry if reclaim is not making any progress.\n\n[taken from patch on lkml by Mel Gorman, commit message by ccross]\nChange-Id: I864a24e9d9fd98bd0e3d6e9c1e85b6c1b766850e\nSigned-off-by: Colin Cross \u003cccross@android.com\u003e\n"
    },
    {
      "commit": "66d52cb7c42a5df2a6aded5f29dba98ac2882064",
      "tree": "6d154119840cec22ecaeda4cc76a5d9d412d5e09",
      "parents": [
        "42274b5f8129467095e8b907b5bc9536caf30fa8"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Jul 25 17:12:30 2011 -0700"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@suse.de",
        "time": "Mon Oct 03 11:40:04 2011 -0700"
      },
      "message": "mm: page allocator: reconsider zones for allocation after direct reclaim\n\ncommit 76d3fbf8fbf6cc78ceb63549e0e0c5bc8a88f838 upstream.\n\nWith zone_reclaim_mode enabled, it\u0027s possible for zones to be considered\nfull in the zonelist_cache so they are skipped in the future.  If the\nprocess enters direct reclaim, the ZLC may still consider zones to be full\neven after reclaiming pages.  Reconsider all zones for allocation if\ndirect reclaim returns successfully.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Stefan Priebe \u003cs.priebe@profihost.ag\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@suse.de\u003e\n\n"
    },
    {
      "commit": "42274b5f8129467095e8b907b5bc9536caf30fa8",
      "tree": "9f4d0a531adab1763b673a081e49704c481f1d09",
      "parents": [
        "10927d967aa3e7031b0a573be8f002af607e6227"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Jul 25 17:12:29 2011 -0700"
      },
      "committer": {
        "name": "Greg Kroah-Hartman",
        "email": "gregkh@suse.de",
        "time": "Mon Oct 03 11:40:04 2011 -0700"
      },
      "message": "mm: page allocator: initialise ZLC for first zone eligible for zone_reclaim\n\ncommit cd38b115d5ad79b0100ac6daa103c4fe2c50a913 upstream.\n\nThere have been a small number of complaints about significant stalls\nwhile copying large amounts of data on NUMA machines reported on a\ndistribution bugzilla.  In these cases, zone_reclaim was enabled by\ndefault due to large NUMA distances.  In general, the complaints have not\nbeen about the workload itself unless it was a file server (in which case\nthe recommendation was disable zone_reclaim).\n\nThe stalls are mostly due to significant amounts of time spent scanning\nthe preferred zone for pages to free.  After a failure, it might fallback\nto another node (as zonelists are often node-ordered rather than\nzone-ordered) but stall quickly again when the next allocation attempt\noccurs.  In bad cases, each page allocated results in a full scan of the\npreferred zone.\n\nPatch 1 checks the preferred zone for recent allocation failure\n        which is particularly important if zone_reclaim has failed\n        recently.  This avoids rescanning the zone in the near future and\n        instead falling back to another node.  This may hurt node locality\n        in some cases but a failure to zone_reclaim is more expensive than\n        a remote access.\n\nPatch 2 clears the zlc information after direct reclaim.\n        Otherwise, zone_reclaim can mark zones full, direct reclaim can\n        reclaim enough pages but the zone is still not considered for\n        allocation.\n\nThis was tested on a 24-thread 2-node x86_64 machine.  The tests were\nfocused on large amounts of IO.  All tests were bound to the CPUs on\nnode-0 to avoid disturbances due to processes being scheduled on different\nnodes.  
The kernels tested are\n\n3.0-rc6-vanilla\t\tVanilla 3.0-rc6\nzlcfirst\t\tPatch 1 applied\nzlcreconsider\t\tPatches 1+2 applied\n\nFS-Mark\n./fs_mark  -d  /tmp/fsmark-10813  -D  100  -N  5000  -n  208  -L  35  -t  24  -S0  -s  524288\n                fsmark-3.0-rc6       3.0-rc6       \t\t3.0-rc6\n                   vanilla\t\t\t zlcfirs \tzlcreconsider\nFiles/s  min          54.90 ( 0.00%)       49.80 (-10.24%)       49.10 (-11.81%)\nFiles/s  mean        100.11 ( 0.00%)      135.17 (25.94%)      146.93 (31.87%)\nFiles/s  stddev       57.51 ( 0.00%)      138.97 (58.62%)      158.69 (63.76%)\nFiles/s  max         361.10 ( 0.00%)      834.40 (56.72%)      802.40 (55.00%)\nOverhead min       76704.00 ( 0.00%)    76501.00 ( 0.27%)    77784.00 (-1.39%)\nOverhead mean    1485356.51 ( 0.00%)  1035797.83 (43.40%)  1594680.26 (-6.86%)\nOverhead stddev  1848122.53 ( 0.00%)   881489.88 (109.66%)  1772354.90 ( 4.27%)\nOverhead max     7989060.00 ( 0.00%)  3369118.00 (137.13%) 10135324.00 (-21.18%)\nMMTests Statistics: duration\nUser/Sys Time Running Test (seconds)        501.49    493.91    499.93\nTotal Elapsed Time (seconds)               2451.57   2257.48   2215.92\n\nMMTests Statistics: vmstat\nPage Ins                                       46268       63840       66008\nPage Outs                                   90821596    90671128    88043732\nSwap Ins                                           0           0           0\nSwap Outs                                          0           0           0\nDirect pages scanned                        13091697     8966863     8971790\nKswapd pages scanned                               0     1830011     1831116\nKswapd pages reclaimed                             0     1829068     1829930\nDirect pages reclaimed                      13037777     8956828     8648314\nKswapd efficiency                               100%         99%         99%\nKswapd velocity                                0.000     810.643     826.346\nDirect 
efficiency                                99%         99%         96%\nDirect velocity                             5340.128    3972.068    4048.788\nPercentage direct scans                         100%         83%         83%\nPage writes by reclaim                             0           3           0\nSlabs scanned                                 796672      720640      720256\nDirect inode steals                          7422667     7160012     7088638\nKswapd inode steals                                0     1736840     2021238\n\nTest completes far faster with a large increase in the number of files\ncreated per second.  Standard deviation is high as a small number of\niterations were much higher than the mean.  The number of pages scanned by\nzone_reclaim is reduced and kswapd is used for more work.\n\nLARGE DD\n               \t\t3.0-rc6       3.0-rc6       3.0-rc6\n                   \tvanilla     zlcfirst     zlcreconsider\ndownload tar           59 ( 0.00%)   59 ( 0.00%)   55 ( 7.27%)\ndd source files       527 ( 0.00%)  296 (78.04%)  320 (64.69%)\ndelete source          36 ( 0.00%)   19 (89.47%)   20 (80.00%)\nMMTests Statistics: duration\nUser/Sys Time Running Test (seconds)        125.03    118.98    122.01\nTotal Elapsed Time (seconds)                624.56    375.02    398.06\n\nMMTests Statistics: vmstat\nPage Ins                                     3594216      439368      407032\nPage Outs                                   23380832    23380488    23377444\nSwap Ins                                           0           0           0\nSwap Outs                                          0         436         287\nDirect pages scanned                        17482342    69315973    82864918\nKswapd pages scanned                               0      519123      575425\nKswapd pages reclaimed                             0      466501      522487\nDirect pages reclaimed                       5858054     2732949     2712547\nKswapd efficiency                
               100%         89%         90%\nKswapd velocity                                0.000    1384.254    1445.574\nDirect efficiency                                33%          3%          3%\nDirect velocity                            27991.453  184832.737  208171.929\nPercentage direct scans                         100%         99%         99%\nPage writes by reclaim                             0        5082       13917\nSlabs scanned                                  17280       29952       35328\nDirect inode steals                           115257     1431122      332201\nKswapd inode steals                                0           0      979532\n\nThis test downloads a large tarfile and copies it with dd a number of\ntimes - similar to the most recent bug report I\u0027ve dealt with.  Time to\ncompletion is reduced.  The number of pages scanned directly is still\ndisturbingly high with a low efficiency but this is likely due to the\nnumber of dirty pages encountered.  The figures could probably be improved\nwith more work around how kswapd is used and how dirty pages are handled\nbut that is separate work and this result is significant on its own.\n\nStreaming Mapped Writer\nMMTests Statistics: duration\nUser/Sys Time Running Test (seconds)        124.47    111.67    112.64\nTotal Elapsed Time (seconds)               2138.14   1816.30   1867.56\n\nMMTests Statistics: vmstat\nPage Ins                                       90760       89124       89516\nPage Outs                                  121028340   120199524   120736696\nSwap Ins                                           0          86          55\nSwap Outs                                          0           0           0\nDirect pages scanned                       114989363    96461439    96330619\nKswapd pages scanned                        56430948    56965763    57075875\nKswapd pages reclaimed                      27743219    27752044    27766606\nDirect pages reclaimed                   
      49777       46884       36655\nKswapd efficiency                                49%         48%         48%\nKswapd velocity                            26392.541   31363.631   30561.736\nDirect efficiency                                 0%          0%          0%\nDirect velocity                            53780.091   53108.759   51581.004\nPercentage direct scans                          67%         62%         62%\nPage writes by reclaim                           385         122        1513\nSlabs scanned                                  43008       39040       42112\nDirect inode steals                                0          10           8\nKswapd inode steals                              733         534         477\n\nThis test just creates a large file mapping and writes to it linearly.\nTime to completion is again reduced.\n\nThe gains are mostly down to two things.  In many cases, there is less\nscanning as zone_reclaim simply gives up faster due to recent failures.\nThe second reason is that memory is used more efficiently.  Instead of\nscanning the preferred zone every time, the allocator falls back to\nanother zone and uses it instead improving overall memory utilisation.\n\nThis patch: initialise ZLC for first zone eligible for zone_reclaim.\n\nThe zonelist cache (ZLC) is used among other things to record if\nzone_reclaim() failed for a particular zone recently.  The intention is to\navoid a high cost scanning extremely long zonelists or scanning within the\nzone uselessly.\n\nCurrently the zonelist cache is setup only after the first zone has been\nconsidered and zone_reclaim() has been called.  The objective was to avoid\na costly setup but zone_reclaim is itself quite expensive.  If it is\nfailing regularly such as the first eligible zone having mostly mapped\npages, the cost in scanning and allocation stalls is far higher than the\nZLC initialisation step.\n\nThis patch initialises ZLC before the first eligible zone calls\nzone_reclaim().  
Once initialised, it is checked whether the zone failed\nzone_reclaim recently.  If it has, the zone is skipped.  As the first zone\nis now being checked, additional care has to be taken about zones marked\nfull.  A zone can be marked \"full\" because it should not have enough\nunmapped pages for zone_reclaim but this is excessive as direct reclaim or\nkswapd may succeed where zone_reclaim fails.  Only mark zones \"full\" after\nzone_reclaim fails if it failed to reclaim enough pages after scanning.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Stefan Priebe \u003cs.priebe@profihost.ag\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@suse.de\u003e\n\n"
    },
    {
      "commit": "dd48c085c1cdf9446f92826f1fd451167fb6c2fd",
      "tree": "d62870378cc08af36ea7a41531436bdebddec232",
      "parents": [
        "f48d1915b86f06a943087e5f9b29542a1ef4cd4d"
      ],
      "author": {
        "name": "Akinobu Mita",
        "email": "akinobu.mita@gmail.com",
        "time": "Wed Aug 03 16:21:01 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Aug 03 14:25:20 2011 -1000"
      },
      "message": "fault-injection: add ability to export fault_attr in arbitrary directory\n\ninit_fault_attr_dentries() is used to export fault_attr via debugfs.\nBut it can only export it in debugfs root directory.\n\nPer Forlin is working on mmc_fail_request which adds support to inject\ndata errors after a completed host transfer in MMC subsystem.\n\nThe fault_attr for mmc_fail_request should be defined per mmc host and\nexport it in debugfs directory per mmc host like\n/sys/kernel/debug/mmc0/mmc_fail_request.\n\ninit_fault_attr_dentries() doesn\u0027t help for mmc_fail_request.  So this\nintroduces fault_create_debugfs_attr() which is able to create a\ndirectory in the arbitrary directory and replace\ninit_fault_attr_dentries().\n\n[akpm@linux-foundation.org: extraneous semicolon, per Randy]\nSigned-off-by: Akinobu Mita \u003cakinobu.mita@gmail.com\u003e\nTested-by: Per Forlin \u003cper.forlin@linaro.org\u003e\nCc: Jens Axboe \u003caxboe@kernel.dk\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Pekka Enberg \u003cpenberg@kernel.org\u003e\nCc: Matt Mackall \u003cmpm@selenic.com\u003e\nCc: Randy Dunlap \u003crdunlap@xenotime.net\u003e\nCc: Stephen Rothwell \u003csfr@canb.auug.org.au\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b2588c4b4c3c075e9b45d61065d86c60de2b6441",
      "tree": "66942e8a252101aaa7e1f0a2ee2c3d8288dda659",
      "parents": [
        "810f09b87b75d7cc3906ffffe4311003f37caa2a"
      ],
      "author": {
        "name": "Akinobu Mita",
        "email": "akinobu.mita@gmail.com",
        "time": "Tue Jul 26 16:09:03 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 26 16:49:46 2011 -0700"
      },
      "message": "fail_page_alloc: simplify debugfs initialization\n\nNow cleanup_fault_attr_dentries() recursively removes a directory, So we\ncan simplify the error handling in the initialization code and no need\nto hold dentry structs for each debugfs file.\n\nSigned-off-by: Akinobu Mita \u003cakinobu.mita@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7f5ddcc8d3eaccd5e169fda738530f937509645e",
      "tree": "14f5581871040f98bbdab864314e1afc00a19a4c",
      "parents": [
        "8307fc257cf3931d87e172bd8663e80c3d1e56a3"
      ],
      "author": {
        "name": "Akinobu Mita",
        "email": "akinobu.mita@gmail.com",
        "time": "Tue Jul 26 16:09:02 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Jul 26 16:49:46 2011 -0700"
      },
      "message": "fault-injection: use debugfs_remove_recursive\n\nUse debugfs_remove_recursive() to simplify initialization and\ndeinitialization of fault injection debugfs files.\n\nSigned-off-by: Akinobu Mita \u003cakinobu.mita@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "76d3fbf8fbf6cc78ceb63549e0e0c5bc8a88f838",
      "tree": "cebd9474333db6965fe6af7cc3f652d3091b658b",
      "parents": [
        "cd38b115d5ad79b0100ac6daa103c4fe2c50a913"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Jul 25 17:12:30 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Jul 25 20:57:10 2011 -0700"
      },
      "message": "mm: page allocator: reconsider zones for allocation after direct reclaim\n\nWith zone_reclaim_mode enabled, it\u0027s possible for zones to be considered\nfull in the zonelist_cache so they are skipped in the future.  If the\nprocess enters direct reclaim, the ZLC may still consider zones to be full\neven after reclaiming pages.  Reconsider all zones for allocation if\ndirect reclaim returns successfully.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "cd38b115d5ad79b0100ac6daa103c4fe2c50a913",
      "tree": "ffa8b05c42d0c53ffdd1dee6ce2570bba1d5db2f",
      "parents": [
        "1d65f86db14806cf7b1218c7b4ecb8b4db5af27d"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mgorman@suse.de",
        "time": "Mon Jul 25 17:12:29 2011 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Jul 25 20:57:10 2011 -0700"
      },
      "message": "mm: page allocator: initialise ZLC for first zone eligible for zone_reclaim\n\nThere have been a small number of complaints about significant stalls\nwhile copying large amounts of data on NUMA machines reported on a\ndistribution bugzilla.  In these cases, zone_reclaim was enabled by\ndefault due to large NUMA distances.  In general, the complaints have not\nbeen about the workload itself unless it was a file server (in which case\nthe recommendation was disable zone_reclaim).\n\nThe stalls are mostly due to significant amounts of time spent scanning\nthe preferred zone for pages to free.  After a failure, it might fallback\nto another node (as zonelists are often node-ordered rather than\nzone-ordered) but stall quickly again when the next allocation attempt\noccurs.  In bad cases, each page allocated results in a full scan of the\npreferred zone.\n\nPatch 1 checks the preferred zone for recent allocation failure\n        which is particularly important if zone_reclaim has failed\n        recently.  This avoids rescanning the zone in the near future and\n        instead falling back to another node.  This may hurt node locality\n        in some cases but a failure to zone_reclaim is more expensive than\n        a remote access.\n\nPatch 2 clears the zlc information after direct reclaim.\n        Otherwise, zone_reclaim can mark zones full, direct reclaim can\n        reclaim enough pages but the zone is still not considered for\n        allocation.\n\nThis was tested on a 24-thread 2-node x86_64 machine.  The tests were\nfocused on large amounts of IO.  All tests were bound to the CPUs on\nnode-0 to avoid disturbances due to processes being scheduled on different\nnodes.  
The kernels tested are\n\n3.0-rc6-vanilla\t\tVanilla 3.0-rc6\nzlcfirst\t\tPatch 1 applied\nzlcreconsider\t\tPatches 1+2 applied\n\nFS-Mark\n./fs_mark  -d  /tmp/fsmark-10813  -D  100  -N  5000  -n  208  -L  35  -t  24  -S0  -s  524288\n                fsmark-3.0-rc6       3.0-rc6       \t\t3.0-rc6\n                   vanilla\t\t\t zlcfirs \tzlcreconsider\nFiles/s  min          54.90 ( 0.00%)       49.80 (-10.24%)       49.10 (-11.81%)\nFiles/s  mean        100.11 ( 0.00%)      135.17 (25.94%)      146.93 (31.87%)\nFiles/s  stddev       57.51 ( 0.00%)      138.97 (58.62%)      158.69 (63.76%)\nFiles/s  max         361.10 ( 0.00%)      834.40 (56.72%)      802.40 (55.00%)\nOverhead min       76704.00 ( 0.00%)    76501.00 ( 0.27%)    77784.00 (-1.39%)\nOverhead mean    1485356.51 ( 0.00%)  1035797.83 (43.40%)  1594680.26 (-6.86%)\nOverhead stddev  1848122.53 ( 0.00%)   881489.88 (109.66%)  1772354.90 ( 4.27%)\nOverhead max     7989060.00 ( 0.00%)  3369118.00 (137.13%) 10135324.00 (-21.18%)\nMMTests Statistics: duration\nUser/Sys Time Running Test (seconds)        501.49    493.91    499.93\nTotal Elapsed Time (seconds)               2451.57   2257.48   2215.92\n\nMMTests Statistics: vmstat\nPage Ins                                       46268       63840       66008\nPage Outs                                   90821596    90671128    88043732\nSwap Ins                                           0           0           0\nSwap Outs                                          0           0           0\nDirect pages scanned                        13091697     8966863     8971790\nKswapd pages scanned                               0     1830011     1831116\nKswapd pages reclaimed                             0     1829068     1829930\nDirect pages reclaimed                      13037777     8956828     8648314\nKswapd efficiency                               100%         99%         99%\nKswapd velocity                                0.000     810.643     826.346\nDirect 
efficiency                                99%         99%         96%\nDirect velocity                             5340.128    3972.068    4048.788\nPercentage direct scans                         100%         83%         83%\nPage writes by reclaim                             0           3           0\nSlabs scanned                                 796672      720640      720256\nDirect inode steals                          7422667     7160012     7088638\nKswapd inode steals                                0     1736840     2021238\n\nTest completes far faster with a large increase in the number of files\ncreated per second.  Standard deviation is high as a small number of\niterations were much higher than the mean.  The number of pages scanned by\nzone_reclaim is reduced and kswapd is used for more work.\n\nLARGE DD\n               \t\t3.0-rc6       3.0-rc6       3.0-rc6\n                   \tvanilla     zlcfirst     zlcreconsider\ndownload tar           59 ( 0.00%)   59 ( 0.00%)   55 ( 7.27%)\ndd source files       527 ( 0.00%)  296 (78.04%)  320 (64.69%)\ndelete source          36 ( 0.00%)   19 (89.47%)   20 (80.00%)\nMMTests Statistics: duration\nUser/Sys Time Running Test (seconds)        125.03    118.98    122.01\nTotal Elapsed Time (seconds)                624.56    375.02    398.06\n\nMMTests Statistics: vmstat\nPage Ins                                     3594216      439368      407032\nPage Outs                                   23380832    23380488    23377444\nSwap Ins                                           0           0           0\nSwap Outs                                          0         436         287\nDirect pages scanned                        17482342    69315973    82864918\nKswapd pages scanned                               0      519123      575425\nKswapd pages reclaimed                             0      466501      522487\nDirect pages reclaimed                       5858054     2732949     2712547\nKswapd efficiency                
               100%         89%         90%\nKswapd velocity                                0.000    1384.254    1445.574\nDirect efficiency                                33%          3%          3%\nDirect velocity                            27991.453  184832.737  208171.929\nPercentage direct scans                         100%         99%         99%\nPage writes by reclaim                             0        5082       13917\nSlabs scanned                                  17280       29952       35328\nDirect inode steals                           115257     1431122      332201\nKswapd inode steals                                0           0      979532\n\nThis test downloads a large tarfile and copies it with dd a number of\ntimes - similar to the most recent bug report I\u0027ve dealt with.  Time to\ncompletion is reduced.  The number of pages scanned directly is still\ndisturbingly high with a low efficiency but this is likely due to the\nnumber of dirty pages encountered.  The figures could probably be improved\nwith more work around how kswapd is used and how dirty pages are handled\nbut that is separate work and this result is significant on its own.\n\nStreaming Mapped Writer\nMMTests Statistics: duration\nUser/Sys Time Running Test (seconds)        124.47    111.67    112.64\nTotal Elapsed Time (seconds)               2138.14   1816.30   1867.56\n\nMMTests Statistics: vmstat\nPage Ins                                       90760       89124       89516\nPage Outs                                  121028340   120199524   120736696\nSwap Ins                                           0          86          55\nSwap Outs                                          0           0           0\nDirect pages scanned                       114989363    96461439    96330619\nKswapd pages scanned                        56430948    56965763    57075875\nKswapd pages reclaimed                      27743219    27752044    27766606\nDirect pages reclaimed                   
      49777       46884       36655\nKswapd efficiency                                49%         48%         48%\nKswapd velocity                            26392.541   31363.631   30561.736\nDirect efficiency                                 0%          0%          0%\nDirect velocity                            53780.091   53108.759   51581.004\nPercentage direct scans                          67%         62%         62%\nPage writes by reclaim                           385         122        1513\nSlabs scanned                                  43008       39040       42112\nDirect inode steals                                0          10           8\nKswapd inode steals                              733         534         477\n\nThis test just creates a large file mapping and writes to it linearly.\nTime to completion is again reduced.\n\nThe gains are mostly down to two things.  In many cases, there is less\nscanning as zone_reclaim simply gives up faster due to recent failures.\nThe second reason is that memory is used more efficiently.  Instead of\nscanning the preferred zone every time, the allocator falls back to\nanother zone and uses it instead improving overall memory utilisation.\n\nThis patch: initialise ZLC for first zone eligible for zone_reclaim.\n\nThe zonelist cache (ZLC) is used among other things to record if\nzone_reclaim() failed for a particular zone recently.  The intention is to\navoid a high cost scanning extremely long zonelists or scanning within the\nzone uselessly.\n\nCurrently the zonelist cache is setup only after the first zone has been\nconsidered and zone_reclaim() has been called.  The objective was to avoid\na costly setup but zone_reclaim is itself quite expensive.  If it is\nfailing regularly such as the first eligible zone having mostly mapped\npages, the cost in scanning and allocation stalls is far higher than the\nZLC initialisation step.\n\nThis patch initialises ZLC before the first eligible zone calls\nzone_reclaim().  
Once initialised, it is checked whether the zone failed\nzone_reclaim recently.  If it has, the zone is skipped.  As the first zone\nis now being checked, additional care has to be taken about zones marked\nfull.  A zone can be marked \"full\" because it should not have enough\nunmapped pages for zone_reclaim but this is excessive as direct reclaim or\nkswapd may succeed where zone_reclaim fails.  Only mark zones \"full\" after\nzone_reclaim fails if it failed to reclaim enough pages after scanning.\n\nSigned-off-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Minchan Kim \u003cminchan.kim@gmail.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7c0caeb866b0f648d91bb75b8bc6f86af95bb033",
      "tree": "042804fe716310a4de4effbbaa4461237e2b5d4a",
      "parents": [
        "67e24bcb725cabd15ef577bf301275d03d6086d7"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jul 14 11:43:42 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Thu Jul 14 11:47:43 2011 -0700"
      },
      "message": "memblock: Add optional region-\u003enid\n\nFrom 83103b92f3234ec830852bbc5c45911bd6cbdb20 Mon Sep 17 00:00:00 2001\nFrom: Tejun Heo \u003ctj@kernel.org\u003e\nDate: Thu, 14 Jul 2011 11:22:16 +0200\n\nAdd optional region-\u003enid which can be enabled by arch using\nCONFIG_HAVE_MEMBLOCK_NODE_MAP.  When enabled, memblock also carries\nNUMA node information and replaces early_node_map[].\n\nNewly added memblocks have MAX_NUMNODES as nid.  Arch can then call\nmemblock_set_node() to set node information.  memblock takes care of\nmerging and node affine allocations w.r.t. node information.\n\nWhen MEMBLOCK_NODE_MAP is enabled, early_node_map[], related data\nstructures and functions to manipulate and iterate it are disabled.\nmemblock version of __next_mem_pfn_range() is provided such that\nfor_each_mem_pfn_range() behaves the same and its users don\u0027t have to\nbe updated.\n\n-v2: Yinghai spotted section mismatch caused by missing\n     __init_memblock in memblock_set_node().  Fixed.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/20110714094342.GF3455@htj.dyndns.org\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    },
    {
      "commit": "eb40c4c27f1722f058e4713ccfedebac577d5190",
      "tree": "b471a4451c7cab125b3aafced4c77c7958fd711d",
      "parents": [
        "e64980405cc6aa74ef178d8d9aa4018c867ceed1"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 12 10:46:35 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Thu Jul 14 11:45:35 2011 -0700"
      },
      "message": "memblock, x86: Replace memblock_x86_find_in_range_node() with generic memblock calls\n\nWith the previous changes, generic NUMA aware memblock API has feature\nparity with memblock_x86_find_in_range_node().  There currently are\ntwo users - x86 setup_node_data() and __alloc_memory_core_early() in\nnobootmem.c.\n\nThis patch converts the former to use memblock_alloc_nid() and the\nlatter memblock_find_range_in_node(), and kills\nmemblock_x86_find_in_range_node() and related functions including\nfind_memory_early_core_early() in page_alloc.c.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/1310460395-30913-9-git-send-email-tj@kernel.org\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: \"H. Peter Anvin\" \u003chpa@zytor.com\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    },
    {
      "commit": "c13291a536b835b2ab278ab201f2cb1ce22f2785",
      "tree": "6bb3a2fd47e22d75308314b14f3a0f0a4d338141",
      "parents": [
        "96e907d1360240d1958fe8ce3a3ac640733330d4"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 12 10:46:30 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Thu Jul 14 11:45:31 2011 -0700"
      },
      "message": "bootmem: Use for_each_mem_pfn_range() in page_alloc.c\n\nThe previous patch added for_each_mem_pfn_range() which is more\nversatile than for_each_active_range_index_in_nid().  This patch\nreplaces for_each_active_range_index_in_nid() and open coded\nearly_node_map[] walks with for_each_mem_pfn_range().\n\nAll conversions in this patch are straight-forward and shouldn\u0027t cause\nany functional difference.  After the conversions,\nfor_each_active_range_index_in_nid() doesn\u0027t have any user left and is\nremoved.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/1310460395-30913-4-git-send-email-tj@kernel.org\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    },
    {
      "commit": "96e907d1360240d1958fe8ce3a3ac640733330d4",
      "tree": "ac1fec297f9b6d620a76dc0cd0476b74bc628a95",
      "parents": [
        "5dfe8660a3d7f1ee1265c3536433ee53da3f98a3"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 12 10:46:29 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Thu Jul 14 11:45:30 2011 -0700"
      },
      "message": "bootmem: Reimplement __absent_pages_in_range() using for_each_mem_pfn_range()\n\n__absent_pages_in_range() was needlessly complex.  Reimplement it\nusing for_each_mem_pfn_range().\n\nAlso, update zone_absent_pages_in_node() such that it doesn\u0027t call\n__absent_pages_in_range() with @zone_start_pfn which is larger than\n@zone_end_pfn.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/1310460395-30913-3-git-send-email-tj@kernel.org\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    },
    {
      "commit": "5dfe8660a3d7f1ee1265c3536433ee53da3f98a3",
      "tree": "c58232b88741ba1d8cce417b62f3f658369ad9c2",
      "parents": [
        "fc769a8e70a3348d5de49e5f69f6aff810157360"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jul 14 09:46:10 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Thu Jul 14 11:45:29 2011 -0700"
      },
      "message": "bootmem: Replace work_with_active_regions() with for_each_mem_pfn_range()\n\nCallback based iteration is cumbersome and much less useful than\nfor_each_*() iterator.  This patch implements for_each_mem_pfn_range()\nwhich replaces work_with_active_regions().  All the current users of\nwork_with_active_regions() are converted.\n\nThis simplifies walking over early_node_map and will allow converting\ninternal logics in page_alloc to use iterator instead of walking\nearly_node_map directly, which in turn will enable moving node\ninformation to memblock.\n\npowerpc change is only compile tested.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/20110714074610.GD3455@htj.dyndns.org\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    },
    {
      "commit": "1f5026a7e21e409c2b9dd54f6dfb9446511fb7c5",
      "tree": "bcf0529d5f05ea8b685d6c0fddcb3197c2fab49c",
      "parents": [
        "348968eb151e2569ad0ebe19b2f9c3c25b5c816a"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 12 09:58:09 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Wed Jul 13 16:36:01 2011 -0700"
      },
      "message": "memblock: Kill MEMBLOCK_ERROR\n\n25818f0f28 (memblock: Make MEMBLOCK_ERROR be 0) thankfully made\nMEMBLOCK_ERROR 0 and there already are codes which expect error return\nto be 0.  There\u0027s no point in keeping MEMBLOCK_ERROR around.  End its\nmisery.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/1310457490-3356-6-git-send-email-tj@kernel.org\nCc: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    },
    {
      "commit": "53348f27168534561c0c814843bbf181314374f4",
      "tree": "619f7945ecb15317dd211c68267eb6603295521f",
      "parents": [
        "bf61549a2d8e0326f5d6e4d1718883a7212d725f"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 12 09:58:06 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Wed Jul 13 16:35:56 2011 -0700"
      },
      "message": "bootmem: Fix __free_pages_bootmem() to use @order properly\n\na226f6c899 (FRV: Clean up bootmem allocator\u0027s page freeing algorithm)\nseparated out __free_pages_bootmem() from free_all_bootmem_core().\n__free_pages_bootmem() takes @order argument but it assumes @order is\neither 0 or ilog2(BITS_PER_LONG).  Note that all the current users\nmatch that assumption and this doesn\u0027t cause actual problems.\n\nFix it by using 1 \u003c\u003c order instead of BITS_PER_LONG.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/1310457490-3356-3-git-send-email-tj@kernel.org\nCc: David Howells \u003cdhowells@redhat.com\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    },
    {
      "commit": "1e01979c8f502ac13e3cdece4f38712c5944e6e8",
      "tree": "d47c4700bfdcffc3f7f68b19d50c588c20689b48",
      "parents": [
        "d0ead157387f19801beb1b419568723b2e9b7c79"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 12 09:45:34 2011 +0200"
      },
      "committer": {
        "name": "H. Peter Anvin",
        "email": "hpa@linux.intel.com",
        "time": "Tue Jul 12 21:58:29 2011 -0700"
      },
      "message": "x86, numa: Implement pfn -\u003e nid mapping granularity check\n\nSPARSEMEM w/o VMEMMAP and DISCONTIGMEM, both used only on 32bit, use\nsections array to map pfn to nid which is limited in granularity.  If\nNUMA nodes are laid out such that the mapping cannot be accurate, boot\nwill fail triggering BUG_ON() in mminit_verify_page_links().\n\nOn 32bit, it\u0027s 512MiB w/ PAE and SPARSEMEM.  This seems to have been\ngranular enough until commit 2706a0bf7b (x86, NUMA: Enable\nCONFIG_AMD_NUMA on 32bit too).  Apparently, there is a machine which\naligns NUMA nodes to 128MiB and has only AMD NUMA but not SRAT.  This\nled to the following BUG_ON().\n\n On node 0 totalpages: 2096615\n   DMA zone: 32 pages used for memmap\n   DMA zone: 0 pages reserved\n   DMA zone: 3927 pages, LIFO batch:0\n   Normal zone: 1740 pages used for memmap\n   Normal zone: 220978 pages, LIFO batch:31\n   HighMem zone: 16405 pages used for memmap\n   HighMem zone: 1853533 pages, LIFO batch:31\n BUG: Int 6: CR2   (null)\n      EDI   (null)  ESI 00000002  EBP 00000002  ESP c1543ecc\n      EBX f2400000  EDX 00000006  ECX   (null)  EAX 00000001\n      err   (null)  EIP c16209aa   CS 00000060  flg 00010002\n Stack: f2400000 00220000 f7200800 c1620613 00220000 01000000 04400000 00238000\n          (null) f7200000 00000002 f7200b58 f7200800 c1620929 000375fe   (null)\n        f7200b80 c16395f0 00200a02 f7200a80   (null) 000375fe 00000002   (null)\n Pid: 0, comm: swapper Not tainted 2.6.39-rc5-00181-g2706a0b #17\n Call Trace:\n  [\u003cc136b1e5\u003e] ? early_fault+0x2e/0x2e\n  [\u003cc16209aa\u003e] ? mminit_verify_page_links+0x12/0x42\n  [\u003cc1620613\u003e] ? memmap_init_zone+0xaf/0x10c\n  [\u003cc1620929\u003e] ? free_area_init_node+0x2b9/0x2e3\n  [\u003cc1607e99\u003e] ? free_area_init_nodes+0x3f2/0x451\n  [\u003cc1601d80\u003e] ? paging_init+0x112/0x118\n  [\u003cc15f578d\u003e] ? setup_arch+0x791/0x82f\n  [\u003cc15f43d9\u003e] ? 
start_kernel+0x6a/0x257\n\nThis patch implements node_map_pfn_alignment() which determines\nmaximum internode alignment and update numa_register_memblks() to\nreject NUMA configuration if alignment exceeds the pfn -\u003e nid mapping\ngranularity of the memory model as determined by PAGES_PER_SECTION.\n\nThis makes the problematic machine boot w/ flatmem by rejecting the\nNUMA config and provides protection against crazy NUMA configurations.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLink: http://lkml.kernel.org/r/20110712074534.GB2872@htj.dyndns.org\nLKML-Reference: \u003c20110628174613.GP478@escobedo.osrc.amd.com\u003e\nReported-and-Tested-by: Hans Rosenfeld \u003chans.rosenfeld@amd.com\u003e\nCc: Conny Seidel \u003cconny.seidel@amd.com\u003e\nSigned-off-by: H. Peter Anvin \u003chpa@linux.intel.com\u003e\n"
    }
  ],
  "next": "83de731ffcc6777a33e8a6132c7da8d91faac9ca"
}
