)]}'
{
  "log": [
    {
      "commit": "0c0e6195896535481173df98935ad8db174f4d45",
      "tree": "2b35d3b81ba54b5d38e691d2a2019f4bcdfd1dce",
      "parents": [
        "a5d76b54a3f3a40385d7f76069a2feac9f1bad63"
      ],
      "author": {
        "name": "KAMEZAWA Hiroyuki",
        "email": "kamezawa.hiroyu@jp.fujitsu.com",
        "time": "Tue Oct 16 01:26:12 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:02 2007 -0700"
      },
      "message": "memory unplug: page offline\n\nLogic.\n - set all pages in  [start,end)  as isolated migration-type.\n   by this, all free pages in the range will be not-for-use.\n - Migrate all LRU pages in the range.\n - Test all pages in the range\u0027s refcnt is zero or not.\n\nTodo:\n - allocate migration destination page from better area.\n - confirm page_count(page)\u003d\u003d 0 \u0026\u0026 PageReserved(page) page is safe to be freed..\n (I don\u0027t like this kind of page but..\n - Find out pages which cannot be migrated.\n - more running tests.\n - Use reclaim for unplugging other memory type area.\n\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a5d76b54a3f3a40385d7f76069a2feac9f1bad63",
      "tree": "f58c432a4224b3be032bd4a4afa79dfa55d198a6",
      "parents": [
        "75884fb1c6388f3713ddcca662f3647b3129aaeb"
      ],
      "author": {
        "name": "KAMEZAWA Hiroyuki",
        "email": "kamezawa.hiroyu@jp.fujitsu.com",
        "time": "Tue Oct 16 01:26:11 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:02 2007 -0700"
      },
      "message": "memory unplug: page isolation\n\nImplement generic chunk-of-pages isolation method by using page grouping ops.\n\nThis patch add MIGRATE_ISOLATE to MIGRATE_TYPES. By this\n - MIGRATE_TYPES increases.\n - bitmap for migratetype is enlarged.\n\npages of MIGRATE_ISOLATE migratetype will not be allocated even if it is free.\nBy this, you can isolated *freed* pages from users. How-to-free pages is not\na purpose of this patch. You may use reclaim and migrate codes to free pages.\n\nIf start_isolate_page_range(start,end) is called,\n - migratetype of the range turns to be MIGRATE_ISOLATE  if\n   its type is MIGRATE_MOVABLE. (*) this check can be updated if other\n   memory reclaiming works make progress.\n - MIGRATE_ISOLATE is not on migratetype fallback list.\n - All free pages and will-be-freed pages are isolated.\nTo check all pages in the range are isolated or not,  use test_pages_isolated(),\nTo cancel isolation, use undo_isolate_page_range().\n\nChanges V6 -\u003e V7\n - removed unnecessary #ifdef\n\nThere are HOLES_IN_ZONE handling codes...I\u0027m glad if we can remove them..\n\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "48f13bf3e742fca8aab87f6c39451d03bf5952d4",
      "tree": "668160019ab157500a90655cf44f798ed3c77893",
      "parents": [
        "ea3061d227816d00717446ac12b853d7ae04b4fe"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:26:10 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:01 2007 -0700"
      },
      "message": "Breakout page_order() to internal.h to avoid special knowledge of the buddy allocator\n\nThe statistics patch later needs to know what order a free page is on the free\nlists.  Rather than having special knowledge of page_private() when\nPageBuddy() is set, this patch places out page_order() in internal.h and adds\na VM_BUG_ON to catch using it on non-PageBuddy pages.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "484f51f820199ab3e0ef15d08f1b6be20f53bf39",
      "tree": "eea15a3cb546463488595e7a5fae8628a5dd7877",
      "parents": [
        "467c996c1e1910633fa8e7adc9b052aa3ed5f97c"
      ],
      "author": {
        "name": "Adrian Bunk",
        "email": "bunk@stusta.de",
        "time": "Tue Oct 16 01:26:03 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:01 2007 -0700"
      },
      "message": "mm/page_alloc.c: make code static\n\nThis patch makes needlessly global code static.\n\nSigned-off-by: Adrian Bunk \u003cbunk@stusta.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "467c996c1e1910633fa8e7adc9b052aa3ed5f97c",
      "tree": "09e0e70160386be1bdaa12801afddf287e12c8a1",
      "parents": [
        "d9c2340052278d8eb2ffb16b0484f8f794def4de"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:26:02 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Print out statistics in relation to fragmentation avoidance to /proc/pagetypeinfo\n\nThis patch provides fragmentation avoidance statistics via /proc/pagetypeinfo.\n The information is collected only on request so there is no runtime overhead.\n The statistics are in three parts:\n\nThe first part prints information on the size of blocks that pages are\nbeing grouped on and looks like\n\nPage block order: 10\nPages per block:  1024\n\nThe second part is a more detailed version of /proc/buddyinfo and looks like\n\nFree pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10\nNode    0, zone      DMA, type    Unmovable      0      0      0      0      0      0      0      0      0      0      0\nNode    0, zone      DMA, type  Reclaimable      1      0      0      0      0      0      0      0      0      0      0\nNode    0, zone      DMA, type      Movable      0      0      0      0      0      0      0      0      0      0      0\nNode    0, zone      DMA, type      Reserve      0      4      4      0      0      0      0      1      0      1      0\nNode    0, zone   Normal, type    Unmovable    111      8      4      4      2      3      1      0      0      0      0\nNode    0, zone   Normal, type  Reclaimable    293     89      8      0      0      0      0      0      0      0      0\nNode    0, zone   Normal, type      Movable      1      6     13      9      7      6      3      0      0      0      0\nNode    0, zone   Normal, type      Reserve      0      0      0      0      0      0      0      0      0      0      4\n\nThe third part looks like\n\nNumber of blocks type     Unmovable  Reclaimable      Movable      Reserve\nNode 0, zone      DMA            0            1            2            1\nNode 0, zone   Normal            3           17           94            4\n\nTo walk the zones within a node with interrupts disabled, walk_zones_in_node()\nis introduced and shared between /proc/buddyinfo, /proc/zoneinfo and\n/proc/pagetypeinfo to reduce code duplication.  It seems specific to what\nvmstat.c requires but could be broken out as a general utility function in\nmmzone.c if there were other other potential users.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nAcked-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d9c2340052278d8eb2ffb16b0484f8f794def4de",
      "tree": "aec7e4e11473a4fcdfd389c718544780a042c6df",
      "parents": [
        "d100313fd615cc30374ff92e0b3facb053838330"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:26:01 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Do not depend on MAX_ORDER when grouping pages by mobility\n\nCurrently mobility grouping works at the MAX_ORDER_NR_PAGES level.  This makes\nsense for the majority of users where this is also the huge page size.\nHowever, on platforms like ia64 where the huge page size is runtime\nconfigurable it is desirable to group at a lower order.  On x86_64 and\noccasionally on x86, the hugepage size may not always be MAX_ORDER_NR_PAGES.\n\nThis patch groups pages together based on the value of HUGETLB_PAGE_ORDER.  It\nuses a compile-time constant if possible and a variable where the huge page\nsize is runtime configurable.\n\nIt is assumed that grouping should be done at the lowest sensible order and\nthat the user would not want to override this.  If this is not true,\npage_block order could be forced to a variable initialised via a boot-time\nkernel parameter.\n\nOne potential issue with this patch is that IA64 now parses hugepagesz with\nearly_param() instead of __setup().  __setup() is called after the memory\nallocator has been initialised and the pageblock bitmaps already setup.  In\ntests on one IA64 there did not seem to be any problem with using\nearly_param() and in fact may be more correct as it guarantees the parameter\nis handled before the parsing of hugepages\u003d.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nAcked-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d100313fd615cc30374ff92e0b3facb053838330",
      "tree": "f0bcd5e3b07bee40a65182c63b54baceca366849",
      "parents": [
        "64c5e135bf5a2a7f0ededb3435a31adbe0202f0c"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:26:00 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Fix calculation in move_freepages_block for counting pages\n\nmove_freepages_block() returns the number of blocks moved.  This value is used\nto determine if a block of pages should be stolen for the exclusive use of a\nmigrate type or not.  However, the value returned is being used correctly.\nThis patch fixes the calculation to return the number of base pages that have\nbeen moved.\n\nThis should be considered a fix to the patch\nmove-free-pages-between-lists-on-steal.patch\n\nCredit to Andy Whitcroft for spotting the problem.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nAcked-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "64c5e135bf5a2a7f0ededb3435a31adbe0202f0c",
      "tree": "cb4ff93cbcc3c27176723419a313d7c53545d36b",
      "parents": [
        "ac0e5b7a6b93fb291b01fe1e951e3c16bcdd3503"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:59 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "don\u0027t group high order atomic allocations\n\nGrouping high-order atomic allocations together was intended to allow\nbursty users of atomic allocations to work such as e1000 in situations\nwhere their preallocated buffers were depleted.  This did not work in at\nleast one case with a wireless network adapter needing order-1 allocations\nfrequently.  To resolve that, the free pages used for min_free_kbytes were\nmoved to separate contiguous blocks with the patch\nbias-the-location-of-pages-freed-for-min_free_kbytes-in-the-same-max_order_nr_pages-blocks.\n\nIt is felt that keeping the free pages in the same contiguous blocks should\nbe sufficient for bursty short-lived high-order atomic allocations to\nsucceed, maybe even with the e1000.  Even if there is a failure, increasing\nthe value of min_free_kbytes will free pages as contiguous bloks in\ncontrast to the standard buddy allocator which makes no attempt to keep the\nminimum number of free pages contiguous.\n\nThis patch backs out grouping high order atomic allocations together to\ndetermine if it is really needed or not.  If a new report comes in about\nhigh-order atomic allocations failing, the feature can be reintroduced to\ndetermine if it fixes the problem or not.  As a side-effect, this patch\nreduces by 1 the number of bits required to track the mobility type of\npages within a MAX_ORDER_NR_PAGES block.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ac0e5b7a6b93fb291b01fe1e951e3c16bcdd3503",
      "tree": "732f67c8de6e0d2e001b60c17af9599468b80163",
      "parents": [
        "56fd56b868f19385c50af8941a4c78df433b2d32"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:58 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "remove PAGE_GROUP_BY_MOBILITY\n\nGrouping pages by mobility can be disabled at compile-time. This was\nconsidered undesirable by a number of people. However, in the current stack of\npatches, it is not a simple case of just dropping the configurable patch as it\nwould cause merge conflicts.  This patch backs out the configuration option.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "56fd56b868f19385c50af8941a4c78df433b2d32",
      "tree": "5ea8362e6e141e2d1124d4640811c76489567bc5",
      "parents": [
        "5c0e3066474b57c56ff0d88ca31d95bd14232fee"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:58 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Bias the location of pages freed for min_free_kbytes in the same MAX_ORDER_NR_PAGES blocks\n\nThe standard buddy allocator always favours the smallest block of pages.\nThe effect of this is that the pages free to satisfy min_free_kbytes tends\nto be preserved since boot time at the same location of memory ffor a very\nlong time and as a contiguous block.  When an administrator sets the\nreserve at 16384 at boot time, it tends to be the same MAX_ORDER blocks\nthat remain free.  This allows the occasional high atomic allocation to\nsucceed up until the point the blocks are split.  In practice, it is\ndifficult to split these blocks but when they do split, the benefit of\nhaving min_free_kbytes for contiguous blocks disappears.  Additionally,\nincreasing min_free_kbytes once the system has been running for some time\nhas no guarantee of creating contiguous blocks.\n\nOn the other hand, CONFIG_PAGE_GROUP_BY_MOBILITY favours splitting large\nblocks when there are no free pages of the appropriate type available.  A\nside-effect of this is that all blocks in memory tends to be used up and\nthe contiguous free blocks from boot time are not preserved like in the\nvanilla allocator.  This can cause a problem if a new caller is unwilling\nto reclaim or does not reclaim for long enough.\n\nA failure scenario was found for a wireless network device allocating\norder-1 atomic allocations but the allocations were not intense or frequent\nenough for a whole block of pages to be preserved for MIGRATE_HIGHALLOC.\nThis was reproduced on a desktop by booting with mem\u003d256mb, forcing the\ndriver to allocate at order-1, running a bittorrent client (downloading a\ndebian ISO) and building a kernel with -j2.\n\nThis patch addresses the problem on the desktop machine booted with\nmem\u003d256mb.  It works by setting aside a reserve of MAX_ORDER_NR_PAGES\nblocks, the number of which depends on the value of min_free_kbytes.  These\nblocks are only fallen back to when there is no other free pages.  Then the\nsmallest possible page is used just like the normal buddy allocator instead\nof the largest possible page to preserve contiguous pages The pages in free\nlists in the reserve blocks are never taken for another migrate type.  The\nresults is that even if min_free_kbytes is set to a low value, contiguous\nblocks will be preserved in the MIGRATE_RESERVE blocks.\n\nThis works better than the vanilla allocator because if min_free_kbytes is\nincreased, a new reserve block will be chosen based on the location of\nreclaimable pages and the block will free up as contiguous pages.  In the\nvanilla allocator, no effort is made to target a block of pages to free as\ncontiguous pages and min_free_kbytes pages are scattered randomly.\n\nThis effect has been observed on the test machine.  min_free_kbytes was set\ninitially low but it was kept as a contiguous free block within\nMIGRATE_RESERVE.  min_free_kbytes was then set to a higher value and over a\nperiod of time, the free blocks were within the reserve and coalescing.\nHow long it takes to free up depends on how quickly LRU is rotating.\nAmusingly, this means that more activity will free the blocks faster.\n\nThis mechanism potentially replaces MIGRATE_HIGHALLOC as it may be more\neffective than grouping contiguous free pages together.  It all depends on\nwhether the number of active atomic high allocations exceeds\nmin_free_kbytes or not.  If the number of active allocations exceeds\nmin_free_kbytes, it\u0027s worth it but maybe in that situation, min_free_kbytes\nshould be set higher.  Once there are no more reports of allocation\nfailures, a patch will be submitted that backs out MIGRATE_HIGHALLOC and\nsee if the reports stay missing.\n\nCredit to Mariusz Kozlowski for discovering the problem, describing the\nfailure scenario and testing patches and scenarios.\n\n[akpm@linux-foundation.org: cleanups]\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "46dafbca2bba811665b01d8cedf911204820623c",
      "tree": "c0dbc78b3da9749f257fa8756cd3133f39cd4f88",
      "parents": [
        "5adc5be7cd1bcef6bb64f5255d2a33f20a3cf5be"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:55 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Be more agressive about stealing when MIGRATE_RECLAIMABLE allocations fallback\n\nMIGRATE_RECLAIMABLE allocations tend to be very bursty in nature like when\nupdatedb starts.  It is likely this will occur in situations where MAX_ORDER\nblocks of pages are not free.  This means that updatedb can scatter\nMIGRATE_RECLAIMABLE pages throughout the address space.  This patch is more\nagressive about stealing blocks of pages for MIGRATE_RECLAIMABLE.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5adc5be7cd1bcef6bb64f5255d2a33f20a3cf5be",
      "tree": "39ed023e8c36cde0c0f9cba50f945e77d3ca26fa",
      "parents": [
        "9ef9acb05a741ec10a5e9122717736de12adced9"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:54 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Bias the placement of kernel pages at lower PFNs\n\nThis patch chooses blocks with lower PFNs when placing kernel allocations.\nThis is particularly important during fallback in low memory situations to\nstop unmovable pages being placed throughout the entire address space.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9ef9acb05a741ec10a5e9122717736de12adced9",
      "tree": "6008083999b3c6e115714fcdea637195f2b53df6",
      "parents": [
        "e010487dbe09d63cf916fd1b119d17abd0f48207"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:54 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Do not group pages by mobility type on low memory systems\n\nGrouping pages by mobility can only successfully operate when there are more\nMAX_ORDER_NR_PAGES areas than mobility types.  When there are insufficient\nareas, fallbacks cannot be avoided.  This has noticeable performance impacts\non machines with small amounts of memory in comparison to MAX_ORDER_NR_PAGES.\nFor example, on IA64 with a configuration including huge pages spans 1GiB with\nMAX_ORDER_NR_PAGES so would need at least 4GiB of RAM before grouping pages by\nmobility would be useful.  In comparison, an x86 would need 16MB.\n\nThis patch checks the size of vm_total_pages in build_all_zonelists(). If\nthere are not enough areas,  mobility is effectivly disabled by considering\nall allocations as the same type (UNMOVABLE).  This is achived via a\n__read_mostly flag.\n\nWith this patch, performance is comparable to disabling grouping pages\nby mobility at compile-time on a test machine with insufficient memory.\nWith this patch, it is reasonable to get rid of grouping pages by mobility\na compile-time option.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e010487dbe09d63cf916fd1b119d17abd0f48207",
      "tree": "37c7f36913daf4bc0a68a1d0ba1cc30ee0d4e307",
      "parents": [
        "e12ba74d8ff3e2f73a583500d7095e406df4d093"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:53 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Group high-order atomic allocations\n\nIn rare cases, the kernel needs to allocate a high-order block of pages\nwithout sleeping.  For example, this is the case with e1000 cards configured\nto use jumbo frames.  Migrating or reclaiming pages in this situation is not\nan option.\n\nThis patch groups these allocations together as much as possible by adding a\nnew MIGRATE_TYPE.  The MIGRATE_HIGHATOMIC type are exactly what they sound\nlike.  Care is taken that pages of other migrate types do not use the same\nblocks as high-order atomic allocations.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e12ba74d8ff3e2f73a583500d7095e406df4d093",
      "tree": "a0d3385b65f0b3e1e00b0bbf11b75e7538a93edb",
      "parents": [
        "c361be55b3128474aa66d31092db330b07539103"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:52 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Group short-lived and reclaimable kernel allocations\n\nThis patch marks a number of allocations that are either short-lived such as\nnetwork buffers or are reclaimable such as inode allocations.  When something\nlike updatedb is called, long-lived and unmovable kernel allocations tend to\nbe spread throughout the address space which increases fragmentation.\n\nThis patch groups these allocations together as much as possible by adding a\nnew MIGRATE_TYPE.  The MIGRATE_RECLAIMABLE type is for allocations that can be\nreclaimed on demand, but not moved.  i.e.  they can be migrated by deleting\nthem and re-reading the information from elsewhere.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Andy Whitcroft \u003capw@shadowen.org\u003e\nCc: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c361be55b3128474aa66d31092db330b07539103",
      "tree": "9ce134f4e679144d28f5c32924bdba999a1aae6b",
      "parents": [
        "e2c55dc87f4a398b9c4dcc702dbc23a07fe14e23"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:51 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:43:00 2007 -0700"
      },
      "message": "Move free pages between lists on steal\n\nWhen a fallback occurs, there will be free pages for one allocation type\nstored on the list for another.  When a large steal occurs, this patch will\nmove all the free pages within one list to the other.\n\n[y-goto@jp.fujitsu.com: fix BUG_ON check at move_freepages()]\n[apw@shadowen.org: Move to using pfn_valid_within()]\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nCc: Bjorn Helgaas \u003cbjorn.helgaas@hp.com\u003e\nSigned-off-by: Andy Whitcroft \u003candyw@uk.ibm.com\u003e\nCc: Bob Picco \u003cbob.picco@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e2c55dc87f4a398b9c4dcc702dbc23a07fe14e23",
      "tree": "b3811de298d4c98c4765db4af3838428553b5382",
      "parents": [
        "b92a6edd4b77a8794adb497280beea5df5e59a14"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:50 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "Drain per-cpu lists when high-order allocations fail\n\nPer-cpu pages can accidentally cause fragmentation because they are free, but\npinned pages in an otherwise contiguous block.  When this patch is applied,\nthe per-cpu caches are drained after the direct-reclaim is entered if the\nrequested order is greater than 0.  It simply reuses the code used by suspend\nand hotplug.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b92a6edd4b77a8794adb497280beea5df5e59a14",
      "tree": "396ea5cf2b53fc066e949c443f03747ec868de1e",
      "parents": [
        "535131e6925b4a95f321148ad7293f496e0e58d7"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:50 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "Add a configure option to group pages by mobility\n\nThe grouping mechanism has some memory overhead and a more complex allocation\npath.  This patch allows the strategy to be disabled for small memory systems\nor if it is known the workload is suffering because of the strategy.  It also\nacts to show where the page groupings strategy interacts with the standard\nbuddy allocator.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Joel Schopp \u003cjschopp@austin.ibm.com\u003e\nCc: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "535131e6925b4a95f321148ad7293f496e0e58d7",
      "tree": "fdd49e29f89eb6db3ba2b5ba7df7b059de95a91f",
      "parents": [
        "b2a0ac8875a0a3b9f0739b60526f8c5977d2200f"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:49 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "Choose pages from the per-cpu list based on migration type\n\nThe freelists for each migrate type can slowly become polluted due to the\nper-cpu list.  Consider what happens when the following happens\n\n1. A 2^(MAX_ORDER-1) list is reserved for __GFP_MOVABLE pages\n2. An order-0 page is allocated from the newly reserved block\n3. The page is freed and placed on the per-cpu list\n4. alloc_page() is called with GFP_KERNEL as the gfp_mask\n5. The per-cpu list is used to satisfy the allocation\n\nThis results in a kernel page is in the middle of a migratable region. This\npatch prevents this leak occuring by storing the MIGRATE_ type of the page in\npage-\u003eprivate. On allocate, a page will only be returned of the desired type,\nelse more pages will be allocated. This may temporarily allow a per-cpu list\nto go over the pcp-\u003ehigh limit but it\u0027ll be corrected on the next free. Care\nis taken to preserve the hotness of pages recently freed.\n\nThe additional code is not measurably slower for the workloads we\u0027ve tested.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b2a0ac8875a0a3b9f0739b60526f8c5977d2200f",
      "tree": "31826716b3209751a5468b840ff14190b4a5a8a2",
      "parents": [
        "835c134ec4dd755e5c4470af566db226d1e96742"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:48 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "Split the free lists for movable and unmovable allocations\n\nThis patch adds the core of the fragmentation reduction strategy.  It works by\ngrouping pages together based on their ability to migrate or be reclaimed.\nBasically, it works by breaking the list in zone-\u003efree_area list into\nMIGRATE_TYPES number of lists.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "835c134ec4dd755e5c4470af566db226d1e96742",
      "tree": "7bc659978b4fba5089fc820185a8b6f0cc010b08",
      "parents": [
        "954ffcb35f5aca428661d29b96c4eee82b3c19cd"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Oct 16 01:25:47 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "Add a bitmap that is used to track flags affecting a block of pages\n\nHere is the latest revision of the anti-fragmentation patches.  Of particular\nnote in this version is special treatment of high-order atomic allocations.\nCare is taken to group them together and avoid grouping pages of other types\nnear them.  Artifical tests imply that it works.  I\u0027m trying to get the\nhardware together that would allow setting up of a \"real\" test.  If anyone\nalready has a setup and test that can trigger the atomic-allocation problem,\nI\u0027d appreciate a test of these patches and a report.  The second major change\nis that these patches will apply cleanly with patches that implement\nanti-fragmentation through zones.\n\nkernbench shows effectively no performance difference varying between -0.2%\nand +2% on a variety of test machines.  Success rates for huge page allocation\nare dramatically increased.  For example, on a ppc64 machine, the vanilla\nkernel was only able to allocate 1% of memory as a hugepage and this was due\nto a single hugepage reserved as min_free_kbytes.  With these patches applied,\n17% was allocatable as superpages.  With reclaim-related fixes from Andy\nWhitcroft, it was 40% and further reclaim-related improvements should increase\nthis further.\n\nChangelog Since V28\no Group high-order atomic allocations together\no It is no longer required to set min_free_kbytes to 10% of memory. A value\n  of 16384 in most cases will be sufficient\no Now applied with zone-based anti-fragmentation\no Fix incorrect VM_BUG_ON within buffered_rmqueue()\no Reorder the stack so later patches do not back out work from earlier patches\no Fix bug were journal pages were being treated as movable\no Bias placement of non-movable pages to lower PFNs\no More agressive clustering of reclaimable pages in reactions to workloads\n  like updatedb that flood the size of inode caches\n\nChangelog Since V27\n\no Renamed anti-fragmentation to Page Clustering. Anti-fragmentation was giving\n  the mistaken impression that it was the 100% solution for high order\n  allocations. Instead, it greatly increases the chances high-order\n  allocations will succeed and lays the foundation for defragmentation and\n  memory hot-remove to work properly\no Redefine page groupings based on ability to migrate or reclaim instead of\n  basing on reclaimability alone\no Get rid of spurious inits\no Per-cpu lists are no longer split up per-type. Instead the per-cpu list is\n  searched for a page of the appropriate type\no Added more explanation commentary\no Fix up bug in pageblock code where bitmap was used before being initalised\n\nChangelog Since V26\no Fix double init of lists in setup_pageset\n\nChangelog Since V25\no Fix loop order of for_each_rclmtype_order so that order of loop matches args\no gfpflags_to_rclmtype uses gfp_t instead of unsigned long\no Rename get_pageblock_type() to get_page_rclmtype()\no Fix alignment problem in move_freepages()\no Add mechanism for assigning flags to blocks of pages instead of page-\u003eflags\no On fallback, do not examine the preferred list of free pages a second time\n\nThe purpose of these patches is to reduce external fragmentation by grouping\npages of related types together.  When pages are migrated (or reclaimed under\nmemory pressure), large contiguous pages will be freed.\n\nThis patch works by categorising allocations by their ability to migrate;\n\nMovable - The pages may be moved with the page migration mechanism. These are\n\tgenerally userspace pages.\n\nReclaimable - These are allocations for some kernel caches that are\n\treclaimable or allocations that are known to be very short-lived.\n\nUnmovable - These are pages that are allocated by the kernel that\n\tare not trivially reclaimed. For example, the memory allocated for a\n\tloaded module would be in this category. By default, allocations are\n\tconsidered to be of this type\n\nHighAtomic - These are high-order allocations belonging to callers that\n\tcannot sleep or perform any IO. In practice, this is restricted to\n\tjumbo frame allocation for network receive. It is assumed that the\n\tallocations are short-lived\n\nInstead of having one MAX_ORDER-sized array of free lists in struct free_area,\nthere is one for each type of reclaimability.  Once a 2^MAX_ORDER block of\npages is split for a type of allocation, it is added to the free-lists for\nthat type, in effect reserving it.  Hence, over time, pages of the different\ntypes can be clustered together.\n\nWhen the preferred freelists are expired, the largest possible block is taken\nfrom an alternative list.  Buddies that are split from that large block are\nplaced on the preferred allocation-type freelists to mitigate fragmentation.\n\nThis implementation gives best-effort for low fragmentation in all zones.\nIdeally, min_free_kbytes needs to be set to a value equal to 4 * (1 \u003c\u003c\n(MAX_ORDER-1)) pages in most cases.  This would be 16384 on x86 and x86_64 for\nexample.\n\nOur tests show that about 60-70% of physical memory can be allocated on a\ndesktop after a few days uptime.  In benchmarks and stress tests, we are\nfinding that 80% of memory is available as contiguous blocks at the end of the\ntest.  To compare, a standard kernel was getting \u003c 1% of memory as large pages\non a desktop and about 8-12% of memory as large pages at the end of stress\ntests.\n\nFollowing this email are 12 patches that implement thie page grouping feature.\n The first patch introduces a mechanism for storing flags related to a whole\nblock of pages.  Then allocations are split between movable and all other\nallocations.  Following that are patches to deal with per-cpu pages and make\nthe mechanism configurable.  The next patch moves free pages between lists\nwhen partially allocated blocks are used for pages of another migrate type.\nThe second last patch groups reclaimable kernel allocations such as inode\ncaches together.  The final patch related to groupings keeps high-order atomic\nallocations.\n\nThe last two patches are more concerned with control of fragmentation.  The\nsecond last patch biases placement of non-movable allocations towards the\nstart of memory.  This is with a view of supporting memory hot-remove of DIMMs\nwith higher PFNs in the future.  The biasing could be enforced a lot heavier\nbut it would cost.  The last patch agressively clusters reclaimable pages like\ninode caches together.\n\nThe fragmentation reduction strategy needs to track if pages within a block\ncan be moved or reclaimed so that pages are freed to the appropriate list.\nThis patch adds a bitmap for flags affecting a whole a MAX_ORDER block of\npages.\n\nIn non-SPARSEMEM configurations, the bitmap is stored in the struct zone and\nallocated during initialisation.  SPARSEMEM statically allocates the bitmap in\na struct mem_section so that bitmaps do not have to be resized during memory\nhotadd.  This wastes a small amount of memory per unused section (usually\nsizeof(unsigned long)) but the complexity of dynamically allocating the memory\nis quite high.\n\nAdditional credit to Andy Whitcroft who reviewed up an earlier implementation\nof the mechanism an suggested how to make it a *lot* cleaner.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "37b07e4163f7306aa735a6e250e8d22293e5b8de",
      "tree": "5c9c1935253a39aa840a9923bf1c86620cb6f733",
      "parents": [
        "0e1e7c7a739562a321fda07c7cd2a97a7114f8f8"
      ],
      "author": {
        "name": "Lee Schermerhorn",
        "email": "Lee.Schermerhorn@hp.com",
        "time": "Tue Oct 16 01:25:39 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "memoryless nodes: fixup uses of node_online_map in generic code\n\nHere\u0027s a cut at fixing up uses of the online node map in generic code.\n\nmm/shmem.c:shmem_parse_mpol()\n\n\tEnsure nodelist is subset of nodes with memory.\n\tUse node_states[N_HIGH_MEMORY] as default for missing\n\tnodelist for interleave policy.\n\nmm/shmem.c:shmem_fill_super()\n\n\tinitialize policy_nodes to node_states[N_HIGH_MEMORY]\n\nmm/page-writeback.c:highmem_dirtyable_memory()\n\n\tsum over nodes with memory\n\nmm/page_alloc.c:zlc_setup()\n\n\tallowednodes - use nodes with memory.\n\nmm/page_alloc.c:default_zonelist_order()\n\n\taverage over nodes with memory.\n\nmm/page_alloc.c:find_next_best_node()\n\n\tskip nodes w/o memory.\n\tN_HIGH_MEMORY state mask may not be initialized at this time,\n\tunless we want to depend on early_calculate_totalpages() [see\n\tbelow].  Will ZONE_MOVABLE ever be configurable?\n\nmm/page_alloc.c:find_zone_movable_pfns_for_nodes()\n\n\tspread kernelcore over nodes with memory.\n\n\tThis required calling early_calculate_totalpages()\n\tunconditionally, and populating N_HIGH_MEMORY node\n\tstate therein from nodes in the early_node_map[].\n\tIf we can depend on this, we can eliminate the\n\tpopulation of N_HIGH_MEMORY mask from __build_all_zonelists()\n\tand use the N_HIGH_MEMORY mask in find_next_best_node().\n\nmm/mempolicy.c:mpol_check_policy()\n\n\tEnsure nodes specified for policy are subset of\n\tnodes with memory.\n\n[akpm@linux-foundation.org: fix warnings]\nSigned-off-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nAcked-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Shaohua Li \u003cshaohua.li@intel.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "523b945855a1427000ffc707c610abe5947ae607",
      "tree": "2d84b5b6822a2a20bfd79146c08ce06ac8c80b9b",
      "parents": [
        "633c0666b5a5c41c376a5a7e4304d638dc48c1b9"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Tue Oct 16 01:25:37 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "Memoryless nodes: Fix GFP_THISNODE behavior\n\nGFP_THISNODE checks that the zone selected is within the pgdat (node) of the\nfirst zone of a nodelist.  That only works if the node has memory.  A\nmemoryless node will have its first node on another pgdat (node).\n\nGFP_THISNODE currently will return simply memory on the first pgdat.  Thus it\nis returning memory on other nodes.  GFP_THISNODE should fail if there is no\nlocal memory on a node.\n\nAdd a new set of zonelists for each node that only contain the nodes that\nbelong to the zones itself so that no fallback is possible.\n\nThen modify gfp_type to pickup the right zone based on the presence of\n__GFP_THISNODE.\n\nDrop the existing GFP_THISNODE checks from the page_allocators hot path.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nAcked-by: Nishanth Aravamudan \u003cnacc@us.ibm.com\u003e\nTested-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nAcked-by: Bob Picco \u003cbob.picco@hp.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmel@skynet.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "633c0666b5a5c41c376a5a7e4304d638dc48c1b9",
      "tree": "a8f42f5d4aaa4d25786d5a60e9c92b6d727254fd",
      "parents": [
        "37c0708dbee5825df3bd9ce6ef2199c6c1713970"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Tue Oct 16 01:25:37 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:59 2007 -0700"
      },
      "message": "Memoryless nodes: drop one memoryless node boot warning\n\nget_pfn_range_for_nid() is called multiple times for each node at boot time.\nEach time, it will warn about nodes with no memory, resulting in boot messages\nlike:\n\n        Node 0 active with no memory\n        Node 0 active with no memory\n        Node 0 active with no memory\n        Node 0 active with no memory\n        Node 0 active with no memory\n        Node 0 active with no memory\n        On node 0 totalpages: 0\n        Node 0 active with no memory\n        Node 0 active with no memory\n          DMA zone: 0 pages used for memmap\n        Node 0 active with no memory\n        Node 0 active with no memory\n          Normal zone: 0 pages used for memmap\n        Node 0 active with no memory\n        Node 0 active with no memory\n          Movable zone: 0 pages used for memmap\n\nand so on for each memoryless node.\n\nWe already have the \"On node N totalpages: ...\" and other related messages, so\ndrop the \"Node N active with no memory\" warnings.\n\nSigned-off-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nCc: Bob Picco \u003cbob.picco@hp.com\u003e\nCc: Nishanth Aravamudan \u003cnacc@us.ibm.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmel@skynet.ie\u003e\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "37c0708dbee5825df3bd9ce6ef2199c6c1713970",
      "tree": "747551aa58484e7f872da118b864c8f3ca6e892d",
      "parents": [
        "56bbd65df0e92a4a8eb70c5f2b416ae2b6c5fb31"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Tue Oct 16 01:25:36 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:58 2007 -0700"
      },
      "message": "Memoryless nodes: Add N_CPU node state\n\nWe need the check for a node with cpu in zone reclaim.  Zone reclaim will not\nallow remote zone reclaim if a node has a cpu.\n\n[Lee.Schermerhorn@hp.com: Move setup of N_CPU node state mask]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nTested-by:  Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nAcked-by: Bob Picco \u003cbob.picco@hp.com\u003e\nCc: Nishanth Aravamudan \u003cnacc@us.ibm.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmel@skynet.ie\u003e\nSigned-off-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7ea1530ab3fdfa85441061909cc8040e84776fd4",
      "tree": "f3af0f8ed40a6df90bdbb9396d6163d59798a821",
      "parents": [
        "13808910713a98cc1159291e62cdfec92cc94d05"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Tue Oct 16 01:25:29 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:58 2007 -0700"
      },
      "message": "Memoryless nodes: introduce mask of nodes with memory\n\nIt is necessary to know if nodes have memory since we have recently begun to\nadd support for memoryless nodes.  For that purpose we introduce a two new\nnode states: N_HIGH_MEMORY and N_NORMAL_MEMORY.\n\nA node has its bit in N_HIGH_MEMORY set if it has any memory regardless of the\ntype of mmemory.  If a node has memory then it has at least one zone defined\nin its pgdat structure that is located in the pgdat itself.\n\nA node has its bit in N_NORMAL_MEMORY set if it has a lower zone than\nZONE_HIGHMEM.  This means it is possible to allocate memory that is not\nsubject to kmap.\n\nN_HIGH_MEMORY and N_NORMAL_MEMORY can then be used in various places to insure\nthat we do the right thing when we encounter a memoryless node.\n\n[akpm@linux-foundation.org: build fix]\n[Lee.Schermerhorn@hp.com: update N_HIGH_MEMORY node state for memory hotadd]\n[y-goto@jp.fujitsu.com: Fix memory hotplug + sparsemem build]\nSigned-off-by: Lee Schermerhorn \u003cLee.Schermerhorn@hp.com\u003e\nSigned-off-by: Nishanth Aravamudan \u003cnacc@us.ibm.com\u003e\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nAcked-by: Bob Picco \u003cbob.picco@hp.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmel@skynet.ie\u003e\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nSigned-off-by: Paul Mundt \u003clethal@linux-sh.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "13808910713a98cc1159291e62cdfec92cc94d05",
      "tree": "0fd7189dc2a76e1ae165ca5d6e8c6b4e6f1761af",
      "parents": [
        "55144768e100b68447f44c5e5c9deb155ad661bd"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Tue Oct 16 01:25:27 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:58 2007 -0700"
      },
      "message": "Memoryless nodes: Generic management of nodemasks for various purposes\n\nWhy do we need to support memoryless nodes?\n\nKAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e wrote:\n\n\u003e For fujitsu, problem is called \"empty\" node.\n\u003e\n\u003e When ACPI\u0027s SRAT table includes \"possible nodes\", ia64 bootstrap(acpi_numa_init)\n\u003e creates nodes, which includes no memory, no cpu.\n\u003e\n\u003e I tried to remove empty-node in past, but that was denied.\n\u003e It was because we can hot-add cpu to the empty node.\n\u003e (node-hotplug triggered by cpu is not implemented now. and it will be ugly.)\n\u003e\n\u003e\n\u003e For HP, (Lee can comment on this later), they have memory-less-node.\n\u003e As far as I hear, HP\u0027s machine can have following configration.\n\u003e\n\u003e (example)\n\u003e Node0: CPU0   memory AAA MB\n\u003e Node1: CPU1   memory AAA MB\n\u003e Node2: CPU2   memory AAA MB\n\u003e Node3: CPU3   memory AAA MB\n\u003e Node4: Memory XXX GB\n\u003e\n\u003e AAA is very small value (below 16MB)  and will be omitted by ia64 bootstrap.\n\u003e After boot, only Node 4 has valid memory (but have no cpu.)\n\u003e\n\u003e Maybe this is memory-interleave by firmware config.\n\nChristoph Lameter \u003cclameter@sgi.com\u003e wrote:\n\n\u003e Future SGI platforms (actually also current one can have but nothing like\n\u003e that is deployed to my knowledge) have nodes with only cpus. Current SGI\n\u003e platforms have nodes with just I/O that we so far cannot manage in the\n\u003e core. So the arch code maps them to the nearest memory node.\n\nLee Schermerhorn \u003cLee.Schermerhorn@hp.com\u003e wrote:\n\n\u003e For the HP platforms, we can configure each cell with from 0% to 100%\n\u003e \"cell local memory\".  When we configure with \u003c100% CLM, the \"missing\n\u003e percentages\" are interleaved by hardware on a cache-line granularity to\n\u003e improve bandwidth at the expense of latency for numa-challenged\n\u003e applications [and OSes, but not our problem ;-)].  When we boot Linux on\n\u003e such a config, all of the real nodes have no memory--it all resides in a\n\u003e single interleaved pseudo-node.\n\u003e\n\u003e When we boot Linux on a 100% CLM configuration [\u003d\u003d NUMA], we still have\n\u003e the interleaved pseudo-node.  It contains a few hundred MB stolen from\n\u003e the real nodes to contain the DMA zone.  [Interleaved memory resides at\n\u003e phys addr 0].  The memoryless-nodes patches, along with the zoneorder\n\u003e patches, support this config as well.\n\u003e\n\u003e Also, when we boot a NUMA config with the \"mem\u003d\" command line,\n\u003e specifying less memory than actually exists, Linux takes the excluded\n\u003e memory \"off the top\" rather than distributing it across the nodes.  This\n\u003e can result in memoryless nodes, as well.\n\u003e\n\nThis patch:\n\nPreparation for memoryless node patches.\n\nProvide a generic way to keep nodemasks describing various characteristics of\nNUMA nodes.\n\nRemove the node_online_map and the node_possible map and realize the same\nfunctionality using two nodes stats: N_POSSIBLE and N_ONLINE.\n\n[Lee.Schermerhorn@hp.com: Initialize N_*_MEMORY and N_CPU masks for non-NUMA config]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nTested-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nAcked-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nAcked-by: Bob Picco \u003cbob.picco@hp.com\u003e\nCc: Nishanth Aravamudan \u003cnacc@us.ibm.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmel@skynet.ie\u003e\nSigned-off-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nCc: \"Serge E. Hallyn\" \u003cserge@hallyn.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "8691f3a72f32f8b3ed535faa27140b3ae293c90b",
      "tree": "78ed23998e99de2fb4fb5b135c1fa4ad70612ef0",
      "parents": [
        "26fb1589cb0aaec3a0b4418c54f30c1a2b1781f6"
      ],
      "author": {
        "name": "Jesper Juhl",
        "email": "jesper.juhl@gmail.com",
        "time": "Tue Oct 16 01:24:49 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Oct 16 09:42:54 2007 -0700"
      },
      "message": "mm: no need to cast vmalloc() return value in zone_wait_table_init()\n\nvmalloc() returns a void pointer, so there\u0027s no need to cast its\nreturn value in mm/page_alloc.c::zone_wait_table_init().\n\nSigned-off-by: Jesper Juhl \u003cjesper.juhl@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "641916881322a2dee5b120d509a3bdd05a502510",
      "tree": "79c0b1f449bcf09404ba932b07330a24bce4d00d",
      "parents": [
        "060d11b0b35d13eb2251fd08403d900b71df5791"
      ],
      "author": {
        "name": "Andrew Morton",
        "email": "akpm@linux-foundation.org",
        "time": "Thu Aug 30 23:56:17 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Fri Aug 31 01:42:22 2007 -0700"
      },
      "message": "process_zones(): fix recovery code\n\nDon\u0027t try to free memory which we didn\u0027t allocate.\n\nAcked-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b377fd3982ad957c796758a90e2988401a884241",
      "tree": "3d7449ccdf7038bffffa9323873f4095cc1ac6ce",
      "parents": [
        "8e92f21ba3ea3f54e4be062b87ef9fc4af2d33e2"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Wed Aug 22 14:02:05 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Wed Aug 22 19:52:47 2007 -0700"
      },
      "message": "Apply memory policies to top two highest zones when highest zone is ZONE_MOVABLE\n\nThe NUMA layer only supports NUMA policies for the highest zone.  When\nZONE_MOVABLE is configured with kernelcore\u003d, the the highest zone becomes\nZONE_MOVABLE.  The result is that policies are only applied to allocations\nlike anonymous pages and page cache allocated from ZONE_MOVABLE when the\nzone is used.\n\nThis patch applies policies to the two highest zones when the highest zone\nis ZONE_MOVABLE.  As ZONE_MOVABLE consists of pages from the highest \"real\"\nzone, it\u0027s always functionally equivalent.\n\nThe patch has been tested on a variety of machines both NUMA and non-NUMA\ncovering x86, x86_64 and ppc64.  No abnormal results were seen in\nkernbench, tbench, dbench or hackbench.  It passes regression tests from\nthe numactl package with and without kernelcore\u003d once numactl tests are\npatched to wait for vmstat counters to update.\n\nakpm: this is the nasty hack to fix NUMA mempolicies in the presence of\nZONE_MOVABLE and kernelcore\u003d in 2.6.23.  Christoph says \"For .24 either merge\nthe mobility or get the other solution that Mel is working on.  That solution\nwould only use a single zonelist per node and filter on the fly.  That may\nhelp performance and also help to make memory policies work better.\"\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by:  Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nTested-by:  Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nAcked-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Andi Kleen \u003cak@suse.de\u003e\nCc: Paul Mundt \u003clethal@linux-sh.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a8bbf72ab9b3072ece630d97689145b1a2f01221",
      "tree": "4d55318fff1aecd3c5d2c1e877847bcb2347dd2e",
      "parents": [
        "252e48389ed30d7c77385fb92fca765a680de408"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Jul 31 00:37:30 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Jul 31 15:39:36 2007 -0700"
      },
      "message": "Do not trigger OOM-killer for high-order allocation failures\n\nout_of_memory() may be called when an allocation is failing and the direct\nreclaim is not making any progress.  This does not take into account the\nrequested order of the allocation.  If the request if for an order larger\nthan PAGE_ALLOC_COSTLY_ORDER, it is reasonable to fail the allocation\nbecause the kernel makes no guarantees about those allocations succeeding.\n\nThis false OOM situation can occur if a user is trying to grow the hugepage\npool in a script like;\n\n#!/bin/bash\nREQUIRED\u003d$1\necho 1 \u003e /proc/sys/vm/hugepages_treat_as_movable\necho $REQUIRED \u003e /proc/sys/vm/nr_hugepages\nACTUAL\u003d`cat /proc/sys/vm/nr_hugepages`\nwhile [ $REQUIRED -ne $ACTUAL ]; do\n\techo Huge page pool at $ACTUAL growing to $REQUIRED\n\techo $REQUIRED \u003e /proc/sys/vm/nr_hugepages\n\tACTUAL\u003d`cat /proc/sys/vm/nr_hugepages`\n\tsleep 1\ndone\n\nThis is a reasonable scenario when ZONE_MOVABLE is in use but triggers OOM\neasily on 2.6.23-rc1. This patch will fail an allocation for an order above\nPAGE_ALLOC_COSTLY_ORDER instead of killing processes and retrying.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "296699de6bdc717189a331ab6bbe90e05c94db06",
      "tree": "53c847ecc8cce11952502921844052e44ca60d5e",
      "parents": [
        "b0cb1a19d05b8ea8611a9ef48a17fe417f1832e6"
      ],
      "author": {
        "name": "Rafael J. Wysocki",
        "email": "rjw@sisk.pl",
        "time": "Sun Jul 29 23:27:18 2007 +0200"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Jul 29 16:45:38 2007 -0700"
      },
      "message": "Introduce CONFIG_SUSPEND for suspend-to-Ram and standby\n\nIntroduce CONFIG_SUSPEND representing the ability to enter system sleep\nstates, such as the ACPI S3 state, and allow the user to choose SUSPEND\nand HIBERNATION independently of each other.\n\nMake HOTPLUG_CPU be selected automatically if SUSPEND or HIBERNATION has\nbeen chosen and the kernel is intended for SMP systems.\n\nAlso, introduce CONFIG_PM_SLEEP which is automatically selected if\nCONFIG_SUSPEND or CONFIG_HIBERNATION is set and use it to select the\ncode needed for both suspend and hibernation.\n\nThe top-level power management headers and the ACPI code related to\nsuspend and hibernation are modified to use the new definitions (the\nchanges in drivers/acpi/sleep/main.c are, mostly, moving code to reduce\nthe number of ifdefs).\n\nThere are many other files in which CONFIG_PM can be replaced with\nCONFIG_PM_SLEEP or even with CONFIG_SUSPEND, but they can be updated in\nthe future.\n\nSigned-off-by: Rafael J. Wysocki \u003crjw@sisk.pl\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b5445f956ec3c8c19b760775e9ff92a160e3a167",
      "tree": "034d62678f1e402e6d71b07ba0bf96544554a2ba",
      "parents": [
        "ee2077d97b2f392cfc0b884775ac58aa9b9b8c8f"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Thu Jul 26 10:41:18 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Thu Jul 26 11:35:19 2007 -0700"
      },
      "message": "Allow nodes to exist that only contain ZONE_MOVABLE\n\nWith the introduction of kernelcore\u003d, a configurable zone is created on\nrequest.  In some cases, this value will be small enough that some nodes\ncontain only ZONE_MOVABLE.  On some NUMA configurations when this occurs,\narch-independent zone-sizing will get the size of the memory holes within\nthe node incorrect.  The value of present_pages goes negative and the boot\nfails.\n\nThis patch fixes the bug in the calculation of the size of the hole.  The\ntest case is to boot test a NUMA machine with a low value of kernelcore\u003d\nbefore and after the patch is applied.  While this bug exists in early\nkernel it cannot be triggered in practice.\n\nThis patch has been boot-tested on a variety machines with and without\nkernelcore\u003d set.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e228929bc257b963523ed75aa60d2ad77ece2189",
      "tree": "d72cf8e6d8a126792565549527efc2c9a6fcdcbe",
      "parents": [
        "8d1b87530e7df5c9541a69910ef7f786f034eca0"
      ],
      "author": {
        "name": "Paul Mundt",
        "email": "lethal@linux-sh.org",
        "time": "Fri Jul 20 00:31:44 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Fri Jul 20 08:44:19 2007 -0700"
      },
      "message": "mm: fix memory hotplug oops from ZONE_MOVABLE changes.\n\nzone_movable_pfn is presently marked as __initdata and referenced from\nadjust_zone_range_for_zone_movable(), which in turn is referenced by\nzone_spanned_pages_in_node().  Both of these are __meminit annotated.  When\nmemory hotplug is enabled, this will oops on a hot-add, due to\nzone_movable_pfn having been freed.\n\n__meminitdata annotation gives the desired behaviour.\n\nThis will only impact platforms that enable both memory hotplug\nand ARCH_POPULATES_NODE_MAP.\n\nSigned-off-by: Paul Mundt \u003clethal@linux-sh.org\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "fe3cba17c49471e99d3421e675fc8b3deaaf0b70",
      "tree": "df696c4584c6db2e439f068d2474fcb946ca587d",
      "parents": [
        "d8983910a4045fa21022cfccf76ed13eb40fd7f5"
      ],
      "author": {
        "name": "Fengguang Wu",
        "email": "wfg@mail.ustc.edu.cn",
        "time": "Thu Jul 19 01:48:07 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Thu Jul 19 10:04:44 2007 -0700"
      },
      "message": "mm: share PG_readahead and PG_reclaim\n\nShare the same page flag bit for PG_readahead and PG_reclaim.\n\nOne is used only on file reads, another is only for emergency writes.  One\nis used mostly for fresh/young pages, another is for old pages.\n\nCombinations of possible interactions are:\n\na) clear PG_reclaim \u003d\u003e implicit clear of PG_readahead\n\tit will delay an asynchronous readahead into a synchronous one\n\tit actually does _good_ for readahead:\n\t\tthe pages will be reclaimed soon, it\u0027s readahead thrashing!\n\t\tin this case, synchronous readahead makes more sense.\n\nb) clear PG_readahead \u003d\u003e implicit clear of PG_reclaim\n\tone(and only one) page will not be reclaimed in time\n\tit can be avoided by checking PageWriteback(page) in readahead first\n\nc) set PG_reclaim \u003d\u003e implicit set of PG_readahead\n\twill confuse readahead and make it restart the size rampup process\n\tit\u0027s a trivial problem, and can mostly be avoided by checking\n\tPageWriteback(page) first in readahead\n\nd) set PG_readahead \u003d\u003e implicit set of PG_reclaim\n\tPG_readahead will never be set on already cached pages.\n\tPG_reclaim will always be cleared on dirtying a page.\n\tso not a problem.\n\nIn summary,\n\ta)   we get better behavior\n\tb,d) possible interactions can be avoided\n\tc)   racy condition exists that might affect readahead, but the chance\n\t     is _really_ low, and the hurt on readahead is trivial.\n\nCompound pages also use PG_reclaim, but for now they do not interact with\nreclaim/readahead code.\n\nSigned-off-by: Fengguang Wu \u003cwfg@mail.ustc.edu.cn\u003e\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d77c2d7cc5126639a47d73300b40d461f2811a0f",
      "tree": "d02b32ca92fde9a04be9bee0f0b7c8961479448c",
      "parents": [
        "2ba2d00363975242dee9bb22cf798b487e3cd61e"
      ],
      "author": {
        "name": "Fengguang Wu",
        "email": "wfg@mail.ustc.edu.cn",
        "time": "Thu Jul 19 01:47:55 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Thu Jul 19 10:04:43 2007 -0700"
      },
      "message": "readahead: introduce PG_readahead\n\nIntroduce a new page flag: PG_readahead.\n\nIt acts as a look-ahead mark, which tells the page reader: Hey, it\u0027s time to\ninvoke the read-ahead logic.  For the sake of I/O pipelining, don\u0027t wait until\nit runs out of cached pages!\n\nSigned-off-by: Fengguang Wu \u003cwfg@mail.ustc.edu.cn\u003e\nCc: Steven Pratt \u003cslpratt@austin.ibm.com\u003e\nCc: Ram Pai \u003clinuxram@us.ibm.com\u003e\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c2f1a551dea8b37c2e0cb886885c250fb703e9d8",
      "tree": "11a5f256703d856017ceb2268bd02b7b510dee30",
      "parents": [
        "1e5140279f31e47d58ed6036ee61ba7a65710e63"
      ],
      "author": {
        "name": "Meelap Shah",
        "email": "meelap@umich.edu",
        "time": "Tue Jul 17 04:04:39 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Jul 17 10:23:07 2007 -0700"
      },
      "message": "knfsd: nfsd4: vary maximum delegation limit based on RAM size\n\nOur original NFSv4 delegation policy was to give out a read delegation on any\nopen when it was possible to.\n\nSince the lifetime of a delegation isn\u0027t limited to that of an open, a client\nmay quite reasonably hang on to a delegation as long as it has the inode\ncached.  This becomes an obvious problem the first time a client\u0027s inode cache\napproaches the size of the server\u0027s total memory.\n\nOur first quick solution was to add a hard-coded limit.  This patch makes a\nmild incremental improvement by varying that limit according to the server\u0027s\ntotal memory size, allowing at most 4 delegations per megabyte of RAM.\n\nMy quick back-of-the-envelope calculation finds that in the worst case (where\nevery delegation is for a different inode), a delegation could take about\n1.5K, which would make the worst case usage about 6% of memory.  The new limit\nworks out to be about the same as the old on a 1-gig server.\n\n[akpm@linux-foundation.org: Don\u0027t needlessly bloat vmlinux]\n[akpm@linux-foundation.org: Make it right for highmem machines]\nSigned-off-by: \"J. Bruce Fields\" \u003cbfields@citi.umich.edu\u003e\nSigned-off-by: Neil Brown \u003cneilb@suse.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5ad333eb66ff1e52a87639822ae088577669dcf9",
      "tree": "addae6bbd19585f19328f309924d06d647e8f2b7",
      "parents": [
        "7e63efef857575320fb413fbc3d0ee704b72845f"
      ],
      "author": {
        "name": "Andy Whitcroft",
        "email": "apw@shadowen.org",
        "time": "Tue Jul 17 04:03:16 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Jul 17 10:22:59 2007 -0700"
      },
      "message": "Lumpy Reclaim V4\n\nWhen we are out of memory of a suitable size we enter reclaim.  The current\nreclaim algorithm targets pages in LRU order, which is great for fairness at\norder-0 but highly unsuitable if you desire pages at higher orders.  To get\npages of higher order we must shoot down a very high proportion of memory;\n\u003e95% in a lot of cases.\n\nThis patch set adds a lumpy reclaim algorithm to the allocator.  It targets\ngroups of pages at the specified order anchored at the end of the active and\ninactive lists.  This encourages groups of pages at the requested orders to\nmove from active to inactive, and active to free lists.  This behaviour is\nonly triggered out of direct reclaim when higher order pages have been\nrequested.\n\nThis patch set is particularly effective when utilised with an\nanti-fragmentation scheme which groups pages of similar reclaimability\ntogether.\n\nThis patch set is based on Peter Zijlstra\u0027s lumpy reclaim V2 patch which forms\nthe foundation.  Credit to Mel Gorman for sanity checking.\n\nMel said:\n\n  The patches have an application with hugepage pool resizing.\n\n  When lumpy-reclaim is used with ZONE_MOVABLE, the hugepages pool can\n  be resized with greater reliability.  Testing on a desktop machine with 2GB\n  of RAM showed that growing the hugepage pool with ZONE_MOVABLE on its own\n  was very slow as the success rate was quite low.  Without lumpy-reclaim,\n  each attempt to grow the pool by 100 pages would yield 1 or 2 hugepages.\n  With lumpy-reclaim, getting 40 to 70 hugepages on each attempt was typical.\n\n[akpm@osdl.org: ia64 pfn_to_nid fixes and loop cleanup]\n[bunk@stusta.de: static declarations for internal functions]\n[a.p.zijlstra@chello.nl: initial lumpy V2 implementation]\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nAcked-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Bob Picco \u003cbob.picco@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7e63efef857575320fb413fbc3d0ee704b72845f",
      "tree": "ce33c10e5f5d9ea16b0e6944d6994b1f9cc22040",
      "parents": [
        "ed7ed365172e27b0efe9d43cc962723c7193e34e"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Jul 17 04:03:15 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Jul 17 10:22:59 2007 -0700"
      },
      "message": "Add a movablecore\u003d parameter for sizing ZONE_MOVABLE\n\nThis patch adds a new parameter for sizing ZONE_MOVABLE called\nmovablecore\u003d.  While kernelcore\u003d is used to specify the minimum amount of\nmemory that must be available for all allocation types, movablecore\u003d is\nused to specify the minimum amount of memory that is used for migratable\nallocations.  The amount of memory used for migratable allocations\ndetermines how large the huge page pool could be dynamically resized to at\nruntime for example.\n\nHow movablecore is actually handled is that the total number of pages in\nthe system is calculated and a value is set for kernelcore that is\n\nkernelcore \u003d\u003d totalpages - movablecore\n\nBoth kernelcore\u003d and movablecore\u003d can be safely specified at the same time.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ed7ed365172e27b0efe9d43cc962723c7193e34e",
      "tree": "6c22daf6908f92c64aae2b425e6383fe0ed404ac",
      "parents": [
        "396faf0303d273219db5d7eb4a2879ad977ed185"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Jul 17 04:03:14 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Jul 17 10:22:59 2007 -0700"
      },
      "message": "handle kernelcore\u003d: generic\n\nThis patch adds the kernelcore\u003d parameter for x86.\n\nOnce all patches are applied, a new command-line parameter and a new\nsysctl exist.  This patch adds the necessary documentation.\n\nFrom: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\n\n  When the \"kernelcore\" boot option is specified, the kernel can\u0027t boot up\n  on ia64 because of an infinite loop.  In addition, the parsing code can be\n  handled in an architecture-independent manner.\n\n  This patch uses common code to handle the kernelcore\u003d parameter.  It is\n  only available to architectures that support arch-independent zone-sizing\n  (i.e.  define CONFIG_ARCH_POPULATES_NODE_MAP).  Other architectures will\n  ignore the boot parameter.\n\n[bunk@stusta.de: make cmdline_parse_kernelcore() static]\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "2a1e274acf0b1c192face19a4be7c12d4503eaaf",
      "tree": "f7e98e1fe19d38bb10bf178fb8f8ed1789b659b2",
      "parents": [
        "769848c03895b63e5662eb7e4ec8c4866f7d0183"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Tue Jul 17 04:03:12 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Jul 17 10:22:59 2007 -0700"
      },
      "message": "Create the ZONE_MOVABLE zone\n\nThe following 8 patches against 2.6.20-mm2 create a zone called ZONE_MOVABLE\nthat is only usable by allocations that specify both __GFP_HIGHMEM and\n__GFP_MOVABLE.  This has the effect of keeping all non-movable pages within a\nsingle memory partition while allowing movable allocations to be satisfied\nfrom either partition.  The patches may be applied with the list-based\nanti-fragmentation patches that group pages together based on mobility.\n\nThe size of the zone is determined by a kernelcore\u003d parameter specified at\nboot-time.  This specifies how much memory is usable by non-movable\nallocations and the remainder is used for ZONE_MOVABLE.  Any range of pages\nwithin ZONE_MOVABLE can be released by migrating the pages or by reclaiming.\n\nWhen selecting a zone to take pages from for ZONE_MOVABLE, there are two\nthings to consider.  First, only memory from the highest populated zone is\nused for ZONE_MOVABLE.  On the x86, this is probably going to be ZONE_HIGHMEM\nbut it would be ZONE_DMA on ppc64 or possibly ZONE_DMA32 on x86_64.  Second,\nthe amount of memory usable by the kernel will be spread evenly throughout\nNUMA nodes where possible.  If the nodes are not of equal size, the amount of\nmemory usable by the kernel on some nodes may be greater than others.\n\nBy default, the zone is not as useful for hugetlb allocations because they are\npinned and non-migratable (currently at least).  A sysctl is provided that\nallows huge pages to be allocated from that zone.  This means that the huge\npage pool can be resized to the size of ZONE_MOVABLE during the lifetime of\nthe system assuming that pages are not mlocked.  Despite huge pages being\nnon-movable, we do not introduce additional external fragmentation of note as\nhuge pages are always the largest contiguous block we care about.\n\nCredit goes to Andy Whitcroft for catching a large variety of problems during\nreview of the patches.\n\nThis patch creates an additional zone, ZONE_MOVABLE.  This zone is only usable\nby allocations which specify both __GFP_HIGHMEM and __GFP_MOVABLE.  Hot-added\nmemory continues to be placed in its existing destination as there is no\nmechanism to redirect it to a specific zone.\n\n[y-goto@jp.fujitsu.com: Fix section mismatch of memory hotplug related code]\n[akpm@linux-foundation.org: various fixes]\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nCc: William Lee Irwin III \u003cwli@holomorphy.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "54114994f4de7e8076fc250e44501e55e19b75b5",
      "tree": "b0b0c2b5c3cf1c0daa7c6db7911c42dcf912f181",
      "parents": [
        "203a2935c734c054bfd4665fb5d8835498af50a8"
      ],
      "author": {
        "name": "Akinobu Mita",
        "email": "akinobu.mita@gmail.com",
        "time": "Sun Jul 15 23:40:23 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon Jul 16 09:05:45 2007 -0700"
      },
      "message": "fault-injection: add min-order parameter to fail_page_alloc\n\nLimiting smaller allocation failures by fault injection helps to find real\npossible bugs, because higher order allocations are likely to fail while\nzero-order allocations are not.\n\nThis patch adds a min-order parameter to fail_page_alloc.  It specifies the\nminimum page allocation order at which failures are injected.\n\nSigned-off-by: Akinobu Mita \u003cakinobu.mita@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b49ad484c54116862d717ffafcab1c9a46600b48",
      "tree": "d081db8e0026022d13cc08a6c594c9471c74e1d4",
      "parents": [
        "6193a2ff180920f84ee06977165ebf32431fc2d2"
      ],
      "author": {
        "name": "Dan Aloni",
        "email": "da-x@monatomic.org",
        "time": "Sun Jul 15 23:38:23 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon Jul 16 09:05:36 2007 -0700"
      },
      "message": "mm/page_alloc.c: lower printk severity\n\nSigned-off-by: Dan Aloni \u003cda-x@monatomic.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6ea6e6887dad1fd44e6d5020a0fd355af4f2b6b3",
      "tree": "184e6c217acd1013b60f439bb7cb8b400525ca43",
      "parents": [
        "8f0accc8627043702e6ea2bb8b9aa3a171ef8393"
      ],
      "author": {
        "name": "Paul Mundt",
        "email": "lethal@linux-sh.org",
        "time": "Sun Jul 15 23:38:20 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon Jul 16 09:05:36 2007 -0700"
      },
      "message": "mm: more __meminit annotations\n\nCurrently zone_spanned_pages_in_node() and zone_absent_pages_in_node() are\nnon-static for ARCH_POPULATES_NODE_MAP and static otherwise.  However, only\nthe non-static versions are __meminit annotated, despite only being called\nfrom __meminit functions in either case.\n\nzone_init_free_lists() is currently non-static and not __meminit annotated\neither, despite only being called once in the entire tree by\ninit_currently_empty_zone(), which too is __meminit.  So make it static and\nproperly annotated.\n\nSigned-off-by: Paul Mundt \u003clethal@linux-sh.org\u003e\nCc: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "98011f569e2ae1e4ae394f6e23faa16676d50de4",
      "tree": "6cde069a3fd943c36d4029b21d8b7ad4c24811f6",
      "parents": [
        "140d5a49046b6d73dce4a4229e88c000a99ee126"
      ],
      "author": {
        "name": "Jan Beulich",
        "email": "jbeulich@novell.com",
        "time": "Sun Jul 15 23:38:17 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon Jul 16 09:05:36 2007 -0700"
      },
      "message": "mm: fix improper .init-type section references\n\n.. which modpost started warning about.\n\nSigned-off-by: Jan Beulich \u003cjbeulich@novell.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "1037b83bd04e31449dc9323f1e8ddada4264ef66",
      "tree": "aad6be88185e675503847d3cf43a257ceb93d0f9",
      "parents": [
        "b92151bab91ef906378d3e0e7128d55dd641e966"
      ],
      "author": {
        "name": "Eric Dumazet",
        "email": "dada1@cosmosbay.com",
        "time": "Sun Jul 15 23:38:05 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon Jul 16 09:05:35 2007 -0700"
      },
      "message": "MM: alloc_large_system_hash() can free some memory for non power-of-two bucketsize\n\nalloc_large_system_hash() is called at boot time to allocate space for\nseveral large hash tables.\n\nLately, the TCP hash table was changed and its bucketsize is not a\npower-of-two anymore.\n\nOn most setups, alloc_large_system_hash() allocates one big page (order \u003e\n0) with __get_free_pages(GFP_ATOMIC, order).  This single high-order page\nhas a power-of-two size, bigger than the needed size.\n\nWe can free all pages that won\u0027t be used by the hash table.\n\nOn a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.\n\nTCP established hash table entries: 32768 (order: 6, 393216 bytes)\n\nSigned-off-by: Eric Dumazet \u003cdada1@cosmosbay.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f0c0b2b808f232741eadac272bd4bc51f18df0f4",
      "tree": "c2568efdc496cc165a4e72d8aa2542b22035e342",
      "parents": [
        "18a8bd949d6adb311ea816125ff65050df1f3f6e"
      ],
      "author": {
        "name": "KAMEZAWA Hiroyuki",
        "email": "kamezawa.hiroyu@jp.fujitsu.com",
        "time": "Sun Jul 15 23:38:01 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon Jul 16 09:05:35 2007 -0700"
      },
      "message": "change zonelist order: zonelist order selection logic\n\nMake zonelist creation policy selectable from sysctl/boot option v6.\n\nThis patch makes NUMA\u0027s zonelist (of pgdat) order selectable.\nAvailable orders are Default (automatic) / Node-based / Zone-based.\n\n[Default Order]\nThe kernel selects Node-based or Zone-based order automatically.\n\n[Node-based Order]\nThis policy treats the locality of memory as the most important parameter.\nZonelist order is created by each zone\u0027s locality. This means lower zones\n(ex. ZONE_DMA) can be used before higher zone (ex. ZONE_NORMAL) exhaustion.\nIOW, ZONE_DMA will be in the middle of the zonelist.\nThe current 2.6.21 kernel uses this.\n\nPros.\n * A user can expect local memory as much as possible.\nCons.\n * A lower zone will be exhausted before a higher zone. This may cause OOM_KILL.\n\nMaybe suitable if ZONE_DMA is relatively big and you never see OOM_KILL\nbecause of ZONE_DMA exhaustion and you need the best locality.\n\n(example)\nassume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.\n\n*node(0)\u0027s memory allocation order:\n\n node(0)\u0027s NORMAL -\u003e node(0)\u0027s DMA -\u003e node(1)\u0027s NORMAL.\n\n*node(1)\u0027s memory allocation order:\n\n node(1)\u0027s NORMAL -\u003e node(0)\u0027s NORMAL -\u003e node(0)\u0027s DMA.\n\n[Zone-based order]\nThis policy treats the zone type as the most important parameter.\nZonelist order is created by zone-type order. This means a lower zone will\nnever be used before higher zone exhaustion.\nIOW, ZONE_DMA will always be at the tail of the zonelist.\n\nPros.\n * OOM_KILL (because of a lower zone) occurs only if all zones are exhausted.\nCons.\n * memory locality may not be best.\n\n(example)\nassume 2 node NUMA. node(0) has ZONE_DMA/ZONE_NORMAL, node(1) has ZONE_NORMAL.\n\n*node(0)\u0027s memory allocation order:\n\n node(0)\u0027s NORMAL -\u003e node(1)\u0027s NORMAL -\u003e node(0)\u0027s DMA.\n\n*node(1)\u0027s memory allocation order:\n\n node(1)\u0027s NORMAL -\u003e node(0)\u0027s NORMAL -\u003e node(0)\u0027s DMA.\n\nThe boot option \"numa_zonelist_order\u003d\" and a proc/sysctl interface are supported.\n\ncommand:\n%echo N \u003e /proc/sys/vm/numa_zonelist_order\n\nWill rebuild the zonelist in Node-based order.\n\ncommand:\n%echo Z \u003e /proc/sys/vm/numa_zonelist_order\n\nWill rebuild the zonelist in Zone-based order.\n\nThanks to Lee Schermerhorn, who gave me much help and code.\n\n[Lee.Schermerhorn@hp.com: add check_highest_zone to build_zonelists_in_zone_order]\n[akpm@linux-foundation.org: build fix]\nSigned-off-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nCc: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Andi Kleen \u003cak@suse.de\u003e\nCc: \"jesse.barnes@intel.com\" \u003cjesse.barnes@intel.com\u003e\nSigned-off-by: Lee Schermerhorn \u003clee.schermerhorn@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d09c6b809432668371b5de9102f4f9aa6a7c79cc",
      "tree": "23a80218149ce732dfab1afca6abd3f370bc0131",
      "parents": [
        "902233ee494f9d9da6dbb818316fcbf892bebbed"
      ],
      "author": {
        "name": "Paul Mundt",
        "email": "lethal@linux-sh.org",
        "time": "Thu Jun 14 15:13:16 2007 +0900"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Fri Jun 15 16:18:08 2007 -0700"
      },
      "message": "mm: Fix memory/cpu hotplug section mismatch and oops.\n\nWhen building with memory hotplug enabled and cpu hotplug disabled, we\nend up with the following section mismatch:\n\nWARNING: mm/built-in.o(.text+0x4e58): Section mismatch: reference to\n.init.text: (between \u0027free_area_init_node\u0027 and \u0027__build_all_zonelists\u0027)\n\nThis happens as a result of:\n\n        -\u003e free_area_init_node()\n          -\u003e free_area_init_core()\n            -\u003e zone_pcp_init() \u003c-- all __meminit up to this point\n              -\u003e zone_batchsize() \u003c-- marked as __cpuinit\n\nThis happens because CONFIG_HOTPLUG_CPU\u003dn sets __cpuinit to __init, but\nCONFIG_MEMORY_HOTPLUG\u003dy unsets __meminit.\n\nChanging zone_batchsize() to __devinit fixes this.\n\n__devinit is the only thing that is common between CONFIG_HOTPLUG_CPU\u003dy and\nCONFIG_MEMORY_HOTPLUG\u003dy. In the long run, perhaps this should be moved to\nanother section identifier completely. Without this, memory hot-add\nof offline nodes (via hotadd_new_pgdat()) will oops if CPU hotplug is\nnot also enabled.\n\nSigned-off-by: Paul Mundt \u003clethal@linux-sh.org\u003e\nAcked-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n\n--\n\n mm/page_alloc.c |    2 +-\n 1 file changed, 1 insertion(+), 1 deletion(-)\n"
    },
    {
      "commit": "12d810c1b8c2b913d48e629e2b5c01d105029839",
      "tree": "b39162d3168f6173af3d0e5790e16eb45a70dfaf",
      "parents": [
        "00c541eae7a477e3d1adb1ebf27cccc0bdb5f824"
      ],
      "author": {
        "name": "Roman Zippel",
        "email": "zippel@linux-m68k.org",
        "time": "Thu May 31 00:40:54 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Thu May 31 07:58:14 2007 -0700"
      },
      "message": "m68k: discontinuous memory support\n\nFix support for discontinuous memory\n\nSigned-off-by: Roman Zippel \u003czippel@linux-m68k.org\u003e\nSigned-off-by: Geert Uytterhoeven \u003cgeert@linux-m68k.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "418508c13222ddba475873ea95c8aeadd26104f2",
      "tree": "6056f5d28f1a0ac8474a275680e3991cc315ed30",
      "parents": [
        "ead5f0b5fa41dd3649a44bfc922d06641ff3dbcf"
      ],
      "author": {
        "name": "Miklos Szeredi",
        "email": "mszeredi@suse.cz",
        "time": "Wed May 23 13:57:55 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Wed May 23 20:14:13 2007 -0700"
      },
      "message": "fix unused setup_nr_node_ids\n\nmm/page_alloc.c:931: warning: \u0027setup_nr_node_ids\u0027 defined but not used\n\nThis is now the only (!) compiler warning I get in my UML build :)\n\nSigned-off-by: Miklos Szeredi \u003cmszeredi@suse.cz\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "577a32f620271416d05f852477151fb51c790bc6",
      "tree": "9c4f219f59fc8117aa7d376d130d57f1ac841a8e",
      "parents": [
        "92080309df1975729a9f8b45fd56528817e34db8"
      ],
      "author": {
        "name": "Sam Ravnborg",
        "email": "sam@ravnborg.org",
        "time": "Thu May 17 23:29:25 2007 +0200"
      },
      "committer": {
        "name": "Sam Ravnborg",
        "email": "sam@ravnborg.org",
        "time": "Sat May 19 09:11:58 2007 +0200"
      },
      "message": "mm: fix section mismatch warnings\n\nmodpost had two cases hardcoded for mm/\nShift over to __init_refok and kill the\nhardcoded function names in modpost.\n\nThis has the drawback that the functions\nwill always be kept no matter the configuration.\nWith the previous code the functions were placed in\nthe init section if the configuration allowed it.\n\nSigned-off-by: Sam Ravnborg \u003csam@ravnborg.org\u003e\n"
    },
    {
      "commit": "6f076f5dd9d227cea2704061048894b00cc0d62b",
      "tree": "0c6be41a5d1d77ef78e76ba9d1861a062aef55bd",
      "parents": [
        "a6a62b69b9f1b0cec0a119c5f4cd2f17d091e8f5"
      ],
      "author": {
        "name": "Stephen Rothwell",
        "email": "sfr@canb.auug.org.au",
        "time": "Thu May 10 03:15:27 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Thu May 10 09:26:52 2007 -0700"
      },
      "message": "early_pfn_to_nid needs to be __meminit\n\nSince it is referenced by memmap_init_zone (which is __meminit) via the\nearly_pfn_in_nid macro when CONFIG_NODES_SPAN_OTHER_NODES is set (which\nbasically means PowerPC 64).\n\nThis removes a section mismatch warning in those circumstances.\n\nSigned-off-by: Stephen Rothwell \u003csfr@canb.auug.org.au\u003e\nCc: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4037d452202e34214e8a939fa5621b2b3bbb45b7",
      "tree": "31b59c0ca94fba4d53b6738b0bad3d1e9fde3063",
      "parents": [
        "77461ab33229d48614402decfb1b2eaa6d446861"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Wed May 09 02:35:14 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Wed May 09 12:30:56 2007 -0700"
      },
      "message": "Move remote node draining out of slab allocators\n\nCurrently the slab allocators contain callbacks into the page allocator to\nperform the draining of pagesets on remote nodes.  This requires SLUB to have\na whole subsystem in order to be compatible with SLAB.  Moving node draining\nout of the slab allocators avoids a section of code in SLUB.\n\nMove the node draining so that it is done when the vm statistics are updated.\nAt that point we are already touching all the cachelines with the pagesets of\na processor.\n\nAdd an expire counter there.  If we have to update per zone or global vm\nstatistics then assume that the pageset will require subsequent draining.\n\nThe expire counter will be decremented on each vm stats update pass until it\nreaches zero.  Then we will drain one batch from the pageset.  The draining\nwill cause vm counter updates which will then cause another expiration until\nthe pcp is empty.  So we will drain a batch every 3 seconds.\n\nNote that remote node draining is a somewhat esoteric feature that is required\non large NUMA systems because otherwise significant portions of system memory\ncan become trapped in pcp queues.  The number of pcps is determined by the\nnumber of processors and nodes in a system.  A system with 4 processors and 2\nnodes has 8 pcps which is okay.  But a system with 1024 processors and 512\nnodes has 512k pcps with a high potential for large amounts of memory being\ncaught in them.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "8bb7844286fb8c9fce6f65d8288aeb09d03a5e0d",
      "tree": "f4e305edaedbde05774bb1e4acd89a9475661d2e",
      "parents": [
        "f37bc2712b54ec641e0c0c8634f1a4b61d9956c0"
      ],
      "author": {
        "name": "Rafael J. Wysocki",
        "email": "rjw@sisk.pl",
        "time": "Wed May 09 02:35:10 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Wed May 09 12:30:56 2007 -0700"
      },
      "message": "Add suspend-related notifications for CPU hotplug\n\nSince nonboot CPUs are now disabled after tasks and devices have been\nfrozen and the CPU hotplug infrastructure is used for this purpose, we need\nspecial CPU hotplug notifications that will help the CPU-hotplug-aware\nsubsystems distinguish normal CPU hotplug events from CPU hotplug events\nrelated to a system-wide suspend or resume operation in progress.  This\npatch introduces such notifications and causes them to be used during\nsuspend and resume transitions.  It also changes all of the\nCPU-hotplug-aware subsystems to take these notifications into consideration\n(for now they are handled in the same way as the corresponding \"normal\"\nones).\n\n[oleg@tv-sign.ru: cleanups]\nSigned-off-by: Rafael J. Wysocki \u003crjw@sisk.pl\u003e\nCc: Gautham R Shenoy \u003cego@in.ibm.com\u003e\nCc: Pavel Machek \u003cpavel@ucw.cz\u003e\nSigned-off-by: Oleg Nesterov \u003coleg@tv-sign.ru\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "72280ede316911fd5a82ef78d12a6705b1007d36",
      "tree": "c6cdec169d300f6967c47771917d99035423bf91",
      "parents": [
        "a3142c8e1dd57ff48040bdb3478cff9312543dc3"
      ],
      "author": {
        "name": "Yasunori Goto",
        "email": "y-goto@jp.fujitsu.com",
        "time": "Tue May 08 00:23:10 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue May 08 11:14:57 2007 -0700"
      },
      "message": "Add white list into modpost.c for memory hotplug code and ia64\u0027s machvec section\n\nThis patch is add white list into modpost.c for some functions and\nia64\u0027s section to fix section mismatchs.\n\n  sparse_index_alloc() and zone_wait_table_init() calls bootmem allocator\n  at boot time, and kmalloc/vmalloc at hotplug time. If config\n  memory hotplug is on, there are references of bootmem allocater(init text)\n  from them (normal text). This is cause of section mismatch.\n\n  Bootmem is called by many functions and it must be\n  used only at boot time. I think __init of them should keep for\n  section mismatch check. So, I would like to register sparse_index_alloc()\n  and zone_wait_table_init() into white list.\n\n  In addition, ia64\u0027s .machvec section is function table of some platform\n  dependent code. It is mixture of .init.text and normal text. These\n  reference of __init functions are valid too.\n\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nCc: Sam Ravnborg \u003csam@ravnborg.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a3142c8e1dd57ff48040bdb3478cff9312543dc3",
      "tree": "14beeb03421338b917a956e9269a2ce95e0f62cf",
      "parents": [
        "0ceb331433e8aad9c5f441a965d7c681f8b9046f"
      ],
      "author": {
        "name": "Yasunori Goto",
        "email": "y-goto@jp.fujitsu.com",
        "time": "Tue May 08 00:23:07 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue May 08 11:14:57 2007 -0700"
      },
      "message": "Fix section mismatch of memory hotplug related code.\n\nThis is to fix many section mismatches of code related to memory hotplug.\nI checked compile with memory hotplug on/off on ia64 and x86-64 box.\n\nSigned-off-by: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7be9823491ecbaf9700d7d3502cb4b4dd0ed868a",
      "tree": "10f606c59837d851376823dae5d8faf50a51bde8",
      "parents": [
        "433ecb4ab312f873870b67ee374502e84f6dcf92"
      ],
      "author": {
        "name": "Rafael J. Wysocki",
        "email": "rjw@sisk.pl",
        "time": "Sun May 06 14:50:42 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon May 07 12:12:58 2007 -0700"
      },
      "message": "swsusp: use inline functions for changing page flags\n\nReplace direct invocations of SetPageNosave(), SetPageNosaveFree() etc.  with\ncalls to inline functions that can be changed in subsequent patches without\nmodifying the code calling them.\n\nSigned-off-by: Rafael J. Wysocki \u003crjw@sisk.pl\u003e\nAcked-by: Pavel Machek \u003cpavel@ucw.cz\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6d7779538f765963ced45a3fa4bed7ba8d2c277d",
      "tree": "07d47e6ff1ab30309004e2ba0674dcabd83945c1",
      "parents": [
        "d85f33855c303acfa87fa457157cef755b6087df"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sun May 06 14:49:40 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon May 07 12:12:53 2007 -0700"
      },
      "message": "mm: optimize compound_head() by avoiding a shared page flag\n\nThe patch adds PageTail(page) and PageHead(page) to check if a page is the\nhead or the tail of a compound page.  This is done by masking the two bits\ndescribing the state of a compound page and then comparing them.  So one\ncomparision and a branch instead of two bit checks and two branches.\n\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d85f33855c303acfa87fa457157cef755b6087df",
      "tree": "f1184a1a24b432727b0399594ede37c7539db888",
      "parents": [
        "30520864839dc796fd314812e7036e754880b47d"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sun May 06 14:49:39 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon May 07 12:12:53 2007 -0700"
      },
      "message": "Make page-\u003eprivate usable in compound pages\n\nIf we add a new flag so that we can distinguish between the first page and the\ntail pages then we can avoid to use page-\u003eprivate in the first page.\npage-\u003eprivate \u003d\u003d page for the first page, so there is no real information in\nthere.\n\nFreeing up page-\u003eprivate makes the use of compound pages more transparent.\nThey become more usable like real pages.  Right now we have to be careful f.e.\n if we are going beyond PAGE_SIZE allocations in the slab on i386 because we\ncan then no longer use the private field.  This is one of the issues that\ncause us not to support debugging for page size slabs in SLAB.\n\nHaving page-\u003eprivate available for SLUB would allow more meta information in\nthe page struct.  I can probably avoid the 16 bit ints that I have in there\nright now.\n\nAlso if page-\u003eprivate is available then a compound page may be equipped with\nbuffer heads.  This may free up the way for filesystems to support larger\nblocks than page size.\n\nWe add PageTail as an alias of PageReclaim.  Compound pages cannot currently\nbe reclaimed.  Because of the alias one needs to check PageCompound first.\n\nThe RFC for the this approach was discussed at\nhttp://marc.info/?t\u003d117574302800001\u0026r\u003d1\u0026w\u003d2\n\n[nacc@us.ibm.com: fix hugetlbfs]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Nishanth Aravamudan \u003cnacc@us.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "3b1d92c56514987010bb0201b5c71aeb633fc4f8",
      "tree": "f31a72692c35eb27fc94590964ba389776c0439f",
      "parents": [
        "8da3430d8a7f885c2bf65121181d76c9d290a86e"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Sun May 06 14:49:30 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon May 07 12:12:53 2007 -0700"
      },
      "message": "Do not disable interrupts when reading min_free_kbytes\n\nThe sysctl handler for min_free_kbytes calls setup_per_zone_pages_min() on\nread or write.  This function iterates through every zone and calls\nspin_lock_irqsave() on the zone LRU lock.  When reading min_free_kbytes,\nthis is a total waste of time that disables interrupts on the local\nprocessor.  It might even be noticable machines with large numbers of zones\nif a process started constantly reading min_free_kbytes.\n\nThis patch only calls setup_per_zone_pages_min() only on write. Tested on\nan x86 laptop and it did the right thing.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "14e072984179d3d421bf9ab75cc67e0961742841",
      "tree": "65a5a6f7d9756b8e7010278b58908d04da257a28",
      "parents": [
        "ac267728f13c55017ed5ee243c9c3166e27ab929"
      ],
      "author": {
        "name": "Andy Whitcroft",
        "email": "apw@shadowen.org",
        "time": "Sun May 06 14:49:14 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon May 07 12:12:52 2007 -0700"
      },
      "message": "add pfn_valid_within helper for sub-MAX_ORDER hole detection\n\nGenerally we work under the assumption that memory the mem_map array is\ncontigious and valid out to MAX_ORDER_NR_PAGES block of pages, ie.  that if we\nhave validated any page within this MAX_ORDER_NR_PAGES block we need not check\nany other.  This is not true when CONFIG_HOLES_IN_ZONE is set and we must\ncheck each and every reference we make from a pfn.\n\nAdd a pfn_valid_within() helper which should be used when scanning pages\nwithin a MAX_ORDER_NR_PAGES block when we have already checked the validility\nof the block normally with pfn_valid().  This can then be optimised away when\nwe do not have holes within a MAX_ORDER_NR_PAGES block of pages.\n\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Bob Picco \u003cbob.picco@hp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "476f35348eb8d2a827765992899fea78b7dcc46f",
      "tree": "81dbace9de3d4ffa3ecc67bffe265134962117bd",
      "parents": [
        "aee16b3cee2746880e40945a9b5bff4f309cfbc4"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@engr.sgi.com",
        "time": "Sun May 06 14:48:58 2007 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Mon May 07 12:12:51 2007 -0700"
      },
      "message": "Safer nr_node_ids and nr_node_ids determination and initial values\n\nThe nr_cpu_ids value is currently only calculated in smp_init.  However, it\nmay be needed before (SLUB needs it on kmem_cache_init!) and other kernel\ncomponents may also want to allocate dynamically sized per cpu array before\nsmp_init.  So move the determination of possible cpus into sched_init()\nwhere we already loop over all possible cpus early in boot.\n\nAlso initialize both nr_node_ids and nr_cpu_ids with the highest value they\ncould take.  If we have accidental users before these values are determined\nthen the current valud of 0 may cause too small per cpu and per node arrays\nto be allocated.  If it is set to the maximum possible then we only waste\nsome memory for early boot users.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5409bae07a63630ba5a40f3f00b7f3e6d7eceedd",
      "tree": "cc8837a90476091fc1fec7f20e4a2ca99ffbae41",
      "parents": [
        "93a6fefe2f6fc380870c0985b246bec7f37a06f7"
      ],
      "author": {
        "name": "Nick Piggin",
        "email": "nickpiggin@yahoo.com.au",
        "time": "Wed Feb 28 20:12:27 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Thu Mar 01 14:53:37 2007 -0800"
      },
      "message": "[PATCH] Rename PG_checked to PG_owner_priv_1\n\nRename PG_checked to PG_owner_priv_1 to reflect its availablilty as a\nprivate flag for use by the owner/allocator of the page.  In the case of\npagecache pages (which might be considered to be owned by the mm),\nfilesystems may use the flag.\n\nSigned-off-by: Jeremy Fitzhardinge \u003cjeremy@xensource.com\u003e\nSigned-off-by: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "8ef8286689c6b5bc76212437b85bdd2ba749ee44",
      "tree": "9ef088691bd06699adc6c7875bc1b2e6e96ce066",
      "parents": [
        "53b8a315b76a3f3c70a5644976c0095460eb13d8"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Tue Feb 20 13:57:52 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Feb 20 17:10:13 2007 -0800"
      },
      "message": "[PATCH] slab: reduce size of alien cache to cover only possible nodes\n\nThe alien cache is a per cpu per node array allocated for every slab on the\nsystem.  Currently we size this array for all nodes that the kernel does\nsupport.  For IA64 this is 1024 nodes.  So we allocate an array with 1024\nobjects even if we only boot a system with 4 nodes.\n\nThis patch uses \"nr_node_ids\" to determine the number of possible nodes\nsupported by a hardware configuration and only allocates an alien cache\nsized for possible nodes.\n\nThe initialization of nr_node_ids occurred too late relative to the bootstrap\nof the slab allocator and so I moved the setup_nr_node_ids() into\nfree_area_init_nodes().\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "74c7aa8b8581e0ba8d6d17c623b9279aaabbb0cf",
      "tree": "e8bfdd1d4bd5a7d4ee0e0bbf83c45c9f2b5deb59",
      "parents": [
        "5ec553a90448b3edbd26c1acc72464f877614bfa"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Tue Feb 20 13:57:51 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Tue Feb 20 17:10:13 2007 -0800"
      },
      "message": "[PATCH] Replace highest_possible_node_id() with nr_node_ids\n\nhighest_possible_node_id() is currently used to calculate the last possible\nnode idso that the network subsystem can figure out how to size per node\narrays.\n\nI think having the ability to determine the maximum amount of nodes in a\nsystem at runtime is useful but then we should name this entry\ncorrespondingly, it should return the number of node_ids, and the the value\nneeds to be setup only once on bootup.  The node_possible_map does not\nchange after bootup.\n\nThis patch introduces nr_node_ids and replaces the use of\nhighest_possible_node_id().  nr_node_ids is calculated on bootup when the\npage allocators pagesets are initialized.\n\n[deweerdt@free.fr: fix oops]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Neil Brown \u003cneilb@suse.de\u003e\nCc: Trond Myklebust \u003ctrond.myklebust@fys.uio.no\u003e\nSigned-off-by: Frederik Deweerdt \u003cfrederik.deweerdt@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "4b51d66989218aad731a721b5b28c79bf5388c09",
      "tree": "8ff7acbd219f699c20c2f1fd201ffb3db5a64062",
      "parents": [
        "66701b1499a3ff11882c8c4aef36e8eac86e17b1"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sat Feb 10 01:43:10 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:18 2007 -0800"
      },
      "message": "[PATCH] optional ZONE_DMA: optional ZONE_DMA in the VM\n\nMake ZONE_DMA optional in core code.\n\n- ifdef all code for ZONE_DMA and related definitions following the example\n  for ZONE_DMA32 and ZONE_HIGHMEM.\n\n- Without ZONE_DMA, ZONE_HIGHMEM and ZONE_DMA32 we get to a ZONES_SHIFT of\n  0.\n\n- Modify the VM statistics to work correctly without a DMA zone.\n\n- Modify slab to not create DMA slabs if there is no ZONE_DMA.\n\n[akpm@osdl.org: cleanup]\n[jdike@addtoit.com: build fix]\n[apw@shadowen.org: Simplify calculation of the number of bits we need for ZONES_SHIFT]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Andi Kleen \u003cak@suse.de\u003e\nCc: \"Luck, Tony\" \u003ctony.luck@intel.com\u003e\nCc: Kyle McMartin \u003ckyle@mcmartin.ca\u003e\nCc: Matthew Wilcox \u003cwilly@debian.org\u003e\nCc: James Bottomley \u003cJames.Bottomley@steeleye.com\u003e\nCc: Paul Mundt \u003clethal@linux-sh.org\u003e\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Jeff Dike \u003cjdike@addtoit.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6267276f3fdda9ad0d5ca451bdcbdf42b802d64b",
      "tree": "fcc92bd645401a2c0f9e6a35b8e215868969db4d",
      "parents": [
        "65e458d43dff872ee560e721fb0fdb367bb5adb0"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sat Feb 10 01:43:07 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:18 2007 -0800"
      },
      "message": "[PATCH] optional ZONE_DMA: deal with cases of ZONE_DMA meaning the first zone\n\nThis patchset follows up on the earlier work in Andrew\u0027s tree to reduce the\nnumber of zones.  The patches allow to go to a minimum of 2 zones.  This one\nallows also to make ZONE_DMA optional and therefore the number of zones can be\nreduced to one.\n\nZONE_DMA is usually used for ISA DMA devices.  There are a number of reasons\nwhy we would not want to have ZONE_DMA\n\n1. Some arches do not need ZONE_DMA at all.\n\n2. With the advent of IOMMUs DMA zones are no longer needed.\n   The necessity of DMA zones may drastically be reduced\n   in the future. This patchset allows a compilation of\n   a kernel without that overhead.\n\n3. Devices that require ISA DMA get rare these days. All\n   my systems do not have any need for ISA DMA.\n\n4. The presence of an additional zone unecessarily complicates\n   VM operations because it must be scanned and balancing\n   logic must operate on its.\n\n5. With only ZONE_NORMAL one can reach the situation where\n   we have only one zone. This will allow the unrolling of many\n   loops in the VM and allows the optimization of varous\n   code paths in the VM.\n\n6. Having only a single zone in a NUMA system results in a\n   1-1 correspondence between nodes and zones. Various additional\n   optimizations to critical VM paths become possible.\n\nMany systems today can operate just fine with a single zone.  If you look at\nwhat is in ZONE_DMA then one usually sees that nothing uses it.  The DMA slabs\nare empty (Some arches use ZONE_DMA instead of ZONE_NORMAL, then ZONE_NORMAL\nwill be empty instead).\n\nOn all of my systems (i386, x86_64, ia64) ZONE_DMA is completely empty.  Why\nconstantly look at an empty zone in /proc/zoneinfo and empty slab in\n/proc/slabinfo?  
Non i386 also frequently have no need for ZONE_DMA and zones\nstay empty.\n\nThe patchset was tested on i386 (UP / SMP), x86_64 (UP, NUMA) and ia64 (NUMA).\n\nThe RFC posted earlier (see\nhttp://marc.theaimsgroup.com/?l\u003dlinux-kernel\u0026m\u003d115231723513008\u0026w\u003d2) had lots\nof #ifdefs in them.  An effort has been made to minize the number of #ifdefs\nand make this as compact as possible.  The job was made much easier by the\nongoing efforts of others to extract common arch specific functionality.\n\nI have been running this for awhile now on my desktop and finally Linux is\nusing all my available RAM instead of leaving the 16MB in ZONE_DMA untouched:\n\nchristoph@pentium940:~$ cat /proc/zoneinfo\nNode 0, zone   Normal\n  pages free     4435\n        min      1448\n        low      1810\n        high     2172\n        active   241786\n        inactive 210170\n        scanned  0 (a: 0 i: 0)\n        spanned  524224\n        present  524224\n    nr_anon_pages 61680\n    nr_mapped    14271\n    nr_file_pages 390264\n    nr_slab_reclaimable 27564\n    nr_slab_unreclaimable 1793\n    nr_page_table_pages 449\n    nr_dirty     39\n    nr_writeback 0\n    nr_unstable  0\n    nr_bounce    0\n    cpu: 0 pcp: 0\n              count: 156\n              high:  186\n              batch: 31\n    cpu: 0 pcp: 1\n              count: 9\n              high:  62\n              batch: 15\n  vm stats threshold: 20\n    cpu: 1 pcp: 0\n              count: 177\n              high:  186\n              batch: 31\n    cpu: 1 pcp: 1\n              count: 12\n              high:  62\n              batch: 15\n  vm stats threshold: 20\n  all_unreclaimable: 0\n  prev_priority:     12\n  temp_priority:     12\n  start_pfn:         0\n\nThis patch:\n\nIn two places in the VM we use ZONE_DMA to refer to the first zone.  If\nZONE_DMA is optional then other zones may be first.  So simply replace\nZONE_DMA with zone 0.\n\nThis also fixes ZONETABLE_PGSHIFT.  
If we have only a single zone then\nZONES_PGSHIFT may become 0 because there is no need anymore to encode the zone\nnumber related to a pgdat.  However, we still need a zonetable to index all\nthe zones for each node if this is a NUMA system.  Therefore define\nZONETABLE_SHIFT unconditionally as the offset of the ZONE field in page flags.\n\n[apw@shadowen.org: fix mismerge]\nAcked-by: Christoph Hellwig \u003chch@infradead.org\u003e\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Andi Kleen \u003cak@suse.de\u003e\nCc: \"Luck, Tony\" \u003ctony.luck@intel.com\u003e\nCc: Kyle McMartin \u003ckyle@mcmartin.ca\u003e\nCc: Matthew Wilcox \u003cwilly@debian.org\u003e\nCc: James Bottomley \u003cJames.Bottomley@steeleye.com\u003e\nCc: Paul Mundt \u003clethal@linux-sh.org\u003e\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "65e458d43dff872ee560e721fb0fdb367bb5adb0",
      "tree": "e903ec97a4a6c0ee952108b696387ef098a6a80c",
      "parents": [
        "05a0416be2b88d859efcbc4a4290555a04d169a1"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sat Feb 10 01:43:05 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:18 2007 -0800"
      },
      "message": "[PATCH] Drop get_zone_counts()\n\nValues are available via ZVC sums.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9195481d2f869a2707a272057f3f8664fd277534",
      "tree": "995f43619af48009b616bf5a7ce4a6bffd75de79",
      "parents": [
        "96177299416dbccb73b54e6b344260154a445375"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sat Feb 10 01:43:04 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:18 2007 -0800"
      },
      "message": "[PATCH] Drop nr_free_pages_pgdat()\n\nFunction is unnecessary now.  We can use the summing features of the ZVCs to\nget the values we need.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "96177299416dbccb73b54e6b344260154a445375",
      "tree": "586454851d0fbbb365d6b12c852d5a7dd6b004f4",
      "parents": [
        "51ed4491271be8c56bdb2a03481ed34ea4984bc2"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sat Feb 10 01:43:03 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:18 2007 -0800"
      },
      "message": "[PATCH] Drop free_pages()\n\nnr_free_pages is now a simple access to a global variable.  Make it a macro\ninstead of a function.\n\nThe nr_free_pages now requires vmstat.h to be included.  There is one\noccurrence in power management where we need to add the include.  Directly\nrefrer to global_page_state() there to clarify why the #include was added.\n\n[akpm@osdl.org: arm build fix]\n[akpm@osdl.org: sparc64 build fix]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d23ad42324cc4378132e51f2fc5c9ba6cbe75182",
      "tree": "6844416befb3988e432e8f422f3a369e2f760d39",
      "parents": [
        "c878538598d1e7ab41ecc0de8894e34e2fdef630"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sat Feb 10 01:43:02 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:17 2007 -0800"
      },
      "message": "[PATCH] Use ZVC for free_pages\n\nThis is again simplifies some of the VM counter calculations through the use\nof the ZVC consolidated counters.\n\n[michal.k.k.piotrowski@gmail.com: build fix]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Michal Piotrowski \u003cmichal.k.k.piotrowski@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c878538598d1e7ab41ecc0de8894e34e2fdef630",
      "tree": "d22e73fddef75521e287c3e7754a1d3224c348d9",
      "parents": [
        "c3704ceb4ad055b489b143f4e37c57d128908012"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Sat Feb 10 01:43:01 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:17 2007 -0800"
      },
      "message": "[PATCH] Use ZVC for inactive and active counts\n\nThe determination of the dirty ratio to determine writeback behavior is\ncurrently based on the number of total pages on the system.\n\nHowever, not all pages in the system may be dirtied.  Thus the ratio is always\ntoo low and can never reach 100%.  The ratio may be particularly skewed if\nlarge hugepage allocations, slab allocations or device driver buffers make\nlarge sections of memory not available anymore.  In that case we may get into\na situation in which f.e.  the background writeback ratio of 40% cannot be\nreached anymore which leads to undesired writeback behavior.\n\nThis patchset fixes that issue by determining the ratio based on the actual\npages that may potentially be dirty.  These are the pages on the active and\nthe inactive list plus free pages.\n\nThe problem with those counts has so far been that it is expensive to\ncalculate these because counts from multiple nodes and multiple zones will\nhave to be summed up.  This patchset makes these counters ZVC counters.  This\nmeans that a current sum per zone, per node and for the whole system is always\navailable via global variables and not expensive anymore to calculate.\n\nThe patchset results in some other good side effects:\n\n- Removal of the various functions that sum up free, active and inactive\n  page counts\n\n- Cleanup of the functions that display information via the proc filesystem.\n\nThis patch:\n\nThe use of a ZVC for nr_inactive and nr_active allows a simplification of some\ncounter operations.  More ZVC functionality is used for sums etc in the\nfollowing patches.\n\n[akpm@osdl.org: UP build fix]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a6af2bc3d5ce8722b9d09c5bdd5383c91c419653",
      "tree": "d4af36e83f25d84979584b670b692068eb2b5a56",
      "parents": [
        "e10a4437cb37c85f2df95432025b392d98aac2aa"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@csn.ul.ie",
        "time": "Sat Feb 10 01:42:57 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Sun Feb 11 10:51:17 2007 -0800"
      },
      "message": "[PATCH] Avoid excessive sorting of early_node_map[]\n\nfind_min_pfn_for_node() and find_min_pfn_with_active_regions() sort\nearly_node_map[] on every call.  This is an excessive amount of sorting and\nthat can be avoided.  This patch always searches the whole early_node_map[]\nin find_min_pfn_for_node() instead of returning the first value found.  The\nmap is then only sorted once when required.  Successfully boot tested on a\nnumber of machines.\n\n[akpm@osdl.org: cleanup]\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a25700a53f715fde30443e737e52310c6d4a311a",
      "tree": "7f0593a68d7c791f3d01ce04f061b534fa26db91",
      "parents": [
        "45941d0481f538324fa21d6450116d13f6e51e91"
      ],
      "author": {
        "name": "Andrew Morton",
        "email": "akpm@osdl.org",
        "time": "Thu Feb 08 14:20:40 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Fri Feb 09 09:25:47 2007 -0800"
      },
      "message": "[PATCH] mm: show bounce pages in oom killer output\n\nAlso split that long line up - people like to send us wordwrapped oom-kill\ntraces.\n\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nCc: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6fd6b17c6d9713f56b5f20903ec3e00fa6cc435e",
      "tree": "f5dd7477e48fc5a1184d3472548291397be9c2b5",
      "parents": [
        "f56df2f4db6e4af87fb8e941cff69f4501a111df"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Wed Jan 31 16:43:36 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.linux-foundation.org",
        "time": "Wed Jan 31 16:46:40 2007 -0800"
      },
      "message": "Revert \"[PATCH] mm: micro optimise zone_watermark_ok\"\n\nThis reverts commit e80ee884ae0e3794ef2b65a18a767d502ad712ee.\n\nPawel Sikora had a boot-time oops due to it - because the sign change\ninvalidates the following comparisons, since \u0027free_pages\u0027 can be\nnegative.\n\nThe micro-optimization just isn\u0027t worth it.\n\nBisected-by: Pawel Sikora \u003cpluto@agmk.net\u003e\nAcked-by: Andrew Morton \u003cakpm@osdl.org\u003e\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a2f3aa02576632cdb60bd3de1f4bf55e9ac65604",
      "tree": "2b9b73675de73866fbd219fab5bf2d804e6817b1",
      "parents": [
        "47a4d5be7c50b2e9b905abbe2b97dc87051c5a44"
      ],
      "author": {
        "name": "Dave Hansen",
        "email": "haveblue@us.ibm.com",
        "time": "Wed Jan 10 23:15:30 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Jan 11 18:18:20 2007 -0800"
      },
      "message": "[PATCH] Fix sparsemem on Cell\n\nFix an oops experienced on the Cell architecture when init-time functions,\nearly_*(), are called at runtime.  It alters the call paths to make sure\nthat the callers explicitly say whether the call is being made on behalf of\na hotplug even, or happening at boot-time.\n\nIt has been compile tested on ppc64, ia64, s390, i386 and x86_64.\n\nAcked-by: Arnd Bergmann \u003carndb@de.ibm.com\u003e\nSigned-off-by: Dave Hansen \u003chaveblue@us.ibm.com\u003e\nCc: Yasunori Goto \u003cy-goto@jp.fujitsu.com\u003e\nAcked-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nCc: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nCc: Martin Schwidefsky \u003cschwidefsky@de.ibm.com\u003e\nAcked-by: Heiko Carstens \u003cheiko.carstens@de.ibm.com\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nCc: Paul Mackerras \u003cpaulus@samba.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "f2e12bb272f2544d1504f982270e90ae3dcc4ff2",
      "tree": "68e8d10521fdcf1d7f4df411d87809cd1110b929",
      "parents": [
        "6929da4427b4335365dd51ab0b7dd2a0393656f0"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Fri Jan 05 16:37:02 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Fri Jan 05 23:55:29 2007 -0800"
      },
      "message": "[PATCH] Check for populated zone in __drain_pages\n\nBoth process_zones() and drain_node_pages() check for populated zones\nbefore touching pagesets.  However, __drain_pages does not do so,\n\nThis may result in a NULL pointer dereference for pagesets in unpopulated\nzones if a NUMA setup is combined with cpu hotplug.\n\nInitially the unpopulated zone has the pcp pointers pointing to the boot\npagesets.  Since the zone is not populated the boot pageset pointers will\nnot be changed during page allocator and slab bootstrap.\n\nIf a cpu is later brought down (first call to __drain_pages()) then the pcp\npointers for cpus in unpopulated zones are set to NULL since __drain_pages\ndoes not first check for an unpopulated zone.\n\nIf the cpu is then brought up again then we call process_zones() which will\nignore the unpopulated zone.  So the pageset pointers will still be NULL.\n\nIf the cpu is then again brought down then __drain_pages will attempt to\ndrain pages by following the NULL pageset pointer for unpopulated zones.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "9ab37b8f21b4dfe256d736c13738d20c88a1f3ad",
      "tree": "11b7b33b1e88ce19175492f25cfc71add2b3dcd6",
      "parents": [
        "dd0ec16fa6cf2498b831663a543e1b67fce6e155"
      ],
      "author": {
        "name": "Paul Mundt",
        "email": "lethal@linux-sh.org",
        "time": "Fri Jan 05 16:36:30 2007 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Fri Jan 05 23:55:23 2007 -0800"
      },
      "message": "[PATCH] Sanely size hash tables when using large base pages\n\nAt the moment the inode/dentry cache hash tables (common by way of\nalloc_large_system_hash()) are incorrectly sized by their respective\ndetection logic when we attempt to use large base pages on systems with\nlittle memory.\n\nThis results in odd behaviour when using a 64kB PAGE_SIZE, such as:\n\nDentry cache hash table entries: 8192 (order: -1, 32768 bytes)\nInode-cache hash table entries: 4096 (order: -2, 16384 bytes)\n\nThe mount cache hash table is seemingly the only one that gets this right\nby directly taking PAGE_SIZE in to account.\n\nThe following patch attempts to catch the bogus values and round it up to\nat least 0-order.\n\nSigned-off-by: Paul Mundt \u003clethal@linux-sh.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "02a0e53d8227aff5e62e0433f82c12c1c2805fd6",
      "tree": "fe32435308e5f1afe8bd12357bd8c5ff3b4133c7",
      "parents": [
        "55935a34a428a1497e3b37982e2782c09c6f914d"
      ],
      "author": {
        "name": "Paul Jackson",
        "email": "pj@sgi.com",
        "time": "Wed Dec 13 00:34:25 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Wed Dec 13 09:05:49 2006 -0800"
      },
      "message": "[PATCH] cpuset: rework cpuset_zone_allowed api\n\nElaborate the API for calling cpuset_zone_allowed(), so that users have to\nexplicitly choose between the two variants:\n\n  cpuset_zone_allowed_hardwall()\n  cpuset_zone_allowed_softwall()\n\nUntil now, whether or not you got the hardwall flavor depended solely on\nwhether or not you or\u0027d in the __GFP_HARDWALL gfp flag to the gfp_mask\nargument.\n\nIf you didn\u0027t specify __GFP_HARDWALL, you implicitly got the softwall\nversion.\n\nUnfortunately, this meant that users would end up with the softwall version\nwithout thinking about it.  Since only the softwall version might sleep,\nthis led to bugs with possible sleeping in interrupt context on more than\none occassion.\n\nThe hardwall version requires that the current tasks mems_allowed allows\nthe node of the specified zone (or that you\u0027re in interrupt or that\n__GFP_THISNODE is set or that you\u0027re on a one cpuset system.)\n\nThe softwall version, depending on the gfp_mask, might allow a node if it\nwas allowed in the nearest enclusing cpuset marked mem_exclusive (which\nrequires taking the cpuset lock \u0027callback_mutex\u0027 to evaluate.)\n\nThis patch removes the cpuset_zone_allowed() call, and forces the caller to\nexplicitly choose between the hardwall and the softwall case.\n\nIf the caller wants the gfp_mask to determine this choice, they should (1)\nbe sure they can sleep or that __GFP_HARDWALL is set, and (2) invoke the\ncpuset_zone_allowed_softwall() routine.\n\nThis adds another 100 or 200 bytes to the kernel text space, due to the few\nlines of nearly duplicate code at the top of both cpuset_zone_allowed_*\nroutines.  
It should save a few instructions executed for the calls that\nturned into calls of cpuset_zone_allowed_hardwall, thanks to not having to\nset (before the call) then check (within the call) the __GFP_HARDWALL flag.\n\nFor the most critical call, from get_page_from_freelist(), the same\ninstructions are executed as before -- the old cpuset_zone_allowed()\nroutine it used to call is the same code as the\ncpuset_zone_allowed_softwall() routine that it calls now.\n\nNot a perfect win, but seems worth it, to reduce this chance of hitting a\nsleeping with irq off complaint again.\n\nSigned-off-by: Paul Jackson \u003cpj@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "6b1b60f41eef3ba7b188fd72f1d6de478aafd93c",
      "tree": "96be18f573ef01f65547e8d74f0b4d7ce52f2c11",
      "parents": [
        "f1729c28a37e4f11ea5d9f468ab26adadb1aadab"
      ],
      "author": {
        "name": "Don Mullis",
        "email": "dwm@meer.net",
        "time": "Fri Dec 08 02:39:53 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Fri Dec 08 08:29:03 2006 -0800"
      },
      "message": "[PATCH] fault-injection: defaults likely to please a new user\n\nAssign defaults most likely to please a new user:\n 1) generate some logging output\n    (verbose\u003d2)\n 2) avoid injecting failures likely to lock up UI\n    (ignore_gfp_wait\u003d1, ignore_gfp_highmem\u003d1)\n\nSigned-off-by: Don Mullis \u003cdwm@meer.net\u003e\nCc: Akinobu Mita \u003cakinobu.mita@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "933e312e73f8fc39652bd4d216a5393cc3a014b9",
      "tree": "a2aacc2a098c3c95fe5a94fef1a2cc9751bee79b",
      "parents": [
        "8a8b6502fb669c3a0638a08955442814cedc86b1"
      ],
      "author": {
        "name": "Akinobu Mita",
        "email": "akinobu.mita@gmail.com",
        "time": "Fri Dec 08 02:39:45 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Fri Dec 08 08:29:02 2006 -0800"
      },
      "message": "[PATCH] fault-injection capability for alloc_pages()\n\nThis patch provides fault-injection capability for alloc_pages()\n\nBoot option:\n\nfail_page_alloc\u003d\u003cinterval\u003e,\u003cprobability\u003e,\u003cspace\u003e,\u003ctimes\u003e\n\n\t\u003cinterval\u003e -- specifies the interval of failures.\n\n\t\u003cprobability\u003e -- specifies how often it should fail in percent.\n\n\t\u003cspace\u003e -- specifies the size of free space where memory can be\n\t\t   allocated safely in pages.\n\n\t\u003ctimes\u003e -- specifies how many times failures may happen at most.\n\nDebugfs:\n\n/debug/fail_page_alloc/interval\n/debug/fail_page_alloc/probability\n/debug/fail_page_alloc/specifies\n/debug/fail_page_alloc/times\n/debug/fail_page_alloc/ignore-gfp-highmem\n/debug/fail_page_alloc/ignore-gfp-wait\n\nExample:\n\n\tfail_page_alloc\u003d10,100,0,-1\n\nThe page allocation (alloc_pages(), ...) fails once per 10 times.\n\nSigned-off-by: Akinobu Mita \u003cakinobu.mita@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "f0d1b0b30d250a07627ad8b9fbbb5c7cc08422e8",
      "tree": "0aa5379150574374351fb92af7881a48dbfcf2ce",
      "parents": [
        "b3d7ae5f47a58a9f7b152deeaf7daa1fc558a8f1"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Fri Dec 08 02:37:49 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Fri Dec 08 08:28:51 2006 -0800"
      },
      "message": "[PATCH] LOG2: Implement a general integer log2 facility in the kernel\n\nThis facility provides three entry points:\n\n\tilog2()\t\tLog base 2 of unsigned long\n\tilog2_u32()\tLog base 2 of u32\n\tilog2_u64()\tLog base 2 of u64\n\nThese facilities can either be used inside functions on dynamic data:\n\n\tint do_something(long q)\n\t{\n\t\t...;\n\t\ty \u003d ilog2(x)\n\t\t...;\n\t}\n\nOr can be used to statically initialise global variables with constant values:\n\n\tunsigned n \u003d ilog2(27);\n\nWhen performing static initialisation, the compiler will report \"error:\ninitializer element is not constant\" if asked to take a log of zero or of\nsomething not reducible to a constant.  They treat negative numbers as\nunsigned.\n\nWhen not dealing with a constant, they fall back to using fls() which permits\nthem to use arch-specific log calculation instructions - such as BSR on\nx86/x86_64 or SCAN on FRV - if available.\n\n[akpm@osdl.org: MMC fix]\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nCc: Paul Mackerras \u003cpaulus@samba.org\u003e\nCc: Herbert Xu \u003cherbert@gondor.apana.org.au\u003e\nCc: David Howells \u003cdhowells@redhat.com\u003e\nCc: Wojtek Kaniewski \u003cwojtekka@toxygen.net\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "15ad7cdcfd76450d4beebc789ec646664238184d",
      "tree": "279d05a76ae0906c23ee2de8c5684d95d9886ad3",
      "parents": [
        "4a08a9f68168e547c2baf100020e9b96cae5fbd1"
      ],
      "author": {
        "name": "Helge Deller",
        "email": "deller@gmx.de",
        "time": "Wed Dec 06 20:40:36 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:46 2006 -0800"
      },
      "message": "[PATCH] struct seq_operations and struct file_operations constification\n\n - move some file_operations structs into the .rodata section\n\n - move static strings from policy_types[] array into the .rodata section\n\n - fix generic seq_operations usages, so that those structs may be defined\n   as \"const\" as well\n\n[akpm@osdl.org: couple of fixes]\nSigned-off-by: Helge Deller \u003cdeller@gmx.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "02316067852187b8bec781bec07410e91af79627",
      "tree": "856e3f4610c91a6548bf3bf5c70ecbc0b28a4145",
      "parents": [
        "a38a44c1a93078fc5fadc4ac2df8dea4697069e2"
      ],
      "author": {
        "name": "Ingo Molnar",
        "email": "mingo@elte.hu",
        "time": "Wed Dec 06 20:38:17 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:39 2006 -0800"
      },
      "message": "[PATCH] hotplug CPU: clean up hotcpu_notifier() use\n\nThere was lots of #ifdef noise in the kernel due to hotcpu_notifier(fn,\nprio) not correctly marking \u0027fn\u0027 as used in the !HOTPLUG_CPU case, and thus\ngenerating compiler warnings of unused symbols, hence forcing people to add\n#ifdefs.\n\nthe compiler can skip truly unused functions just fine:\n\n    text    data     bss     dec     hex filename\n 1624412  728710 3674856 6027978  5bfaca vmlinux.before\n 1624412  728710 3674856 6027978  5bfaca vmlinux.after\n\n[akpm@osdl.org: topology.c fix]\nSigned-off-by: Ingo Molnar \u003cmingo@elte.hu\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "04903664325acb3f199dd8a4b8f1aa437e9fd6b2",
      "tree": "6da8e4716c1477d578b9d89cd23be703aac8bedc",
      "parents": [
        "93f210dd9e614ddab7ecef0b4c9ba6ad3720d860"
      ],
      "author": {
        "name": "Andrew Morton",
        "email": "akpm@osdl.org",
        "time": "Wed Dec 06 20:37:33 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:37 2006 -0800"
      },
      "message": "[PATCH] remove HASH_HIGHMEM\n\nIt has no users and it\u0027s doubtful that we\u0027ll need it again.\n\nCc: \"David S. Miller\" \u003cdavem@davemloft.net\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "33f2ef89f8e181486b63fdbdc97c6afa6ca9f34b",
      "tree": "b90eac24ff367bc628c44eaa51a9f0ea1b69d1a4",
      "parents": [
        "3c517a6132098ca37e122a2980fc64a9e798b0d7"
      ],
      "author": {
        "name": "Andy Whitcroft",
        "email": "apw@shadowen.org",
        "time": "Wed Dec 06 20:33:32 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:25 2006 -0800"
      },
      "message": "[PATCH] mm: make compound page destructor handling explicit\n\nCurrently we we use the lru head link of the second page of a compound page\nto hold its destructor.  This was ok when it was purely an internal\nimplmentation detail.  However, hugetlbfs overrides this destructor\nviolating the layering.  Abstract this out as explicit calls, also\nintroduce a type for the callback function allowing them to be type\nchecked.  For each callback we pre-declare the function, causing a type\nerror on definition rather than on use elsewhere.\n\n[akpm@osdl.org: cleanups]\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "952f3b51beb592f3f1de15adcdef802fc086ea91",
      "tree": "45d9b240230494d3d985400f81f963b67db1e788",
      "parents": [
        "5bcd234d881d83ac0259c6d42d98f134e31c60a8"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Wed Dec 06 20:33:26 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:25 2006 -0800"
      },
      "message": "[PATCH] GFP_THISNODE must not trigger global reclaim\n\nThe intent of GFP_THISNODE is to make sure that an allocation occurs on a\nparticular node.  If this is not possible then NULL needs to be returned so\nthat the caller can choose what to do next on its own (the slab allocator\ndepends on that).\n\nHowever, GFP_THISNODE currently triggers reclaim before returning a failure\n(GFP_THISNODE means GFP_NORETRY is set).  If we have over allocated a node\nthen we will currently do some reclaim before returning NULL.  The caller\nmay want memory from other nodes before reclaim should be triggered.  (If\nthe caller wants reclaim then he can directly use __GFP_THISNODE instead).\n\nThere is no flag to avoid reclaim in the page allocator and adding yet\nanother GFP_xx flag would be difficult given that we are out of available\nflags.\n\nSo just compare and see if all bits for GFP_THISNODE (__GFP_THISNODE,\n__GFP_NORETRY and __GFP_NOWARN) are set.  If so then we return NULL before\nwaking up kswapd.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "ce421c799b5bde77aa60776d6fb61036ae0aea11",
      "tree": "98e0a52cdafdfa28c986991f57854209e68b8226",
      "parents": [
        "5d1854e15ee979f8e27330f0d3ce5e2703afa1dc"
      ],
      "author": {
        "name": "Andy Whitcroft",
        "email": "apw@shadowen.org",
        "time": "Wed Dec 06 20:33:08 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:23 2006 -0800"
      },
      "message": "[PATCH] mm: cleanup indentation on switch for CPU operations\n\nThese patches introduced new switch statements which are indented contrary\nto the concensus in mm/*.c.  Fix them up to match that concensus.\n\n    [PATCH] node local per-cpu-pages\n    [PATCH] ZVC: Scale thresholds depending on the size of the system\n    commit e7c8d5c9955a4d2e88e36b640563f5d6d5aba48a\n    commit df9ecaba3f152d1ea79f2a5e0b87505e03f47590\n\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nCc: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "25ba77c141dbcd2602dd0171824d0d72aa023a01",
      "tree": "153eb9bc567f63d739dcaf8a3caf11c8f48b8379",
      "parents": [
        "bc4ba393c007248f76c05945abb7b7b892cdd1cc"
      ],
      "author": {
        "name": "Andy Whitcroft",
        "email": "apw@shadowen.org",
        "time": "Wed Dec 06 20:33:03 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:23 2006 -0800"
      },
      "message": "[PATCH] numa node ids are int, page_to_nid and zone_to_nid should return int\n\nNUMA node ids are passed as either int or unsigned int almost exclusivly\npage_to_nid and zone_to_nid both return unsigned long.  This is a throw\nback to when page_to_nid was a #define and was thus exposing the real type\nof the page flags field.\n\nIn addition to fixing up the definitions of page_to_nid and zone_to_nid I\naudited the users of these functions identifying the following incorrect\nuses:\n\n1) mm/page_alloc.c show_node() -- printk dumping the node id,\n2) include/asm-ia64/pgalloc.h pgtable_quicklist_free() -- comparison\n   against numa_node_id() which returns an int from cpu_to_node(), and\n3) mm/mpolicy.c check_pte_range -- used as an index in node_isset which\n   uses bit_set which in generic code takes an int.\n\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nCc: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nCc: \"Luck, Tony\" \u003ctony.luck@intel.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "bc4ba393c007248f76c05945abb7b7b892cdd1cc",
      "tree": "3c2cf7e13ecf991c58ffc00617757fcdeb3d863f",
      "parents": [
        "ebe29738f3934ad6a93c8bd76e30aa5d797a269d"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Wed Dec 06 20:33:02 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:23 2006 -0800"
      },
      "message": "[PATCH] drain_node_page(): Drain pages in batch units\n\ndrain_node_pages() currently drains the complete pageset of all pages.  If\nthere are a large number of pages in the queues then we may hold off\ninterrupts for too long.\n\nDuplicate the method used in free_hot_cold_page.  Only drain pcp-\u003ebatch\npages at one time.\n\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "b43a57bb4dae72e8f7232e7c821a8799eda30022",
      "tree": "4293286b44c8b11bac6a03c4ddfe75aea40aa089",
      "parents": [
        "a3eea484f7a1aadb70ed6665338026a09ad6ce85"
      ],
      "author": {
        "name": "Kirill Korotaev",
        "email": "dev@openvz.org",
        "time": "Wed Dec 06 20:32:27 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:22 2006 -0800"
      },
      "message": "[PATCH] OOM can panic due to processes stuck in __alloc_pages()\n\nOOM can panic due to the processes stuck in __alloc_pages() doing infinite\nrebalance loop while no memory can be reclaimed.  OOM killer tries to kill\nsome processes, but unfortunetaly, rebalance label was moved by someone\nbelow the TIF_MEMDIE check, so buddy allocator doesn\u0027t see that process is\nOOM-killed and it can simply fail the allocation :/\n\nObserved in reality on RHEL4(2.6.9)+OpenVZ kernel when a user doing some\nmemory allocation tricks triggered OOM panic.\n\nSigned-off-by: Denis Lunev \u003cden@sw.ru\u003e\nSigned-off-by: Kirill Korotaev \u003cdev@openvz.org\u003e\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "cc102509074bba0316f2b5deebd7ef4447da295e",
      "tree": "44ac5fc0c0dd7a24e8925e680a03361f4722a5a6",
      "parents": [
        "7602bdf2fd14a40dd9b104e516fdc05e1bd17952"
      ],
      "author": {
        "name": "Nick Piggin",
        "email": "npiggin@suse.de",
        "time": "Wed Dec 06 20:32:00 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:21 2006 -0800"
      },
      "message": "[PATCH] mm: add arch_alloc_page\n\nAdd an arch_alloc_page to match arch_free_page.\n\nSigned-off-by: Nick Piggin \u003cnpiggin@suse.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "9276b1bc96a132f4068fdee00983c532f43d3a26",
      "tree": "04d64444cf6558632cfc7514b5437578b5e616af",
      "parents": [
        "89689ae7f95995723fbcd5c116c47933a3bb8b13"
      ],
      "author": {
        "name": "Paul Jackson",
        "email": "pj@sgi.com",
        "time": "Wed Dec 06 20:31:48 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:20 2006 -0800"
      },
      "message": "[PATCH] memory page_alloc zonelist caching speedup\n\nOptimize the critical zonelist scanning for free pages in the kernel memory\nallocator by caching the zones that were found to be full recently, and\nskipping them.\n\nRemembers the zones in a zonelist that were short of free memory in the\nlast second.  And it stashes a zone-to-node table in the zonelist struct,\nto optimize that conversion (minimize its cache footprint.)\n\nRecent changes:\n\n    This differs in a significant way from a similar patch that I\n    posted a week ago.  Now, instead of having a nodemask_t of\n    recently full nodes, I have a bitmask of recently full zones.\n    This solves a problem that last weeks patch had, which on\n    systems with multiple zones per node (such as DMA zone) would\n    take seeing any of these zones full as meaning that all zones\n    on that node were full.\n\n    Also I changed names - from \"zonelist faster\" to \"zonelist cache\",\n    as that seemed to better convey what we\u0027re doing here - caching\n    some of the key zonelist state (for faster access.)\n\n    See below for some performance benchmark results.  After all that\n    discussion with David on why I didn\u0027t need them, I went and got\n    some ;).  I wanted to verify that I had not hurt the normal case\n    of memory allocation noticeably.  At least for my one little\n    microbenchmark, I found (1) the normal case wasn\u0027t affected, and\n    (2) workloads that forced scanning across multiple nodes for\n    memory improved up to 10% fewer System CPU cycles and lower\n    elapsed clock time (\u0027sys\u0027 and \u0027real\u0027).  Good.  See details, below.\n\n    I didn\u0027t have the logic in get_page_from_freelist() for various\n    full nodes and zone reclaim failures correct.  
That should be\n    fixed up now - notice the new goto labels zonelist_scan,\n    this_zone_full, and try_next_zone, in get_page_from_freelist().\n\nThere are two reasons I persued this alternative, over some earlier\nproposals that would have focused on optimizing the fake numa\nemulation case by caching the last useful zone:\n\n 1) Contrary to what I said before, we (SGI, on large ia64 sn2 systems)\n    have seen real customer loads where the cost to scan the zonelist\n    was a problem, due to many nodes being full of memory before\n    we got to a node we could use.  Or at least, I think we have.\n    This was related to me by another engineer, based on experiences\n    from some time past.  So this is not guaranteed.  Most likely, though.\n\n    The following approach should help such real numa systems just as\n    much as it helps fake numa systems, or any combination thereof.\n\n 2) The effort to distinguish fake from real numa, using node_distance,\n    so that we could cache a fake numa node and optimize choosing\n    it over equivalent distance fake nodes, while continuing to\n    properly scan all real nodes in distance order, was going to\n    require a nasty blob of zonelist and node distance munging.\n\n    The following approach has no new dependency on node distances or\n    zone sorting.\n\nSee comment in the patch below for a description of what it actually does.\n\nTechnical details of note (or controversy):\n\n - See the use of \"zlc_active\" and \"did_zlc_setup\" below, to delay\n   adding any work for this new mechanism until we\u0027ve looked at the\n   first zone in zonelist.  I figured the odds of the first zone\n   having the memory we needed were high enough that we should just\n   look there, first, then get fancy only if we need to keep looking.\n\n - Some odd hackery was needed to add items to struct zonelist, while\n   not tripping up the custom zonelists built by the mm/mempolicy.c\n   code for MPOL_BIND.  
My usual wordy comments below explain this.\n   Search for \"MPOL_BIND\".\n\n - Some per-node data in the struct zonelist is now modified frequently,\n   with no locking.  Multiple CPU cores on a node could hit and mangle\n   this data.  The theory is that this is just performance hint data,\n   and the memory allocator will work just fine despite any such mangling.\n   The fields at risk are the struct \u0027zonelist_cache\u0027 fields \u0027fullzones\u0027\n   (a bitmask) and \u0027last_full_zap\u0027 (unsigned long jiffies).  It should\n   all be self correcting after at most a one second delay.\n\n - This still does a linear scan of the same lengths as before.  All\n   I\u0027ve optimized is making the scan faster, not algorithmically\n   shorter.  It is now able to scan a compact array of \u0027unsigned\n   short\u0027 in the case of many full nodes, so one cache line should\n   cover quite a few nodes, rather than each node hitting another\n   one or two new and distinct cache lines.\n\n - If both Andi and Nick don\u0027t find this too complicated, I will be\n   (pleasantly) flabbergasted.\n\n - I removed the comment claiming we only use one cachline\u0027s worth of\n   zonelist.  We seem, at least in the fake numa case, to have put the\n   lie to that claim.\n\n - I pay no attention to the various watermarks and such in this performance\n   hint.  A node could be marked full for one watermark, and then skipped\n   over when searching for a page using a different watermark.  
I think\n   that\u0027s actually quite ok, as it will tend to slightly increase the\n   spreading of memory over other nodes, away from a memory stressed node.\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nPerformance - some benchmark results and analysis:\n\nThis benchmark runs a memory hog program that uses multiple\nthreads to touch a lot of memory as quickly as it can.\n\nMultiple runs were made, touching 12, 38, 64 or 90 GBytes out of\nthe total 96 GBytes on the system, and using 1, 19, 37, or 55\nthreads (on a 56 CPU system.)  System, user and real (elapsed)\ntimings were recorded for each run, shown in units of seconds,\nin the table below.\n\nTwo kernels were tested - 2.6.18-mm3 and the same kernel with\nthis zonelist caching patch added.  The table also shows the\npercentage improvement the zonelist caching sys time is over\n(lower than) the stock *-mm kernel.\n\n      number     2.6.18-mm3\t   zonelist-cache    delta (\u003c 0 good)\tpercent\n GBs    N  \t------------\t   --------------    ----------------\tsystime\n mem threads   sys user  real\t  sys  user  real     sys  user  real\t better\n  12\t 1     153   24   177\t  151\t 24   176      -2     0    -1\t   1%\n  12\t19\t99   22     8\t   99\t 22\t8\t0     0     0\t   0%\n  12\t37     111   25     6\t  112\t 25\t6\t1     0     0\t  -0%\n  12\t55     115   25     5\t  110\t 23\t5      -5    -2     0\t   4%\n  38\t 1     502   74   576\t  497\t 73   570      -5    -1    -6\t   0%\n  38\t19     426   78    48\t  373\t 76    39     -53    -2    -9\t  12%\n  38\t37     544   83    36\t  547\t 82    36\t3    -1     0\t  -0%\n  38\t55     501   77    23\t  511\t 80    24      10     3     1\t  -1%\n  64\t 1     917  125  1042\t  890\t124  1014     -27    -1   -28\t   2%\n  64\t19    1118  138   119\t  965\t141   103    -153     3   -16\t  13%\n  64\t37    1202  151    94\t 1136\t150    81     -66    -1   -13\t   5%\n  64\t55    1118  141    61\t 1072\t140    
58     -46    -1    -3\t   4%\n  90\t 1    1342  177  1519\t 1275\t174  1450     -67    -3   -69\t   4%\n  90\t19    2392  199   192\t 2116\t189   176    -276   -10   -16\t  11%\n  90\t37    3313  238   175\t 2972\t225   145    -341   -13   -30\t  10%\n  90\t55    1948  210   104\t 1843\t213   100    -105     3    -4\t   5%\n\nNotes:\n 1) This test ran a memory hog program that started a specified number N of\n    threads, and had each thread allocate and touch 1/N\u0027th of\n    the total memory to be used in the test run in a single loop,\n    writing a constant word to memory, one store every 4096 bytes.\n    Watching this test during some earlier trial runs, I would see\n    each of these threads sit down on one CPU and stay there, for\n    the remainder of the pass, a different CPU for each thread.\n\n 2) The \u0027real\u0027 column is not comparable to the \u0027sys\u0027 or \u0027user\u0027 columns.\n    The \u0027real\u0027 column is seconds wall clock time elapsed, from beginning\n    to end of that test pass.  The \u0027sys\u0027 and \u0027user\u0027 columns are total\n    CPU seconds spent on that test pass.  For a 19 thread test run,\n    for example, the sum of \u0027sys\u0027 and \u0027user\u0027 could be up to 19 times the\n    number of \u0027real\u0027 elapsed wall clock seconds.\n\n 3) Tests were run on a fresh, single-user boot, to minimize the amount\n    of memory already in use at the start of the test, and to minimize\n    the amount of background activity that might interfere.\n\n 4) Tests were done on a 56 CPU, 28 Node system with 96 GBytes of RAM.\n\n 5) Notice that the \u0027real\u0027 time gets large for the single thread runs, even\n    though the measured \u0027sys\u0027 and \u0027user\u0027 times are modest.  I\u0027m not sure what\n    that means - probably something to do with it being slow for one thread to\n    be accessing memory a long ways away.  
Perhaps the fake numa system, running\n    ostensibly the same workload, would not show this substantial degradation\n    of \u0027real\u0027 time for one thread on many nodes -- let\u0027s hope not.\n\n 6) The high thread count passes (one thread per CPU - on 55 of 56 CPUs)\n    ran quite efficiently, as one might expect.  Each pair of threads needed\n    to allocate and touch the memory on the node the two threads shared, a\n    pleasantly parallelizable workload.\n\n 7) The intermediate thread count passes, when asking for a lot of memory, forcing\n    them to go to a few neighboring nodes, improved the most with this zonelist\n    caching patch.\n\nConclusions:\n * This zonelist cache patch probably makes little difference one way or the\n   other for most workloads on real numa hardware, if those workloads avoid\n   heavy off node allocations.\n * For memory intensive workloads requiring substantial off-node allocations\n   on real numa hardware, this patch improves both kernel and elapsed timings\n   up to ten percent.\n * For fake numa systems, I\u0027m optimistic, but will have to leave that up to\n   Rohit Seth to actually test (once I get him a 2.6.18 backport.)\n\nSigned-off-by: Paul Jackson \u003cpj@sgi.com\u003e\nCc: Rohit Seth \u003crohitseth@google.com\u003e\nCc: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nCc: David Rientjes \u003crientjes@cs.washington.edu\u003e\nCc: Paul Menage \u003cmenage@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "89689ae7f95995723fbcd5c116c47933a3bb8b13",
      "tree": "4d73ff59b557fa1a84c6064406ff101c76ff8adc",
      "parents": [
        "c0a499c2c42992cff097b38be29d2ba60d2fd99a"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "clameter@sgi.com",
        "time": "Wed Dec 06 20:31:45 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:20 2006 -0800"
      },
      "message": "[PATCH] Get rid of zone_table[]\n\nThe zone table is mostly not needed.  If we have a node in the page flags\nthen we can get to the zone via NODE_DATA() which is much more likely to be\nalready in the cpu cache.\n\nIn case of SMP and UP NODE_DATA() is a constant pointer which allows us to\naccess an exact replica of zonetable in the node_zones field.  In all of\nthe above cases there will be no need at all for the zone table.\n\nThe only remaining case is if in a NUMA system the node numbers do not fit\ninto the page flags.  In that case we make sparse generate a table that\nmaps sections to nodes and use that table to figure out the node number.\n This table is sized to fit in a single cache line for the known 32 bit\nNUMA platform which makes it very likely that the information can be\nobtained without a cache miss.\n\nFor sparsemem the zone table seems to have been fairly large based on\nthe maximum possible number of sections and the number of zones per node.\nThere is some memory saving by removing zone_table.  The main benefit is to\nreduce the cache footprint of the VM from the frequent lookups of zones.\nPlus it simplifies the page allocator.\n\n[akpm@osdl.org: build fix]\nSigned-off-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Dave Hansen \u003chaveblue@us.ibm.com\u003e\nCc: Andy Whitcroft \u003capw@shadowen.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "0798e5193cd70f6c867ec176d7730589f944c627",
      "tree": "abe3ada0b04080729418a0c301d8b55b4363b56e",
      "parents": [
        "a2ce774096110ccc5c02cbdc05897d005fcd3db8"
      ],
      "author": {
        "name": "Paul Jackson",
        "email": "pj@sgi.com",
        "time": "Wed Dec 06 20:31:38 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Dec 07 08:39:20 2006 -0800"
      },
      "message": "[PATCH] memory page alloc minor cleanups\n\n- s/freeliest/freelist/ spelling fix\n\n- Check for NULL *z zone seems useless - even if it could happen, so\n  what?  Perhaps we should have a check later on if we are faced with an\n  allocation request that is not allowed to fail - shouldn\u0027t that be a\n  serious kernel error, passing an empty zonelist with a mandate to not\n  fail?\n\n- Initializing \u0027z\u0027 to zonelist-\u003ezones can wait until after the first\n  get_page_from_freelist() fails; we only use \u0027z\u0027 in the wakeup_kswapd()\n  loop, so let\u0027s initialize \u0027z\u0027 there, in a \u0027for\u0027 loop.  Seems clearer.\n\n- Remove superfluous braces around a break\n\n- Fix a couple errant spaces\n\n- Adjust indentation on the cpuset_zone_allowed() check, to match the\n  lines just before it -- seems easier to read in this case.\n\n- Add another set of braces to the zone_watermark_ok logic\n\nFrom: Paul Jackson \u003cpj@sgi.com\u003e\n\n  Backout one item from a previous \"memory page_alloc minor cleanups\" patch.\n   Until and unless we are certain that no one can ever pass an empty zonelist\n  to __alloc_pages(), this check for an empty zonelist (or some BUG\n  equivalent) is essential.  The code in get_page_from_freelist() blows up if\n  passed an empty zonelist.\n\nSigned-off-by: Paul Jackson \u003cpj@sgi.com\u003e\nAcked-by: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nSigned-off-by: Paul Jackson \u003cpj@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "1abbfb412b1610ec3a7ec0164108cee01191d9f5",
      "tree": "ab63b4e9b901455385a55ffa4a30b23343d363eb",
      "parents": [
        "0b1082efb92eedb28e982cfae526267ebdcf5622"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@skynet.ie",
        "time": "Thu Nov 23 12:01:41 2006 +0000"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@woody.osdl.org",
        "time": "Thu Nov 23 09:30:38 2006 -0800"
      },
      "message": "[PATCH] x86_64: fix bad page state in process \u0027swapper\u0027\n\nfind_min_pfn_for_node() and find_min_pfn_with_active_regions() both\ndepend on a sorted early_node_map[].  However, sort_node_map() is being\ncalled after find_min_pfn_with_active_regions() in\nfree_area_init_nodes().\n\nIn most cases, this is ok, but on at least one x86_64, the SRAT table\ncaused the E820 ranges to be registered out of order.  This gave the\nwrong values for the min PFN range resulting in some pages not being\ninitialised.\n\nThis patch sorts the early_node_map in find_min_pfn_for_node().  It has\nbeen boot tested on x86, x86_64, ppc64 and ia64.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Andre Noll \u003cmaan@systemlinux.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "941c7105dc4f4961727acc518e18e00b9a03cbf3",
      "tree": "a57c90d295b48492d2e6f1151cdcdff31475aa70",
      "parents": [
        "c7e12b838989b0e432c7a1cdf1e6c6fd936007f6"
      ],
      "author": {
        "name": "nkalmala",
        "email": "nkalmala@gmail.com",
        "time": "Thu Nov 02 22:07:04 2006 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@g5.osdl.org",
        "time": "Fri Nov 03 12:27:56 2006 -0800"
      },
      "message": "[PATCH] mm: un-needed add-store operation wastes a few bytes\n\nUn-needed add-store operation wastes a few bytes.\n8 bytes wasted with -O2, on a ppc.\n\nSigned-off-by: nkalmala \u003cnkalmala@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "0c6cb974636dd29681b03f8eb0ae227decab01fb",
      "tree": "dfcf831a0c067eec8e1afbd4c22be3e1e736155e",
      "parents": [
        "057647fc47b3a5fbcfa997041db3f483d506603c"
      ],
      "author": {
        "name": "Mel Gorman",
        "email": "mel@skynet.ie",
        "time": "Sat Oct 28 10:38:59 2006 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@g5.osdl.org",
        "time": "Sat Oct 28 11:30:55 2006 -0700"
      },
      "message": "[PATCH] Calculation fix for memory holes beyond the end of physical memory\n\nabsent_pages_in_range() made the assumption that users of the\narch-independent zone-sizing API would not care about holes beyond the end\nof physical memory.  This was not the case and was \"fixed\" in a patch\ncalled \"Account for holes that are outside the range of physical memory\".\nHowever, when given a range that started before a hole in \"real\" memory and\nended beyond the end of memory, it would get the result wrong.  The bug is\nin mainline but a patch is below.\n\nIt has been tested successfully on a number of machines and architectures.\nAdditional credit to Keith Mannthey for discovering the problem, helping\nidentify the correct fix and confirming it Worked For Him.\n\nSigned-off-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: keith mannthey \u003ckmannth@us.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "3bb1a852ab6c9cdf211a2f4a2f502340c8c38eca",
      "tree": "d08aa652e8eb40c47d5bc37fa1a240b4fb7db029",
      "parents": [
        "2ae88149a27cadf2840e0ab8155bef13be285c03"
      ],
      "author": {
        "name": "Martin Bligh",
        "email": "mbligh@mbligh.org",
        "time": "Sat Oct 28 10:38:24 2006 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@g5.osdl.org",
        "time": "Sat Oct 28 11:30:50 2006 -0700"
      },
      "message": "[PATCH] vmscan: Fix temp_priority race\n\nThe temp_priority field in zone is racy, as we can walk through a reclaim\npath, and just before we copy it into prev_priority, it can be overwritten\n(say with DEF_PRIORITY) by another reclaimer.\n\nThe same bug is contained in both try_to_free_pages and balance_pgdat, but\nit is fixed slightly differently.  In balance_pgdat, we keep a separate\npriority record per zone in a local array.  In try_to_free_pages there is\nno need to do this, as the priority level is the same for all zones that we\nreclaim from.\n\nImpact of this bug is that temp_priority is copied into prev_priority, and\nsetting this artificially high causes reclaimers to set distress\nartificially low.  They then fail to reclaim mapped pages, when they are,\nin fact, under severe memory pressure (their priority may be as low as 0).\nThis causes the OOM killer to fire incorrectly.\n\nFrom: Andrew Morton \u003cakpm@osdl.org\u003e\n\n__zone_reclaim() isn\u0027t modifying zone-\u003eprev_priority.  But zone-\u003eprev_priority\nis used in the decision whether or not to bring mapped pages onto the inactive\nlist.  Hence there\u0027s a risk here that __zone_reclaim() will fail because\nzone-\u003eprev_priority is large (ie: low urgency) and lots of mapped pages end up\nstuck on the active list.\n\nFix that up by decreasing (ie making more urgent) zone-\u003eprev_priority as\n__zone_reclaim() scans the zone\u0027s pages.\n\nThis bug perhaps explains why ZONE_RECLAIM_PRIORITY was created.  It should be\npossible to remove that now, and to just start out at DEF_PRIORITY?\n\nCc: Nick Piggin \u003cnickpiggin@yahoo.com.au\u003e\nCc: Christoph Lameter \u003cclameter@engr.sgi.com\u003e\nCc: \u003cstable@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    },
    {
      "commit": "7516795739bd53175629b90fab0ad488d7a6a9f7",
      "tree": "1f6b4b7a4f08a25155605b10d5963b7c6ca72e7b",
      "parents": [
        "047a66d4bb24aaf19f41d620f8f0534c2153cd0b"
      ],
      "author": {
        "name": "Andy Whitcroft",
        "email": "apw@shadowen.org",
        "time": "Sat Oct 21 10:24:14 2006 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@g5.osdl.org",
        "time": "Sat Oct 21 13:35:06 2006 -0700"
      },
      "message": "[PATCH] Reintroduce NODES_SPAN_OTHER_NODES for powerpc\n\nReintroduce NODES_SPAN_OTHER_NODES for powerpc\n\nRevert \"[PATCH] Remove SPAN_OTHER_NODES config definition\"\n    This reverts commit f62859bb6871c5e4a8e591c60befc8caaf54db8c.\nRevert \"[PATCH] mm: remove arch independent NODES_SPAN_OTHER_NODES\"\n    This reverts commit a94b3ab7eab4edcc9b2cb474b188f774c331adf7.\n\nAlso update the comments to indicate that this is still required\nand where its used.\n\nSigned-off-by: Andy Whitcroft \u003capw@shadowen.org\u003e\nCc: Paul Mackerras \u003cpaulus@samba.org\u003e\nCc: Mike Kravetz \u003ckravetz@us.ibm.com\u003e\nCc: Benjamin Herrenschmidt \u003cbenh@kernel.crashing.org\u003e\nAcked-by: Mel Gorman \u003cmel@csn.ul.ie\u003e\nAcked-by: Will Schmidt \u003cwill_schmidt@vnet.ibm.com\u003e\nCc: Christoph Lameter \u003cclameter@sgi.com\u003e\nCc: \u003cstable@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@osdl.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@osdl.org\u003e\n"
    }
  ],
  "next": "6220ec7844fda2686496013a66b5b9169976b991"
}
