)]}'
{
  "log": [
    {
      "commit": "48efe453e6b29561f78a1df55c7f58375259cb8c",
      "tree": "53d6ac1f2010b102c15b264b13fc4c98ba634d48",
      "parents": [
        "ac4de9543aca59f2b763746647577302fbedd57e",
        "2999ee7fda3f670effbfa746164c525f9d1be4b8"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 16:11:45 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 16:11:45 2013 -0700"
      },
      "message": "Merge branch \u0027for-next\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending\n\nPull SCSI target updates from Nicholas Bellinger:\n \"Lots of activity again this round for I/O performance optimizations\n  (per-cpu IDA pre-allocation for vhost + iscsi/target), and the\n  addition of new fabric independent features to target-core\n  (COMPARE_AND_WRITE + EXTENDED_COPY).\n\n  The main highlights include:\n\n   - Support for iscsi-target login multiplexing across individual\n     network portals\n   - Generic Per-cpu IDA logic (kent + akpm + clameter)\n   - Conversion of vhost to use per-cpu IDA pre-allocation for\n     descriptors, SGLs and userspace page pointer list\n   - Conversion of iscsi-target + iser-target to use per-cpu IDA\n     pre-allocation for descriptors\n   - Add support for generic COMPARE_AND_WRITE (AtomicTestandSet)\n     emulation for virtual backend drivers\n   - Add support for generic EXTENDED_COPY (CopyOffload) emulation for\n     virtual backend drivers.\n   - Add support for fast memory registration mode to iser-target (Vu)\n\n  The patches to add COMPARE_AND_WRITE and EXTENDED_COPY support are of\n  particular significance, which make us the first and only open source\n  target to support the full set of VAAI primitives.\n\n  Currently Linux clients are lacking upstream support to actually\n  utilize these primitives.  However, with server side support now in\n  place for folks like MKP + ZAB working on the client, this logic once\n  reserved for the highest end of storage arrays, can now be run in VMs\n  on their laptops\"\n\n* \u0027for-next\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending: (50 commits)\n  target/iscsi: Bump versions to v4.1.0\n  target: Update copyright ownership/year information to 2013\n  iscsi-target: Bump default TCP listen backlog to 256\n  target: Fix \u003e\u003d v3.9+ regression in PR APTPL + ALUA metadata write-out\n  iscsi-target; Bump default CmdSN Depth to 64\n  iscsi-target: Remove unnecessary wait_for_completion in iscsi_get_thread_set\n  iscsi-target: Add thread_set-\u003ets_activate_sem + use common deallocate\n  iscsi-target: Fix race with thread_pre_handler flush_signals + ISCSI_THREAD_SET_DIE\n  target: remove unused including \u003clinux/version.h\u003e\n  iser-target: introduce fast memory registration mode (FRWR)\n  iser-target: generalize rdma memory registration and cleanup\n  iser-target: move rdma wr processing to a shared function\n  target: Enable global EXTENDED_COPY setup/release\n  target: Add Third Party Copy (3PC) bit in INQUIRY response\n  target: Enable EXTENDED_COPY setup in spc_parse_cdb\n  target: Add support for EXTENDED_COPY copy offload emulation\n  target: Avoid non-existent tg_pt_gp_mem in target_alua_state_check\n  target: Add global device list for EXTENDED_COPY\n  target: Make helpers non static for EXTENDED_COPY command setup\n  target: Make spc_parse_naa_6h_vendor_specific non static\n  ...\n"
    },
    {
      "commit": "ac4de9543aca59f2b763746647577302fbedd57e",
      "tree": "40407750569ee030de56233c41c9a97f7e89cf67",
      "parents": [
        "26935fb06ee88f1188789807687c03041f3c70d9",
        "de32a8177f64bc62e1b19c685dd391af664ab13f"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:44:27 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:44:27 2013 -0700"
      },
      "message": "Merge branch \u0027akpm\u0027 (patches from Andrew Morton)\n\nMerge more patches from Andrew Morton:\n \"The rest of MM.  Plus one misc cleanup\"\n\n* emailed patches from Andrew Morton \u003cakpm@linux-foundation.org\u003e: (35 commits)\n  mm/Kconfig: add MMU dependency for MIGRATION.\n  kernel: replace strict_strto*() with kstrto*()\n  mm, thp: count thp_fault_fallback anytime thp fault fails\n  thp: consolidate code between handle_mm_fault() and do_huge_pmd_anonymous_page()\n  thp: do_huge_pmd_anonymous_page() cleanup\n  thp: move maybe_pmd_mkwrite() out of mk_huge_pmd()\n  mm: cleanup add_to_page_cache_locked()\n  thp: account anon transparent huge pages into NR_ANON_PAGES\n  truncate: drop \u0027oldsize\u0027 truncate_pagecache() parameter\n  mm: make lru_add_drain_all() selective\n  memcg: document cgroup dirty/writeback memory statistics\n  memcg: add per cgroup writeback pages accounting\n  memcg: check for proper lock held in mem_cgroup_update_page_stat\n  memcg: remove MEMCG_NR_FILE_MAPPED\n  memcg: reduce function dereference\n  memcg: avoid overflow caused by PAGE_ALIGN\n  memcg: rename RESOURCE_MAX to RES_COUNTER_MAX\n  memcg: correct RESOURCE_MAX to ULLONG_MAX\n  mm: memcg: do not trap chargers with full callstack on OOM\n  mm: memcg: rework and document OOM waiting and wakeup\n  ...\n"
    },
    {
      "commit": "c02925540ca7019465a43c00f8a3c0186ddace2b",
      "tree": "3097ece86eedd0a01cf5dbc0a8f6c28fcbd1f4f7",
      "parents": [
        "128ec037bafe5905b2e6f2796f426a1d247d0066"
      ],
      "author": {
        "name": "Kirill A. Shutemov",
        "email": "kirill.shutemov@linux.intel.com",
        "time": "Thu Sep 12 15:14:05 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:03 2013 -0700"
      },
      "message": "thp: consolidate code between handle_mm_fault() and do_huge_pmd_anonymous_page()\n\ndo_huge_pmd_anonymous_page() has copy-pasted piece of handle_mm_fault()\nto handle fallback path.\n\nLet\u0027s consolidate code back by introducing VM_FAULT_FALLBACK return\ncode.\n\nSigned-off-by: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nAcked-by: Hillf Danton \u003cdhillf@gmail.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Andi Kleen \u003cak@linux.intel.com\u003e\nCc: Matthew Wilcox \u003cwilly@linux.intel.com\u003e\nCc: Dave Hansen \u003cdave.hansen@linux.intel.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7caef26767c1727d7abfbbbfbe8b2bb473430d48",
      "tree": "909e2a3c1b0a20a976fa3f84a17a00f8a21607bf",
      "parents": [
        "5fbc461636c32efdb9d5216d491d37a40d54535b"
      ],
      "author": {
        "name": "Kirill A. Shutemov",
        "email": "kirill.shutemov@linux.intel.com",
        "time": "Thu Sep 12 15:13:56 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:02 2013 -0700"
      },
      "message": "truncate: drop \u0027oldsize\u0027 truncate_pagecache() parameter\n\ntruncate_pagecache() doesn\u0027t care about old size since commit\ncedabed49b39 (\"vfs: Fix vmtruncate() regression\").  Let\u0027s drop it.\n\nSigned-off-by: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: OGAWA Hirofumi \u003chirofumi@mail.parknet.co.jp\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5fbc461636c32efdb9d5216d491d37a40d54535b",
      "tree": "119599fe279ba3daf94422d54cfc7bd2a5ae4a80",
      "parents": [
        "9cb2dc1c950cf0624202c1ea2705705e1e51c278"
      ],
      "author": {
        "name": "Chris Metcalf",
        "email": "cmetcalf@tilera.com",
        "time": "Thu Sep 12 15:13:55 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:02 2013 -0700"
      },
      "message": "mm: make lru_add_drain_all() selective\n\nmake lru_add_drain_all() only selectively interrupt the cpus that have\nper-cpu free pages that can be drained.\n\nThis is important in nohz mode where calling mlockall(), for example,\notherwise will interrupt every core unnecessarily.\n\nThis is important on workloads where nohz cores are handling 10 Gb traffic\nin userspace.  Those CPUs do not enter the kernel and place pages into LRU\npagevecs and they really, really don\u0027t want to be interrupted, or they\ndrop packets on the floor.\n\nSigned-off-by: Chris Metcalf \u003ccmetcalf@tilera.com\u003e\nReviewed-by: Tejun Heo \u003ctj@kernel.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "3ea67d06e4679a16f69f66f43a8d6ee4778985fc",
      "tree": "0ec35a312de85ce91bf0bf6e4c5b88440f3d0f1d",
      "parents": [
        "658b72c5a7a033f0dde61b15dff86bf423ce425e"
      ],
      "author": {
        "name": "Sha Zhengju",
        "email": "handai.szj@taobao.com",
        "time": "Thu Sep 12 15:13:53 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:02 2013 -0700"
      },
      "message": "memcg: add per cgroup writeback pages accounting\n\nAdd memcg routines to count writeback pages, later dirty pages will also\nbe accounted.\n\nAfter Kame\u0027s commit 89c06bd52fb9 (\"memcg: use new logic for page stat\naccounting\"), we can use \u0027struct page\u0027 flag to test page state instead\nof per page_cgroup flag.  But memcg has a feature to move a page from a\ncgroup to another one and may have race between \"move\" and \"page stat\naccounting\".  So in order to avoid the race we have designed a new lock:\n\n         mem_cgroup_begin_update_page_stat()\n         modify page information        --\u003e(a)\n         mem_cgroup_update_page_stat()  --\u003e(b)\n         mem_cgroup_end_update_page_stat()\n\nIt requires both (a) and (b)(writeback pages accounting) to be pretected\nin mem_cgroup_{begin/end}_update_page_stat().  It\u0027s full no-op for\n!CONFIG_MEMCG, almost no-op if memcg is disabled (but compiled in), rcu\nread lock in the most cases (no task is moving), and spin_lock_irqsave\non top in the slow path.\n\nThere\u0027re two writeback interfaces to modify: test_{clear/set}_page_writeback().\nAnd the lock order is:\n\t--\u003e memcg-\u003emove_lock\n\t  --\u003e mapping-\u003etree_lock\n\nSigned-off-by: Sha Zhengju \u003chandai.szj@taobao.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Greg Thelen \u003cgthelen@google.com\u003e\nCc: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "68b4876d996e8749142b2895bc2e251448996363",
      "tree": "bd21b2e160d48dc38b11869c1bef5d38100ddd98",
      "parents": [
        "1a36e59d4833de19120dc7482c61ef69e228c73c"
      ],
      "author": {
        "name": "Sha Zhengju",
        "email": "handai.szj@taobao.com",
        "time": "Thu Sep 12 15:13:50 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:02 2013 -0700"
      },
      "message": "memcg: remove MEMCG_NR_FILE_MAPPED\n\nWhile accounting memcg page stat, it\u0027s not worth to use\nMEMCG_NR_FILE_MAPPED as an extra layer of indirection because of the\ncomplexity and presumed performance overhead.  We can use\nMEM_CGROUP_STAT_FILE_MAPPED directly.\n\nSigned-off-by: Sha Zhengju \u003chandai.szj@taobao.com\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nReviewed-by: Greg Thelen \u003cgthelen@google.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6de5a8bfcae6e3b427d642eff078d8305b324b52",
      "tree": "333d73c79bdec97184c4a60e45453a167730fd7b",
      "parents": [
        "34ff8dc08956098563989d8599840b130be81252"
      ],
      "author": {
        "name": "Sha Zhengju",
        "email": "handai.szj@taobao.com",
        "time": "Thu Sep 12 15:13:47 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:02 2013 -0700"
      },
      "message": "memcg: rename RESOURCE_MAX to RES_COUNTER_MAX\n\nRESOURCE_MAX is far too general name, change it to RES_COUNTER_MAX.\n\nSigned-off-by: Sha Zhengju \u003chandai.szj@taobao.com\u003e\nSigned-off-by: Qiang Huang \u003ch.huangqiang@huawei.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Daisuke Nishimura \u003cnishimura@mxp.nes.nec.co.jp\u003e\nCc: Jeff Liu \u003cjeff.liu@oracle.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "34ff8dc08956098563989d8599840b130be81252",
      "tree": "39c964226586a57b0b1e97d9eb1f34b1dc99519e",
      "parents": [
        "3812c8c8f3953921ef18544110dafc3505c1ac62"
      ],
      "author": {
        "name": "Sha Zhengju",
        "email": "handai.szj@taobao.com",
        "time": "Thu Sep 12 15:13:46 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:02 2013 -0700"
      },
      "message": "memcg: correct RESOURCE_MAX to ULLONG_MAX\n\nCurrent RESOURCE_MAX is ULONG_MAX, but the value we used to set resource\nlimit is unsigned long long, so we can set bigger value than that which is\nstrange.  The XXX_MAX should be reasonable max value, bigger than that\nshould be overflow.\n\nNotice that this change will affect user output of default *.limit_in_bytes:\nbefore change:\n\n  $ cat /cgroup/memory/memory.limit_in_bytes\n  9223372036854775807\n\nafter change:\n\n  $ cat /cgroup/memory/memory.limit_in_bytes\n  18446744073709551615\n\nBut it doesn\u0027t alter the API in term of input - we can still use \"echo -1\n\u003e *.limit_in_bytes\" to reset the numbers to \"unlimited\".\n\nSigned-off-by: Sha Zhengju \u003chandai.szj@taobao.com\u003e\nSigned-off-by: Qiang Huang \u003ch.huangqiang@huawei.com\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Daisuke Nishimura \u003cnishimura@mxp.nes.nec.co.jp\u003e\nCc: Jeff Liu \u003cjeff.liu@oracle.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "3812c8c8f3953921ef18544110dafc3505c1ac62",
      "tree": "8e5efc15fec4700644774df5fb5302f5c82f4a31",
      "parents": [
        "fb2a6fc56be66c169f8b80e07ed999ba453a2db2"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "hannes@cmpxchg.org",
        "time": "Thu Sep 12 15:13:44 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:02 2013 -0700"
      },
      "message": "mm: memcg: do not trap chargers with full callstack on OOM\n\nThe memcg OOM handling is incredibly fragile and can deadlock.  When a\ntask fails to charge memory, it invokes the OOM killer and loops right\nthere in the charge code until it succeeds.  Comparably, any other task\nthat enters the charge path at this point will go to a waitqueue right\nthen and there and sleep until the OOM situation is resolved.  The problem\nis that these tasks may hold filesystem locks and the mmap_sem; locks that\nthe selected OOM victim may need to exit.\n\nFor example, in one reported case, the task invoking the OOM killer was\nabout to charge a page cache page during a write(), which holds the\ni_mutex.  The OOM killer selected a task that was just entering truncate()\nand trying to acquire the i_mutex:\n\nOOM invoking task:\n  mem_cgroup_handle_oom+0x241/0x3b0\n  mem_cgroup_cache_charge+0xbe/0xe0\n  add_to_page_cache_locked+0x4c/0x140\n  add_to_page_cache_lru+0x22/0x50\n  grab_cache_page_write_begin+0x8b/0xe0\n  ext3_write_begin+0x88/0x270\n  generic_file_buffered_write+0x116/0x290\n  __generic_file_aio_write+0x27c/0x480\n  generic_file_aio_write+0x76/0xf0           # takes -\u003ei_mutex\n  do_sync_write+0xea/0x130\n  vfs_write+0xf3/0x1f0\n  sys_write+0x51/0x90\n  system_call_fastpath+0x18/0x1d\n\nOOM kill victim:\n  do_truncate+0x58/0xa0              # takes i_mutex\n  do_last+0x250/0xa30\n  path_openat+0xd7/0x440\n  do_filp_open+0x49/0xa0\n  do_sys_open+0x106/0x240\n  sys_open+0x20/0x30\n  system_call_fastpath+0x18/0x1d\n\nThe OOM handling task will retry the charge indefinitely while the OOM\nkilled task is not releasing any resources.\n\nA similar scenario can happen when the kernel OOM killer for a memcg is\ndisabled and a userspace task is in charge of resolving OOM situations.\nIn this case, ALL tasks that enter the OOM path will be made to sleep on\nthe OOM waitqueue and wait for userspace to free resources or increase\nthe group\u0027s limit.  But a userspace OOM handler is prone to deadlock\nitself on the locks held by the waiting tasks.  For example one of the\nsleeping tasks may be stuck in a brk() call with the mmap_sem held for\nwriting but the userspace handler, in order to pick an optimal victim,\nmay need to read files from /proc/\u003cpid\u003e, which tries to acquire the same\nmmap_sem for reading and deadlocks.\n\nThis patch changes the way tasks behave after detecting a memcg OOM and\nmakes sure nobody loops or sleeps with locks held:\n\n1. When OOMing in a user fault, invoke the OOM killer and restart the\n   fault instead of looping on the charge attempt.  This way, the OOM\n   victim can not get stuck on locks the looping task may hold.\n\n2. When OOMing in a user fault but somebody else is handling it\n   (either the kernel OOM killer or a userspace handler), don\u0027t go to\n   sleep in the charge context.  Instead, remember the OOMing memcg in\n   the task struct and then fully unwind the page fault stack with\n   -ENOMEM.  pagefault_out_of_memory() will then call back into the\n   memcg code to check if the -ENOMEM came from the memcg, and then\n   either put the task to sleep on the memcg\u0027s OOM waitqueue or just\n   restart the fault.  The OOM victim can no longer get stuck on any\n   lock a sleeping task may hold.\n\nDebugged by Michal Hocko.\n\nSigned-off-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nReported-by: azurIt \u003cazurit@pobox.sk\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "519e52473ebe9db5cdef44670d5a97f1fd53d721",
      "tree": "635fce64ff3658250745b9c8dfebd47e981a5b16",
      "parents": [
        "3a13c4d761b4b979ba8767f42345fed3274991b0"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "hannes@cmpxchg.org",
        "time": "Thu Sep 12 15:13:42 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:01 2013 -0700"
      },
      "message": "mm: memcg: enable memcg OOM killer only for user faults\n\nSystem calls and kernel faults (uaccess, gup) can handle an out of memory\nsituation gracefully and just return -ENOMEM.\n\nEnable the memcg OOM killer only for user faults, where it\u0027s really the\nonly option available.\n\nSigned-off-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nAcked-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: azurIt \u003cazurit@pobox.sk\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "759496ba6407c6994d6a5ce3a5e74937d7816208",
      "tree": "aeff8de8af36f70f2591114cef58c9ae7df25565",
      "parents": [
        "871341023c771ad233620b7a1fb3d9c7031c4e5c"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "hannes@cmpxchg.org",
        "time": "Thu Sep 12 15:13:39 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:01 2013 -0700"
      },
      "message": "arch: mm: pass userspace fault flag to generic fault handler\n\nUnlike global OOM handling, memory cgroup code will invoke the OOM killer\nin any OOM situation because it has no way of telling faults occuring in\nkernel context - which could be handled more gracefully - from\nuser-triggered faults.\n\nPass a flag that identifies faults originating in user space from the\narchitecture-specific fault handlers to generic code so that memcg OOM\nhandling can be improved.\n\nSigned-off-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: azurIt \u003cazurit@pobox.sk\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "de57780dc659f95b17ccb649f003278dde0b5b86",
      "tree": "d2493cc412c16946f3ead9158a61b26dd1f0c45a",
      "parents": [
        "a5b7c87f92076352dbff2fe0423ec255e1c9a71b"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Thu Sep 12 15:13:26 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:00 2013 -0700"
      },
      "message": "memcg: enhance memcg iterator to support predicates\n\nThe caller of the iterator might know that some nodes or even subtrees\nshould be skipped but there is no way to tell iterators about that so the\nonly choice left is to let iterators to visit each node and do the\nselection outside of the iterating code.  This, however, doesn\u0027t scale\nwell with hierarchies with many groups where only few groups are\ninteresting.\n\nThis patch adds mem_cgroup_iter_cond variant of the iterator with a\ncallback which gets called for every visited node.  There are three\npossible ways how the callback can influence the walk.  Either the node is\nvisited, it is skipped but the tree walk continues down the tree or the\nwhole subtree of the current group is skipped.\n\n[hughd@google.com: fix memcg-less page reclaim]\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Balbir Singh \u003cbsingharora@gmail.com\u003e\nCc: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nSigned-off-by: Hugh Dickins \u003chughd@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "a5b7c87f92076352dbff2fe0423ec255e1c9a71b",
      "tree": "fbc14b98d1412a078fc570914b050cd618e359f2",
      "parents": [
        "e883110aad718b65de658db77387aaa69cce996d"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Thu Sep 12 15:13:25 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:00 2013 -0700"
      },
      "message": "vmscan, memcg: do softlimit reclaim also for targeted reclaim\n\nSoft reclaim has been done only for the global reclaim (both background\nand direct).  Since \"memcg: integrate soft reclaim tighter with zone\nshrinking code\" there is no reason for this limitation anymore as the soft\nlimit reclaim doesn\u0027t use any special code paths and it is a part of the\nzone shrinking code which is used by both global and targeted reclaims.\n\nFrom the semantic point of view it is natural to consider soft limit\nbefore touching all groups in the hierarchy tree which is touching the\nhard limit because soft limit tells us where to push back when there is a\nmemory pressure.  It is not important whether the pressure comes from the\nlimit or imbalanced zones.\n\nThis patch simply enables soft reclaim unconditionally in\nmem_cgroup_should_soft_reclaim so it is enabled for both global and\ntargeted reclaim paths.  mem_cgroup_soft_reclaim_eligible needs to learn\nabout the root of the reclaim to know where to stop checking soft limit\nstate of parents up the hierarchy.  Say we have\n\nA (over soft limit)\n \\\n  B (below s.l., hit the hard limit)\n / \\\nC   D (below s.l.)\n\nB is the source of the outside memory pressure now for D but we shouldn\u0027t\nsoft reclaim it because it is behaving well under B subtree and we can\nstill reclaim from C (pressumably it is over the limit).\nmem_cgroup_soft_reclaim_eligible should therefore stop climbing up the\nhierarchy at B (root of the memory pressure).\n\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Glauber Costa \u003cglommer@openvz.org\u003e\nReviewed-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Balbir Singh \u003cbsingharora@gmail.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "3b38722efd9f66da63bbbd41520c2e6fa9db3d68",
      "tree": "aeec255d0358051b8ffe83f6744a2054b383c62e",
      "parents": [
        "c33bd8354f3a3bb26a98d2b6bf600b7b35657328"
      ],
      "author": {
        "name": "Michal Hocko",
        "email": "mhocko@suse.cz",
        "time": "Thu Sep 12 15:13:21 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:38:00 2013 -0700"
      },
      "message": "memcg, vmscan: integrate soft reclaim tighter with zone shrinking code\n\nThis patchset is sitting out of tree for quite some time without any\nobjections.  I would be really happy if it made it into 3.12.  I do not\nwant to push it too hard but I think this work is basically ready and\nwaiting more doesn\u0027t help.\n\nThe basic idea is quite simple.  Pull soft reclaim into shrink_zone in the\nfirst step and get rid of the previous soft reclaim infrastructure.\nshrink_zone is done in two passes now.  First it tries to do the soft\nlimit reclaim and it falls back to reclaim-all mode if no group is over\nthe limit or no pages have been scanned.  The second pass happens at the\nsame priority so the only time we waste is the memcg tree walk which has\nbeen updated in the third step to have only negligible overhead.\n\nAs a bonus we will get rid of a _lot_ of code by this and soft reclaim\nwill not stand out like before when it wasn\u0027t integrated into the zone\nshrinking code and it reclaimed at priority 0 (the testing results show\nthat some workloads suffers from such an aggressive reclaim).  The clean\nup is in a separate patch because I felt it would be easier to review that\nway.\n\nThe second step is soft limit reclaim integration into targeted reclaim.\nIt should be rather straight forward.  Soft limit has been used only for\nthe global reclaim so far but it makes sense for any kind of pressure\ncoming from up-the-hierarchy, including targeted reclaim.\n\nThe third step (patches 4-8) addresses the tree walk overhead by enhancing\nmemcg iterators to enable skipping whole subtrees and tracking number of\nover soft limit children at each level of the hierarchy.  This information\nis updated same way the old soft limit tree was updated (from\nmemcg_check_events) so we shouldn\u0027t see an additional overhead.  
In fact\nmem_cgroup_update_soft_limit is much simpler than tree manipulation done\npreviously.\n\n__shrink_zone uses mem_cgroup_soft_reclaim_eligible as a predicate for\nmem_cgroup_iter so the decision whether a particular group should be\nvisited is done at the iterator level which allows us to decide to skip\nthe whole subtree as well (if there is no child in excess).  This reduces\nthe tree walk overhead considerably.\n\n* TEST 1\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nMy primary test case was a parallel kernel build with 2 groups (make is\nrunning with -j8 with a distribution .config in a separate cgroup without\nany hard limit) on a 32 CPU machine booted with 1GB memory and both builds\nrun taskset to Node 0 cpus.\n\nI was mostly interested in 2 setups.  Default - no soft limit set and -\nand 0 soft limit set to both groups.  The first one should tell us whether\nthe rework regresses the default behavior while the second one should show\nus improvements in an extreme case where both workloads are always over\nthe soft limit.\n\n/usr/bin/time -v has been used to collect the statistics and each\nconfiguration had 3 runs after fresh boot without any other load on the\nsystem.\n\nbase is mmotm-2013-07-18-16-40\nrework all 8 patches applied on top of base\n\n* No-limit\nUser\nno-limit/base: min: 651.92 max: 672.65 avg: 664.33 std: 8.01 runs: 6\nno-limit/rework: min: 657.34 [100.8%] max: 668.39 [99.4%] avg: 663.13 [99.8%] std: 3.61 runs: 6\nSystem\nno-limit/base: min: 69.33 max: 71.39 avg: 70.32 std: 0.79 runs: 6\nno-limit/rework: min: 69.12 [99.7%] max: 71.05 [99.5%] avg: 70.04 [99.6%] std: 0.59 runs: 6\nElapsed\nno-limit/base: min: 398.27 max: 422.36 avg: 408.85 std: 7.74 runs: 6\nno-limit/rework: min: 386.36 [97.0%] max: 438.40 [103.8%] avg: 416.34 [101.8%] std: 18.85 runs: 6\n\nThe results are within noise. 
Elapsed time has a bigger variance but the\naverage looks good.\n\n* 0-limit\nUser\n0-limit/base: min: 573.76 max: 605.63 avg: 585.73 std: 12.21 runs: 6\n0-limit/rework: min: 645.77 [112.6%] max: 666.25 [110.0%] avg: 656.97 [112.2%] std: 7.77 runs: 6\nSystem\n0-limit/base: min: 69.57 max: 71.13 avg: 70.29 std: 0.54 runs: 6\n0-limit/rework: min: 68.68 [98.7%] max: 71.40 [100.4%] avg: 69.91 [99.5%] std: 0.87 runs: 6\nElapsed\n0-limit/base: min: 1306.14 max: 1550.17 avg: 1430.35 std: 90.86 runs: 6\n0-limit/rework: min: 404.06 [30.9%] max: 465.94 [30.1%] avg: 434.81 [30.4%] std: 22.68 runs: 6\n\nThe improvement is really huge here (even bigger than with my previous\ntesting, and I suspect that this highly depends on the storage).  Page\nfault statistics tell us at least part of the story:\n\nMinor\n0-limit/base: min: 37180461.00 max: 37319986.00 avg: 37247470.00 std: 54772.71 runs: 6\n0-limit/rework: min: 36751685.00 [98.8%] max: 36805379.00 [98.6%] avg: 36774506.33 [98.7%] std: 17109.03 runs: 6\nMajor\n0-limit/base: min: 170604.00 max: 221141.00 avg: 196081.83 std: 18217.01 runs: 6\n0-limit/rework: min: 2864.00 [1.7%] max: 10029.00 [4.5%] avg: 5627.33 [2.9%] std: 2252.71 runs: 6\n\nSame as with my previous testing, Minor faults are more or less within\nnoise but the Major fault count is way below the base kernel.\n\nWhile this looks like a nice win it is fair to say that the 0-limit\nconfiguration is quite artificial. So I was playing with 0-no-limit\nloads as well.\n\n* TEST 2\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nThe following results are from a 2-group configuration on a 16GB machine\n(single NUMA node).\n\n- A running stream IO (dd if\u003d/dev/zero of\u003dlocal.file bs\u003d1024) with\n  2*TotalMem with 0 soft limit.\n- B running a mem_eater which consumes TotalMem-1G without any limit. 
The\n  mem_eater consumes the memory in 100 chunks with a 1s nap after each\n  mmap+populate so that both loads have a chance to fight for the memory.\n\nThe expected result is that B shouldn\u0027t be reclaimed and A shouldn\u0027t see\na big drop in elapsed time.\n\nUser\nbase: min: 2.68 max: 2.89 avg: 2.76 std: 0.09 runs: 3\nrework: min: 3.27 [122.0%] max: 3.74 [129.4%] avg: 3.44 [124.6%] std: 0.21 runs: 3\nSystem\nbase: min: 86.26 max: 88.29 avg: 87.28 std: 0.83 runs: 3\nrework: min: 81.05 [94.0%] max: 84.96 [96.2%] avg: 83.14 [95.3%] std: 1.61 runs: 3\nElapsed\nbase: min: 317.28 max: 332.39 avg: 325.84 std: 6.33 runs: 3\nrework: min: 281.53 [88.7%] max: 298.16 [89.7%] avg: 290.99 [89.3%] std: 6.98 runs: 3\n\nSystem time improved slightly, as did Elapsed. My previous testing\nhas shown worse numbers but this again seems to depend on the storage\nspeed.\n\nMy theory is that the writeback doesn\u0027t catch up and prio-0 soft reclaim\nfalls into waiting on writeback pages too often in the base kernel. The\npatched kernel doesn\u0027t do that because the soft reclaim is done from the\nkswapd/direct reclaim context. This can be seen nicely on the following\ngraph. Group A\u0027s usage_in_bytes regularly drops really low.\n\nAll 3 runs\nhttp://labs.suse.cz/mhocko/soft_limit_rework/stream_io-vs-mem_eater/stream.png\nresp. a detail of a single run\nhttp://labs.suse.cz/mhocko/soft_limit_rework/stream_io-vs-mem_eater/stream-one-run.png\n\nmem_eater seems to be doing better as well. 
It gets to the full\nallocation size faster, as can be seen on the following graph:\nhttp://labs.suse.cz/mhocko/soft_limit_rework/stream_io-vs-mem_eater/mem_eater-one-run.png\n\n/proc/meminfo collected during the test also shows that the rework kernel\nhasn\u0027t swapped that much (well, almost not at all):\nbase: max: 123900 K avg: 56388.29 K\nrework: max: 300 K avg: 128.68 K\n\nkswapd and direct reclaim statistics are of no use unfortunately because\nsoft reclaim is not accounted properly, as the counters are hidden by\nglobal_reclaim() checks in the base kernel.\n\n* TEST 3\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nAnother test was the same configuration as TEST2 except the stream IO was\nreplaced by a single kbuild (16 parallel jobs bound to Node0 cpus same as\nin TEST1) and mem_eater allocated TotalMem-200M so kbuild had only 200MB\nleft.\n\nKbuild did better with the rework kernel here as well:\nUser\nbase: min: 860.28 max: 872.86 avg: 868.03 std: 5.54 runs: 3\nrework: min: 880.81 [102.4%] max: 887.45 [101.7%] avg: 883.56 [101.8%] std: 2.83 runs: 3\nSystem\nbase: min: 84.35 max: 85.06 avg: 84.79 std: 0.31 runs: 3\nrework: min: 85.62 [101.5%] max: 86.09 [101.2%] avg: 85.79 [101.2%] std: 0.21 runs: 3\nElapsed\nbase: min: 135.36 max: 243.30 avg: 182.47 std: 45.12 runs: 3\nrework: min: 110.46 [81.6%] max: 116.20 [47.8%] avg: 114.15 [62.6%] std: 2.61 runs: 3\nMinor\nbase: min: 36635476.00 max: 36673365.00 avg: 36654812.00 std: 15478.03 runs: 3\nrework: min: 36639301.00 [100.0%] max: 36695541.00 [100.1%] avg: 36665511.00 [100.0%] std: 23118.23 runs: 3\nMajor\nbase: min: 14708.00 max: 53328.00 avg: 31379.00 std: 16202.24 runs: 3\nrework: min: 302.00 [2.1%] max: 414.00 [0.8%] avg: 366.33 [1.2%] std: 47.22 runs: 3\n\nAgain we can see a significant improvement in Elapsed (it also seems to\nbe more stable), there is a huge drop in Major page faults and\nmuch less swapping:\nbase: max: 583736 K avg: 112547.43 K\nrework: max: 4012 K avg: 124.36 K\n\nGraphs 
from all three runs show the variability of the kbuild quite\nnicely.  It even seems that it took longer after every run with the base\nkernel, which would be quite surprising as the source tree for the build is\nremoved and caches are dropped after each run, so the build operates on\nfreshly extracted sources every time.\nhttp://labs.suse.cz/mhocko/soft_limit_rework/stream_io-vs-mem_eater/kbuild-mem_eater.png\n\nMy other testing shows that this is just a matter of timing and other runs\nbehave differently; the std for Elapsed time is similar, ~50.  Example of\nthree other runs:\nhttp://labs.suse.cz/mhocko/soft_limit_rework/stream_io-vs-mem_eater/kbuild-mem_eater2.png\n\nSo, to wrap this up: the series is still doing well and improves the soft\nlimit.\n\nThe testing results for a bunch of cgroups with both stream IO and kbuild\nloads can be found in \"memcg: track children in soft limit excess to\nimprove soft limit\".\n\nThis patch:\n\nMemcg soft reclaim has been traditionally triggered from the global\nreclaim paths before calling shrink_zone.  mem_cgroup_soft_limit_reclaim\nthen picked up the group which exceeds the soft limit the most and reclaimed\nit with 0 priority to reclaim at least SWAP_CLUSTER_MAX pages.\n\nThe infrastructure requires per-node-zone trees which hold over-limit\ngroups and keep them up-to-date (via memcg_check_events), which is not cost\nfree.  Although this overhead hasn\u0027t turned out to be a bottleneck, the\nimplementation is suboptimal because mem_cgroup_update_tree has no idea\nwhich zones consumed memory over the limit, so we could easily end up\nhaving a group on a node-zone tree with only a few pages from that\nnode-zone.\n\nThis patch doesn\u0027t try to fix node-zone tree management because\nintegrating soft reclaim into zone shrinking seems much easier and\nmore appropriate for several reasons.  
First of all, 0 priority reclaim was\na crude hack which might lead to big stalls if the group\u0027s LRUs are big\nand hard to reclaim (e.g.  a lot of dirty/writeback pages).  Soft reclaim\nshould also be applicable to targeted reclaim, which is awkward right\nnow without additional hacks.  Last but not least, the whole infrastructure\neats quite some code.\n\nAfter this patch shrink_zone is done in 2 passes.  First it tries to do\nthe soft reclaim if appropriate (only for global reclaim for now to keep\ncompatible with the original state) and falls back to ignoring the soft limit\nif no group is eligible for soft reclaim or nothing has been scanned during\nthe first pass.  Only groups which are over their soft limit, or which have\na parent up the hierarchy over the limit, are considered eligible\nduring the first pass.\n\nThe soft limit tree, which is not necessary anymore, will be removed in a\nfollow-up patch to make this patch smaller and easier to review.\n\nSigned-off-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nReviewed-by: Glauber Costa \u003cglommer@openvz.org\u003e\nReviewed-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Balbir Singh \u003cbsingharora@gmail.com\u003e\nCc: Glauber Costa \u003cglommer@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "26935fb06ee88f1188789807687c03041f3c70d9",
      "tree": "381c487716540b52348d78bee6555f8fa61d77ef",
      "parents": [
        "3cc69b638e11bfda5d013c2b75b60934aa0e88a1",
        "bf2ba3bc185269eca274b458aac46ba1ad7c1121"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:01:38 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 15:01:38 2013 -0700"
      },
      "message": "Merge branch \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs\n\nPull vfs pile 4 from Al Viro:\n \"list_lru pile, mostly\"\n\nThis came out of Andrew\u0027s pile, Al ended up doing the merge work so that\nAndrew didn\u0027t have to.\n\nAdditionally, a few fixes.\n\n* \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (42 commits)\n  super: fix for destroy lrus\n  list_lru: dynamically adjust node arrays\n  shrinker: Kill old -\u003eshrink API.\n  shrinker: convert remaining shrinkers to count/scan API\n  staging/lustre/libcfs: cleanup linux-mem.h\n  staging/lustre/ptlrpc: convert to new shrinker API\n  staging/lustre/obdclass: convert lu_object shrinker to count/scan API\n  staging/lustre/ldlm: convert to shrinkers to count/scan API\n  hugepage: convert huge zero page shrinker to new shrinker API\n  i915: bail out earlier when shrinker cannot acquire mutex\n  drivers: convert shrinkers to new count/scan API\n  fs: convert fs shrinkers to new scan/count API\n  xfs: fix dquot isolation hang\n  xfs-convert-dquot-cache-lru-to-list_lru-fix\n  xfs: convert dquot cache lru to list_lru\n  xfs: rework buffer dispose list tracking\n  xfs-convert-buftarg-lru-to-generic-code-fix\n  xfs: convert buftarg LRU to generic code\n  fs: convert inode and dentry shrinking to be node aware\n  vmscan: per-node deferred work\n  ...\n"
    },
    {
      "commit": "5223161dc0f5e44fbf3d5e42d23697b6796cdf4e",
      "tree": "10837ec58d96e751469d78d347f76c0d49238d72",
      "parents": [
        "e5d0c874391a500be7643d3eef9fb07171eee129",
        "61abeba5222895d6900b13115f5d8eba7988d7d6"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 11:35:33 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 11:35:33 2013 -0700"
      },
      "message": "Merge branch \u0027for-next\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/cooloney/linux-leds\n\nPull led updates from Bryan Wu:\n \"Sorry for the late pull request, since I\u0027m just back from vacation.\n\n  LED subsystem updates for 3.12:\n   - pca9633 driver DT support and pca9634 chip support\n   - restore legacy device attributes for lp5521\n   - other fixes and updates\"\n\n* \u0027for-next\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/cooloney/linux-leds: (28 commits)\n  leds: wm831x-status: Request a REG resource\n  leds: trigger: ledtrig-backlight: Fix invalid memory access in fb_event notification callback\n  leds-pca963x: Fix device tree parsing\n  leds-pca9633: Rename to leds-pca963x\n  leds-pca9633: Add mutex to the ledout register\n  leds-pca9633: Unique naming of the LEDs\n  leds-pca9633: Add support for PCA9634\n  leds: lp5562: use LP55xx common macros for device attributes\n  Documentation: leds-lp5521,lp5523: update device attribute information\n  leds: lp5523: remove unnecessary writing commands\n  leds: lp5523: restore legacy device attributes\n  leds: lp5523: LED MUX configuration on initializing\n  leds: lp5523: make separate API for loading engine\n  leds: lp5521: remove unnecessary writing commands\n  leds: lp5521: restore legacy device attributes\n  leds: lp55xx: add common macros for device attributes\n  leds: lp55xx: add common data structure for program\n  Documentation: leds: Fix a typo\n  leds: ss4200: Fix incorrect placement of __initdata\n  leds: clevo-mail: Fix incorrect placement of __initdata\n  ...\n"
    },
    {
      "commit": "e5d0c874391a500be7643d3eef9fb07171eee129",
      "tree": "e584dda865c5628fbb8e59a50096a0f4c21bf2bd",
      "parents": [
        "d5adf7e2db897f9d4a00be59262875ae5d9574f4",
        "d6a60fc1a8187004792a01643d8af1d06a465026"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 11:29:26 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 11:29:26 2013 -0700"
      },
      "message": "Merge tag \u0027iommu-updates-v3.12\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu\n\nPull IOMMU Updates from Joerg Roedel:\n \"This round the updates contain:\n\n   - A new driver for the Freescale PAMU IOMMU from Varun Sethi.\n\n     This driver has cooked for a while and required changes to the\n     IOMMU-API and infrastructure that were already merged before.\n\n   - Updates for the ARM-SMMU driver from Will Deacon\n\n   - Various fixes, the most important one is probably a fix from Alex\n     Williamson for a memory leak in the VT-d page-table freeing code\n\n  In summary not all that much.  The biggest part in the diffstat is the\n  new PAMU driver\"\n\n* tag \u0027iommu-updates-v3.12\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:\n  intel-iommu: Fix leaks in pagetable freeing\n  iommu/amd: Fix resource leak in iommu_init_device()\n  iommu/amd: Clean up unnecessary MSI/MSI-X capability find\n  iommu/arm-smmu: Simplify VMID and ASID allocation\n  iommu/arm-smmu: Don\u0027t use VMIDs for stage-1 translations\n  iommu/arm-smmu: Tighten up global fault reporting\n  iommu/arm-smmu: Remove broken big-endian check\n  iommu/fsl: Remove unnecessary \u0027fsl-pamu\u0027 prefixes\n  iommu/fsl: Fix whitespace problems noticed by git-am\n  iommu/fsl: Freescale PAMU driver and iommu implementation.\n  iommu/fsl: Add additional iommu attributes required by the PAMU driver.\n  powerpc: Add iommu domain pointer to device archdata\n  iommu/exynos: Remove dead code (set_prefbuf)\n"
    },
    {
      "commit": "02b9735c12892e04d3e101b06e4c6d64a814f566",
      "tree": "7907deb1cbfd1599d4f34d414873170d3266f164",
      "parents": [
        "75acebf2423ab13ff6198daa6e17ef7a2543bfe4",
        "f1728fd1599112239ed5cebc7be9810264db6792"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 11:22:45 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 11:22:45 2013 -0700"
      },
      "message": "Merge tag \u0027pm+acpi-fixes-3.12-rc1\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm\n\nPull ACPI and power management fixes from Rafael Wysocki:\n \"All of these commits are fixes that have emerged recently and some of\n  them fix bugs introduced during this merge window.\n\n  Specifics:\n\n   1) ACPI-based PCI hotplug (ACPIPHP) fixes related to spurious events\n\n      After the recent ACPIPHP changes we\u0027ve seen some interesting\n      breakage on a system that triggers device check notifications\n      during boot for non-existing devices.  Although those\n      notifications are really spurious, we should be able to deal with\n      them nevertheless and that shouldn\u0027t introduce too much overhead.\n      Four commits to make that work properly.\n\n   2) Memory hotplug and hibernation mutual exclusion rework\n\n      This was meant to be a cleanup, but it happens to fix a classical\n      ABBA deadlock between system suspend/hibernation and ACPI memory\n      hotplug which is possible if they are started roughly at the same\n      time.  Three commits rework memory hotplug so that it doesn\u0027t\n      acquire pm_mutex and make hibernation use device_hotplug_lock\n      which prevents it from racing with memory hotplug.\n\n   3) ACPI Intel LPSS (Low-Power Subsystem) driver crash fix\n\n      The ACPI LPSS driver crashes during boot on an Apple Macbook Air with\n      Haswell that has a slightly unusual BIOS configuration in which one\n      of the LPSS device\u0027s _CRS methods doesn\u0027t return all of the\n      information expected by the driver.  
Fix from Mika Westerberg, for\n      stable.\n\n   4) ACPICA fix related to Store-\u003eArgX operation\n\n      AML interpreter fix for obscure breakage that causes AML to be\n      executed incorrectly on some machines (observed in practice).\n      From Bob Moore.\n\n   5) ACPI core fix for PCI ACPI device objects lookup\n\n      There still are cases in which there is more than one ACPI device\n      object matching a given PCI device and we don\u0027t choose the one\n      that the BIOS expects us to choose, so this makes the lookup take\n      more criteria into account in those cases.\n\n   6) Fix to prevent cpuidle from crashing in some rare cases\n\n      If the result of cpuidle_get_driver() is NULL, which can happen on\n      some systems, cpuidle_driver_ref() will crash trying to use that\n      pointer and Daniel Fu\u0027s fix prevents that from happening.\n\n   7) cpufreq fixes related to CPU hotplug\n\n      Stephen Boyd reported a number of concurrency problems with\n      cpufreq related to CPU hotplug which are addressed by a series of\n      fixes from Srivatsa S Bhat and Viresh Kumar.\n\n   8) cpufreq fix for time conversion in time_in_state attribute\n\n      Time conversion carried out by cpufreq when user space attempts to\n      read /sys/devices/system/cpu/cpu*/cpufreq/stats/time_in_state\n      won\u0027t work correctly if cputime_t doesn\u0027t map directly to jiffies.\n      Fix from Andreas Schwab.\n\n   9) Revert of a troublesome cpufreq commit\n\n      Commit 7c30ed5 (cpufreq: make sure frequency transitions are\n      serialized) was intended to address some known concurrency\n      problems in cpufreq related to the ordering of transitions, but\n      unfortunately it introduced several problems of its own, so I\n      decided to revert it now and address the original problems later\n      in a more robust way.\n\n  10) Intel Haswell CPU models for intel_pstate from Nell Hardcastle.\n\n  11) cpufreq fixes related to system 
suspend/resume\n\n      The recent cpufreq changes that made it preserve CPU sysfs\n      attributes over suspend/resume cycles introduced a possible NULL\n      pointer dereference that caused it to crash during the second\n      attempt to suspend.  Three commits from Srivatsa S Bhat fix that\n      problem and a couple of related issues.\n\n  12) cpufreq locking fix\n\n      cpufreq_policy_restore() should acquire the lock for reading, but\n      it acquires it for writing.  Fix from Lan Tianyu\"\n\n* tag \u0027pm+acpi-fixes-3.12-rc1\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (25 commits)\n  cpufreq: Acquire the lock in cpufreq_policy_restore() for reading\n  cpufreq: Prevent problems in update_policy_cpu() if last_cpu \u003d\u003d new_cpu\n  cpufreq: Restructure if/else block to avoid unintended behavior\n  cpufreq: Fix crash in cpufreq-stats during suspend/resume\n  intel_pstate: Add Haswell CPU models\n  Revert \"cpufreq: make sure frequency transitions are serialized\"\n  cpufreq: Use signed type for \u0027ret\u0027 variable, to store negative error values\n  cpufreq: Remove temporary fix for race between CPU hotplug and sysfs-writes\n  cpufreq: Synchronize the cpufreq store_*() routines with CPU hotplug\n  cpufreq: Invoke __cpufreq_remove_dev_finish() after releasing cpu_hotplug.lock\n  cpufreq: Split __cpufreq_remove_dev() into two parts\n  cpufreq: Fix wrong time unit conversion\n  cpufreq: serialize calls to __cpufreq_governor()\n  cpufreq: don\u0027t allow governor limits to be changed when it is disabled\n  ACPI / bind: Prefer device objects with _STA to those without it\n  ACPI / hotplug / PCI: Avoid parent bus rescans on spurious device checks\n  ACPI / hotplug / PCI: Use _OST to notify firmware about notify status\n  ACPI / hotplug / PCI: Avoid doing too much for spurious notifies\n  ACPICA: Fix for a Store-\u003eArgX when ArgX contains a reference to a field.\n  ACPI / hotplug / PCI: Don\u0027t trim devices before 
scanning the namespace\n  ...\n"
    },
    {
      "commit": "5762482f5496cb1dd86acd2aace3ea25d1404e1f",
      "tree": "6d74d7b501002f7516e2eb3068f5a942f63098ee",
      "parents": [
        "b7c09ad4014e3678e8cc01fdf663c9f43b272dc6"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 10:12:47 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 10:12:47 2013 -0700"
      },
      "message": "vfs: move get_fs_root_and_pwd() to single caller\n\nLet\u0027s not pollute the include files with inline functions that are only\nused in a single place.  Especially not if we decide we might want to\nchange the semantics of said function to make it more efficient..\n\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "1370e97bb2eb1ef2df7355204e5a4ba13e12b861",
      "tree": "9e3c2e9b0e0a7f67e50898e8ff5ecb462f260625",
      "parents": [
        "decf7abcc97444ecd2d3cf278f5cc8093f33f49a"
      ],
      "author": {
        "name": "Waiman Long",
        "email": "Waiman.Long@hp.com",
        "time": "Thu Sep 12 10:55:34 2013 -0400"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 09:25:23 2013 -0700"
      },
      "message": "seqlock: Add a new locking reader type\n\nThe sequence lock (seqlock) was originally designed for the cases where\nthe readers do not need to block the writers by making the readers retry\nthe read operation when the data changes.\n\nSince then, the use cases have been expanded to include situations where\na thread does not need to change the data (effectively a reader) at all\nbut has to take the writer lock because it can\u0027t tolerate changes to\nthe protected structure.  Some examples are the d_path() function and\nthe getcwd() syscall in fs/dcache.c where the functions take the writer\nlock on rename_lock even though they don\u0027t need to change anything in\nthe protected data structure at all.  This is inefficient as a reader is\nnow blocking other sequence number reading readers from moving forward\nby pretending to be a writer.\n\nThis patch tries to eliminate this inefficiency by introducing a new\ntype of locking reader to the seqlock locking mechanism.  This new\nlocking reader will try to take an exclusive lock preventing other\nwriters and locking readers from going forward.  However, it won\u0027t\naffect the progress of the other sequence number reading readers as the\nsequence number won\u0027t be changed.\n\nSigned-off-by: Waiman Long \u003cWaiman.Long@hp.com\u003e\nCc: Alexander Viro \u003cviro@zeniv.linux.org.uk\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d6a60fc1a8187004792a01643d8af1d06a465026",
      "tree": "033a6976e47fc49c5f095c38008f1d1f6be5f93e",
      "parents": [
        "6e4664525b1db28f8c4e1130957f70a94c19213e",
        "ca19243e9ce81f8e8a25ee33969444f11b0590b7",
        "634544bf718dd29cd2e29efba6801a8d08daf335",
        "ecfadb6e5b49a0a56df2038bf39f1fcd652788b9",
        "e644a013fe67f2bccd54378b88556d07fa2714d6",
        "3269ee0bd6686baf86630300d528500ac5b516d7"
      ],
      "author": {
        "name": "Joerg Roedel",
        "email": "joro@8bytes.org",
        "time": "Thu Sep 12 16:46:34 2013 +0200"
      },
      "committer": {
        "name": "Joerg Roedel",
        "email": "joro@8bytes.org",
        "time": "Thu Sep 12 16:46:34 2013 +0200"
      },
      "message": "Merge branches \u0027arm/exynos\u0027, \u0027ppc/pamu\u0027, \u0027arm/smmu\u0027, \u0027x86/amd\u0027 and \u0027iommu/fixes\u0027 into next\n"
    },
    {
      "commit": "b9b42eeb88d36cc7400925302f1587aaaa348905",
      "tree": "f5260ad8013adeca9f86c85c096099844238c725",
      "parents": [
        "7b7a2f0a31c6c1ff53a3c87c0bca4f8d01471391",
        "50e66c7ed8a1cd7e933628f9f5cf2617394adf5a"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 07:42:59 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Thu Sep 12 07:42:59 2013 -0700"
      },
      "message": "Merge branch \u0027next\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux\n\nPull thermal management updates from Zhang Rui:\n \"We have a lot of SOC changes and a few thermal core fixes this time.\n\n  The biggest change is about exynos thermal driver restructure.  The\n  patch set adds TMU (Thermal management Unit) driver support for the\n  exynos5440 platform.  There are 3 instances of the TMU controllers, so\n  the necessary cleanup/restructuring is done to handle multiple thermal\n  zones.\n\n  The next biggest change is the introduction of the imx thermal driver.\n  It adds the imx thermal support using the Temperature Monitor (TEMPMON)\n  block found on some Freescale i.MX SoCs.  The driver uses the syscon\n  regmap interface to access TEMPMON control registers and calibration\n  data, and supports cpufreq as the cooling device.\n\n  Highlights:\n\n   - restructure exynos thermal driver.\n\n   - introduce new imx thermal driver.\n\n   - fix a bug in thermal core, which powers on the fans unexpectedly\n     after resume from suspend\"\n\n* \u0027next\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux: (46 commits)\n  drivers: thermal: add check when unregistering cpu cooling\n  thermal: thermal_core: allow binding with limits on bind_params\n  drivers: thermal: make usage of CONFIG_THERMAL_HWMON optional\n  drivers: thermal: parent virtual hwmon with thermal zone\n  thermal: hwmon: move hwmon support to single file\n  thermal: exynos: Clean up non-DT remnants\n  thermal: exynos: Fix potential NULL pointer dereference\n  thermal: exynos: Fix typos in Kconfig\n  thermal: ti-soc-thermal: Ensure to compute thermal trend\n  thermal: ti-soc-thermal: Set the bandgap mask counter delay value\n  thermal: ti-soc-thermal: Initialize counter_delay field for TI DRA752 sensors\n  thermal: step_wise: return instance-\u003etarget by default\n  thermal: step_wise: cdev only needs update on a new target state\n  Thermal/cpu_cooling: Return 
directly for the cpu out of allowed_cpus in the cpufreq_thermal_notifier()\n  thermal: exynos_tmu: fix wrong error check for mapped memory\n  thermal: imx: implement thermal alarm interrupt handling\n  thermal: imx: dynamic passive and SoC specific critical trip points\n  Documentation: thermal: Explain the exynos thermal driver model\n  ARM: dts: thermal: exynos: Add documentation for Exynos SoC thermal bindings\n  thermal: exynos: Support for TMU regulator defined at device tree\n  ...\n"
    },
    {
      "commit": "b34081f1cd59585451efaa69e1dff1b9507e6c89",
      "tree": "b04c842059aeeed535e71e72570a29e2989ceeb3",
      "parents": [
        "20b8875abcf2daa1dda5cf70bd6369df5e85d4c1"
      ],
      "author": {
        "name": "Sergey Senozhatsky",
        "email": "sergey.senozhatsky@gmail.com",
        "time": "Wed Sep 11 14:26:32 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:45 2013 -0700"
      },
      "message": "lz4: fix compression/decompression signedness mismatch\n\nLZ4 compression and decompression functions require input/output\nparameters of different signedness: unsigned char for compression and\nsigned char for decompression.\n\nChange the decompression API to require \"(const) unsigned char *\".\n\nSigned-off-by: Sergey Senozhatsky \u003csergey.senozhatsky@gmail.com\u003e\nCc: Kyungsik Lee \u003ckyungsik.lee@lge.com\u003e\nCc: Geert Uytterhoeven \u003cgeert@linux-m68k.org\u003e\nCc: Yann Collet \u003cyann.collet.73@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d9a605e40b1376eb02b067d7690580255a0df68f",
      "tree": "b21254f7172ae8db6faffd9b7941d579fa421478",
      "parents": [
        "c2c737a0461e61a34676bd0bd1bc1a70a1b4e396"
      ],
      "author": {
        "name": "Davidlohr Bueso",
        "email": "davidlohr.bueso@hp.com",
        "time": "Wed Sep 11 14:26:24 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:42 2013 -0700"
      },
      "message": "ipc: rename ids-\u003erw_mutex\n\nSince in some situations the lock can be shared for readers, we shouldn\u0027t\nbe calling it a mutex, rename it to rwsem.\n\nSigned-off-by: Davidlohr Bueso \u003cdavidlohr.bueso@hp.com\u003e\nTested-by: Sedat Dilek \u003csedat.dilek@gmail.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Manfred Spraul \u003cmanfred@colorfullife.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "57f150a58c40cda598c31af8bceb8598f43c3e5f",
      "tree": "fde3e7fc48c97f0db5b3975fd74e12773f423fe2",
      "parents": [
        "4bbee76bc986af326be0a84ad661000cf89b29f6"
      ],
      "author": {
        "name": "Rob Landley",
        "email": "rob@landley.net",
        "time": "Wed Sep 11 14:26:10 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:37 2013 -0700"
      },
      "message": "initmpfs: move rootfs code from fs/ramfs/ to init/\n\nWhen the rootfs code was a wrapper around ramfs, having them in the same\nfile made sense.  Now that it can wrap another filesystem type, move it in\nwith the init code instead.\n\nThis also allows a subsequent patch to access rootfstype\u003d command line\narg.\n\nSigned-off-by: Rob Landley \u003crob@landley.net\u003e\nCc: Jeff Layton \u003cjlayton@redhat.com\u003e\nCc: Jens Axboe \u003caxboe@kernel.dk\u003e\nCc: Stephen Warren \u003cswarren@nvidia.com\u003e\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nCc: Jim Cromie \u003cjim.cromie@gmail.com\u003e\nCc: Sam Ravnborg \u003csam@ravnborg.org\u003e\nCc: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\nCc: \"Eric W. Biederman\" \u003cebiederm@xmission.com\u003e\nCc: Alexander Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: \"H. Peter Anvin\" \u003chpa@zytor.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5e4c0d974139a98741b829b27cf38dc8f9284490",
      "tree": "fddd959828300c1de1ade15eeb33606c317b79db",
      "parents": [
        "4b39248365e09fb8268b6fecd1704907ffc3d980"
      ],
      "author": {
        "name": "Jan Kara",
        "email": "jack@suse.cz",
        "time": "Wed Sep 11 14:26:05 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:36 2013 -0700"
      },
      "message": "lib/radix-tree.c: make radix_tree_node_alloc() work correctly within interrupt\n\nWith users of radix_tree_preload() run from interrupt (block/blk-ioc.c is\none such possible user), the following race can happen:\n\nradix_tree_preload()\n...\nradix_tree_insert()\n  radix_tree_node_alloc()\n    if (rtp-\u003enr) {\n      ret \u003d rtp-\u003enodes[rtp-\u003enr - 1];\n\u003cinterrupt\u003e\n...\nradix_tree_preload()\n...\nradix_tree_insert()\n  radix_tree_node_alloc()\n    if (rtp-\u003enr) {\n      ret \u003d rtp-\u003enodes[rtp-\u003enr - 1];\n\nAnd we give out one radix tree node twice.  That clearly results in radix\ntree corruption with different results (usually OOPS) depending on which\ntwo users of radix tree race.\n\nWe fix the problem by making radix_tree_node_alloc() always allocate fresh\nradix tree nodes when in interrupt.  Using preloading when in interrupt\ndoesn\u0027t make sense since all the allocations have to be atomic anyway and\nwe cannot steal nodes from process-context users because some users rely\non radix_tree_insert() succeeding after radix_tree_preload().\nin_interrupt() check is somewhat ugly but we cannot simply key off passed\ngfp_mask as that is acquired from root_gfp_mask() and thus the same for\nall preload users.\n\nAnother part of the fix is to avoid node preallocation in\nradix_tree_preload() when passed gfp_mask doesn\u0027t allow waiting.  Again,\npreallocation in such case doesn\u0027t make sense and when preallocation would\nhappen in interrupt we could possibly leak some allocated nodes.  However,\nsome users of radix_tree_preload() require following radix_tree_insert()\nto succeed.  
To avoid unexpected effects for these users,\nradix_tree_preload() only warns if the passed gfp mask doesn\u0027t allow waiting,\nand we provide a new function, radix_tree_maybe_preload(), for those users\nwhich get a different gfp mask from different call sites and which are\nprepared to handle radix_tree_insert() failure.\n\nSigned-off-by: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jens Axboe \u003cjaxboe@fusionio.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "2b529089257705499207ce7da9d0e3ae26a844ba",
      "tree": "fffde388176946c7de6eaddd4e7bdd2b7ea27ffd",
      "parents": [
        "9dee5c51516d2c3fff22633c1272c5652e68075a"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Wed Sep 11 14:25:11 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:20 2013 -0700"
      },
      "message": "rbtree: add rbtree_postorder_for_each_entry_safe() helper\n\nBecause deletion (of the entire tree) is a relatively common use of the\nrbtree_postorder iteration, and because doing it safely means fiddling\nwith temporary storage, provide a helper to simplify postorder rbtree\niteration.\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nReviewed-by: Seth Jennings \u003csjenning@linux.vnet.ibm.com\u003e\nCc: David Woodhouse \u003cDavid.Woodhouse@intel.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9dee5c51516d2c3fff22633c1272c5652e68075a",
      "tree": "b8d1811b0357a74c720008911e06559f772ce731",
      "parents": [
        "b4bc4a18a226f46fec4ef47f2df28ea209db8b5d"
      ],
      "author": {
        "name": "Cody P Schafer",
        "email": "cody@linux.vnet.ibm.com",
        "time": "Wed Sep 11 14:25:10 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:19 2013 -0700"
      },
      "message": "rbtree: add postorder iteration functions\n\nPostorder iteration yields all of a node\u0027s children prior to yielding the\nnode itself, and this particular implementation also avoids examining the\nleaf links in a node after that node has been yielded.\n\nIn what I expect will be its most common usage, postorder iteration allows\nthe deletion of every node in an rbtree without modifying the rbtree nodes\n(no _requirement_ that they be nulled) while avoiding referencing child\nnodes after they have been \"deleted\" (most commonly, freed).\n\nI have only updated zswap to use this functionality at this point, but\nnumerous bits of code (most notably in the filesystem drivers) use a hand\nrolled postorder iteration that NULLs child links as it traverses the\ntree.  Each of those instances could be replaced with this common\nimplementation.\n\n1 \u0026 2 add rbtree postorder iteration functions.\n3 adds testing of the iteration to the rbtree runtime tests\n4 allows building the rbtree runtime tests as builtins\n5 updates zswap.\n\nThis patch:\n\nAdd postorder iteration functions for rbtree.  These are useful for safely\nfreeing an entire rbtree without modifying the tree at all.\n\nSigned-off-by: Cody P Schafer \u003ccody@linux.vnet.ibm.com\u003e\nReviewed-by: Seth Jennings \u003csjenning@linux.vnet.ibm.com\u003e\nCc: David Woodhouse \u003cDavid.Woodhouse@intel.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9cb218131de1c59dca9063b2efe876f053f316af",
      "tree": "e01f110a4137e8e2d33bc28f1f77e3a6361c0ee4",
      "parents": [
        "97b0f6f9cd73ff8285835c5e295d3c4b0e2dbf78"
      ],
      "author": {
        "name": "Michael Holzheu",
        "email": "holzheu@linux.vnet.ibm.com",
        "time": "Wed Sep 11 14:24:51 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:10 2013 -0700"
      },
      "message": "vmcore: introduce remap_oldmem_pfn_range()\n\nFor zfcpdump we can\u0027t map the HSA storage because it is only available via\na read interface.  Therefore, for the new vmcore mmap feature we have\nintroduce a new mechanism to create mappings on demand.\n\nThis patch introduces a new architecture function remap_oldmem_pfn_range()\nthat should be used to create mappings with remap_pfn_range() for oldmem\nareas that can be directly mapped.  For zfcpdump this is everything\nbesides of the HSA memory.  For the areas that are not mapped by\nremap_oldmem_pfn_range() a generic vmcore a new generic vmcore fault\nhandler mmap_vmcore_fault() is called.\n\nThis handler works as follows:\n\n* Get already available or new page from page cache (find_or_create_page)\n* Check if /proc/vmcore page is filled with data (PageUptodate)\n* If yes:\n  Return that page\n* If no:\n  Fill page using __vmcore_read(), set PageUptodate, and return page\n\nSigned-off-by: Michael Holzheu \u003cholzheu@linux.vnet.ibm.com\u003e\nAcked-by: Vivek Goyal \u003cvgoyal@redhat.com\u003e\nCc: HATAYAMA Daisuke \u003cd.hatayama@jp.fujitsu.com\u003e\nCc: Jan Willeke \u003cwilleke@de.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "be8a8d069e508d4408125e2b1471f549e7813d25",
      "tree": "d69d792fdefbaebc9346f7c3bad36ee4383ef659",
      "parents": [
        "80c74f6a40284c5c5d49f3b3289172bbce0b30b8"
      ],
      "author": {
        "name": "Michael Holzheu",
        "email": "holzheu@linux.vnet.ibm.com",
        "time": "Wed Sep 11 14:24:49 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:10 2013 -0700"
      },
      "message": "vmcore: introduce ELF header in new memory feature\n\nFor s390 we want to use /proc/vmcore for our SCSI stand-alone dump\n(zfcpdump).  We have support where the first HSA_SIZE bytes are saved into\na hypervisor owned memory area (HSA) before the kdump kernel is booted.\nWhen the kdump kernel starts, it is restricted to use only HSA_SIZE bytes.\n\nThe advantages of this mechanism are:\n\n * No crashkernel memory has to be defined in the old kernel.\n * Early boot problems (before kexec_load has been done) can be dumped\n * Non-Linux systems can be dumped.\n\nWe modify the s390 copy_oldmem_page() function to read from the HSA memory\nif memory below HSA_SIZE bytes is requested.\n\nSince we cannot use the kexec tool to load the kernel in this scenario,\nwe have to build the ELF header in the 2nd (kdump/new) kernel.\n\nSo with the following patch set we would like to introduce the new\nfunction that the ELF header for /proc/vmcore can be created in the 2nd\nkernel memory.\n\nThe following steps are done during zfcpdump execution:\n\n1.  Production system crashes\n2.  User boots a SCSI disk that has been prepared with the zfcpdump tool\n3.  Hypervisor saves CPU state of boot CPU and HSA_SIZE bytes of memory into HSA\n4.  Boot loader loads kernel into low memory area\n5.  Kernel boots and uses only HSA_SIZE bytes of memory\n6.  Kernel saves registers of non-boot CPUs\n7.  Kernel does memory detection for dump memory map\n8.  Kernel creates ELF header for /proc/vmcore\n9.  /proc/vmcore uses this header for initialization\n10. The zfcpdump user space reads /proc/vmcore to write dump to SCSI disk\n    - copy_oldmem_page() copies from HSA for memory below HSA_SIZE\n    - copy_oldmem_page() copies from real memory for memory above HSA_SIZE\n\nCurrently for s390 we create the ELF core header in the 2nd kernel with a\nsmall trick.  
We relocate the addresses in the ELF header in a way that\nfor the /proc/vmcore code it seems to be in the 1st kernel (old) memory\nand read_from_oldmem() returns the correct data.  This allows the\n/proc/vmcore code to use the ELF header in the 2nd kernel.\n\nThis patch:\n\nExchange the old mechanism with the new and much cleaner function call\noverride feature that now officially allows creating the ELF core header\nin the 2nd kernel.\n\nTo use the new feature the following functions have to be defined\nby the architecture backend code to read from new memory:\n\n * elfcorehdr_alloc: Allocate ELF header\n * elfcorehdr_free: Free the memory of the ELF header\n * elfcorehdr_read: Read from ELF header\n * elfcorehdr_read_notes: Read from ELF notes\n\nSigned-off-by: Michael Holzheu \u003cholzheu@linux.vnet.ibm.com\u003e\nAcked-by: Vivek Goyal \u003cvgoyal@redhat.com\u003e\nCc: HATAYAMA Daisuke \u003cd.hatayama@jp.fujitsu.com\u003e\nCc: Jan Willeke \u003cwilleke@de.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "131b2f9f1214f338f0bf7c0d9760019f2b1d0c20",
      "tree": "b60a498414e259fe4e81f210378538f90ada9224",
      "parents": [
        "5d1baf3b63bfc8c709dc44df85ff1475c7ef489d"
      ],
      "author": {
        "name": "Oleg Nesterov",
        "email": "oleg@redhat.com",
        "time": "Wed Sep 11 14:24:39 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:59:04 2013 -0700"
      },
      "message": "exec: kill \"int depth\" in search_binary_handler()\n\nNobody except search_binary_handler() should touch -\u003erecursion_depth, \"int\ndepth\" buys nothing but complicates the code, kill it.\n\nProbably we should also kill \"fn\" and the !NULL check, -\u003eload_binary\nshould be always defined.  And it can not go away after read_unlock() or\nthis code is buggy anyway.\n\nSigned-off-by: Oleg Nesterov \u003coleg@redhat.com\u003e\nAcked-by: Kees Cook \u003ckeescook@chromium.org\u003e\nCc: Al Viro \u003cviro@ZenIV.linux.org.uk\u003e\nCc: Evgeniy Polyakov \u003czbr@ioremap.net\u003e\nCc: Zach Levis \u003czml@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "af96397de8600232effbff43dc8b4ca20ddc02b1",
      "tree": "d236fe3b4d37d5439ee41497a0d179a0b7614883",
      "parents": [
        "c802d64a356b5cf349121ac4c5e005f037ce548d"
      ],
      "author": {
        "name": "Heiko Carstens",
        "email": "heiko.carstens@de.ibm.com",
        "time": "Wed Sep 11 14:24:13 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:52 2013 -0700"
      },
      "message": "kprobes: allow to specify custom allocator for insn caches\n\nThe current two insn slot caches both use module_alloc/module_free to\nallocate and free insn slot cache pages.\n\nFor s390 this is not sufficient since there is the need to allocate insn\nslots that are either within the vmalloc module area or within dma memory.\n\nTherefore add a mechanism which allows to specify an own allocator for an\nown insn slot cache.\n\nSigned-off-by: Heiko Carstens \u003cheiko.carstens@de.ibm.com\u003e\nAcked-by: Masami Hiramatsu \u003cmasami.hiramatsu.pt@hitachi.com\u003e\nCc: Ananth N Mavinakayanahalli \u003cananth@in.ibm.com\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nCc: Martin Schwidefsky \u003cschwidefsky@de.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c802d64a356b5cf349121ac4c5e005f037ce548d",
      "tree": "654c5af4d00a40eeaa576acc1aee238e7c8a8a87",
      "parents": [
        "ae79744975cb0b3b9c469fe1a05db37d2943c863"
      ],
      "author": {
        "name": "Heiko Carstens",
        "email": "heiko.carstens@de.ibm.com",
        "time": "Wed Sep 11 14:24:11 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:52 2013 -0700"
      },
      "message": "kprobes: unify insn caches\n\nThe current kpropes insn caches allocate memory areas for insn slots\nwith module_alloc().  The assumption is that the kernel image and module\narea are both within the same +/- 2GB memory area.\n\nThis however is not true for s390 where the kernel image resides within\nthe first 2GB (DMA memory area), but the module area is far away in the\nvmalloc area, usually somewhere close below the 4TB area.\n\nFor new pc relative instructions s390 needs insn slots that are within\n+/- 2GB of each area.  That way we can patch displacements of\npc-relative instructions within the insn slots just like x86 and\npowerpc.\n\nThe module area works already with the normal insn slot allocator,\nhowever there is currently no way to get insn slots that are within the\nfirst 2GB on s390 (aka DMA area).\n\nTherefore this patch set modifies the kprobes insn slot cache code in\norder to allow to specify a custom allocator for the insn slot cache\npages.  In addition architecure can now have private insn slot caches\nwithhout the need to modify common code.\n\nPatch 1 unifies and simplifies the current insn and optinsn caches\n        implementation. 
This is a preparation which allows adding more\n        insn caches in a simple way.\n\nPatch 2 adds the possibility to specify a custom allocator.\n\nPatch 3 makes s390 use the new insn slot mechanisms and adds support for\n        pc-relative instructions with long displacements.\n\nThis patch (of 3):\n\nThe two insn caches (insn and optinsn) each have their own mutex and\nalloc/free functions (get_[opt]insn_slot() / free_[opt]insn_slot()).\n\nSince there is the need for yet another insn cache which satisfies dma\nallocations on s390, unify and simplify the current implementation:\n\n- Move the per insn cache mutex into struct kprobe_insn_cache.\n- Move the alloc/free functions to kprobe.h so they are simply\n  wrappers for the generic __get_insn_slot/__free_insn_slot functions.\n  The implementation is done with a DEFINE_INSN_CACHE_OPS() macro\n  which provides the alloc/free functions for each cache if needed.\n- Move the struct kprobe_insn_cache to kprobe.h, which allows generating\n  architecture-specific insn slot caches outside of the core kprobes\n  code.\n\nSigned-off-by: Heiko Carstens \u003cheiko.carstens@de.ibm.com\u003e\nCc: Masami Hiramatsu \u003cmasami.hiramatsu.pt@hitachi.com\u003e\nCc: Ananth N Mavinakayanahalli \u003cananth@in.ibm.com\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nCc: Martin Schwidefsky \u003cschwidefsky@de.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f9597f24c089dcbddbd2d9e99fbf00df57fb70c6",
      "tree": "c7aa5c1ab542839a07bafff202dc4c68e8f3486f",
      "parents": [
        "e656a634118285142063527b2cd40c749036de82"
      ],
      "author": {
        "name": "Sergei Trofimovich",
        "email": "slyfox@gentoo.org",
        "time": "Wed Sep 11 14:23:28 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:25 2013 -0700"
      },
      "message": "syscalls.h: add forward declarations for inplace syscall wrappers\n\nUnclutter -Wmissing-prototypes warning types (enabled at make W\u003d1)\n\n    linux/include/linux/syscalls.h:190:18: warning: no previous prototype for \u0027SyS_semctl\u0027 [-Wmissing-prototypes]\n      asmlinkage long SyS##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \\\n                      ^\n    linux/include/linux/syscalls.h:183:2: note: in expansion of macro \u0027__SYSCALL_DEFINEx\u0027\n      __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)\n      ^\nby adding forward declarations right before definitions.\n\nSigned-off-by: Sergei Trofimovich \u003cslyfox@gentoo.org\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "bff2dc42bcafdd75c0296987747f782965d691a0",
      "tree": "3e921a8fc93d7bff9a5ac1d5221be9f9938447e4",
      "parents": [
        "081192b25c2d4620b5f5838620624d3daee94b66"
      ],
      "author": {
        "name": "David Daney",
        "email": "david.daney@cavium.com",
        "time": "Wed Sep 11 14:23:26 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:25 2013 -0700"
      },
      "message": "smp.h: move !SMP version of on_each_cpu() out-of-line\n\nAll of the other non-trivial !SMP versions of functions in smp.h are\nout-of-line in up.c.  Move on_each_cpu() there as well.\n\nThis allows us to get rid of the #include \u003clinux/irqflags.h\u003e.  The\ndrawback is that this makes both the x86_64 and i386 defconfig !SMP\nkernels about 200 bytes larger each.\n\nSigned-off-by: David Daney \u003cdavid.daney@cavium.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "fa688207c9db48b64ab6538abc3fcdf26110b9ec",
      "tree": "47fff6ebaa5b0b7d3feca64010051899e29db475",
      "parents": [
        "c14c338cb05c700a260480c197cfd6da8f8b7d2e"
      ],
      "author": {
        "name": "David Daney",
        "email": "david.daney@cavium.com",
        "time": "Wed Sep 11 14:23:24 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:23 2013 -0700"
      },
      "message": "smp: quit unconditionally enabling irq in on_each_cpu_mask and on_each_cpu_cond\n\nAs in commit f21afc25f9ed (\"smp.h: Use local_irq_{save,restore}() in\n!SMP version of on_each_cpu()\"), we don\u0027t want to enable irqs if they\nare not already enabled.  There are currently no known problematical\ncallers of these functions, but since it is a known failure pattern, we\npreemptively fix them.\n\nSince they are not trivial functions, make them non-inline by moving\nthem to up.c.  This also makes it so we don\u0027t have to fix #include\ndependancies for preempt_{disable,enable}.\n\nSigned-off-by: David Daney \u003cdavid.daney@cavium.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "f9121153fdfbfaa930bf65077a5597e20d3ac608",
      "tree": "a72e82c0c3394cf84b3ff8698134ff124a61b491",
      "parents": [
        "841fcc583f81c632d20a27e17beccb20320530a1"
      ],
      "author": {
        "name": "Wanpeng Li",
        "email": "liwanp@linux.vnet.ibm.com",
        "time": "Wed Sep 11 14:22:52 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:08 2013 -0700"
      },
      "message": "mm/hwpoison: don\u0027t need to hold compound lock for hugetlbfs page\n\ncompound lock is introduced by commit e9da73d67(\"thp: compound_lock.\"), it\nis used to serialize put_page against __split_huge_page_refcount().  In\naddition, transparent hugepages will be splitted in hwpoison handler and\njust one subpage will be poisoned.  There is unnecessary to hold compound\nlock for hugetlbfs page.  This patch replace compound_trans_order by\ncompond_order in the place where the page is hugetlbfs page.\n\nSigned-off-by: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nReviewed-by: Naoya Horiguchi \u003cn-horiguchi@ah.jp.nec.com\u003e\nCc: Andi Kleen \u003candi@firstfloor.org\u003e\nCc: Tony Luck \u003ctony.luck@intel.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5a53748568f79641eaf40e41081a2f4987f005c2",
      "tree": "929e07be4f378f96398110dce35a64b61e1505d7",
      "parents": [
        "4c3bffc272755c98728c2b58b1a8148cf9e9fd1f"
      ],
      "author": {
        "name": "Maxim Patlasov",
        "email": "mpatlasov@parallels.com",
        "time": "Wed Sep 11 14:22:46 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:04 2013 -0700"
      },
      "message": "mm/page-writeback.c: add strictlimit feature\n\nThe feature prevents mistrusted filesystems (ie: FUSE mounts created by\nunprivileged users) to grow a large number of dirty pages before\nthrottling.  For such filesystems balance_dirty_pages always check bdi\ncounters against bdi limits.  I.e.  even if global \"nr_dirty\" is under\n\"freerun\", it\u0027s not allowed to skip bdi checks.  The only use case for now\nis fuse: it sets bdi max_ratio to 1% by default and system administrators\nare supposed to expect that this limit won\u0027t be exceeded.\n\nThe feature is on if a BDI is marked by BDI_CAP_STRICTLIMIT flag.  A\nfilesystem may set the flag when it initializes its BDI.\n\nThe problematic scenario comes from the fact that nobody pays attention to\nthe NR_WRITEBACK_TEMP counter (i.e.  number of pages under fuse\nwriteback).  The implementation of fuse writeback releases original page\n(by calling end_page_writeback) almost immediately.  A fuse request queued\nfor real processing bears a copy of original page.  Hence, if userspace\nfuse daemon doesn\u0027t finalize write requests in timely manner, an\naggressive mmap writer can pollute virtually all memory by those temporary\nfuse page copies.  They are carefully accounted in NR_WRITEBACK_TEMP, but\nnobody cares.\n\nTo make further explanations shorter, let me use \"NR_WRITEBACK_TEMP\nproblem\" as a shortcut for \"a possibility of uncontrolled grow of amount\nof RAM consumed by temporary pages allocated by kernel fuse to process\nwriteback\".\n\nThe problem was very easy to reproduce.  There is a trivial example\nfilesystem implementation in fuse userspace distribution: fusexmp_fh.c.  I\nadded \"sleep(1);\" to the write methods, then recompiled and mounted it.\nThen created a huge file on the mount point and run a simple program which\nmmap-ed the file to a memory region, then wrote a data to the region.  An\nhour later I observed almost all RAM consumed by fuse writeback.  
Since\nthen, some unrelated changes in kernel fuse have made it more difficult to\nreproduce, but it is still possible now.\n\nPutting this theoretical happens-in-the-lab thing aside, there is another\nthing that really hurts real world (FUSE) users.  This is the write-through\npage cache policy FUSE currently uses.  I.e.  when handling write(2), kernel\nfuse populates the page cache and flushes user data to the server\nsynchronously.  This is excessively suboptimal.  Pavel Emelyanov\u0027s patches\n(\"writeback cache policy\") solve the problem, but they also make resolving the\nNR_WRITEBACK_TEMP problem absolutely necessary.  Otherwise, simply copying\na huge file to a fuse mount would result in memory starvation.  Miklos,\nthe maintainer of FUSE, believes the strictlimit feature is the way to go.\n\nAnd eventually putting FUSE topics aside, there is one more use-case for the\nstrictlimit feature.  Using a slow USB stick (mass storage) in a machine\nwith a huge amount of RAM installed is a well-known pain.  Let\u0027s make simple\ncomputations.  Assuming 64GB of RAM installed, the existing implementation of\nbalance_dirty_pages will start throttling only after 9.6GB of RAM becomes\ndirty (freerun \u003d\u003d 15% of total RAM).  So, the command \"cp 9GB_file\n/media/my-usb-storage/\" may return in a few seconds, but the subsequent\n\"umount /media/my-usb-storage/\" will take more than two hours if the effective\nthroughput of the storage is, say, 1MB/sec.\n\nAfter inclusion of the strictlimit feature, it will be trivial to add a knob\n(e.g.  /sys/devices/virtual/bdi/x:y/strictlimit) to enable it on demand,\nmanually or via udev rule.  
Maybe I\u0027m wrong, but it seems to be quite a\nnatural desire to limit the amount of dirty memory for some devices we do\nnot fully trust (in the sense of sustainable throughput).\n\n[akpm@linux-foundation.org: fix warning in page-writeback.c]\nSigned-off-by: Maxim Patlasov \u003cMPatlasov@parallels.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Miklos Szeredi \u003cmiklos@szeredi.hu\u003e\nCc: Wu Fengguang \u003cfengguang.wu@intel.com\u003e\nCc: Pavel Emelyanov \u003cxemul@parallels.com\u003e\nCc: James Bottomley \u003cJames.Bottomley@HansenPartnership.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7d9f073b8da45a894bb7148433bd84d21eed6757",
      "tree": "513aa8ce5502ba3f1c167c7b63774136fafaac8f",
      "parents": [
        "187320932dcece9c4b93f38f56d1f888bd5c325f"
      ],
      "author": {
        "name": "Wanpeng Li",
        "email": "liwanp@linux.vnet.ibm.com",
        "time": "Wed Sep 11 14:22:40 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:02 2013 -0700"
      },
      "message": "mm/writeback: make writeback_inodes_wb static\n\nIt\u0027s not used globally and could be static.\n\nSigned-off-by: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nCc: Dave Hansen \u003cdave.hansen@linux.intel.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nCc: Joonsoo Kim \u003ciamjoonsoo.kim@lge.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Yasuaki Ishimatsu \u003cisimatu.yasuaki@jp.fujitsu.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Jiri Kosina \u003cjkosina@suse.cz\u003e\nCc: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6e543d5780e36ff5ee56c44d7e2e30db3457a7ed",
      "tree": "094208c4caad9d0d766137c243d0cfe97a1ce0b9",
      "parents": [
        "7a8010cd36273ff5f6fea5201ef9232f30cebbd9"
      ],
      "author": {
        "name": "Lisa Du",
        "email": "cldu@marvell.com",
        "time": "Wed Sep 11 14:22:36 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:01 2013 -0700"
      },
      "message": "mm: vmscan: fix do_try_to_free_pages() livelock\n\nThis patch is based on KOSAKI\u0027s work and I add a little more description,\nplease refer https://lkml.org/lkml/2012/6/14/74.\n\nCurrently, I found system can enter a state that there are lots of free\npages in a zone but only order-0 and order-1 pages which means the zone is\nheavily fragmented, then high order allocation could make direct reclaim\npath\u0027s long stall(ex, 60 seconds) especially in no swap and no compaciton\nenviroment.  This problem happened on v3.4, but it seems issue still lives\nin current tree, the reason is do_try_to_free_pages enter live lock:\n\nkswapd will go to sleep if the zones have been fully scanned and are still\nnot balanced.  As kswapd thinks there\u0027s little point trying all over again\nto avoid infinite loop.  Instead it changes order from high-order to\n0-order because kswapd think order-0 is the most important.  Look at\n73ce02e9 in detail.  If watermarks are ok, kswapd will go back to sleep\nand may leave zone-\u003eall_unreclaimable \u003d3D 0.  It assume high-order users\ncan still perform direct reclaim if they wish.\n\nDirect reclaim continue to reclaim for a high order which is not a\nCOSTLY_ORDER without oom-killer until kswapd turn on\nzone-\u003eall_unreclaimble\u003d .  This is because to avoid too early oom-kill.\nSo it means direct_reclaim depends on kswapd to break this loop.\n\nIn worst case, direct-reclaim may continue to page reclaim forever when\nkswapd sleeps forever until someone like watchdog detect and finally kill\nthe process.  As described in:\nhttp://thread.gmane.org/gmane.linux.kernel.mm/103737\n\nWe can\u0027t turn on zone-\u003eall_unreclaimable from direct reclaim path because\ndirect reclaim path don\u0027t take any lock and this way is racy.  
Thus this\npatch removes the zone-\u003eall_unreclaimable field completely and recalculates\nthe zone reclaimable state every time.\n\nNote: we can\u0027t take the approach of having direct reclaim see zone-\u003epages_scanned\ndirectly while kswapd continues to use zone-\u003eall_unreclaimable, because it\nis racy.  commit 929bea7c71 (vmscan: all_unreclaimable() use\nzone-\u003eall_unreclaimable as a name) describes the detail.\n\n[akpm@linux-foundation.org: uninline zone_reclaimable_pages() and zone_reclaimable()]\nCc: Aaditya Kumar \u003caaditya.kumar.30@gmail.com\u003e\nCc: Ying Han \u003cyinghan@google.com\u003e\nCc: Nick Piggin \u003cnpiggin@gmail.com\u003e\nAcked-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Mel Gorman \u003cmel@csn.ul.ie\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Christoph Lameter \u003ccl@linux.com\u003e\nCc: Bob Liu \u003clliubbo@gmail.com\u003e\nCc: Neil Zhang \u003czhangwm@marvell.com\u003e\nCc: Russell King - ARM Linux \u003clinux@arm.linux.org.uk\u003e\nReviewed-by: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Minchan Kim \u003cminchan@kernel.org\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nSigned-off-by: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nSigned-off-by: Lisa Du \u003ccldu@marvell.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "7a8010cd36273ff5f6fea5201ef9232f30cebbd9",
      "tree": "3805f3d9a8a1f1c1c555ef31bc1bdb51fb51e33e",
      "parents": [
        "5b40998ae35cf64561868370e6c9f3d3e94b6bf7"
      ],
      "author": {
        "name": "Vlastimil Babka",
        "email": "vbabka@suse.cz",
        "time": "Wed Sep 11 14:22:35 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:58:01 2013 -0700"
      },
      "message": "mm: munlock: manual pte walk in fast path instead of follow_page_mask()\n\nCurrently munlock_vma_pages_range() calls follow_page_mask() to obtain\neach individual struct page.  This entails repeated full page table\ntranslations and page table lock taken for each page separately.\n\nThis patch avoids the costly follow_page_mask() where possible, by\niterating over ptes within single pmd under single page table lock.  The\nfirst pte is obtained by get_locked_pte() for non-THP page acquired by the\ninitial follow_page_mask().  The rest of the on-stack pagevec for munlock\nis filled up using pte_walk as long as pte_present() and vm_normal_page()\nare sufficient to obtain the struct page.\n\nAfter this patch, a 14% speedup was measured for munlocking a 56GB large\nmemory area with THP disabled.\n\nSigned-off-by: Vlastimil Babka \u003cvbabka@suse.cz\u003e\nCc: Jörn Engel \u003cjoern@logfs.org\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Michel Lespinasse \u003cwalken@google.com\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Vlastimil Babka \u003cvbabka@suse.cz\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d9104d1ca9662498339c0de975b4666c30485f4e",
      "tree": "cb95c72dde19930ca985b9834d604958ef4eecde",
      "parents": [
        "3b11f0aaae830f0f569cb8fb7fd26f4133ebdabd"
      ],
      "author": {
        "name": "Cyrill Gorcunov",
        "email": "gorcunov@gmail.com",
        "time": "Wed Sep 11 14:22:24 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:56 2013 -0700"
      },
      "message": "mm: track vma changes with VM_SOFTDIRTY bit\n\nPavel reported that in case if vma area get unmapped and then mapped (or\nexpanded) in-place, the soft dirty tracker won\u0027t be able to recognize this\nsituation since it works on pte level and ptes are get zapped on unmap,\nloosing soft dirty bit of course.\n\nSo to resolve this situation we need to track actions on vma level, there\nVM_SOFTDIRTY flag comes in.  When new vma area created (or old expanded)\nwe set this bit, and keep it here until application calls for clearing\nsoft dirty bit.\n\nThus when user space application track memory changes now it can detect if\nvma area is renewed.\n\nReported-by: Pavel Emelyanov \u003cxemul@parallels.com\u003e\nSigned-off-by: Cyrill Gorcunov \u003cgorcunov@openvz.org\u003e\nCc: Andy Lutomirski \u003cluto@amacapital.net\u003e\nCc: Matt Mackall \u003cmpm@selenic.com\u003e\nCc: Xiao Guangrong \u003cxiaoguangrong@linux.vnet.ibm.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@gmail.com\u003e\nCc: Stephen Rothwell \u003csfr@canb.auug.org.au\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: \"Aneesh Kumar K.V\" \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nCc: Rob Landley \u003crob@landley.net\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e76b63f80d938a1319eb5fb0ae7ea69bddfbae38",
      "tree": "4480ea31ebd4cbae35fcf7fa75c834ab06e39ffd",
      "parents": [
        "0bf598d863e3c741d47e3178d645f04c9d6c186c"
      ],
      "author": {
        "name": "Yinghai Lu",
        "email": "yinghai@kernel.org",
        "time": "Wed Sep 11 14:22:17 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:51 2013 -0700"
      },
      "message": "memblock, numa: binary search node id\n\nCurrent early_pfn_to_nid() on arch that support memblock go over\nmemblock.memory one by one, so will take too many try near the end.\n\nWe can use existing memblock_search to find the node id for given pfn,\nthat could save some time on bigger system that have many entries\nmemblock.memory array.\n\nHere are the timing differences for several machines.  In each case with\nthe patch less time was spent in __early_pfn_to_nid().\n\n                        3.11-rc5        with patch      difference (%)\n                        --------        ----------      --------------\nUV1: 256 nodes  9TB:     411.66          402.47         -9.19 (2.23%)\nUV2: 255 nodes 16TB:    1141.02         1138.12         -2.90 (0.25%)\nUV2:  64 nodes  2TB:     128.15          126.53         -1.62 (1.26%)\nUV2:  32 nodes  2TB:     121.87          121.07         -0.80 (0.66%)\n                        Time in seconds.\n\nSigned-off-by: Yinghai Lu \u003cyinghai@kernel.org\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: Russ Anderson \u003crja@sgi.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "83467efbdb7948146581a56cbd683a22a0684bbb",
      "tree": "8faaf4d713adcfd5875190ee23f0218212838f24",
      "parents": [
        "c8721bbbdd36382de51cd6b7a56322e0acca2414"
      ],
      "author": {
        "name": "Naoya Horiguchi",
        "email": "n-horiguchi@ah.jp.nec.com",
        "time": "Wed Sep 11 14:22:11 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:49 2013 -0700"
      },
      "message": "mm: migrate: check movability of hugepage in unmap_and_move_huge_page()\n\nCurrently hugepage migration works well only for pmd-based hugepages\n(mainly due to lack of testing,) so we had better not enable migration of\nother levels of hugepages until we are ready for it.\n\nSome users of hugepage migration (mbind, move_pages, and migrate_pages) do\npage table walk and check pud/pmd_huge() there, so they are safe.  But the\nother users (softoffline and memory hotremove) don\u0027t do this, so without\nthis patch they can try to migrate unexpected types of hugepages.\n\nTo prevent this, we introduce hugepage_migration_support() as an\narchitecture dependent check of whether hugepage are implemented on a pmd\nbasis or not.  And on some architecture multiple sizes of hugepages are\navailable, so hugepage_migration_support() also checks hugepage size.\n\nSigned-off-by: Naoya Horiguchi \u003cn-horiguchi@ah.jp.nec.com\u003e\nCc: Andi Kleen \u003cak@linux.intel.com\u003e\nCc: Hillf Danton \u003cdhillf@gmail.com\u003e\nCc: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: \"Aneesh Kumar K.V\" \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c8721bbbdd36382de51cd6b7a56322e0acca2414",
      "tree": "8fb7b55974defcde9a4b07f571f0dd2dd1ad591f",
      "parents": [
        "71ea2efb1e936a127690a0a540b3a6162f95e48a"
      ],
      "author": {
        "name": "Naoya Horiguchi",
        "email": "n-horiguchi@ah.jp.nec.com",
        "time": "Wed Sep 11 14:22:09 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:48 2013 -0700"
      },
      "message": "mm: memory-hotplug: enable memory hotplug to handle hugepage\n\nUntil now we can\u0027t offline memory blocks which contain hugepages because a\nhugepage is considered as an unmovable page.  But now with this patch\nseries, a hugepage has become movable, so by using hugepage migration we\ncan offline such memory blocks.\n\nWhat\u0027s different from other users of hugepage migration is that we need to\ndecompose all the hugepages inside the target memory block into free buddy\npages after hugepage migration, because otherwise free hugepages remaining\nin the memory block intervene the memory offlining.  For this reason we\nintroduce new functions dissolve_free_huge_page() and\ndissolve_free_huge_pages().\n\nOther than that, what this patch does is straightforwardly to add hugepage\nmigration code, that is, adding hugepage code to the functions which scan\nover pfn and collect hugepages to be migrated, and adding a hugepage\nallocation function to alloc_migrate_target().\n\nAs for larger hugepages (1GB for x86_64), it\u0027s not easy to do hotremove\nover them because it\u0027s larger than memory block.  So we now simply leave\nit to fail as it is.\n\n[yongjun_wei@trendmicro.com.cn: remove duplicated include]\nSigned-off-by: Naoya Horiguchi \u003cn-horiguchi@ah.jp.nec.com\u003e\nAcked-by: Andi Kleen \u003cak@linux.intel.com\u003e\nCc: Hillf Danton \u003cdhillf@gmail.com\u003e\nCc: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: \"Aneesh Kumar K.V\" \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nSigned-off-by: Wei Yongjun \u003cyongjun_wei@trendmicro.com.cn\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "71ea2efb1e936a127690a0a540b3a6162f95e48a",
      "tree": "a511e464a3c5efb48d7f31e38a97ea9f05660bfe",
      "parents": [
        "74060e4d78795c7c43805133cb717d82533d4e0d"
      ],
      "author": {
        "name": "Naoya Horiguchi",
        "email": "n-horiguchi@ah.jp.nec.com",
        "time": "Wed Sep 11 14:22:08 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:48 2013 -0700"
      },
      "message": "mm: migrate: remove VM_HUGETLB from vma flag check in vma_migratable()\n\nEnable hugepage migration from migrate_pages(2), move_pages(2), and\nmbind(2).\n\nSigned-off-by: Naoya Horiguchi \u003cn-horiguchi@ah.jp.nec.com\u003e\nAcked-by: Hillf Danton \u003cdhillf@gmail.com\u003e\nAcked-by: Andi Kleen \u003cak@linux.intel.com\u003e\nReviewed-by: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: \"Aneesh Kumar K.V\" \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "74060e4d78795c7c43805133cb717d82533d4e0d",
      "tree": "923febdc5b4565fbbcf05387d7cc423c72648695",
      "parents": [
        "e632a938d914d271bec26e570d36c755a1e35e4c"
      ],
      "author": {
        "name": "Naoya Horiguchi",
        "email": "n-horiguchi@ah.jp.nec.com",
        "time": "Wed Sep 11 14:22:06 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:48 2013 -0700"
      },
      "message": "mm: mbind: add hugepage migration code to mbind()\n\nExtend do_mbind() to handle vma with VM_HUGETLB set.  We will be able to\nmigrate hugepage with mbind(2) after applying the enablement patch which\ncomes later in this series.\n\nSigned-off-by: Naoya Horiguchi \u003cn-horiguchi@ah.jp.nec.com\u003e\nAcked-by: Andi Kleen \u003cak@linux.intel.com\u003e\nReviewed-by: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nAcked-by: Hillf Danton \u003cdhillf@gmail.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: \"Aneesh Kumar K.V\" \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "b8ec1cee5a4375c1244b85709138a2eac2d89cb6",
      "tree": "c3c548949ac53e1a66d891171d4b176f1d11538d",
      "parents": [
        "31caf665e666b51fe36efd1e54031ed29e86c0b4"
      ],
      "author": {
        "name": "Naoya Horiguchi",
        "email": "n-horiguchi@ah.jp.nec.com",
        "time": "Wed Sep 11 14:22:01 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:47 2013 -0700"
      },
      "message": "mm: soft-offline: use migrate_pages() instead of migrate_huge_page()\n\nCurrently migrate_huge_page() takes a pointer to a hugepage to be migrated\nas an argument, instead of taking a pointer to the list of hugepages to be\nmigrated.  This behavior was introduced in commit 189ebff28 (\"hugetlb:\nsimplify migrate_huge_page()\"), and was OK because until now hugepage\nmigration is enabled only for soft-offlining which migrates only one\nhugepage in a single call.\n\nBut the situation will change in the later patches in this series which\nenable other users of page migration to support hugepage migration.  They\ncan kick migration for both of normal pages and hugepages in a single\ncall, so we need to go back to original implementation which uses linked\nlists to collect the hugepages to be migrated.\n\nWith this patch, soft_offline_huge_page() switches to use migrate_pages(),\nand migrate_huge_page() is not used any more.  So let\u0027s remove it.\n\nSigned-off-by: Naoya Horiguchi \u003cn-horiguchi@ah.jp.nec.com\u003e\nAcked-by: Andi Kleen \u003cak@linux.intel.com\u003e\nReviewed-by: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nAcked-by: Hillf Danton \u003cdhillf@gmail.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: \"Aneesh Kumar K.V\" \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "31caf665e666b51fe36efd1e54031ed29e86c0b4",
      "tree": "e17452c7c698aade9946cd5557e3d999663e3f76",
      "parents": [
        "07443a85ad90c7b62fbe11dcd3d6a1de1e10516f"
      ],
      "author": {
        "name": "Naoya Horiguchi",
        "email": "n-horiguchi@ah.jp.nec.com",
        "time": "Wed Sep 11 14:21:59 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:46 2013 -0700"
      },
      "message": "mm: migrate: make core migration code aware of hugepage\n\nCurrently hugepage migration is available only for soft offlining, but\nit\u0027s also useful for some other users of page migration (clearly because\nusers of hugepage can enjoy the benefit of mempolicy and memory hotplug.)\nSo this patchset tries to extend such users to support hugepage migration.\n\nThe target of this patchset is to enable hugepage migration for NUMA\nrelated system calls (migrate_pages(2), move_pages(2), and mbind(2)), and\nmemory hotplug.\n\nThis patchset does not add hugepage migration for memory compaction,\nbecause users of memory compaction mainly expect to construct thp by\narranging raw pages, and there\u0027s little or no need to compact hugepages.\nCMA, another user of page migration, can have benefit from hugepage\nmigration, but is not enabled to support it for now (just because of lack\nof testing and expertise in CMA.)\n\nHugepage migration of non pmd-based hugepage (for example 1GB hugepage in\nx86_64, or hugepages in architectures like ia64) is not enabled for now\n(again, because of lack of testing.)\n\nAs for how these are achived, I extended the API (migrate_pages()) to\nhandle hugepage (with patch 1 and 2) and adjusted code of each caller to\ncheck and collect movable hugepages (with patch 3-7).  Remaining 2 patches\nare kind of miscellaneous ones to avoid unexpected behavior.  Patch 8 is\nabout making sure that we only migrate pmd-based hugepages.  And patch 9\nis about choosing appropriate zone for hugepage allocation.\n\nMy test is mainly functional one, simply kicking hugepage migration via\neach entry point and confirm that migration is done correctly.  Test code\nis available here:\n\n  git://github.com/Naoya-Horiguchi/test_hugepage_migration_extension.git\n\nAnd I always run libhugetlbfs test when changing hugetlbfs\u0027s code.  
With\nthis patchset, no regression was found in the test.\n\nThis patch (of 9):\n\nBefore enabling each user of page migration to support hugepage,\nthis patch enables the list of pages for migration to link not only\nLRU pages, but also hugepages. As a result, putback_movable_pages()\nand migrate_pages() can handle both of LRU pages and hugepages.\n\nSigned-off-by: Naoya Horiguchi \u003cn-horiguchi@ah.jp.nec.com\u003e\nAcked-by: Andi Kleen \u003cak@linux.intel.com\u003e\nReviewed-by: Wanpeng Li \u003cliwanp@linux.vnet.ibm.com\u003e\nAcked-by: Hillf Danton \u003cdhillf@gmail.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: \"Aneesh Kumar K.V\" \u003caneesh.kumar@linux.vnet.ibm.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "674470d97958a0ec72f72caf7f6451da40159cc7",
      "tree": "5085abf683ef3ac3f2dcf745b0d214dc70031582",
      "parents": [
        "eee87e1726af8c746f0e15ae6c57a97675f5e960"
      ],
      "author": {
        "name": "Joonyoung Shim",
        "email": "jy0922.shim@samsung.com",
        "time": "Wed Sep 11 14:21:43 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:35 2013 -0700"
      },
      "message": "lib/genalloc.c: fix overflow of ending address of memory chunk\n\nIn struct gen_pool_chunk, end_addr means the end address of memory chunk\n(inclusive), but in the implementation it is treated as address + size of\nmemory chunk (exclusive), so it points to the address plus one instead of\ncorrect ending address.\n\nThe ending address of memory chunk plus one will cause overflow on the\nmemory chunk including the last address of memory map, e.g.  when starting\naddress is 0xFFF00000 and size is 0x100000 on 32bit machine, ending\naddress will be 0x100000000.\n\nUse correct ending address like starting address + size - 1.\n\n[akpm@linux-foundation.org: add comment to struct gen_pool_chunk:end_addr]\nSigned-off-by: Joonyoung Shim \u003cjy0922.shim@samsung.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "2bb921e526656556e68f99f5f15a4a1bf2691844",
      "tree": "91b009a59938d7713de0781df9d5c0c2eacfc51f",
      "parents": [
        "d2cf5ad6312ca9913464fac40fb47ba47ad945c4"
      ],
      "author": {
        "name": "Christoph Lameter",
        "email": "cl@linux.com",
        "time": "Wed Sep 11 14:21:30 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:31 2013 -0700"
      },
      "message": "vmstat: create separate function to fold per cpu diffs into local counters\n\nThe main idea behind this patchset is to reduce the vmstat update overhead\nby avoiding interrupt enable/disable and the use of per cpu atomics.\n\nThis patch (of 3):\n\nIt is better to have a separate folding function because\nrefresh_cpu_vm_stats() also does other things like expire pages in the\npage allocator caches.\n\nIf we have a separate function then refresh_cpu_vm_stats() is only called\nfrom the local cpu which allows additional optimizations.\n\nThe folding function is only called when a cpu is being downed and\ntherefore no other processor will be accessing the counters.  Also\nsimplifies synchronization.\n\n[akpm@linux-foundation.org: fix UP build]\nSigned-off-by: Christoph Lameter \u003ccl@linux.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCC: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nCc: Alexey Dobriyan \u003cadobriyan@gmail.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "d2cf5ad6312ca9913464fac40fb47ba47ad945c4",
      "tree": "05590c6fed5ee9b86b65e1c23a899e921faeb040",
      "parents": [
        "bc4b4448dba660afc8df3790564320302d9709a1"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "iamjoonsoo.kim@lge.com",
        "time": "Wed Sep 11 14:21:29 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:31 2013 -0700"
      },
      "message": "swap: clean-up #ifdef in page_mapping()\n\nPageSwapCache() is always false when !CONFIG_SWAP, so compiler\nproperly discard related code. Therefore, we don\u0027t need #ifdef explicitly.\n\nSigned-off-by: Joonsoo Kim \u003ciamjoonsoo.kim@lge.com\u003e\nAcked-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "81c0a2bb515fd4daae8cab64352877480792b515",
      "tree": "5ef326d226fdd14332cd0e5382e6dd2759dd08e3",
      "parents": [
        "e085dbc52fad8d79fa2245339c84bf3ef0b3a802"
      ],
      "author": {
        "name": "Johannes Weiner",
        "email": "hannes@cmpxchg.org",
        "time": "Wed Sep 11 14:20:47 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:23 2013 -0700"
      },
      "message": "mm: page_alloc: fair zone allocator policy\n\nEach zone that holds userspace pages of one workload must be aged at a\nspeed proportional to the zone size.  Otherwise, the time an individual\npage gets to stay in memory depends on the zone it happened to be\nallocated in.  Asymmetry in the zone aging creates rather unpredictable\naging behavior and results in the wrong pages being reclaimed, activated\netc.\n\nBut exactly this happens right now because of the way the page allocator\nand kswapd interact.  The page allocator uses per-node lists of all zones\nin the system, ordered by preference, when allocating a new page.  When\nthe first iteration does not yield any results, kswapd is woken up and the\nallocator retries.  Due to the way kswapd reclaims zones below the high\nwatermark while a zone can be allocated from when it is above the low\nwatermark, the allocator may keep kswapd running while kswapd reclaim\nensures that the page allocator can keep allocating from the first zone in\nthe zonelist for extended periods of time.  Meanwhile the other zones\nrarely see new allocations and thus get aged much slower in comparison.\n\nThe result is that the occasional page placed in lower zones gets\nrelatively more time in memory, even gets promoted to the active list\nafter its peers have long been evicted.  Meanwhile, the bulk of the\nworking set may be thrashing on the preferred zone even though there may\nbe significant amounts of memory available in the lower zones.\n\nEven the most basic test -- repeatedly reading a file slightly bigger than\nmemory -- shows how broken the zone aging is.  
In this scenario, no single\npage should be able stay in memory long enough to get referenced twice and\nactivated, but activation happens in spades:\n\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 0\n      nr_inactive_file 0\n      nr_active_file 8\n      nr_inactive_file 1582\n      nr_active_file 11994\n  $ cat data data data data \u003e/dev/null\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 70\n      nr_inactive_file 258753\n      nr_active_file 443214\n      nr_inactive_file 149793\n      nr_active_file 12021\n\nFix this with a very simple round robin allocator.  Each zone is allowed a\nbatch of allocations that is proportional to the zone\u0027s size, after which\nit is treated as full.  The batch counters are reset when all zones have\nbeen tried and the allocator enters the slowpath and kicks off kswapd\nreclaim.  Allocation and reclaim is now fairly spread out to all\navailable/allowable zones:\n\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 0\n      nr_inactive_file 174\n      nr_active_file 4865\n      nr_inactive_file 53\n      nr_active_file 860\n  $ cat data data data data \u003e/dev/null\n  $ grep active_file /proc/zoneinfo\n      nr_inactive_file 0\n      nr_active_file 0\n      nr_inactive_file 666622\n      nr_active_file 4988\n      nr_inactive_file 190969\n      nr_active_file 937\n\nWhen zone_reclaim_mode is enabled, allocations will now spread out to all\nzones on the local node, not just the first preferred zone (which on a 4G\nnode might be a tiny Normal zone).\n\nSigned-off-by: Johannes Weiner \u003channes@cmpxchg.org\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nReviewed-by: Rik van Riel \u003criel@redhat.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: Paul Bolle \u003cpaul.bollee@gmail.com\u003e\nCc: Zlatko Calusic \u003czcalusic@bitsync.net\u003e\nTested-by: Kevin Hilman 
\u003ckhilman@linaro.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ebc2a1a69111eadfeda8487e577f1a5d42ef0dae",
      "tree": "8a1d08bc6c0a1eb7e1bcd93056141614c22a7d40",
      "parents": [
        "edfe23dac3e2981277087b05bec7fec7790d1835"
      ],
      "author": {
        "name": "Shaohua Li",
        "email": "shli@kernel.org",
        "time": "Wed Sep 11 14:20:32 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:17 2013 -0700"
      },
      "message": "swap: make cluster allocation per-cpu\n\nswap cluster allocation is to get better request merge to improve\nperformance.  But the cluster is shared globally, if multiple tasks are\ndoing swap, this will cause interleave disk access.  While multiple tasks\nswap is quite common, for example, each numa node has a kswapd thread\ndoing swap and multiple threads/processes doing direct page reclaim.\n\nioscheduler can\u0027t help too much here, because tasks don\u0027t send swapout IO\ndown to block layer in the meantime.  Block layer does merge some IOs, but\na lot not, depending on how many tasks are doing swapout concurrently.  In\npractice, I\u0027ve seen a lot of small size IO in swapout workloads.\n\nWe makes the cluster allocation per-cpu here.  The interleave disk access\nissue goes away.  All tasks swapout to their own cluster, so swapout will\nbecome sequential, which can be easily merged to big size IO.  If one CPU\ncan\u0027t get its per-cpu cluster (for example, there is no free cluster\nanymore in the swap), it will fallback to scan swap_map.  The CPU can\nstill continue swap.  We don\u0027t need recycle free swap entries of other\nCPUs.\n\nIn my test (swap to a 2-disk raid0 partition), this improves around 10%\nswapout throughput, and request size is increased significantly.\n\nHow does this impact swap readahead is uncertain though.  On one side,\npage reclaim always isolates and swaps several adjancent pages, this will\nmake page reclaim write the pages sequentially and benefit readahead.  On\nthe other side, several CPU write pages interleave means the pages don\u0027t\nlive _sequentially_ but relatively _near_.  In the per-cpu allocation\ncase, if adjancent pages are written by different cpus, they will live\nrelatively _far_.  So how this impacts swap readahead depends on how many\npages page reclaim isolates and swaps one time.  If the number is big,\nthis patch will benefit swap readahead.  
Of course, this is about the\nsequential access pattern.  The patch has no impact for the random access\npattern, because the new cluster allocation algorithm is just for SSD.\n\nAn alternative solution is organizing the swap layout to be per-mm instead\nof this per-cpu approach.  In the per-mm layout, we allocate a disk range\nfor each mm, so the pages of one mm live adjacently on the swap disk.  The\nper-mm layout has potential lock contention issues if multiple reclaimers\nare swapping pages from one mm.  For a sequential workload, the per-mm\nlayout is better for implementing swap readahead, because pages from the mm\nare adjacent on disk.  But the per-cpu layout isn\u0027t very bad in this\nworkload, as page reclaim always isolates and swaps several pages at one\ntime; such pages will still live on disk sequentially and readahead can\nutilize this.  For a random workload, the per-mm layout isn\u0027t beneficial\nfor request merging, because it\u0027s quite possible that pages from different\nmms are swapped out at the same time and the IO can\u0027t be merged in the\nper-mm layout, while with the per-cpu layout we can merge requests from any\nmm.  Considering that the random workload is more popular in workloads with\nswap (and the per-cpu approach isn\u0027t too bad for the sequential workload\neither), I\u0027m choosing the per-cpu layout.\n\n[akpm@linux-foundation.org: coding-style fixes]\nSigned-off-by: Shaohua Li \u003cshli@fusionio.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Kyungmin Park \u003ckmpark@infradead.org\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Rafael Aquini \u003caquini@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "815c2c543d3aeb914a361f981440ece552778724",
      "tree": "7d6f0de8493abbb08f0a42cb565087868b9eaeb4",
      "parents": [
        "2a8f9449343260373398d59228a62a4332ea513a"
      ],
      "author": {
        "name": "Shaohua Li",
        "email": "shli@kernel.org",
        "time": "Wed Sep 11 14:20:30 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:15 2013 -0700"
      },
      "message": "swap: make swap discard async\n\nswap can do cluster discard for SSD, which is good, but there are some\nproblems here:\n\n1. swap does the discard just before page reclaim gets a swap entry and\n   writes the disk sectors.  This is useless for high end SSD, because an\n   overwrite to a sector implies a discard to the original sector too.  A\n   discard + overwrite \u003d\u003d overwrite.\n\n2. the purpose of doing discard is to improve SSD firmware garbage\n   collection.  Ideally we should send discard as early as possible, so\n   firmware can do something smart.  Sending discard just after a swap entry\n   is freed is considered early compared to sending discard before write.\n   Of course, if the workload is already bound to gc speed, sending discard\n   earlier or later doesn\u0027t make a difference.\n\n3. block discard is a sync API, which will delay scan_swap_map()\n   significantly.\n\n4. Write and discard commands can be executed in parallel on a PCIe SSD.\n   Making swap discard async can make execution more efficient.\n\nThis patch makes swap discard async and moves discard to where the swap entry\nis freed.  Discard and write have no dependency now, so the above issues can\nbe avoided.  Ideally we should do discard for any freed sectors, but some\nSSD discard is very slow.  This patch still does discard for a whole\ncluster.\n\nMy test does several rounds of \u0027mmap, write, unmap\u0027, which will trigger a\nlot of swap discard.  In a fusionio card, with this patch, the test\nruntime is reduced to 18% of the time without it, so around 5.5x faster.\n\n[akpm@linux-foundation.org: coding-style fixes]\nSigned-off-by: Shaohua Li \u003cshli@fusionio.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Kyungmin Park \u003ckmpark@infradead.org\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Rafael Aquini \u003caquini@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "2a8f9449343260373398d59228a62a4332ea513a",
      "tree": "76c6ddf2a99d9dc7519585ba65c9883005908286",
      "parents": [
        "15ca220e1a63af06e000691e4ae1beaba5430c32"
      ],
      "author": {
        "name": "Shaohua Li",
        "email": "shli@kernel.org",
        "time": "Wed Sep 11 14:20:28 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:15 2013 -0700"
      },
      "message": "swap: change block allocation algorithm for SSD\n\nI\u0027m using a fast SSD to do swap.  scan_swap_map() sometimes uses up to\n20~30% CPU time (when a cluster is hard to find, the CPU time can be up to\n80%), which becomes a bottleneck.  scan_swap_map() scans a byte array to\nsearch for a 256-page cluster, which is very slow.\n\nHere I introduce a simple algorithm to search for a cluster.  Since we only\ncare about 256-page clusters, we can just use a counter to track whether a\ncluster is free.  Every 256 pages use one int to store the counter.  If\nthe counter of a cluster is 0, the cluster is free.  All free clusters\nwill be added to a list, so searching for a cluster is very efficient.  With\nthis, the scan_swap_map() overhead disappears.\n\nThis might help low end SD card swap too, because if the cluster is\naligned, SD firmware can do flash erase more efficiently.\n\nWe only enable the algorithm for SSD.  Hard disk swap isn\u0027t fast enough\nand has downsides with the algorithm which might introduce regressions (see\nbelow).\n\nThe patch slightly changes which cluster is chosen.  It always adds a free\ncluster to the list tail.  This can help wear leveling for low end SSD too.\nAnd if no free cluster is found, scan_swap_map() will search from the end\nof the last free cluster, which is random.  For SSD, this isn\u0027t a problem\nat all.\n\nAnother downside is the cluster must be aligned to 256 pages, which will\nreduce the chance of finding a cluster.  I would expect this isn\u0027t a big\nproblem for SSD because SSD has no seek penalty.  (And this is the reason\nI only enable the algorithm for SSD).\n\nSigned-off-by: Shaohua Li \u003cshli@fusionio.com\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Minchan Kim \u003cminchan@kernel.org\u003e\nCc: Kyungmin Park \u003ckmpark@infradead.org\u003e\nCc: Hugh Dickins \u003chughd@google.com\u003e\nCc: Rafael Aquini \u003caquini@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "6df46865ff8715932e7d42e52cac17e8461758cb",
      "tree": "7c7e1d43b22a2bec2d4a6fce95ddc3cbd481aa1e",
      "parents": [
        "9824cf9753ecbe8f5b47aa9b2f218207defea211"
      ],
      "author": {
        "name": "Dave Hansen",
        "email": "dave@sr71.net",
        "time": "Wed Sep 11 14:20:24 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:09 2013 -0700"
      },
      "message": "mm: vmstats: track TLB flush stats on UP too\n\nThe previous patch doing vmstats for TLB flushes (\"mm: vmstats: tlb flush\ncounters\") effectively missed UP since arch/x86/mm/tlb.c is only compiled\nfor SMP.\n\nUP systems do not do remote TLB flushes, so compile those counters out on\nUP.\n\narch/x86/kernel/cpu/mtrr/generic.c calls __flush_tlb() directly.  This is\nprobably an optimization since both the mtrr code and __flush_tlb() write\ncr4.  It would probably be safe to make that a flush_tlb_all() (and then\nget these statistics), but the mtrr code is ancient and I\u0027m hesitant to\ntouch it other than to just stick in the counters.\n\n[akpm@linux-foundation.org: tweak comments]\nSigned-off-by: Dave Hansen \u003cdave.hansen@linux.intel.com\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Ingo Molnar \u003cmingo@elte.hu\u003e\nCc: \"H. Peter Anvin\" \u003chpa@zytor.com\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9824cf9753ecbe8f5b47aa9b2f218207defea211",
      "tree": "5baf98172d2f6bedaf83487b04bbeb579be2ff18",
      "parents": [
        "822518dc56810a0de44cff0f85a227268818749c"
      ],
      "author": {
        "name": "Dave Hansen",
        "email": "dave@sr71.net",
        "time": "Wed Sep 11 14:20:23 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:08 2013 -0700"
      },
      "message": "mm: vmstats: tlb flush counters\n\nI was investigating some TLB flush scaling issues and realized that we do\nnot have any good methods for figuring out how many TLB flushes we are\ndoing.\n\nIt would be nice to be able to do these in generic code, but the\narch-independent calls don\u0027t explicitly specify whether we actually need\nto do remote flushes or not.  In the end, we really need to know if we\nactually _did_ global vs.  local invalidations, so that leaves us with few\noptions other than to muck with the counters from arch-specific code.\n\nSigned-off-by: Dave Hansen \u003cdave.hansen@linux.intel.com\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Ingo Molnar \u003cmingo@elte.hu\u003e\nCc: \"H. Peter Anvin\" \u003chpa@zytor.com\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ef0855d334e1e4af7c3e0c42146a8479ea14a5ab",
      "tree": "5955b0424bb392e1949acc0ad5066cb461bef867",
      "parents": [
        "c07303c0af38ffb1e5fd9b5ff37d0798298a7acf"
      ],
      "author": {
        "name": "Oleg Nesterov",
        "email": "oleg@redhat.com",
        "time": "Wed Sep 11 14:20:14 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:57:00 2013 -0700"
      },
      "message": "mm: mempolicy: turn vma_set_policy() into vma_dup_policy()\n\nSimple cleanup.  Every user of vma_set_policy() does the same work, which\nlooks a bit annoying imho.  Add a new trivial helper that does\nmpol_dup() + vma_set_policy() to simplify the callers.\n\nSigned-off-by: Oleg Nesterov \u003coleg@redhat.com\u003e\nCc: KOSAKI Motohiro \u003ckosaki.motohiro@jp.fujitsu.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Andi Kleen \u003candi@firstfloor.org\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "bab55417b10c95e6bff8cea315c315adfa009487",
      "tree": "56bfc578d47c7ea786bf6a35bf946f37e9b458b1",
      "parents": [
        "ed751e683c563be64322b9bfa0f0f7e5da9bd37c"
      ],
      "author": {
        "name": "Cai Zhiyong",
        "email": "caizhiyong@huawei.com",
        "time": "Wed Sep 11 14:20:09 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:56:57 2013 -0700"
      },
      "message": "block: support embedded device command line partition\n\nRead the block device partition table from the command line.  The partition\ntable is used for fixed block devices (eMMC) in embedded devices.  There is\nno MBR, which saves storage space.  The bootloader can easily access data on\nthe block device by absolute address, and users can easily change the\npartitions.\n\nThis code is modeled on the MTD partition parser, source\n\"drivers/mtd/cmdlinepart.c\".  For details on the partition syntax, see\n\"Documentation/block/cmdline-partition.txt\".\n\n[akpm@linux-foundation.org: fix printk text]\n[yongjun_wei@trendmicro.com.cn: fix error return code in parse_parts()]\nSigned-off-by: Cai Zhiyong \u003ccaizhiyong@huawei.com\u003e\nCc: Karel Zak \u003ckzak@redhat.com\u003e\nCc: \"Wanglin (Albert)\" \u003calbert.wanglin@huawei.com\u003e\nCc: Marius Groeger \u003cmag@sysgo.de\u003e\nCc: David Woodhouse \u003cdwmw2@infradead.org\u003e\nCc: Jens Axboe \u003caxboe@kernel.dk\u003e\nCc: Brian Norris \u003ccomputersforpeace@gmail.com\u003e\nCc: Artem Bityutskiy \u003cdedekind@infradead.org\u003e\nSigned-off-by: Wei Yongjun \u003cyongjun_wei@trendmicro.com.cn\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e1403b8edf669ff49bbdf602cc97fefa2760cb15",
      "tree": "ab4c68709afca445b444577e9b29b53ed72eee17",
      "parents": [
        "28e8be31803b19d0d8f76216cb11b480b8a98bec"
      ],
      "author": {
        "name": "Oleg Nesterov",
        "email": "oleg@redhat.com",
        "time": "Wed Sep 11 14:20:06 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:56:56 2013 -0700"
      },
      "message": "include/linux/sched.h: don\u0027t use task-\u003epid/tgid in same_thread_group/has_group_leader_pid\n\ntask_struct-\u003epid/tgid should go away.\n\n1. Change same_thread_group() to use task-\u003esignal for comparison.\n\n2. Change has_group_leader_pid(task) to compare task_pid(task) with\n   signal-\u003eleader_pid.\n\nSigned-off-by: Oleg Nesterov \u003coleg@redhat.com\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nCc: Sergey Dyasly \u003cdserrg@gmail.com\u003e\nReviewed-by: \"Eric W. Biederman\" \u003cebiederm@xmission.com\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Ingo Molnar \u003cmingo@elte.hu\u003e\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "3b8967d713d7426e9dd107d065208b84adface91",
      "tree": "f1df2b00ef08ac92636fdc4e7bc273963deec433",
      "parents": [
        "e831cbfc1ad843b5542cc45f777e1a00b73c0685"
      ],
      "author": {
        "name": "Andrew Morton",
        "email": "akpm@linux-foundation.org",
        "time": "Wed Sep 11 14:19:37 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Sep 11 15:56:19 2013 -0700"
      },
      "message": "include/linux/smp.h:on_each_cpu(): switch back to a C function\n\nRevert commit c846ef7deba2 (\"include/linux/smp.h:on_each_cpu(): switch\nback to a macro\").  It turns out that the problematic linux/irqflags.h\ninclude was fixed within ia64 and mn10300.\n\nCc: Geert Uytterhoeven \u003cgeert@linux-m68k.org\u003e\nCc: David Daney \u003cdavid.daney@cavium.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "0df03a30c333d67ee9b4c37f32d423624f48fe05",
      "tree": "c8f8304fb05926777defdc4607c95f91d340d6a3",
      "parents": [
        "0a733e6effb4a429551d8b000aa02750cc7e04ba",
        "6cdcdb793791f776ea9408581b1242b636d43b37"
      ],
      "author": {
        "name": "Rafael J. Wysocki",
        "email": "rafael.j.wysocki@intel.com",
        "time": "Wed Sep 11 15:23:15 2013 +0200"
      },
      "committer": {
        "name": "Rafael J. Wysocki",
        "email": "rafael.j.wysocki@intel.com",
        "time": "Wed Sep 11 15:23:15 2013 +0200"
      },
      "message": "Merge branch \u0027pm-cpufreq\u0027\n\n* pm-cpufreq:\n  intel_pstate: Add Haswell CPU models\n  Revert \"cpufreq: make sure frequency transitions are serialized\"\n  cpufreq: Use signed type for \u0027ret\u0027 variable, to store negative error values\n  cpufreq: Remove temporary fix for race between CPU hotplug and sysfs-writes\n  cpufreq: Synchronize the cpufreq store_*() routines with CPU hotplug\n  cpufreq: Invoke __cpufreq_remove_dev_finish() after releasing cpu_hotplug.lock\n  cpufreq: Split __cpufreq_remove_dev() into two parts\n  cpufreq: Fix wrong time unit conversion\n  cpufreq: serialize calls to __cpufreq_governor()\n  cpufreq: don\u0027t allow governor limits to be changed when it is disabled\n"
    },
    {
      "commit": "a22a0fdba4191473581f86c9dd5361cf581521d3",
      "tree": "ef5d3992f791641d6c8c16cee781f214fecbb105",
      "parents": [
        "bf83e61464803d386d0ec3fc92e5449d7963a409",
        "db15e6312efd537e2deb2cbad110c23f98704a3c"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 22:58:14 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 22:58:14 2013 -0700"
      },
      "message": "Merge tag \u0027for-v3.12\u0027 of git://git.infradead.org/battery-2.6\n\nPull battery/power supply driver updates from Anton Vorontsov:\n \"New drivers:\n\n   - APM X-Gene system reboot driver by Feng Kan and Loc Ho (APM).\n\n   - Qualcomm MSM reboot/poweroff driver by Abhimanyu Kapur (Codeaurora).\n\n   - Texas Instruments BQ24190 charger driver by Mark A.  Greer (Animal\n     Creek Technologies).\n\n   - Texas Instruments TWL4030 MADC battery driver by Lukas Märdian and\n     Marek Belisko (Golden Delicious Computers).  The driver is used on\n     Freerunner GTA04 phones.\n\n  Highlighted fixes and improvements:\n\n   - Suspend/wakeup logic improvements: power supply objects will block\n     system suspend until all power supply events are processed.  Thanks\n     to Zoran Markovic (Linaro), Arve Hjonnevag and Todd Poynor (Google)\"\n\n* tag \u0027for-v3.12\u0027 of git://git.infradead.org/battery-2.6:\n  rx51_battery: Fix channel number when reading adc value\n  power: Add twl4030_madc battery driver.\n  bq24190_charger: Workaround SS definition problem on i386 builds\n  power_supply: Prevent suspend until power supply events are processed\n  vexpress-poweroff: Should depend on the required infrastructure\n  twl4030-charger: Fix compiler warning with regulator_enable()\n  rx51_battery: Replace hardcoded channels values.\n  bq24190_charger: Add support for TI BQ24190 Battery Charger\n  ab8500-charger: We print an unintended error message\n  max8925_power: Fix missing of_node_put\n  power_supply: Replace strict_strtol() with kstrtol()\n  power: Add APM X-Gene system reboot driver\n  power_supply: tosa_battery: Get rid of irq_to_gpio usage\n  power supply: collie_battery: Convert to use dev_pm_ops\n  power_supply: Make goldfish_battery depend on GOLDFISH || COMPILE_TEST\n  power: reset: Add msm restart support\n  MAINTAINERS: drivers/power: add entry for SmartReflex AVS drivers\n"
    },
    {
      "commit": "fa1586a7e43760f0e25e72b2e3f97ee18b2be967",
      "tree": "ff202134366622b72a7d5d743f940b8f57182d7c",
      "parents": [
        "cf596766fc53bbfa0e2b21e3569932aa54f5f9ca",
        "01172772c7c973debf5b4881fcb9463891ea97ec"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 20:05:57 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 20:05:57 2013 -0700"
      },
      "message": "Merge branch \u0027drm-fixes\u0027 of git://people.freedesktop.org/~airlied/linux\n\nPull drm fixes from Dave Airlie:\n \"Daniel had some fixes queued up, that were delayed, the stolen memory\n  ones and vga arbiter ones are quite useful, along with his usual bunch\n  of stuff, nothing for HSW outputs yet.\n\n  The one nouveau fix is for a regression I caused with the poweroff stuff\"\n\n* \u0027drm-fixes\u0027 of git://people.freedesktop.org/~airlied/linux: (30 commits)\n  drm/nouveau: fix oops on runtime suspend/resume\n  drm/i915: Delay disabling of VGA memory until vgacon-\u003efbcon handoff is done\n  drm/i915: try not to lose backlight CBLV precision\n  drm/i915: Confine page flips to BCS on Valleyview\n  drm/i915: Skip stolen region initialisation if none is reserved\n  drm/i915: fix gpu hang vs. flip stall deadlocks\n  drm/i915: Hold an object reference whilst we shrink it\n  drm/i915: fix i9xx_crtc_clock_get for multiplied pixels\n  drm/i915: handle sdvo input pixel multiplier correctly again\n  drm/i915: fix hpd work vs. flush_work in the pageflip code deadlock\n  drm/i915: fix up the relocate_entry refactoring\n  drm/i915: Fix pipe config warnings when dealing with LVDS fixed mode\n  drm/i915: Don\u0027t call sg_free_table() if sg_alloc_table() fails\n  i915: Update VGA arbiter support for newer devices\n  vgaarb: Fix VGA decodes changes\n  vgaarb: Don\u0027t disable resources that are not owned\n  drm/i915: Pin pages whilst mapping the dma-buf\n  drm/i915: enable trickle feed on Haswell\n  x86: add early quirk for reserving Intel graphics stolen memory v5\n  drm/i915: split PCI IDs out into i915_drm.h v4\n  ...\n"
    },
    {
      "commit": "cf596766fc53bbfa0e2b21e3569932aa54f5f9ca",
      "tree": "6e88bae48c06f5b4a099989abb04178b939d2b24",
      "parents": [
        "516f7b3f2a7dbe93d3075e76a06bbfcd0c0ee4f7",
        "d4a516560fc96a9d486a9939bcb567e3fdce8f49"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 20:04:59 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 20:04:59 2013 -0700"
      },
      "message": "Merge branch \u0027nfsd-next\u0027 of git://linux-nfs.org/~bfields/linux\n\nPull nfsd updates from Bruce Fields:\n \"This was a very quiet cycle! Just a few bugfixes and some cleanup\"\n\n* \u0027nfsd-next\u0027 of git://linux-nfs.org/~bfields/linux:\n  rpc: let xdr layer allocate gssproxy receive pages\n  rpc: fix huge kmalloc\u0027s in gss-proxy\n  rpc: comment on linux_cred encoding, treat all as unsigned\n  rpc: clean up decoding of gssproxy linux creds\n  svcrpc: remove unused rq_resused\n  nfsd4: nfsd4_create_clid_dir prints uninitialized data\n  nfsd4: fix leak of inode reference on delegation failure\n  Revert \"nfsd: nfs4_file_get_access: need to be more careful with O_RDWR\"\n  sunrpc: prepare NFS for 2038\n  nfsd4: fix setlease error return\n  nfsd: nfs4_file_get_access: need to be more careful with O_RDWR\n"
    },
    {
      "commit": "5ca302c8e502ca53b7d75f12127ec0289904003a",
      "tree": "80a5b248c01fc3f33392a0b6ef14a2baab86cdb0",
      "parents": [
        "a0b02131c5fcd8545b867db72224b3659e813f10"
      ],
      "author": {
        "name": "Glauber Costa",
        "email": "glommer@openvz.org",
        "time": "Wed Aug 28 10:18:18 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:32 2013 -0400"
      },
      "message": "list_lru: dynamically adjust node arrays\n\nWe currently use a compile-time constant to size the node array for the\nlist_lru structure.  Due to this, we don\u0027t need to allocate any memory at\ninitialization time.  But as a consequence, the structures that contain\nembedded list_lru lists can become way too big (the superblock for\ninstance contains two of them).\n\nThis patch aims at ameliorating this situation by dynamically allocating\nthe node arrays with the firmware provided nr_node_ids.\n\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: Dave Chinner \u003cdchinner@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. 
Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "a0b02131c5fcd8545b867db72224b3659e813f10",
      "tree": "3ba5156965ca4625cd5a4ad78405180143eaf15c",
      "parents": [
        "70534a739c12b908789e27b08512d2615ba40f2f"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:18:16 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:32 2013 -0400"
      },
      "message": "shrinker: Kill old -\u003eshrink API.\n\nThere are no more users of this API, so kill it dead, dead, dead and\nquietly bury the corpse in a shallow, unmarked grave in a dark forest deep\nin the hills...\n\n[glommer@openvz.org: added flowers to the grave]\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nReviewed-by: Greg Thelen \u003cgthelen@google.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "9b17c62382dd2e7507984b9890bf44e070cdd8bb",
      "tree": "e64979ddbd8f0b6924f6b940fa15804490301908",
      "parents": [
        "1d3d4437eae1bb2963faab427f65f90663c64aa1"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:18:05 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:31 2013 -0400"
      },
      "message": "fs: convert inode and dentry shrinking to be node aware\n\nNow that the shrinker is passing a node in the scan control structure, we\ncan pass this to the generic LRU list code to isolate reclaim to the\nlists on matching nodes.\n\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@parallels.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "1d3d4437eae1bb2963faab427f65f90663c64aa1",
      "tree": "1a5aa2be9b9f260fcd5dbd70b5c4e540b177b3f3",
      "parents": [
        "0ce3d74450815500e31f16a0b65f6bab687985c3"
      ],
      "author": {
        "name": "Glauber Costa",
        "email": "glommer@openvz.org",
        "time": "Wed Aug 28 10:18:04 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:31 2013 -0400"
      },
      "message": "vmscan: per-node deferred work\n\nThe list_lru infrastructure already keeps per-node LRU lists in its\nnode-specific list_lru_node arrays and provides us with a per-node API, and\nthe shrinkers are properly equipped with node information.  This means that\nwe can now focus our shrinking effort on a single node, but the work that\nis deferred from one run to another is kept global in nr_in_batch.  Work\ncan be deferred, for instance, during direct reclaim under a GFP_NOFS\nallocation, in which situation all the filesystem shrinkers will be\nprevented from running and will accumulate in nr_in_batch the amount of work\nthey should have done, but could not.\n\nThis creates an impedance problem, where upon node pressure, deferred work\nwill accumulate and end up being flushed on other nodes.  The problem we\ndescribe is particularly harmful on big machines, where many nodes can\naccumulate at the same time, all adding to the global counter nr_in_batch.\nAs we accumulate more and more, we start to ask the caches to flush\neven bigger numbers.  The result is that the caches are depleted and do\nnot stabilize.  To achieve stable steady state behavior, we need to tackle\nit differently.\n\nIn this patch we keep the deferred count per-node, in the new array\nnr_deferred[] (the name is also a bit more descriptive) and will never\naccumulate it to other nodes.\n\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: Dave Chinner \u003cdchinner@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "0ce3d74450815500e31f16a0b65f6bab687985c3",
      "tree": "82c7a5a75958da8f44102276e862eaf325c5f0ce",
      "parents": [
        "4e717f5c1083995c334ced639cc77a75e9972567"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:18:03 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:31 2013 -0400"
      },
      "message": "shrinker: add node awareness\n\nPass the node of the current zone being reclaimed to shrink_slab(),\nallowing the shrinker control nodemask to be set appropriately for node\naware shrinkers.\n\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "4e717f5c1083995c334ced639cc77a75e9972567",
      "tree": "f236061b46b4401913652b167798210132d611ad",
      "parents": [
        "6a4f496fd2fc74fa036732ae52c184952d6e3e37"
      ],
      "author": {
        "name": "Glauber Costa",
        "email": "glommer@gmail.com",
        "time": "Wed Aug 28 10:18:03 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:31 2013 -0400"
      },
      "message": "list_lru: remove special case function list_lru_dispose_all.\n\nThe list_lru implementation has one function, list_lru_dispose_all, with\nonly one user (the dentry code).  At first, such a function appears to make\nsense because we are really not interested in the result of isolating each\ndentry separately - all of them are going away anyway.  However, its\nimplementation is buggy in the following way:\n\nWhen we call list_lru_dispose_all in fs/dcache.c, we scan all dentries\nmarking them with DCACHE_SHRINK_LIST.  However, this is done without the\nnlru-\u003elock taken.  The immediate result of that is that someone else may\nadd or remove the dentry from the LRU at the same time.  When list_lru_del\nhappens in that scenario we will see an element that is not yet marked\nwith DCACHE_SHRINK_LIST (even though it will be in the future) and\nobviously remove it from an lru where the element no longer is.  Since\nlist_lru_dispose_all will in effect count down nlru\u0027s nr_items and\nlist_lru_del will do the same, this will lead to an imbalance.\n\nThe solution for this would not be so simple: we can obviously just keep\nthe lru_lock taken, but then we have no guarantees that we will be able to\nacquire the dentry lock (dentry-\u003ed_lock).  To properly solve this, we need\na communication mechanism between the lru and dentry code, so they can\ncoordinate this with each other.\n\nSuch a mechanism already exists in the form of the list_lru_walk_cb\ncallback.  
So it is possible to construct a dcache-side prune function\nthat does the right thing only by calling list_lru_walk in a loop until no\nmore dentries are available.\n\nWith only one user, plus the fact that a sane solution for the problem\nwould involve bouncing between dcache and list_lru anyway, I see little\njustification to keep the special case list_lru_dispose_all in tree.\n\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: Michal Hocko \u003cmhocko@suse.cz\u003e\nAcked-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "6a4f496fd2fc74fa036732ae52c184952d6e3e37",
      "tree": "f0d68cd73062f87b54f070756775fd022fdf865e",
      "parents": [
        "5cedf721a7cdb54e9222133516c916210d836470"
      ],
      "author": {
        "name": "Glauber Costa",
        "email": "glommer@openvz.org",
        "time": "Wed Aug 28 10:18:02 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "list_lru: per-node API\n\nThis patch adapts the list_lru API to accept an optional node argument, to\nbe used by NUMA aware shrinking functions.  Code that does not care about\nthe NUMA placement of objects can still call into the very same functions\nas before.  They will simply iterate over all nodes.\n\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: Dave Chinner \u003cdchinner@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "3b1d58a4c96799eb4c92039e1b851b86f853548a",
      "tree": "3d72b6c0506c0a5138ef44dec8ab5c02fd5b29ba",
      "parents": [
        "f604156751db77e08afe47ce29fe8f3d51ad9b04"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:18:00 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "list_lru: per-node list infrastructure\n\nNow that we have an LRU list API, we can start to enhance the\nimplementation.  This splits the single LRU list into per-node lists and\nlocks to enhance scalability.  Items are placed on lists according to the\nnode the memory belongs to.  To make scanning the lists efficient, also\ntrack whether the per-node lists have entries in them in an active\nnodemask.\n\nNote: We use a fixed-size array for the node LRU; this struct can be very\nbig if MAX_NUMNODES is big.  If this becomes a problem it is fixable by\nturning this into a pointer and dynamically allocating it to\nnr_node_ids.  This quantity is firmware-provided, and still would provide\nroom for all nodes at the cost of a pointer lookup and an extra\nallocation.  Because that allocation will most likely come from a\ndifferent cache than the main structure, it may very well fail.\n\n[glommer@openvz.org: fix warnings, added note about node lru]\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nReviewed-by: Greg Thelen \u003cgthelen@google.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. 
Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "f604156751db77e08afe47ce29fe8f3d51ad9b04",
      "tree": "e0a109be920e4db54ac6384bebb2460aa1e309a9",
      "parents": [
        "d38fa6986e9124f827aa6ea4a9dde01e67a37be7"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:18:00 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "dcache: convert to use new lru list infrastructure\n\n[glommer@openvz.org: don\u0027t reintroduce double decrement of nr_unused_dentries, adapted for new LRU return codes]\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "bc3b14cb2d505dda969dbe3a31038dbb24aca945",
      "tree": "7890b246ee6cc7093f156bd44d2be215f2097f4b",
      "parents": [
        "a38e40824844a5ec85f3ea95632be953477d2afa"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:17:58 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "inode: convert inode lru list to generic lru list code.\n\n[glommer@openvz.org: adapted for new LRU return codes]\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "a38e40824844a5ec85f3ea95632be953477d2afa",
      "tree": "5f5df05ea253689cd515ef0ce47c6baf2210f094",
      "parents": [
        "0a234c6dcb79a270803f5c9773ed650b78730962"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:17:58 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "list: add a new LRU list type\n\nSeveral subsystems use the same construct for LRU lists - a list head, a\nspin lock and an item count.  They also use exactly the same code for\nadding and removing items from the LRU.  Create a generic type for these\nLRU lists.\n\nThis is the beginning of generic, node aware LRUs for shrinkers to work\nwith.\n\n[glommer@openvz.org: enum defined constants for lru. Suggested by gthelen, don\u0027t relock over retry]\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nReviewed-by: Greg Thelen \u003cgthelen@google.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. 
Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "0a234c6dcb79a270803f5c9773ed650b78730962",
      "tree": "8f93bd04d5c01a32dc78617c04dc770dc4b86883",
      "parents": [
        "24f7c6b981fb70084757382da464ea85d72af300"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:17:57 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "shrinker: convert superblock shrinkers to new API\n\nConvert superblock shrinker to use the new count/scan API, and propagate\nthe API changes through to the filesystem callouts.  The filesystem\ncallouts already use a count/scan API, so it\u0027s just changing counters to\nlongs to match the VM API.\n\nThis requires the dentry and inode shrinker callouts to be converted to\nthe count/scan API.  This is mainly a mechanical change.\n\n[glommer@openvz.org: use mult_frac for fractional proportions, build fixes]\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. 
Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "24f7c6b981fb70084757382da464ea85d72af300",
      "tree": "641ec828955f54b13641fadcee35b530989349a6",
      "parents": [
        "dd1f6b2e43a53ee58eb87d5e623cf44e277d005d"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:17:56 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "mm: new shrinker API\n\nThe current shrinker callout API uses a single shrinker call for\nmultiple functions.  To determine the function, a special magical value is\npassed in a parameter to change the behaviour.  This complicates the\nimplementation and return value specification for the different\nbehaviours.\n\nSeparate the two different behaviours into separate operations, one to\nreturn a count of freeable objects in the cache, and another to scan a\ncertain number of objects in the cache for freeing.  In defining these new\noperations, ensure the return values and resultant behaviours are clearly\ndefined and documented.\n\nModify shrink_slab() to use the new API and implement the callouts for all\nthe existing shrinkers.\n\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@parallels.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. 
Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "19156840e33a23eeb1a749c0f991dab6588b077d",
      "tree": "460675d21b0d6a5de3c179b951d18fec24e77cc8",
      "parents": [
        "62d36c77035219ac776d1882ed3a662f2b75f258"
      ],
      "author": {
        "name": "Dave Chinner",
        "email": "dchinner@redhat.com",
        "time": "Wed Aug 28 10:17:55 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:30 2013 -0400"
      },
      "message": "dentry: move to per-sb LRU locks\n\nWith the dentry LRUs being per-sb structures, there is no real need for\na global dentry_lru_lock. The locking can be made more fine-grained by\nmoving to a per-sb LRU lock, isolating the LRU operations of different\nfilesystems completely from each other. The need for this is independent\nof any performance consideration that may arise: in the interest of\nabstracting the lru operations away, it is mandatory that each lru works\naround its own lock instead of a global lock for all of them.\n\n[glommer@openvz.org: updated changelog]\nSigned-off-by: Dave Chinner \u003cdchinner@redhat.com\u003e\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nReviewed-by: Christoph Hellwig \u003chch@lst.de\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. 
Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "55f841ce9395a72c6285fbcc4c403c0c786e1c74",
      "tree": "d64933e4976ca3fe5a83e619ba6bdc96c5690438",
      "parents": [
        "3942c07ccf98e66b8893f396dca98f5b076f905f"
      ],
      "author": {
        "name": "Glauber Costa",
        "email": "glommer@openvz.org",
        "time": "Wed Aug 28 10:17:53 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:29 2013 -0400"
      },
      "message": "super: fix calculation of shrinkable objects for small numbers\n\nThe sysctl knob sysctl_vfs_cache_pressure is used to determine which\npercentage of the shrinkable objects in our cache we should actively try\nto shrink.\n\nIt works great in situations in which we have many objects (at least more\nthan 100), because the approximation errors will be negligible.  But if\nthis is not the case, especially when total_objects \u003c 100, we may end up\nconcluding that we have no objects at all (total / 100 \u003d 0, if total \u003c\n100).\n\nThis is certainly not the biggest killer in the world, but may matter in\nvery low kernel memory situations.\n\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nReviewed-by: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nAcked-by: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nAcked-by: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Dave Chinner \u003cdavid@fromorbit.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. 
Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "3942c07ccf98e66b8893f396dca98f5b076f905f",
      "tree": "063ec7aa542d9fa812482c02e2436205fe6a9e8e",
      "parents": [
        "da5338c7498556b760871661ffecb053cc6f708f"
      ],
      "author": {
        "name": "Glauber Costa",
        "email": "glommer@openvz.org",
        "time": "Wed Aug 28 10:17:53 2013 +1000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:29 2013 -0400"
      },
      "message": "fs: bump inode and dentry counters to long\n\nThis series reworks our current object cache shrinking infrastructure in\ntwo main ways:\n\n * Noticing that a lot of users copy and paste their own version of LRU\n   lists for objects, we put some effort in providing a generic version.\n   It is modeled after the filesystem users: dentries, inodes, and xfs\n   (for various tasks), but we expect that other users could benefit in\n   the near future with little or no modification.  Let us know if you\n   have any issues.\n\n * The underlying list_lru being proposed automatically and\n   transparently keeps the elements in per-node lists, and is able to\n   manipulate the node lists individually.  Given this infrastructure, we\n   are able to modify the up-to-now hammer called shrink_slab to proceed\n   with node-reclaim instead of always searching memory from all over like\n   it has been doing.\n\nPer-node lru lists are also expected to lead to less contention in the lru\nlocks on multi-node scans, since we are now no longer fighting for a\nglobal lock.  The locks usually disappear from the profilers with this\nchange.\n\nAlthough we have no official benchmarks for this version - be our guest to\nindependently evaluate this - earlier versions of this series were\nperformance tested (details at\nhttp://permalink.gmane.org/gmane.linux.kernel.mm/100537) yielding no\nvisible performance regressions while yielding a better qualitative\nbehavior in NUMA machines.\n\nWith this infrastructure in place, we can use the list_lru entry point to\nprovide memcg isolation and per-memcg targeted reclaim.  Historically,\nthose two pieces of work have been posted together.  This version presents\nonly the infrastructure work, deferring the memcg work for a later time,\nso we can focus on getting this part tested.  
You can see more about the\nhistory of such work at http://lwn.net/Articles/552769/\n\nDave Chinner (18):\n  dcache: convert dentry_stat.nr_unused to per-cpu counters\n  dentry: move to per-sb LRU locks\n  dcache: remove dentries from LRU before putting on dispose list\n  mm: new shrinker API\n  shrinker: convert superblock shrinkers to new API\n  list: add a new LRU list type\n  inode: convert inode lru list to generic lru list code.\n  dcache: convert to use new lru list infrastructure\n  list_lru: per-node list infrastructure\n  shrinker: add node awareness\n  fs: convert inode and dentry shrinking to be node aware\n  xfs: convert buftarg LRU to generic code\n  xfs: rework buffer dispose list tracking\n  xfs: convert dquot cache lru to list_lru\n  fs: convert fs shrinkers to new scan/count API\n  drivers: convert shrinkers to new count/scan API\n  shrinker: convert remaining shrinkers to count/scan API\n  shrinker: Kill old -\u003eshrink API.\n\nGlauber Costa (7):\n  fs: bump inode and dentry counters to long\n  super: fix calculation of shrinkable objects for small numbers\n  list_lru: per-node API\n  vmscan: per-node deferred work\n  i915: bail out earlier when shrinker cannot acquire mutex\n  hugepage: convert huge zero page shrinker to new shrinker API\n  list_lru: dynamically adjust node arrays\n\nThis patch:\n\nThere are situations in very large machines in which we can have a large\nquantity of dirty inodes, unused dentries, etc.  This is particularly true\nwhen umounting a filesystem, since every live object will\neventually be discarded.\n\nDave Chinner reported a problem with this while experimenting with the\nshrinker revamp patchset.  So we believe it is time for a change.  This\npatch just moves ints to longs.  
Machines where it matters should have a\nbig long anyway.\n\nSigned-off-by: Glauber Costa \u003cglommer@openvz.org\u003e\nCc: Dave Chinner \u003cdchinner@redhat.com\u003e\nCc: \"Theodore Ts\u0027o\" \u003ctytso@mit.edu\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Artem Bityutskiy \u003cartem.bityutskiy@linux.intel.com\u003e\nCc: Arve Hjønnevåg \u003carve@android.com\u003e\nCc: Carlos Maiolino \u003ccmaiolino@redhat.com\u003e\nCc: Christoph Hellwig \u003chch@lst.de\u003e\nCc: Chuck Lever \u003cchuck.lever@oracle.com\u003e\nCc: Daniel Vetter \u003cdaniel.vetter@ffwll.ch\u003e\nCc: Dave Chinner \u003cdchinner@redhat.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nCc: Gleb Natapov \u003cgleb@redhat.com\u003e\nCc: Greg Thelen \u003cgthelen@google.com\u003e\nCc: J. Bruce Fields \u003cbfields@redhat.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jerome Glisse \u003cjglisse@redhat.com\u003e\nCc: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: KAMEZAWA Hiroyuki \u003ckamezawa.hiroyu@jp.fujitsu.com\u003e\nCc: Kent Overstreet \u003ckoverstreet@google.com\u003e\nCc: Kirill A. Shutemov \u003ckirill.shutemov@linux.intel.com\u003e\nCc: Marcelo Tosatti \u003cmtosatti@redhat.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Steven Whitehouse \u003cswhiteho@redhat.com\u003e\nCc: Thomas Hellstrom \u003cthellstrom@vmware.com\u003e\nCc: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "aac34df11791d25417f7d756dc277b6f95996b47",
      "tree": "0a0becb7fdff62056c065b9223058f0285fd5bf5",
      "parents": [
        "b05430fc9341fea7a6228a3611c850a476809596"
      ],
      "author": {
        "name": "Christoph Hellwig",
        "email": "hch@infradead.org",
        "time": "Mon Sep 09 07:16:41 2013 -0700"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Tue Sep 10 18:56:29 2013 -0400"
      },
      "message": "fs: remove vfs_follow_link\n\nFor a long time no filesystem has been using vfs_follow_link, and as seen\nby recent filesystem submissions any new use is accidental as well.\n\nRemove vfs_follow_link, document the replacement in\nDocumentation/filesystems/porting and also rename __vfs_follow_link\nto match its only caller better.\n\nSigned-off-by: Christoph Hellwig \u003chch@lst.de\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "31f7c3a688f75bceaf2fd009efc489659ad6aa61",
      "tree": "1721765e553c01559d1b784563d4840c5d3dd0b9",
      "parents": [
        "ec5b103ecfde929004b691f29183255aeeadecd5",
        "2bc552df76d83cf1455ac8cf4c87615bfd15df74"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:53:52 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:53:52 2013 -0700"
      },
      "message": "Merge tag \u0027devicetree-for-linus\u0027 of git://git.secretlab.ca/git/linux\n\nPull device tree core updates from Grant Likely:\n \"Generally minor changes.  A bunch of bug fixes, particularly for\n  initialization and some refactoring.  Most notable change if feeding\n  the entire flattened tree into the random pool at boot.  May not be\n  significant, but shouldn\u0027t hurt either\"\n\nTim Bird questions whether the boot time cost of the random feeding may\nbe noticeable.  And \"add_device_randomness()\" is definitely not some\nspeed deamon of a function.\n\n* tag \u0027devicetree-for-linus\u0027 of git://git.secretlab.ca/git/linux:\n  of/platform: add error reporting to of_amba_device_create()\n  irq/of: Fix comment typo for irq_of_parse_and_map\n  of: Feed entire flattened device tree into the random pool\n  of/fdt: Clean up casting in unflattening path\n  of/fdt: Remove duplicate memory clearing on FDT unflattening\n  gpio: implement gpio-ranges binding document fix\n  of: call __of_parse_phandle_with_args from of_parse_phandle\n  of: introduce of_parse_phandle_with_fixed_args\n  of: move of_parse_phandle()\n  of: move documentation of of_parse_phandle_with_args\n  of: Fix missing memory initialization on FDT unflattening\n  of: consolidate definition of early_init_dt_alloc_memory_arch()\n  of: Make of_get_phy_mode() return int i.s.o. const int\n  include: dt-binding: input: create a DT header defining key codes.\n  of/platform: Staticize of_platform_device_create_pdata()\n  of: Specify initrd location using 64-bit\n  dt: Typo fix\n  OF: make of_property_for_each_{u32|string}() use parameters if OF is not enabled\n"
    },
    {
      "commit": "ec5b103ecfde929004b691f29183255aeeadecd5",
      "tree": "3b16d0654c074b5b36d06e56110c7218a8685655",
      "parents": [
        "d0048f0b91ee35ab940ec6cbdfdd238c55b12a14",
        "5622ff1a4dd7dcb1c09953d8066a4e7c4c350b2d"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:37:36 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:37:36 2013 -0700"
      },
      "message": "Merge branch \u0027for-linus\u0027 of git://git.infradead.org/users/vkoul/slave-dma\n\nPull slave-dmaengine updates from Vinod Koul:\n \"This pull brings:\n   - Andy\u0027s DW driver updates\n   - Guennadi\u0027s sh driver updates\n   - Pl08x driver fixes from Tomasz \u0026 Alban\n   - Improvements to mmp_pdma by Daniel\n   - TI EDMA fixes by Joel\n   - New drivers:\n     - Hisilicon k3dma driver\n     - Renesas rcar dma driver\n  - New API for publishing slave driver capablities\n  - Various fixes across the subsystem by Andy, Jingoo, Sachin etc...\"\n\n* \u0027for-linus\u0027 of git://git.infradead.org/users/vkoul/slave-dma: (94 commits)\n  dma: edma: Remove limits on number of slots\n  dma: edma: Leave linked to Null slot instead of DUMMY slot\n  dma: edma: Find missed events and issue them\n  ARM: edma: Add function to manually trigger an EDMA channel\n  dma: edma: Write out and handle MAX_NR_SG at a given time\n  dma: edma: Setup parameters to DMA MAX_NR_SG at a time\n  dmaengine: pl330: use dma_set_max_seg_size to set the sg limit\n  dmaengine: dma_slave_caps: remove sg entries\n  dma: replace devm_request_and_ioremap by devm_ioremap_resource\n  dma: ste_dma40: Fix potential null pointer dereference\n  dma: ste_dma40: Remove duplicate const\n  dma: imx-dma: Remove redundant NULL check\n  dma: dmagengine: fix function names in comments\n  dma: add driver for R-Car HPB-DMAC\n  dma: k3dma: use devm_ioremap_resource() instead of devm_request_and_ioremap()\n  dma: imx-sdma: Staticize sdma_driver_data structures\n  pch_dma: Add MODULE_DEVICE_TABLE\n  dmaengine: PL08x: Add cyclic transfer support\n  dmaengine: PL08x: Fix reading the byte count in cctl\n  dmaengine: PL08x: Add support for different maximum transfer size\n  ...\n"
    },
    {
      "commit": "d0048f0b91ee35ab940ec6cbdfdd238c55b12a14",
      "tree": "72914692414729a14ec1308c326d92359a3825a3",
      "parents": [
        "7426d62871dafbeeed087d609c6469a515c88389",
        "9d731e7539713acc0ec7b67a5a91357c455d2334"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:33:09 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:33:09 2013 -0700"
      },
      "message": "Merge tag \u0027mmc-updates-for-3.12-rc1\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc\n\nPull MMC updates from Chris Ball:\n \"MMC highlights for 3.12:\n\n  Core:\n   - Support Allocation Units 8MB-64MB in SD3.0, previous max was 4MB.\n   - The slot-gpio helper can now handle GPIO debouncing card-detect.\n   - Read supported voltages from DT \"voltage-ranges\" property.\n\n  Drivers:\n   - dw_mmc: Add support for ARC architecture, and support exynos5420.\n   - mmc_spi: Support CD/RO GPIOs.\n   - sh_mobile_sdhi: Add compatibility for more Renesas SoCs.\n   - sh_mmcif: Add DT support for DMA channels\"\n\n* tag \u0027mmc-updates-for-3.12-rc1\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc: (50 commits)\n  Revert \"mmc: tmio-mmc: Remove .set_pwr() callback from platform data\"\n  mmc: dw_mmc: Add support for ARC\n  mmc: sdhci-s3c: initialize host-\u003equirks2 for using quirks2\n  mmc: sdhci-s3c: fix the wrong register value, when clock is disabled\n  mmc: esdhc: add support to get voltage from device-tree\n  mmc: sdhci: get voltage from sdhc host\n  mmc: core: parse voltage from device-tree\n  mmc: omap_hsmmc: use the generic config for omap2plus devices\n  mmc: omap_hsmmc: clear status flags before starting a new command\n  mmc: dw_mmc: exynos: Add a new compatible string for exynos5420\n  mmc: sh_mmcif: revision-specific CLK_CTRL2 handling\n  mmc: sh_mmcif: revision-specific Command Completion Signal handling\n  mmc: sh_mmcif: add support for Device Tree DMA bindings\n  mmc: sh_mmcif: move header include from header into .c\n  mmc: SDHI: add DT compatibility strings for further SoCs\n  mmc: dw_mmc-pci: enable bus-mastering mode\n  mmc: dw_mmc-pci: get resources from a proper BAR\n  mmc: tmio-mmc: Remove .set_pwr() callback from platform data\n  mmc: tmio-mmc: Remove .get_cd() callback from platform data\n  mmc: sh_mobile_sdhi: Remove .set_pwr() callback from platform data\n  ...\n"
    },
    {
      "commit": "7426d62871dafbeeed087d609c6469a515c88389",
      "tree": "7d935f360eeb5e78ba633238a29e9213c291aad7",
      "parents": [
        "4d7696f1b05f4aeb586c74868fe3da2731daca4b",
        "7fff5e8f727285cf54e6aba10f31b196f207b98a"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:06:15 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:06:15 2013 -0700"
      },
      "message": "Merge tag \u0027dm-3.12-changes\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm\n\nPull device-mapper updates from Mike Snitzer:\n \"Add the ability to collect I/O statistics on user-defined regions of a\n  device-mapper device.  This dm-stats code required the reintroduction\n  of a div64_u64_rem() helper, but as a separate method that doesn\u0027t\n  slow down div64_u64() -- especially on 32-bit systems.\n\n  Allow the error target to replace request-based DM devices (e.g.\n  multipath) in addition to bio-based DM devices.\n\n  Various other small code fixes and improvements to thin-provisioning,\n  DM cache and the DM ioctl interface\"\n\n* tag \u0027dm-3.12-changes\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:\n  dm stripe: silence a couple sparse warnings\n  dm: add statistics support\n  dm thin: always return -ENOSPC if no_free_space is set\n  dm ioctl: cleanup error handling in table_load\n  dm ioctl: increase granularity of type_lock when loading table\n  dm ioctl: prevent rename to empty name or uuid\n  dm thin: set pool read-only if breaking_sharing fails block allocation\n  dm thin: prefix pool error messages with pool device name\n  dm: allow error target to replace bio-based and request-based targets\n  math64: New separate div64_u64_rem helper\n  dm space map: optimise sm_ll_dec and sm_ll_inc\n  dm btree: prefetch child nodes when walking tree for a dm_btree_del\n  dm btree: use pop_frame in dm_btree_del to cleanup code\n  dm cache: eliminate holes in cache structure\n  dm cache: fix stacking of geometry limits\n  dm thin: fix stacking of geometry limits\n  dm thin: add data block size limits to Documentation\n  dm cache: add data block size limits to code and Documentation\n  dm cache: document metadata device is exclussive to a cache\n  dm: stop using WQ_NON_REENTRANT\n"
    },
    {
      "commit": "4d7696f1b05f4aeb586c74868fe3da2731daca4b",
      "tree": "dd6cf4d41df2c0a1f52a85a3f8b8af5a9ebdeb5d",
      "parents": [
        "b05430fc9341fea7a6228a3611c850a476809596",
        "bfc90cb0936f5b972706625f38f72c7cb726c20a"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:03:41 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 13:03:41 2013 -0700"
      },
      "message": "Merge tag \u0027md/3.12\u0027 of git://neil.brown.name/md\n\nPull md update from Neil Brown:\n \"Headline item is multithreading for RAID5 so that more IO/sec can be\n  supported on fast (SSD) devices.  Also TILE-Gx SIMD suppor for RAID6\n  calculations and an assortment of bug fixes\"\n\n* tag \u0027md/3.12\u0027 of git://neil.brown.name/md:\n  raid5: only wakeup necessary threads\n  md/raid5: flush out all pending requests before proceeding with reshape.\n  md/raid5: use seqcount to protect access to shape in make_request.\n  raid5: sysfs entry to control worker thread number\n  raid5: offload stripe handle to workqueue\n  raid5: fix stripe release order\n  raid5: make release_stripe lockless\n  md: avoid deadlock when dirty buffers during md_stop.\n  md: Don\u0027t test all of mddev-\u003eflags at once.\n  md: Fix apparent cut-and-paste error in super_90_validate\n  raid6/test: replace echo -e with printf\n  RAID: add tilegx SIMD implementation of raid6\n  md: fix safe_mode buglet.\n  md: don\u0027t call md_allow_write in get_bitmap_file.\n"
    },
    {
      "commit": "b05430fc9341fea7a6228a3611c850a476809596",
      "tree": "91bd662d269a3478db78d6a04a34901f0cfe521b",
      "parents": [
        "d0d272771035a36a7839bb70ab6ebae3f4f4960b",
        "48f5ec21d9c67e881ff35343988e290ef5cf933f"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 12:44:24 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Sep 10 12:44:24 2013 -0700"
      },
      "message": "Merge branch \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs\n\nPull vfs pile 3 (of many) from Al Viro:\n \"Waiman\u0027s conversion of d_path() and bits related to it,\n  kern_path_mountpoint(), several cleanups and fixes (exportfs\n  one is -stable fodder, IMO).\n\n  There definitely will be more...  ;-/\"\n\n* \u0027for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:\n  split read_seqretry_or_unlock(), convert d_walk() to resulting primitives\n  dcache: Translating dentry into pathname without taking rename_lock\n  autofs4 - fix device ioctl mount lookup\n  introduce kern_path_mountpoint()\n  rename user_path_umountat() to user_path_mountpoint_at()\n  take unlazy_walk() into umount_lookup_last()\n  Kill indirect include of file.h from eventfd.h, use fdget() in cgroup.c\n  prune_super(): sb-\u003es_op is never NULL\n  exportfs: don\u0027t assume that -\u003eiterate() won\u0027t feed us too long entries\n  afs: get rid of redundant -\u003ed_name.len checks\n"
    },
    {
      "commit": "26b0332e30c7f93e780aaa054bd84e3437f84354",
      "tree": "e9cf240b67bf7eebae9fabbdba4e6a0fdfd359d7",
      "parents": [
        "640414171818c6293c23e74a28d1c69b2a1a7fe5",
        "4a43f394a08214eaf92cdd8ce3eae75e555323d8"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 18:07:15 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 18:07:15 2013 -0700"
      },
      "message": "Merge tag \u0027dmaengine-3.12\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/dmaengine\n\nPull dmaengine update from Dan Williams:\n \"Collection of random updates to the core and some end-driver fixups\n  for ioatdma and mv_xor:\n   - NUMA aware channel allocation\n   - Cleanup dmatest debugfs interface\n   - ioat: make raid-support Atom only\n   - mv_xor: big endian\n\n  Aside from the top three commits these have all had some soak time in\n  -next.  The top commit fixes a recent build breakage.\n\n  It has been a long while since my last pull request, hopefully it does\n  not show.  Thanks to Vinod for keeping an eye on drivers/dma/ this\n  past year\"\n\n* tag \u0027dmaengine-3.12\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/dmaengine:\n  dmaengine: dma_sync_wait and dma_find_channel undefined\n  MAINTAINERS: update email for Dan Williams\n  dma: mv_xor: Fix incorrect error path\n  ioatdma: silence GCC warnings\n  dmaengine: make dma_channel_rebalance() NUMA aware\n  dmaengine: make dma_submit_error() return an error code\n  ioatdma: disable RAID on non-Atom platforms and reenable unaligned copies\n  mv_xor: support big endian systems using descriptor swap feature\n  mv_xor: use {readl, writel}_relaxed instead of __raw_{readl, writel}\n  dmatest: print message on debug level in case of no error\n  dmatest: remove IS_ERR_OR_NULL checks of debugfs calls\n  dmatest: make module parameters writable\n"
    },
    {
      "commit": "798282a8718347b04a2f0a4bae7d775c48c6bcb9",
      "tree": "fd296df6b0ad8fb74de6aeedbaa83ca1cff941e0",
      "parents": [
        "5136fa56582beadb7fa71eb30bc79148bfcba5c1"
      ],
      "author": {
        "name": "Rafael J. Wysocki",
        "email": "rafael.j.wysocki@intel.com",
        "time": "Tue Sep 10 02:54:50 2013 +0200"
      },
      "committer": {
        "name": "Rafael J. Wysocki",
        "email": "rafael.j.wysocki@intel.com",
        "time": "Tue Sep 10 02:54:50 2013 +0200"
      },
      "message": "Revert \"cpufreq: make sure frequency transitions are serialized\"\n\nCommit 7c30ed5 (cpufreq: make sure frequency transitions are\nserialized) attempted to serialize frequency transitions by\nadding checks to the CPUFREQ_PRECHANGE and CPUFREQ_POSTCHANGE\nnotifications.  However, it assumed that the notifications will\nalways originate from the driver\u0027s .target() callback, but they\nalso can be triggered by cpufreq_out_of_sync() and that leads to\nwarnings like this on some systems:\n\n WARNING: CPU: 0 PID: 14543 at drivers/cpufreq/cpufreq.c:317\n __cpufreq_notify_transition+0x238/0x260()\n In middle of another frequency transition\n\naccompanied by a call trace similar to this one:\n\n [\u003cffffffff81720daa\u003e] dump_stack+0x46/0x58\n [\u003cffffffff8106534c\u003e] warn_slowpath_common+0x8c/0xc0\n [\u003cffffffff815b8560\u003e] ? acpi_cpufreq_target+0x320/0x320\n [\u003cffffffff81065436\u003e] warn_slowpath_fmt+0x46/0x50\n [\u003cffffffff815b1ec8\u003e] __cpufreq_notify_transition+0x238/0x260\n [\u003cffffffff815b33be\u003e] cpufreq_notify_transition+0x3e/0x70\n [\u003cffffffff815b345d\u003e] cpufreq_out_of_sync+0x6d/0xb0\n [\u003cffffffff815b370c\u003e] cpufreq_update_policy+0x10c/0x160\n [\u003cffffffff815b3760\u003e] ? 
cpufreq_update_policy+0x160/0x160\n [\u003cffffffff81413813\u003e] cpufreq_set_cur_state+0x8c/0xb5\n [\u003cffffffff814138df\u003e] processor_set_cur_state+0xa3/0xcf\n [\u003cffffffff8158e13c\u003e] thermal_cdev_update+0x9c/0xb0\n [\u003cffffffff8159046a\u003e] step_wise_throttle+0x5a/0x90\n [\u003cffffffff8158e21f\u003e] handle_thermal_trip+0x4f/0x140\n [\u003cffffffff8158e377\u003e] thermal_zone_device_update+0x57/0xa0\n [\u003cffffffff81415b36\u003e] acpi_thermal_check+0x2e/0x30\n [\u003cffffffff81415ca0\u003e] acpi_thermal_notify+0x40/0xdc\n [\u003cffffffff813e7dbd\u003e] acpi_device_notify+0x19/0x1b\n [\u003cffffffff813f8241\u003e] acpi_ev_notify_dispatch+0x41/0x5c\n [\u003cffffffff813e3fbe\u003e] acpi_os_execute_deferred+0x25/0x32\n [\u003cffffffff81081060\u003e] process_one_work+0x170/0x4a0\n [\u003cffffffff81082121\u003e] worker_thread+0x121/0x390\n [\u003cffffffff81082000\u003e] ? manage_workers.isra.20+0x170/0x170\n [\u003cffffffff81088fe0\u003e] kthread+0xc0/0xd0\n [\u003cffffffff81088f20\u003e] ? flush_kthread_worker+0xb0/0xb0\n [\u003cffffffff8173582c\u003e] ret_from_fork+0x7c/0xb0\n [\u003cffffffff81088f20\u003e] ? flush_kthread_worker+0xb0/0xb0\n\nFor this reason, revert commit 7c30ed5 along with the fix 266c13d\n(cpufreq: Fix serialization of frequency transitions) on top of it\nand we will revisit the serialization problem later.\n\nReported-by: Alessandro Bono \u003calessandro.bono@gmail.com\u003e\nSigned-off-by: Rafael J. Wysocki \u003crafael.j.wysocki@intel.com\u003e\n"
    },
    {
      "commit": "56d07db274b7b15ca38b60ea4a762d40de093000",
      "tree": "e0168e68e3957a3122b1ed8069799b53078a0b05",
      "parents": [
        "4f750c930822b92df74327a4d1364eff87701360"
      ],
      "author": {
        "name": "Srivatsa S. Bhat",
        "email": "srivatsa.bhat@linux.vnet.ibm.com",
        "time": "Sat Sep 07 01:23:55 2013 +0530"
      },
      "committer": {
        "name": "Rafael J. Wysocki",
        "email": "rafael.j.wysocki@intel.com",
        "time": "Tue Sep 10 02:49:47 2013 +0200"
      },
      "message": "cpufreq: Remove temporary fix for race between CPU hotplug and sysfs-writes\n\nCommit \"cpufreq: serialize calls to __cpufreq_governor()\" had been a temporary\nand partial solution to the race condition between writing to a cpufreq sysfs\nfile and taking a CPU offline. Now that we have a proper and complete solution\nto that problem, remove the temporary fix.\n\nSigned-off-by: Srivatsa S. Bhat \u003csrivatsa.bhat@linux.vnet.ibm.com\u003e\nSigned-off-by: Rafael J. Wysocki \u003crafael.j.wysocki@intel.com\u003e\n"
    },
    {
      "commit": "19c763031acb831a5ab9c1a701b7fedda073eb3f",
      "tree": "86ddfcb2266d1cc4946d7b24f2a6320277517cc2",
      "parents": [
        "f73d39338444d9915c746403bd98b145ff9d2ba4"
      ],
      "author": {
        "name": "Viresh Kumar",
        "email": "viresh.kumar@linaro.org",
        "time": "Sat Aug 31 17:48:23 2013 +0530"
      },
      "committer": {
        "name": "Rafael J. Wysocki",
        "email": "rafael.j.wysocki@intel.com",
        "time": "Tue Sep 10 02:49:46 2013 +0200"
      },
      "message": "cpufreq: serialize calls to __cpufreq_governor()\n\nWe can\u0027t take a big lock around __cpufreq_governor() as this causes\nrecursive locking for some cases. But calls to this routine must be\nserialized for every policy. Otherwise we can see some unpredictable\nevents.\n\nFor example, consider following scenario:\n\n__cpufreq_remove_dev()\n __cpufreq_governor(policy, CPUFREQ_GOV_STOP);\n   policy-\u003egovernor-\u003egovernor(policy, CPUFREQ_GOV_STOP);\n    cpufreq_governor_dbs()\n     case CPUFREQ_GOV_STOP:\n      mutex_destroy(\u0026cpu_cdbs-\u003etimer_mutex)\n      cpu_cdbs-\u003ecur_policy \u003d NULL;\n  \u003cPREEMPT\u003e\nstore()\n __cpufreq_set_policy()\n  __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);\n    policy-\u003egovernor-\u003egovernor(policy, CPUFREQ_GOV_LIMITS);\n     case CPUFREQ_GOV_LIMITS:\n      mutex_lock(\u0026cpu_cdbs-\u003etimer_mutex); \u003c-- Warning (destroyed mutex)\n       if (policy-\u003emax \u003c cpu_cdbs-\u003ecur_policy-\u003ecur) \u003c- cur_policy \u003d\u003d NULL\n\nAnd so store() will eventually result in a crash if cur_policy is\nNULL at this point.\n\nIntroduce an additional variable which would guarantee serialization\nhere.\n\nReported-by: Stephen Boyd \u003csboyd@codeaurora.org\u003e\nSigned-off-by: Viresh Kumar \u003cviresh.kumar@linaro.org\u003e\nSigned-off-by: Rafael J. Wysocki \u003crafael.j.wysocki@intel.com\u003e\n"
    },
    {
      "commit": "4a43f394a08214eaf92cdd8ce3eae75e555323d8",
      "tree": "d0393349b7823dcf715929bb158c1e5904de056f",
      "parents": [
        "ab5f8c6ee8af91a8829677f41c3f6afa9c00d48d"
      ],
      "author": {
        "name": "Jon Mason",
        "email": "jon.mason@intel.com",
        "time": "Mon Sep 09 16:51:59 2013 -0700"
      },
      "committer": {
        "name": "Dan Williams",
        "email": "dan.j.williams@intel.com",
        "time": "Mon Sep 09 17:02:38 2013 -0700"
      },
      "message": "dmaengine: dma_sync_wait and dma_find_channel undefined\n\ndma_sync_wait and dma_find_channel are declared regardless of whether\nCONFIG_DMA_ENGINE is enabled, but calling the function without\nCONFIG_DMA_ENGINE enabled results \"undefined reference\" errors.\n\nTo get around this, declare dma_sync_wait and dma_find_channel as inline\nfunctions if CONFIG_DMA_ENGINE is undefined.\n\nSigned-off-by: Jon Mason \u003cjon.mason@intel.com\u003e\nSigned-off-by: Dan Williams \u003cdan.j.williams@intel.com\u003e\n"
    },
    {
      "commit": "640414171818c6293c23e74a28d1c69b2a1a7fe5",
      "tree": "cb3b10578f0ae39eac2930ce3b2c8a1616f5ba70",
      "parents": [
        "fa91515cbf2375a64c8bd0a033a05b0859dff591",
        "a2bdc32a527e817fdfa6c56eaa6c70f217da6c6c"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 16:35:29 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 16:35:29 2013 -0700"
      },
      "message": "Merge tag \u0027late-for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc\n\nPull ARM SoC late changes from Kevin Hilman:\n \"These are changes that arrived a little late before the merge window,\n  or had dependencies on previous branches.\n\n  Highlights:\n   - ux500: misc.  cleanup, fixup I2C devices\n   - exynos: DT updates for RTC; PM updates\n   - at91: DT updates for NAND; new platforms added to generic defconfig\n   - sunxi: DT updates: cubieboard2, pinctrl driver, gated clocks\n   - highbank: LPAE fixes, select necessary ARM errata\n   - omap: PM fixes and improvements; OMAP5 mailbox support\n   - omap: basic support for new DRA7xx SoCs\"\n\n* tag \u0027late-for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (60 commits)\n  ARM: dts: vexpress: Add CCI node to TC2 device-tree\n  ARM: EXYNOS: Skip C1 cpuidle state for exynos5440\n  ARM: EXYNOS: always enable PM domains support for EXYNOS4X12\n  ARM: highbank: clean-up some unused includes\n  ARM: sun7i: Enable the A20 clocks in the DTSI\n  ARM: sun6i: Enable clock support in the DTSI\n  ARM: sun5i: dt: Use the A10s gates in the DTSI\n  ARM: at91: at91_dt_defconfig: enable rm9200 support\n  ARM: dts: add ADC device tree node for exynos5420/5250\n  ARM: dts: Add RTC DT node to Exynos5420 SoC\n  ARM: dts: Update the \"status\" property of RTC DT node for Exynos5250 SoC\n  ARM: dts: Fix the RTC DT node name for Exynos5250\n  irqchip: mmp: avoid to include irqs head file\n  ARM: mmp: avoid to include head file in mach-mmp\n  irqchip: mmp: support irqchip\n  irqchip: move mmp irq driver\n  ARM: OMAP: AM33xx: clock: Add RNG clock data\n  ARM: OMAP: TI81XX: add always-on powerdomain for TI81XX\n  ARM: OMAP4: clock: Lock PLLs in the right sequence\n  ARM: OMAP: AM33XX: hwmod: Add hwmod data for debugSS\n  ...\n"
    },
    {
      "commit": "a35c6322e52c550b61a04a44df27d22394ee0a2c",
      "tree": "da74b2167097281f38ddffcb13b7b43861ce931f",
      "parents": [
        "bef4a0ab984662d4ccd68d431a7c4ef3daebcb43",
        "158a71f83800f07c0da0f0159d2670bdf4bdd852"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 16:08:13 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 16:08:13 2013 -0700"
      },
      "message": "Merge tag \u0027drivers-for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc\n\nPull ARM SoC driver update from Kevin Hilman:\n \"This contains the ARM SoC related driver updates for v3.12.  The only\n  thing this cycle are core PM updates and CPUidle support for ARM\u0027s TC2\n  big.LITTLE development platform\"\n\n* tag \u0027drivers-for-linus\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:\n  cpuidle: big.LITTLE: vexpress-TC2 CPU idle driver\n  ARM: vexpress: tc2: disable GIC CPU IF in tc2_pm_suspend\n  drivers: irq-chip: irq-gic: introduce gic_cpu_if_down()\n"
    },
    {
      "commit": "bef4a0ab984662d4ccd68d431a7c4ef3daebcb43",
      "tree": "3f1a2797dbf2fde9235c47e023be929e32fa9265",
      "parents": [
        "7eb69529cbaf4229baf5559a400a7a46352c6e52",
        "12d298865ec5d0f14dd570c3506c270880769ed7"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 15:49:04 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 15:49:04 2013 -0700"
      },
      "message": "Merge tag \u0027clk-for-linus-3.12\u0027 of git://git.linaro.org/people/mturquette/linux\n\nPull clock framework changes from Michael Turquette:\n \"The common clk framework changes for 3.12 are dominated by clock\n  driver patches, both new drivers and fixes to existing.  A high\n  percentage of these are for Samsung platforms like Exynos.  Core\n  framework fixes and some new features like automagical clock\n  re-parenting round out the patches\"\n\n* tag \u0027clk-for-linus-3.12\u0027 of git://git.linaro.org/people/mturquette/linux: (102 commits)\n  clk: only call get_parent if there is one\n  clk: samsung: exynos5250: Simplify registration of PLL rate tables\n  clk: samsung: exynos4: Register PLL rate tables for Exynos4x12\n  clk: samsung: exynos4: Register PLL rate tables for Exynos4210\n  clk: samsung: exynos4: Reorder registration of mout_vpllsrc\n  clk: samsung: pll: Add support for rate configuration of PLL46xx\n  clk: samsung: pll: Use new registration method for PLL46xx\n  clk: samsung: pll: Add support for rate configuration of PLL45xx\n  clk: samsung: pll: Use new registration method for PLL45xx\n  clk: samsung: exynos4: Rename exynos4_plls to exynos4x12_plls\n  clk: samsung: exynos4: Remove checks for DT node\n  clk: samsung: exynos4: Remove unused static clkdev aliases\n  clk: samsung: Modify _get_rate() helper to use __clk_lookup()\n  clk: samsung: exynos4: Use separate aliases for cpufreq related clocks\n  clocksource: samsung_pwm_timer: Get clock from device tree\n  ARM: dts: exynos4: Specify PWM clocks in PWM node\n  pwm: samsung: Update DT bindings documentation to cover clocks\n  clk: Move symbol export to proper location\n  clk: fix new_parent dereference before null check\n  clk: wm831x: Initialise wm831x pointer on init\n  ...\n"
    },
    {
      "commit": "798ab48eecdf659df9ae0064ca5c62626c651827",
      "tree": "1b78a050ec898f2647ad3c58c67a10462246740f",
      "parents": [
        "6faaa85f375543ea0d49a27e953ed18aec05ae56"
      ],
      "author": {
        "name": "Kent Overstreet",
        "email": "kmo@daterainc.com",
        "time": "Fri Aug 16 22:04:37 2013 +0000"
      },
      "committer": {
        "name": "Nicholas Bellinger",
        "email": "nab@linux-iscsi.org",
        "time": "Mon Sep 09 14:29:15 2013 -0700"
      },
      "message": "idr: Percpu ida\n\nPercpu frontend for allocating ids. With percpu allocation (that works),\nit\u0027s impossible to guarantee it will always be possible to allocate all\nnr_tags - typically, some will be stuck on a remote percpu freelist\nwhere the current job can\u0027t get to them.\n\nWe do guarantee that it will always be possible to allocate at least\n(nr_tags / 2) tags - this is done by keeping track of which and how many\ncpus have tags on their percpu freelists. On allocation failure if\nenough cpus have tags that there could potentially be (nr_tags / 2) tags\nstuck on remote percpu freelists, we then pick a remote cpu at random to\nsteal from.\n\nNote that there\u0027s no cpu hotplug notifier - we don\u0027t care, because\nsteal_tags() will eventually get the down cpu\u0027s tags. We _could_ satisfy\nmore allocations if we had a notifier - but we\u0027ll still meet our\nguarantees and it\u0027s absolutely not a correctness issue, so I don\u0027t think\nit\u0027s worth the extra code.\n\nFrom akpm:\n\n    \"It looks OK to me (that\u0027s as close as I get to an ack :))\n\nv6 changes:\n  - Add #include \u003clinux/cpumask.h\u003e to include/linux/percpu_ida.h to\n    make alpha/arc builds happy (Fengguang)\n  - Move second (cpu \u003e\u003d nr_cpu_ids) check inside of first check scope\n    in steal_tags() (akpm + nab)\n\nv5 changes:\n  - Change percpu_ida-\u003ecpus_have_tags to cpumask_t (kmo + akpm)\n  - Add comment for percpu_ida_cpu-\u003elock + -\u003enr_free (kmo + akpm)\n  - Convert steal_tags() to use cpumask_weight() + cpumask_next() +\n    cpumask_first() + cpumask_clear_cpu() (kmo + akpm)\n  - Add comment for alloc_global_tags() (kmo + akpm)\n  - Convert percpu_ida_alloc() to use cpumask_set_cpu() (kmo + akpm)\n  - Convert percpu_ida_free() to use cpumask_set_cpu() (kmo + akpm)\n  - Drop percpu_ida-\u003ecpus_have_tags allocation in percpu_ida_init()\n    (kmo + akpm)\n  - Drop percpu_ida-\u003ecpus_have_tags kfree in 
percpu_ida_destroy()\n    (kmo + akpm)\n  - Add comment for percpu_ida_alloc @ gfp (kmo + akpm)\n  - Move to percpu_ida.c + percpu_ida.h (kmo + akpm + nab)\n\nv4 changes:\n\n  - Fix tags.c reference in percpu_ida_init (akpm)\n\nSigned-off-by: Kent Overstreet \u003ckmo@daterainc.com\u003e\nCc: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Oleg Nesterov \u003coleg@redhat.com\u003e\nCc: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: Andi Kleen \u003candi@firstfloor.org\u003e\nCc: Jens Axboe \u003caxboe@kernel.dk\u003e\nCc: \"Nicholas A. Bellinger\" \u003cnab@linux-iscsi.org\u003e\nSigned-off-by: Nicholas Bellinger \u003cnab@linux-iscsi.org\u003e\n"
    },
    {
      "commit": "300893b08f3bc7057a7a5f84074090ba66c8b5ca",
      "tree": "5fc5aef0b9dbab8e47e161303d57e631786c7d17",
      "parents": [
        "45150c43b1b0c16e665fd0a5cdcca128b8192db1",
        "1d03c6fa88af35e55047a1f2ab116f0fdf2f55aa"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 11:19:09 2013 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Mon Sep 09 11:19:09 2013 -0700"
      },
      "message": "Merge tag \u0027xfs-for-linus-v3.12-rc1\u0027 of git://oss.sgi.com/xfs/xfs\n\nPull xfs updates from Ben Myers:\n \"For 3.12-rc1 there are a number of bugfixes in addition to work to\n  ease usage of shared code between libxfs and the kernel, the rest of\n  the work to enable project and group quotas to be used simultaneously,\n  performance optimisations in the log and the CIL, directory entry file\n  type support, fixes for log space reservations, some spelling/grammar\n  cleanups, and the addition of user namespace support.\n\n   - introduce readahead to log recovery\n   - add directory entry file type support\n   - fix a number of spelling errors in comments\n   - introduce new Q_XGETQSTATV quotactl for project quotas\n   - add USER_NS support\n   - log space reservation rework\n   - CIL optimisations\n  - kernel/userspace libxfs rework\"\n\n* tag \u0027xfs-for-linus-v3.12-rc1\u0027 of git://oss.sgi.com/xfs/xfs: (112 commits)\n  xfs: XFS_MOUNT_QUOTA_ALL needed by userspace\n  xfs: dtype changed xfs_dir2_sfe_put_ino to xfs_dir3_sfe_put_ino\n  Fix wrong flag ASSERT in xfs_attr_shortform_getvalue\n  xfs: finish removing IOP_* macros.\n  xfs: inode log reservations are too small\n  xfs: check correct status variable for xfs_inobt_get_rec() call\n  xfs: inode buffers may not be valid during recovery readahead\n  xfs: check LSN ordering for v5 superblocks during recovery\n  xfs: btree block LSN escaping to disk uninitialised\n  XFS: Assertion failed: first \u003c\u003d last \u0026\u0026 last \u003c BBTOB(bp-\u003eb_length), file: fs/xfs/xfs_trans_buf.c, line: 568\n  xfs: fix bad dquot buffer size in log recovery readahead\n  xfs: don\u0027t account buffer cancellation during log recovery readahead\n  xfs: check for underflow in xfs_iformat_fork()\n  xfs: xfs_dir3_sfe_put_ino can be static\n  xfs: introduce object readahead to log recovery\n  xfs: Simplify xfs_ail_min() with list_first_entry_or_null()\n  xfs: Register hotcpu notifier after 
initialization\n  xfs: add xfs sb v4 support for dirent filetype field\n  xfs: Add write support for dirent filetype field\n  xfs: Add read-only support for dirent filetype field\n  ...\n"
    }
  ],
  "next": "ef9a61bef917e38f8e096f6df303329aed6cf467"
}
