)]}'
{
  "log": [
    {
      "commit": "1438ade5670b56d5386c220e1ad4b5a824a1e585",
      "tree": "3642109a131da8a00a39c409d746618b2c6db797",
      "parents": [
        "112202d9098aae2c36436e5178c0cf3ced423c7b"
      ],
      "author": {
        "name": "Konstantin Khlebnikov",
        "email": "khlebnikov@openvz.org",
        "time": "Thu Jan 24 16:36:31 2013 +0400"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Feb 19 10:09:13 2013 -0800"
      },
      "message": "workqueue: un-GPL function delayed_work_timer_fn()\n\ncommit d8e794dfd51c368ed3f686b7f4172830b60ae47b (\"workqueue: set\ndelayed_work-\u003etimer function on initialization\") exports function\ndelayed_work_timer_fn() only for GPL modules. This makes delayed-works\nunusable for non-GPL modules, because initialization macro now requires\nGPL symbol. For example schedule_delayed_work() available for non-GPL.\n\nSigned-off-by: Konstantin Khlebnikov \u003ckhlebnikov@openvz.org\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: stable@vger.kernel.org # 3.7\n"
    },
    {
      "commit": "112202d9098aae2c36436e5178c0cf3ced423c7b",
      "tree": "2297f17b2ba0c556173566560f33fe7a1b20a904",
      "parents": [
        "8d03ecfe471802d6afe97da97722b6924533aa82"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 13 19:29:12 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 13 19:29:12 2013 -0800"
      },
      "message": "workqueue: rename cpu_workqueue to pool_workqueue\n\nworkqueue has moved away from global_cwqs to worker_pools and with the\nscheduled custom worker pools, wforkqueues will be associated with\npools which don\u0027t have anything to do with CPUs.  The workqueue code\nwent through significant amount of changes recently and mass renaming\nisn\u0027t likely to hurt much additionally.  Let\u0027s replace \u0027cpu\u0027 with\n\u0027pool\u0027 so that it reflects the current design.\n\n* s/struct cpu_workqueue_struct/struct pool_workqueue/\n* s/cpu_wq/pool_wq/\n* s/cwq/pwq/\n\nThis patch is purely cosmetic.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "8d03ecfe471802d6afe97da97722b6924533aa82",
      "tree": "1178cacfdd36358665f9a4c6325329346b221dd0",
      "parents": [
        "1dd638149f1f9d7d7dbb32591d5c7c2a0ea36264"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 13 19:29:10 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 13 19:29:10 2013 -0800"
      },
      "message": "workqueue: reimplement is_chained_work() using current_wq_worker()\n\nis_chained_work() was added before current_wq_worker() and implemented\nits own ham-fisted way of finding out whether %current is a workqueue\nworker - it iterates through all possible workers.\n\nDrop the custom implementation and reimplement using\ncurrent_wq_worker().\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "1dd638149f1f9d7d7dbb32591d5c7c2a0ea36264",
      "tree": "454399689b5d5016eefbd9f12e39b2674a8ebb33",
      "parents": [
        "8594fade39d3ad02ef856b8c53b5d7cc538a55f5"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 13 19:29:07 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 13 19:29:07 2013 -0800"
      },
      "message": "workqueue: fix is_chained_work() regression\n\nc9e7cf273f (\"workqueue: move busy_hash from global_cwq to\nworker_pool\") incorrectly converted is_chained_work() to use\nget_gcwq() inside for_each_gcwq_cpu() while removing get_gcwq().\n\nAs cwq might not exist for all possible workqueue CPUs, @cwq can be\nNULL and the following cwq deferences can lead to oops.\n\nFix it by using for_each_cwq_cpu() instead, which is the better one to\nuse anyway as we only need to check pools that the wq is associated\nwith.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "8594fade39d3ad02ef856b8c53b5d7cc538a55f5",
      "tree": "7f14598186e3fbc5feb91b1c25905b51d106a104",
      "parents": [
        "54d5b7d079dffa74597715a892473b474babd5b5"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Thu Feb 07 13:14:20 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Feb 07 13:17:51 2013 -0800"
      },
      "message": "workqueue: pick cwq instead of pool in __queue_work()\n\nCurrently, __queue_work() chooses the pool to queue a work item to and\nthen determines cwq from the target wq and the chosen pool.  This is a\nbit backwards in that we can determine cwq first and simply use\ncwq-\u003epool.  This way, we can skip get_std_worker_pool() in queueing\npath which will be a hurdle when implementing custom worker pools.\n\nUpdate __queue_work() such that it chooses the target cwq and then use\ncwq-\u003epool instead of the other way around.  While at it, add missing\n{} in an if statement.\n\nThis patch doesn\u0027t introduce any functional changes.\n\ntj: The original patch had two get_cwq() calls - the first to\n    determine the pool by doing get_cwq(cpu, wq)-\u003epool and the second\n    to determine the matching cwq from get_cwq(pool-\u003ecpu, wq).\n    Updated the function such that it chooses cwq instead of pool and\n    removed the second call.  Rewrote the description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "54d5b7d079dffa74597715a892473b474babd5b5",
      "tree": "33aa61fc2a98acff099a2393665318328448e137",
      "parents": [
        "e19e397a85f33100bfa4210e256bec82fe22e167"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Thu Feb 07 13:14:20 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Feb 07 13:14:20 2013 -0800"
      },
      "message": "workqueue: make get_work_pool_id() cheaper\n\nget_work_pool_id() currently first obtains pool using get_work_pool()\nand then return pool-\u003eid.  For an off-queue work item, this involves\nobtaining pool ID from worker-\u003edata, performing idr_find() to find the\nmatching pool and then returning its pool-\u003eid which of course is the\nsame as the one which went into idr_find().\n\nJust open code WORK_STRUCT_CWQ case and directly return pool ID from\nwork-\u003edata.\n\ntj: The original patch dropped on-queue work item handling and renamed\n    the function to offq_work_pool_id().  There isn\u0027t much benefit in\n    doing so.  Handling it only requires a single if() and we need at\n    least BUG_ON(), which is also a branch, even if we drop on-queue\n    handling.  Open code WORK_STRUCT_CWQ case and keep the function in\n    line with get_work_pool().  Rewrote the description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "e19e397a85f33100bfa4210e256bec82fe22e167",
      "tree": "18b9b0f883561584027a0085586d4f31abcba213",
      "parents": [
        "1606283622689bdc460052b4a1281c36de13fe49"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:39:44 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Feb 07 13:14:20 2013 -0800"
      },
      "message": "workqueue: move nr_running into worker_pool\n\nAs nr_running is likely to be accessed from other CPUs during\ntry_to_wake_up(), it was kept outside worker_pool; however, while less\nfrequent, other fields in worker_pool are accessed from other CPUs\nfor, e.g., non-reentrancy check.  Also, with recent pool related\nchanges, accessing nr_running matching the worker_pool isn\u0027t as simple\nas it used to be.\n\nMove nr_running inside worker_pool.  Keep it aligned to cacheline and\ndefine CPU pools using DEFINE_PER_CPU_SHARED_ALIGNED().  This should\ngive at least the same cacheline behavior.\n\nget_pool_nr_running() is replaced with direct pool-\u003enr_running\naccesses.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Joonsoo Kim \u003cjs1304@gmail.com\u003e\n"
    },
    {
      "commit": "1606283622689bdc460052b4a1281c36de13fe49",
      "tree": "7e23128500a97cd006a9580e96583e681e0084a1",
      "parents": [
        "0b3dae68ac199fac224fea9a31907b44f0d257b3"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "message": "workqueue: cosmetic update in try_to_grab_pending()\n\nWith the recent is-work-queued-here test simplification, the nested\nif() in try_to_grab_pending() can be collapsed.  Collapse it.\n\nThis patch is purely cosmetic.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "0b3dae68ac199fac224fea9a31907b44f0d257b3",
      "tree": "909b0b1d33123c9e8cbd0117e5f42df12e3becde",
      "parents": [
        "4468a00fd9a274fe1b30c886370d662e4a439efb"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "message": "workqueue: simplify is-work-item-queued-here test\n\nCurrently, determining whether a work item is queued on a locked pool\ninvolves somewhat convoluted memory barrier dancing.  It goes like the\nfollowing.\n\n* When a work item is queued on a pool, work-\u003edata is updated before\n  work-\u003eentry is linked to the pending list with a wmb() inbetween.\n\n* When trying to determine whether a work item is currently queued on\n  a pool pointed to by work-\u003edata, it locks the pool and looks at\n  work-\u003eentry.  If work-\u003eentry is linked, we then do rmb() and then\n  check whether work-\u003edata points to the current pool.\n\nThis works because, work-\u003edata can only point to a pool if it\ncurrently is or were on the pool and,\n\n* If it currently is on the pool, the tests would obviously succeed.\n\n* It it left the pool, its work-\u003eentry was cleared under pool-\u003elock,\n  so if we\u0027re seeing non-empty work-\u003eentry, it has to be from the work\n  item being linked on another pool.  Because work-\u003edata is updated\n  before work-\u003eentry is linked with wmb() inbetween, work-\u003edata update\n  from another pool is guaranteed to be visible if we do rmb() after\n  seeing non-empty work-\u003eentry.  So, we either see empty work-\u003eentry\n  or we see updated work-\u003edata pointin to another pool.\n\nWhile this works, it\u0027s convoluted, to put it mildly.  
With recent\nupdates, it\u0027s now guaranteed that work-\u003edata points to cwq only while\nthe work item is queued and that updating work-\u003edata to point to cwq\nor back to pool is done under pool-\u003elock, so we can simply test\nwhether work-\u003edata points to cwq which is associated with the\ncurrently locked pool instead of the convoluted memory barrier\ndancing.\n\nThis patch replaces the memory barrier based \"are you still here,\nreally?\" test with much simpler \"does work-\u003edata points to me?\" test -\nif work-\u003edata points to a cwq which is associated with the currently\nlocked pool, the work item is guaranteed to be queued on the pool as\nwork-\u003edata can start and stop pointing to such cwq only under\npool-\u003elock and the start and stop coincide with queue and dequeue.\n\ntj: Rewrote the comments and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "4468a00fd9a274fe1b30c886370d662e4a439efb",
      "tree": "6ead9c97eea5cdb16cfd7fca3b80d1b184949e3e",
      "parents": [
        "60c057bca22285efefbba033624763a778f243bf"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "message": "workqueue: make work-\u003edata point to pool after try_to_grab_pending()\n\nWe plan to use work-\u003edata pointing to cwq as the synchronization\ninvariant when determining whether a given work item is on a locked\npool or not, which requires work-\u003edata pointing to cwq only while the\nwork item is queued on the associated pool.\n\nWith delayed_work updated not to overload work-\u003edata for target\nworkqueue recording, the only case where we still have off-queue\nwork-\u003edata pointing to cwq is try_to_grab_pending() which doesn\u0027t\nupdate work-\u003edata after stealing a queued work item.  There\u0027s no\nreason for try_to_grab_pending() to not update work-\u003edata to point to\nthe pool instead of cwq, like the normal execution does.\n\nThis patch adds set_work_pool_and_keep_pending() which makes\nwork-\u003edata point to pool instead of cwq but keeps the pending bit\nunlike set_work_pool_and_clear_pending() (surprise!).\n\nAfter this patch, it\u0027s guaranteed that only queued work items point to\ncwqs.\n\nThis patch doesn\u0027t introduce any visible behavior change.\n\ntj: Renamed the new helper function to match\n    set_work_pool_and_clear_pending() and rewrote the description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "60c057bca22285efefbba033624763a778f243bf",
      "tree": "8e469c390b5b60ad6b4d7c94bc07522f857032bc",
      "parents": [
        "038366c5cf23ae737b9f72169dd8ade2d105755b"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "message": "workqueue: add delayed_work-\u003ewq to simplify reentrancy handling\n\nTo avoid executing the same work item from multiple CPUs concurrently,\na work_struct records the last pool it was on in its -\u003edata so that,\non the next queueing, the pool can be queried to determine whether the\nwork item is still executing or not.\n\nA delayed_work goes through timer before actually being queued on the\ntarget workqueue and the timer needs to know the target workqueue and\nCPU.  This is currently achieved by modifying delayed_work-\u003ework.data\nsuch that it points to the cwq which points to the target workqueue\nand the last CPU the work item was on.  __queue_delayed_work()\nextracts the last CPU from delayed_work-\u003ework.data and then combines\nit with the target workqueue to create new work.data.\n\nThe only thing this rather ugly hack achieves is encoding the target\nworkqueue into delayed_work-\u003ework.data without using a separate field,\nwhich could be a trade off one can make; unfortunately, this entangles\nwork-\u003edata management between regular workqueue and delayed_work code\nby setting cwq pointer before the work item is actually queued and\nbecomes a hindrance for further improvements of work-\u003edata handling.\n\nThis can be easily made sane by adding a target workqueue field to\ndelayed_work.  While delayed_work is used widely in the kernel and\nthis does make it a bit larger (\u003c5%), I think this is the right\ntrade-off especially given the prospect of much saner handling of\nwork-\u003edata which currently involves quite tricky memory barrier\ndancing, and don\u0027t expect to see any measureable effect.\n\nAdd delayed_work-\u003ewq and drop the delayed_work-\u003ework.data overloading.\n\ntj: Rewrote the description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "038366c5cf23ae737b9f72169dd8ade2d105755b",
      "tree": "7119031b23ba32e02b3fc8621cc8666b41f95f8a",
      "parents": [
        "6be195886ac26abe0194ed1bc7a9224f8a97c310"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "message": "workqueue: make work_busy() test WORK_STRUCT_PENDING first\n\nCurrently, work_busy() first tests whether the work has a pool\nassociated with it and if not, considers it idle.  This works fine\neven for delayed_work.work queued on timer, as __queue_delayed_work()\nsets cwq on delayed_work.work - a queued delayed_work always has its\ncwq and thus pool associated with it.\n\nHowever, we\u0027re about to update delayed_work queueing and this won\u0027t\nhold.  Update work_busy() such that it tests WORK_STRUCT_PENDING\nbefore the associated pool.  This doesn\u0027t make any noticeable behavior\ndifference now.\n\nWith work_pending() test moved, the function read a lot better with\n\"if (!pool)\" test flipped to positive.  Flip it.\n\nWhile at it, lose the comment about now non-existent reentrant\nworkqueues.\n\ntj: Reorganized the function and rewrote the description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "6be195886ac26abe0194ed1bc7a9224f8a97c310",
      "tree": "a414324b9232efaa2fd8f1dc4a28d308aa5d99f5",
      "parents": [
        "706026c2141113886f61e1ad2738c9a7723ec69c"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Feb 06 18:04:53 2013 -0800"
      },
      "message": "workqueue: replace WORK_CPU_NONE/LAST with WORK_CPU_END\n\nNow that workqueue has moved away from gcwqs, workqueue no longer has\nthe need to have a CPU identifier indicating \"no cpu associated\" - we\nnow use WORK_OFFQ_POOL_NONE instead - and most uses of WORK_CPU_NONE\nare gone.\n\nThe only left usage is as the end marker for for_each_*wq*()\niterators, where the name WORK_CPU_NONE is confusing w/o actual\nWORK_CPU_NONE usages.  Similarly, WORK_CPU_LAST which equals\nWORK_CPU_NONE no longer makes sense.\n\nReplace both WORK_CPU_NONE and LAST with WORK_CPU_END.  This patch\ndoesn\u0027t introduce any functional difference.\n\ntj: s/WORK_CPU_LAST/WORK_CPU_END/ and rewrote the description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "706026c2141113886f61e1ad2738c9a7723ec69c",
      "tree": "c61ffa31567cf6b7536a3209503d498f22c6ace6",
      "parents": [
        "e6e380ed92555533740d5f670640f6f1868132de"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "message": "workqueue: post global_cwq removal cleanups\n\nRemove remaining references to gcwq.\n\n* __next_gcwq_cpu() steals __next_wq_cpu() name.  The original\n  __next_wq_cpu() became __next_cwq_cpu().\n\n* s/for_each_gcwq_cpu/for_each_wq_cpu/\n  s/for_each_online_gcwq_cpu/for_each_online_wq_cpu/\n\n* s/gcwq_mayday_timeout/pool_mayday_timeout/\n\n* s/gcwq_unbind_fn/wq_unbind_fn/\n\n* Drop references to gcwq in comments.\n\nThis patch doesn\u0027t introduce any functional changes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "e6e380ed92555533740d5f670640f6f1868132de",
      "tree": "fd24f4293e1c6fa9ab728c59ddc25d26146fd98e",
      "parents": [
        "a60dc39c016a65bfdbd05c43b3707962d5ed04c7"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "message": "workqueue: rename nr_running variables\n\nRename per-cpu and unbound nr_running variables such that they match\nthe pool variables.\n\nThis patch doesn\u0027t introduce any functional changes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "a60dc39c016a65bfdbd05c43b3707962d5ed04c7",
      "tree": "c16982dba52f5f83dc09817c37e86ec201f84c03",
      "parents": [
        "4e8f0a609677a25f504527e50981df146c5b3d08"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "message": "workqueue: remove global_cwq\n\nglobal_cwq is now nothing but a container for per-cpu standard\nworker_pools.  Declare the worker pools directly as\ncpu/unbound_std_worker_pools[] and remove global_cwq.\n\n* ____cacheline_aligned_in_smp moved from global_cwq to worker_pool.\n  This probably would have made sense even before this change as we\n  want each pool to be aligned.\n\n* get_gcwq() is replaced with std_worker_pools() which returns the\n  pointer to the standard pool array for a given CPU.\n\n* __alloc_workqueue_key() updated to use get_std_worker_pool() instead\n  of open-coding pool determination.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nv2: Joonsoo pointed out that it\u0027d better to align struct worker_pool\n    rather than the array so that every pool is aligned.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nCc: Joonsoo Kim \u003cjs1304@gmail.com\u003e\n"
    },
    {
      "commit": "4e8f0a609677a25f504527e50981df146c5b3d08",
      "tree": "e8fc37f309cf58c18bc2d0f0dfc00ccbadda7f4d",
      "parents": [
        "38db41d984f17938631420ff78160dda7f182d24"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "message": "workqueue: remove worker_pool-\u003egcwq\n\nThe only remaining user of pool-\u003egcwq is std_worker_pool_pri().\nReimplement it using get_gcwq() and remove worker_pool-\u003egcwq.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "38db41d984f17938631420ff78160dda7f182d24",
      "tree": "4591d50ecb7fe9749dc5d48b735d3f43aa0b80a7",
      "parents": [
        "a1056305fa98c7e13b38718658a8b07a5d926460"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:34 2013 -0800"
      },
      "message": "workqueue: replace for_each_worker_pool() with for_each_std_worker_pool()\n\nfor_each_std_worker_pool() takes @cpu instead of @gcwq.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "a1056305fa98c7e13b38718658a8b07a5d926460",
      "tree": "d20ce512fdd0e3f07d972d62ecc9cb357c3db69e",
      "parents": [
        "94cf58bb2907bd2702fce2266955e29ab5261f53"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: make freezing/thawing per-pool\n\nInstead of holding locks from both pools and then processing the pools\ntogether, make freezing/thwaing per-pool - grab locks of one pool,\nprocess it, release it and then proceed to the next pool.\n\nWhile this patch changes processing order across pools, order within\neach pool remains the same.  As each pool is independent, this\nshouldn\u0027t break anything.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "94cf58bb2907bd2702fce2266955e29ab5261f53",
      "tree": "32b7998f475bf41754c74e9e55c45213263c89df",
      "parents": [
        "d565ed6309300304de4a865a04adef07a85edc45"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: make hotplug processing per-pool\n\nInstead of holding locks from both pools and then processing the pools\ntogether, make hotplug processing per-pool - grab locks of one pool,\nprocess it, release it and then proceed to the next pool.\n\nrebind_workers() is updated to take and process @pool instead of @gcwq\nwhich results in a lot of de-indentation.  gcwq_claim_assoc_and_lock()\nand its counterpart are replaced with in-line per-pool locking.\n\nWhile this patch changes processing order across pools, order within\neach pool remains the same.  As each pool is independent, this\nshouldn\u0027t break anything.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "d565ed6309300304de4a865a04adef07a85edc45",
      "tree": "b79e83064232d5bbf47550b090d6b1e288e123fb",
      "parents": [
        "ec22ca5eab0bd225588c69ccd06b16504cb05adf"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: move global_cwq-\u003elock to worker_pool\n\nMove gcwq-\u003elock to pool-\u003elock.  The conversion is mostly\nstraight-forward.  Things worth noting are\n\n* In many places, this removes the need to use gcwq completely.  pool\n  is used directly instead.  get_std_worker_pool() is added to help\n  some of these conversions.  This also leaves get_work_gcwq() without\n  any user.  Removed.\n\n* In hotplug and freezer paths, the pools belonging to a CPU are often\n  processed together.  This patch makes those paths hold locks of all\n  pools, with highpri lock nested inside, to keep the conversion\n  straight-forward.  These nested lockings will be removed by\n  following patches.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "ec22ca5eab0bd225588c69ccd06b16504cb05adf",
      "tree": "3282a2b587235879c3f2d286896a003900ab6563",
      "parents": [
        "c9e7cf273fa1876dee8effdb201a6f65eefab3a7"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: move global_cwq-\u003ecpu to worker_pool\n\nMove gcwq-\u003ecpu to pool-\u003ecpu.  This introduces a couple places where\ngcwq-\u003epools[0].cpu is used.  These will soon go away as gcwq is\nfurther reduced.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "c9e7cf273fa1876dee8effdb201a6f65eefab3a7",
      "tree": "fab0d36f4cd595d1d4bc9fb091d323ea66a692e1",
      "parents": [
        "7c3eed5cd60d0f736516e6ade77d90c6255860bd"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: move busy_hash from global_cwq to worker_pool\n\nThere\u0027s no functional necessity for the two pools on the same CPU to\nshare the busy hash table.  It\u0027s also likely to be a bottleneck when\nimplementing pools with user-specified attributes.\n\nThis patch makes busy_hash per-pool.  The conversion is mostly\nstraight-forward.  Changes worth noting are,\n\n* Large block of changes in rebind_workers() is moving the block\n  inside for_each_worker_pool() as now there are separate hash tables\n  for each pool.  This changes the order of operations but doesn\u0027t\n  break anything.\n\n* Thre for_each_worker_pool() loops in gcwq_unbind_fn() are combined\n  into one.  This again changes the order of operaitons but doesn\u0027t\n  break anything.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "7c3eed5cd60d0f736516e6ade77d90c6255860bd",
      "tree": "bfc017307b98a4db8c919ba9fb53399189ecf0ad",
      "parents": [
        "9daf9e678d18585433a4ad90ec51a448e5fd054c"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: record pool ID instead of CPU in work-\u003edata when off-queue\n\nCurrently, when a work item is off-queue, work-\u003edata records the CPU\nit was last on, which is used to locate the last executing instance\nfor non-reentrance, flushing, etc.\n\nWe\u0027re in the process of removing global_cwq and making worker_pool the\ntop level abstraction.  This patch makes work-\u003edata point to the pool\nit was last associated with instead of CPU.\n\nAfter the previous WORK_OFFQ_POOL_CPU and worker_poo-\u003eid additions,\nthe conversion is fairly straight-forward.  WORK_OFFQ constants and\nfunctions are modified to record and read back pool ID instead.\nworker_pool_by_id() is added to allow looking up pool from ID.\nget_work_pool() replaces get_work_gcwq(), which is reimplemented using\nget_work_pool().  get_work_pool_id() replaces work_cpu().\n\nThis patch shouldn\u0027t introduce any observable behavior changes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "9daf9e678d18585433a4ad90ec51a448e5fd054c",
      "tree": "e21d85aa3280cabe420c8c8c992f59e11b4aab82",
      "parents": [
        "715b06b864c99a18cb8368dfb187da4f569788cd"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: add worker_pool-\u003eid\n\nAdd worker_pool-\u003eid which is allocated from worker_pool_idr.  This\nwill be used to record the last associated worker_pool in work-\u003edata.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "715b06b864c99a18cb8368dfb187da4f569788cd",
      "tree": "599ab1152a1c93b83d1be05aaeb370cac2e7e3eb",
      "parents": [
        "35b6bb63b8a288f90e07948867941a553b3d97bc"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: introduce WORK_OFFQ_CPU_NONE\n\nCurrently, when a work item is off queue, high bits of its data\nencodes the last CPU it was on.  This is scheduled to be changed to\npool ID, which will make it impossible to use WORK_CPU_NONE to\nindicate no association.\n\nThis patch limits the number of bits which are used for off-queue cpu\nnumber to 31 (so that the max fits in an int) and uses the highest\npossible value - WORK_OFFQ_CPU_NONE - to indicate no association.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "35b6bb63b8a288f90e07948867941a553b3d97bc",
      "tree": "275528f970a80c9bf403a66450808a006db65ba8",
      "parents": [
        "2464757086b4de0591738d5e30f069d068d70ec0"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: make GCWQ_FREEZING a pool flag\n\nMake GCWQ_FREEZING a pool flag POOL_FREEZING.  This patch doesn\u0027t\nchange locking - FREEZING on both pools of a CPU are set or clear\ntogether while holding gcwq-\u003elock.  It shouldn\u0027t cause any functional\ndifference.\n\nThis leaves gcwq-\u003eflags w/o any flags.  Removed.\n\nWhile at it, convert BUG_ON()s in freeze_workqueue_begin() and\nthaw_workqueues() to WARN_ON_ONCE().\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "2464757086b4de0591738d5e30f069d068d70ec0",
      "tree": "2e7994351d92c24fc20fdb38108a64342bef0daf",
      "parents": [
        "e34cdddb03bdfe98f20c58934fd4c45019f13ae5"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: make GCWQ_DISASSOCIATED a pool flag\n\nMake GCWQ_DISASSOCIATED a pool flag POOL_DISASSOCIATED.  This patch\ndoesn\u0027t change locking - DISASSOCIATED on both pools of a CPU are set\nor clear together while holding gcwq-\u003elock.  It shouldn\u0027t cause any\nfunctional difference.\n\nThis is part of an effort to remove global_cwq and make worker_pool\nthe top level abstraction, which in turn will help implementing worker\npools with user-specified attributes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "e34cdddb03bdfe98f20c58934fd4c45019f13ae5",
      "tree": "3c98a24a407e1f2794e06a48961a2b9da8e208ae",
      "parents": [
        "e2905b29122173b72b612c962b138e3fa07476b8"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:33 2013 -0800"
      },
      "message": "workqueue: use std_ prefix for the standard per-cpu pools\n\nThere are currently two worker pools per cpu (including the unbound\ncpu) and they are the only pools in use.  New class of pools are\nscheduled to be added and some pool related APIs will be added\ninbetween.  Call the existing pools the standard pools and prefix them\nwith std_.  Do this early so that new APIs can use std_ prefix from\nthe beginning.\n\nThis patch doesn\u0027t introduce any functional difference.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "e2905b29122173b72b612c962b138e3fa07476b8",
      "tree": "66d7a8545f1fda113962c4a817673ff5b453c336",
      "parents": [
        "84b233adcca3cacd5cfa8013a5feda7a3db4a9af"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:32 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 24 11:01:32 2013 -0800"
      },
      "message": "workqueue: unexport work_cpu()\n\nThis function no longer has any external users.  Unexport it.  It will\nbe removed later on.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReviewed-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "2eaebdb33e1911c0cf3d44fd3596c42c6f502fab",
      "tree": "240924aae7c1ce31dc850a290ef53e268f071ebd",
      "parents": [
        "ea138446e51f7bfe55cdeffa3f1dd9cafc786bd8"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Jan 18 14:05:55 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Jan 18 14:05:55 2013 -0800"
      },
      "message": "workqueue: move struct worker definition to workqueue_internal.h\n\nThis will be used to implement an inline function to query whether\n%current is a workqueue worker and, if so, allow determining which\nwork item it\u0027s executing.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "ea138446e51f7bfe55cdeffa3f1dd9cafc786bd8",
      "tree": "a441a0546a062817946eb1c28f7d2f9cdaf6062a",
      "parents": [
        "111c225a5f8d872bc9327ada18d13b75edaa34be"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Jan 18 14:05:55 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Jan 18 14:05:55 2013 -0800"
      },
      "message": "workqueue: rename kernel/workqueue_sched.h to kernel/workqueue_internal.h\n\nWorkqueue wants to expose more interface internal to kernel/.  Instead\nof adding a new header file, repurpose kernel/workqueue_sched.h.\nRename it to workqueue_internal.h and add include protector.\n\nThis patch doesn\u0027t introduce any functional changes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\n"
    },
    {
      "commit": "111c225a5f8d872bc9327ada18d13b75edaa34be",
      "tree": "8bb9e31b8345f67c50f5370e6ba03f613afd5b65",
      "parents": [
        "023f27d3d6fcc9048754d879fe5e7d63402a5b16"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 17 17:16:24 2013 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Jan 17 17:19:58 2013 -0800"
      },
      "message": "workqueue: set PF_WQ_WORKER on rescuers\n\nPF_WQ_WORKER is used to tell scheduler that the task is a workqueue\nworker and needs wq_worker_sleeping/waking_up() invoked on it for\nconcurrency management.  As rescuers never participate in concurrency\nmanagement, PF_WQ_WORKER wasn\u0027t set on them.\n\nThere\u0027s a need for an interface which can query whether %current is\nexecuting a work item and if so which.  Such interface requires a way\nto identify all tasks which may execute work items and PF_WQ_WORKER\nwill be used for that.  As all normal workers always have PF_WQ_WORKER\nset, we only need to add it to rescuers.\n\nAs rescuers start with WORKER_PREP but never clear it, it\u0027s always\nNOT_RUNNING and there\u0027s no need to worry about it interfering with\nconcurrency management even if PF_WQ_WORKER is set; however, unlike\nnormal workers, rescuers currently don\u0027t have its worker struct as\nkthread_data().  It uses the associated workqueue_struct instead.\nThis is problematic as wq_worker_sleeping/waking_up() expect struct\nworker at kthread_data().\n\nThis patch adds worker-\u003erescue_wq and start rescuer kthreads with\nworker struct as kthread_data and sets PF_WQ_WORKER on rescuers.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "023f27d3d6fcc9048754d879fe5e7d63402a5b16",
      "tree": "0836e744e107c7506dee26ef87257d7951341441",
      "parents": [
        "a2c1c57be8d9fd5b716113c8991d3d702eeacf77"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Dec 19 11:24:06 2012 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Dec 19 11:24:06 2012 -0800"
      },
      "message": "workqueue: fix find_worker_executing_work() brekage from hashtable conversion\n\n42f8570f43 (\"workqueue: use new hashtable implementation\") incorrectly\nmade busy workers hashed by the pointer value of worker instead of\nwork.  This broke find_worker_executing_work() which in turn broke a\nlot of fundamental operations of workqueue - non-reentrancy and\nflushing among others.  The flush malfunction triggered warning in\ndisk event code in Fengguang\u0027s automated test.\n\n write_dev_root_ (3265) used greatest stack depth: 2704 bytes left\n ------------[ cut here ]------------\n WARNING: at /c/kernel-tests/src/stable/block/genhd.c:1574 disk_clear_events+0x\\\ncf/0x108()\n Hardware name: Bochs\n Modules linked in:\n Pid: 3328, comm: ata_id Not tainted 3.7.0-01930-gbff6343 #1167\n Call Trace:\n  [\u003cffffffff810997c4\u003e] warn_slowpath_common+0x83/0x9c\n  [\u003cffffffff810997f7\u003e] warn_slowpath_null+0x1a/0x1c\n  [\u003cffffffff816aea77\u003e] disk_clear_events+0xcf/0x108\n  [\u003cffffffff811bd8be\u003e] check_disk_change+0x27/0x59\n  [\u003cffffffff822e48e2\u003e] cdrom_open+0x49/0x68b\n  [\u003cffffffff81ab0291\u003e] idecd_open+0x88/0xb7\n  [\u003cffffffff811be58f\u003e] __blkdev_get+0x102/0x3ec\n  [\u003cffffffff811bea08\u003e] blkdev_get+0x18f/0x30f\n  [\u003cffffffff811bebfd\u003e] blkdev_open+0x75/0x80\n  [\u003cffffffff8118f510\u003e] do_dentry_open+0x1ea/0x295\n  [\u003cffffffff8118f5f0\u003e] finish_open+0x35/0x41\n  [\u003cffffffff8119c720\u003e] do_last+0x878/0xa25\n  [\u003cffffffff8119c993\u003e] path_openat+0xc6/0x333\n  [\u003cffffffff8119cf37\u003e] do_filp_open+0x38/0x86\n  [\u003cffffffff81190170\u003e] do_sys_open+0x6c/0xf9\n  [\u003cffffffff8119021e\u003e] sys_open+0x21/0x23\n  [\u003cffffffff82c1c3d9\u003e] system_call_fastpath+0x16/0x1b\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReported-by: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nCc: Sasha Levin \u003csasha.levin@oracle.com\u003e\n"
    },
    {
      "commit": "a2c1c57be8d9fd5b716113c8991d3d702eeacf77",
      "tree": "dd275d53f76528c37e4f8f71fbfd4e2e9954f70b",
      "parents": [
        "42f8570f437b65aaf3ef176a38ad7d7fc5847d8b"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Dec 18 10:35:02 2012 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Dec 18 10:56:14 2012 -0800"
      },
      "message": "workqueue: consider work function when searching for busy work items\n\nTo avoid executing the same work item concurrenlty, workqueue hashes\ncurrently busy workers according to their current work items and looks\nup the the table when it wants to execute a new work item.  If there\nalready is a worker which is executing the new work item, the new item\nis queued to the found worker so that it gets executed only after the\ncurrent execution finishes.\n\nUnfortunately, a work item may be freed while being executed and thus\nrecycled for different purposes.  If it gets recycled for a different\nwork item and queued while the previous execution is still in\nprogress, workqueue may make the new work item wait for the old one\nalthough the two aren\u0027t really related in any way.\n\nIn extreme cases, this false dependency may lead to deadlock although\nit\u0027s extremely unlikely given that there aren\u0027t too many self-freeing\nwork item users and they usually don\u0027t wait for other work items.\n\nTo alleviate the problem, record the current work function in each\nbusy worker and match it together with the work item address in\nfind_worker_executing_work().  While this isn\u0027t complete, it ensures\nthat unrelated work items don\u0027t interact with each other and in the\nvery unlikely case where a twisted wq user triggers it, it\u0027s always\nonto itself making the culprit easy to spot.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReported-by: Andrey Isakov \u003candy51@gmx.ru\u003e\nBugzilla: https://bugzilla.kernel.org/show_bug.cgi?id\u003d51701\nCc: stable@vger.kernel.org\n"
    },
    {
      "commit": "42f8570f437b65aaf3ef176a38ad7d7fc5847d8b",
      "tree": "be5eee8505b195f952afb4d5a7655142a9de1b12",
      "parents": [
        "848b81415c42ff3dc9a4204749087b015c37ef66"
      ],
      "author": {
        "name": "Sasha Levin",
        "email": "sasha.levin@oracle.com",
        "time": "Mon Dec 17 10:01:23 2012 -0500"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Dec 18 09:21:13 2012 -0800"
      },
      "message": "workqueue: use new hashtable implementation\n\nSwitch workqueues to use the new hashtable implementation. This reduces the\namount of generic unrelated code in the workqueues.\n\nThis patch depends on d9b482c (\"hashtable: introduce a small and naive\nhashtable\") which was merged in v3.6.\n\nAcked-by: Tejun Heo \u003ctj@kernel.org\u003e\nSigned-off-by: Sasha Levin \u003csasha.levin@oracle.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "e7b55b8fcd3a32ba1f95ccd95fb9a11ccfa63563",
      "tree": "04d6191dcc1110074c48d569747e3ac94b595f45",
      "parents": [
        "50851c6248e1a13c45d97c41f6ebcf716093aa5e",
        "3657600040a7279e52252af3f9d7e253f4f49ef0"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Dec 12 08:15:13 2012 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed Dec 12 08:15:13 2012 -0800"
      },
      "message": "Merge branch \u0027for-3.8\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq\n\nPull workqueue changes from Tejun Heo:\n \"Nothing exciting.  Just two trivial changes.\"\n\n* \u0027for-3.8\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:\n  workqueue: add WARN_ON_ONCE() on CPU number to wq_worker_waking_up()\n  workqueue: trivial fix for return statement in work_busy()\n"
    },
    {
      "commit": "fc4b514f2727f74a4587c31db87e0e93465518c3",
      "tree": "83c8758213d3492b4c48541c8a3782bdd47adf99",
      "parents": [
        "c1d390d8e6128b050f0f66b1c33d390760deb3f4"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Dec 04 07:40:39 2012 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Dec 04 07:58:47 2012 -0800"
      },
      "message": "workqueue: convert BUG_ON()s in __queue_delayed_work() to WARN_ON_ONCE()s\n\n8852aac25e (\"workqueue: mod_delayed_work_on() shouldn\u0027t queue timer on\n0 delay\") unexpectedly uncovered a very nasty abuse of delayed_work in\nmegaraid - it allocated work_struct, casted it to delayed_work and\nthen pass that into queue_delayed_work().\n\nPreviously, this was okay because 0 @delay short-circuited to\nqueue_work() before doing anything with delayed_work.  8852aac25e\nmoved 0 @delay test into __queue_delayed_work() after sanity check on\ndelayed_work making megaraid trigger BUG_ON().\n\nAlthough megaraid is already fixed by c1d390d8e6 (\"megaraid: fix\nBUG_ON() from incorrect use of delayed work\"), this patch converts\nBUG_ON()s in __queue_delayed_work() to WARN_ON_ONCE()s so that such\nabusers, if there are more, trigger warning but don\u0027t crash the\nmachine.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Xiaotian Feng \u003cxtfeng@gmail.com\u003e\n"
    },
    {
      "commit": "3657600040a7279e52252af3f9d7e253f4f49ef0",
      "tree": "1cc95124c9108e24ee18a7d9af2e511fd735daae",
      "parents": [
        "999767beb1b4a10eabf90e6017e496536cf4db0b"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Fri Oct 26 23:03:49 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Sat Dec 01 16:45:45 2012 -0800"
      },
      "message": "workqueue: add WARN_ON_ONCE() on CPU number to wq_worker_waking_up()\n\nRecently, workqueue code has gone through some changes and we found\nsome bugs related to concurrency management operations happening on\nthe wrong CPU.  When a worker is concurrency managed\n(!WORKER_NOT_RUNNIG), it should be bound to its associated cpu and\nwoken up to that cpu.  Add WARN_ON_ONCE() to verify this.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "999767beb1b4a10eabf90e6017e496536cf4db0b",
      "tree": "52025d802a2d350e42b0857502cb1ac7b473f31b",
      "parents": [
        "8852aac25e79e38cc6529f20298eed154f60b574"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Sun Oct 21 01:30:06 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Sat Dec 01 16:45:40 2012 -0800"
      },
      "message": "workqueue: trivial fix for return statement in work_busy()\n\nReturn type of work_busy() is unsigned int.\nThere is return statement returning boolean value, \u0027false\u0027 in work_busy().\nIt is not problem, because \u0027false\u0027 may be treated \u00270\u0027.\nHowever, fixing it would make code robust.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "8852aac25e79e38cc6529f20298eed154f60b574",
      "tree": "dba9304157032b33339db9c8165b3d4a5a2d05b0",
      "parents": [
        "412d32e6c98527078779e5b515823b2810e40324"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Sat Dec 01 16:23:42 2012 -0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Sat Dec 01 16:43:18 2012 -0800"
      },
      "message": "workqueue: mod_delayed_work_on() shouldn\u0027t queue timer on 0 delay\n\n8376fe22c7 (\"workqueue: implement mod_delayed_work[_on]()\")\nimplemented mod_delayed_work[_on]() using the improved\ntry_to_grab_pending().  The function is later used, among others, to\nreplace [__]candel_delayed_work() + queue_delayed_work() combinations.\n\nUnfortunately, a delayed_work item w/ zero @delay is handled slightly\ndifferently by mod_delayed_work_on() compared to\nqueue_delayed_work_on().  The latter skips timer altogether and\ndirectly queues it using queue_work_on() while the former schedules\ntimer which will expire on the closest tick.  This means, when @delay\nis zero, that [__]cancel_delayed_work() + queue_delayed_work_on()\nmakes the target item immediately executable while\nmod_delayed_work_on() may induce delay of upto a full tick.\n\nThis somewhat subtle difference breaks some of the converted users.\ne.g. block queue plugging uses delayed_work for deferred processing\nand uses mod_delayed_work_on() when the queue needs to be immediately\nunplugged.  The above problem manifested as noticeably higher number\nof context switches under certain circumstances.\n\nThe difference in behavior was caused by missing special case handling\nfor 0 delay in mod_delayed_work_on() compared to\nqueue_delayed_work_on().  Joonsoo Kim posted a patch to add it -\n(\"workqueue: optimize mod_delayed_work_on() when @delay \u003d\u003d 0\")[1].\nThe patch was queued for 3.8 but it was described as optimization and\nI missed that it was a correctness issue.\n\nAs both queue_delayed_work_on() and mod_delayed_work_on() use\n__queue_delayed_work() for queueing, it seems that the better approach\nis to move the 0 delay special handling to the function instead of\nduplicating it in mod_delayed_work_on().\n\nFix the problem by moving 0 delay special case handling from\nqueue_delayed_work_on() to __queue_delayed_work().  
This replaces\nJoonsoo\u0027s patch.\n\n[1] http://thread.gmane.org/gmane.linux.kernel/1379011/focus\u003d1379012\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReported-and-tested-by: Anders Kaseorg \u003candersk@MIT.EDU\u003e\nReported-and-tested-by: Zlatko Calusic \u003czlatko.calusic@iskon.hr\u003e\nLKML-Reference: \u003calpine.DEB.2.00.1211280953350.26602@dr-wily.mit.edu\u003e\nLKML-Reference: \u003c50A78AA9.5040904@iskon.hr\u003e\nCc: Joonsoo Kim \u003cjs1304@gmail.com\u003e\n"
    },
    {
      "commit": "412d32e6c98527078779e5b515823b2810e40324",
      "tree": "0221e047afeb05d753d29821a457eef9dbe917d7",
      "parents": [
        "b3c3a9cf2a28ee4a8d0b62e2e58c61e9ca9bb47b"
      ],
      "author": {
        "name": "Mike Galbraith",
        "email": "mgalbraith@suse.de",
        "time": "Wed Nov 28 07:17:18 2012 +0100"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Sat Dec 01 15:56:42 2012 -0800"
      },
      "message": "workqueue: exit rescuer_thread() as TASK_RUNNING\n\nA rescue thread exiting TASK_INTERRUPTIBLE can lead to a task scheduling\noff, never to be seen again.  In the case where this occurred, an exiting\nthread hit reiserfs homebrew conditional resched while holding a mutex,\nbringing the box to its knees.\n\nPID: 18105  TASK: ffff8807fd412180  CPU: 5   COMMAND: \"kdmflush\"\n #0 [ffff8808157e7670] schedule at ffffffff8143f489\n #1 [ffff8808157e77b8] reiserfs_get_block at ffffffffa038ab2d [reiserfs]\n #2 [ffff8808157e79a8] __block_write_begin at ffffffff8117fb14\n #3 [ffff8808157e7a98] reiserfs_write_begin at ffffffffa0388695 [reiserfs]\n #4 [ffff8808157e7ad8] generic_perform_write at ffffffff810ee9e2\n #5 [ffff8808157e7b58] generic_file_buffered_write at ffffffff810eeb41\n #6 [ffff8808157e7ba8] __generic_file_aio_write at ffffffff810f1a3a\n #7 [ffff8808157e7c58] generic_file_aio_write at ffffffff810f1c88\n #8 [ffff8808157e7cc8] do_sync_write at ffffffff8114f850\n #9 [ffff8808157e7dd8] do_acct_process at ffffffff810a268f\n    [exception RIP: kernel_thread_helper]\n    RIP: ffffffff8144a5c0  RSP: ffff8808157e7f58  RFLAGS: 00000202\n    RAX: 0000000000000000  RBX: 0000000000000000  RCX: 0000000000000000\n    RDX: 0000000000000000  RSI: ffffffff8107af60  RDI: ffff8803ee491d18\n    RBP: 0000000000000000   R8: 0000000000000000   R9: 0000000000000000\n    R10: 0000000000000000  R11: 0000000000000000  R12: 0000000000000000\n    R13: 0000000000000000  R14: 0000000000000000  R15: 0000000000000000\n    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018\n\nSigned-off-by: Mike Galbraith \u003cmgalbraith@suse.de\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: stable@vger.kernel.org\n"
    },
    {
      "commit": "c0158ca64da5732dfb86a3f28944e9626776692f",
      "tree": "2ce9ff8057b9273905424b2932b35a080cfdbf38",
      "parents": [
        "ddffeb8c4d0331609ef2581d84de4d763607bd37"
      ],
      "author": {
        "name": "Dan Magenheimer",
        "email": "dan.magenheimer@oracle.com",
        "time": "Thu Oct 18 16:31:37 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Oct 24 12:38:16 2012 -0700"
      },
      "message": "workqueue: cancel_delayed_work() should return %false if work item is idle\n\n57b30ae77b (\"workqueue: reimplement cancel_delayed_work() using\ntry_to_grab_pending()\") made cancel_delayed_work() always return %true\nunless someone else is also trying to cancel the work item, which is\nbroken - if the target work item is idle, the return value should be\n%false.\n\ntry_to_grab_pending() indicates that the target work item was idle by\nzero return value.  Use it for return.  Note that this brings\ncancel_delayed_work() in line with __cancel_work_timer() in return\nvalue handling.\n\nSigned-off-by: Dan Magenheimer \u003cdan.magenheimer@oracle.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLKML-Reference: \u003c444a6439-b1a4-4740-9e7e-bc37267cfe73@default\u003e\n"
    },
    {
      "commit": "033d9959ed2dc1029217d4165f80a71702dc578e",
      "tree": "3d306316e44bdabce2e0bf2ef7e466e525f90b4c",
      "parents": [
        "974a847e00cf3ff1695e62b276892137893706ab",
        "7c6e72e46c9ea4a88f3f8ba96edce9db4bd48726"
      ],
      "author": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 02 09:54:49 2012 -0700"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Oct 02 09:54:49 2012 -0700"
      },
      "message": "Merge branch \u0027for-3.7\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq\n\nPull workqueue changes from Tejun Heo:\n \"This is workqueue updates for v3.7-rc1.  A lot of activities this\n  round including considerable API and behavior cleanups.\n\n   * delayed_work combines a timer and a work item.  The handling of the\n     timer part has always been a bit clunky leading to confusing\n     cancelation API with weird corner-case behaviors.  delayed_work is\n     updated to use new IRQ safe timer and cancelation now works as\n     expected.\n\n   * Another deficiency of delayed_work was lack of the counterpart of\n     mod_timer() which led to cancel+queue combinations or open-coded\n     timer+work usages.  mod_delayed_work[_on]() are added.\n\n     These two delayed_work changes make delayed_work provide interface\n     and behave like timer which is executed with process context.\n\n   * A work item could be executed concurrently on multiple CPUs, which\n     is rather unintuitive and made flush_work() behavior confusing and\n     half-broken under certain circumstances.  This problem doesn\u0027t\n     exist for non-reentrant workqueues.  While non-reentrancy check\n     isn\u0027t free, the overhead is incurred only when a work item bounces\n     across different CPUs and even in simulated pathological scenario\n     the overhead isn\u0027t too high.\n\n     All workqueues are made non-reentrant.  This removes the\n     distinction between flush_[delayed_]work() and\n     flush_[delayed_]_work_sync().  
The former is now as strong as the\n     latter and the specified work item is guaranteed to have finished\n     execution of any previous queueing on return.\n\n   * In addition to the various bug fixes, Lai redid and simplified CPU\n     hotplug handling significantly.\n\n   * Joonsoo introduced system_highpri_wq and used it during CPU\n     hotplug.\n\n  There are two merge commits - one to pull in IRQ safe timer from\n  tip/timers/core and the other to pull in CPU hotplug fixes from\n  wq/for-3.6-fixes as Lai\u0027s hotplug restructuring depended on them.\"\n\nFixed a number of trivial conflicts, but the more interesting conflicts\nwere silent ones where the deprecated interfaces had been used by new\ncode in the merge window, and thus didn\u0027t cause any real data conflicts.\n\nTejun pointed out a few of them, I fixed a couple more.\n\n* \u0027for-3.7\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: (46 commits)\n  workqueue: remove spurious WARN_ON_ONCE(in_irq()) from try_to_grab_pending()\n  workqueue: use cwq_set_max_active() helper for workqueue_set_max_active()\n  workqueue: introduce cwq_set_max_active() helper for thaw_workqueues()\n  workqueue: remove @delayed from cwq_dec_nr_in_flight()\n  workqueue: fix possible stall on try_to_grab_pending() of a delayed work item\n  workqueue: use hotcpu_notifier() for workqueue_cpu_down_callback()\n  workqueue: use __cpuinit instead of __devinit for cpu callbacks\n  workqueue: rename manager_mutex to assoc_mutex\n  workqueue: WORKER_REBIND is no longer necessary for idle rebinding\n  workqueue: WORKER_REBIND is no longer necessary for busy rebinding\n  workqueue: reimplement idle worker rebinding\n  workqueue: deprecate __cancel_delayed_work()\n  workqueue: reimplement cancel_delayed_work() using try_to_grab_pending()\n  workqueue: use mod_delayed_work() instead of __cancel + queue\n  workqueue: use irqsafe timer for delayed_work\n  workqueue: clean up delayed_work initializers and add missing one\n 
 workqueue: make deferrable delayed_work initializer names consistent\n  workqueue: cosmetic whitespace updates for macro definitions\n  workqueue: deprecate system_nrt[_freezable]_wq\n  workqueue: deprecate flush[_delayed]_work_sync()\n  ...\n"
    },
    {
      "commit": "7c6e72e46c9ea4a88f3f8ba96edce9db4bd48726",
      "tree": "05f92ef6f69cfa5f0aac1401c4ca4d4fdc2a7ab7",
      "parents": [
        "70369b117a8fc5ac18a635ced23ee49f8e722e7b"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Sep 20 10:03:19 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Sep 20 10:03:19 2012 -0700"
      },
      "message": "workqueue: remove spurious WARN_ON_ONCE(in_irq()) from try_to_grab_pending()\n\ne0aecdd874 (\"workqueue: use irqsafe timer for delayed_work\") made\ntry_to_grab_pending() safe to use from irq context but forgot to\nremove WARN_ON_ONCE(in_irq()).  Remove it.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReported-by: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\n"
    },
    {
      "commit": "70369b117a8fc5ac18a635ced23ee49f8e722e7b",
      "tree": "deec5ade1639ccebe8db218e5b8a5632d86317ac",
      "parents": [
        "9f4bd4cddbb50d7617353102e10ce511c5ef6df2"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Wed Sep 19 10:40:48 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Sep 19 10:40:48 2012 -0700"
      },
"message": "workqueue: use cwq_set_max_active() helper for workqueue_set_max_active()\n\nworkqueue_set_max_active() may increase -\u003emax_active without\nactivating delayed works and may make the activation order differ from\nthe queueing order.  Both aren\u0027t strictly bugs but the resulting\nbehavior could be a bit odd.\n\nTo make things more consistent, use cwq_set_max_active() helper which\nimmediately makes use of the newly increased max_active if there are\ndelayed work items and also keeps the activation order.\n\ntj: Slight update to description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "9f4bd4cddbb50d7617353102e10ce511c5ef6df2",
      "tree": "2f204e107cb878fcc4c0152fe167133570376cc9",
      "parents": [
        "b3f9f405a21a29c06c31fb2d6ab36ef9ba7c027b"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Wed Sep 19 10:40:48 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Sep 19 10:40:48 2012 -0700"
      },
      "message": "workqueue: introduce cwq_set_max_active() helper for thaw_workqueues()\n\nUsing a helper instead of open code makes thaw_workqueues() clearer.\nThe helper will also be used by the next patch.\n\ntj: Slight update to comment and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "ed48ece27cd3d5ee0354c32bbaec0f3e1d4715c3",
      "tree": "9ead3fba10ccd3118e6c4f38ed61cbf2bb2cbb3f",
      "parents": [
        "960bd11bf2daf669d0d910428fd9ef5a15c3d7cb"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 12:48:43 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Sep 19 10:13:12 2012 -0700"
      },
"message": "workqueue: reimplement work_on_cpu() using system_wq\n\nThe existing work_on_cpu() implementation is hugely inefficient.  It\ncreates a new kthread, executes that single function and then lets the\nkthread die on each invocation.\n\nNow that system_wq can handle concurrent executions, there\u0027s no\nadvantage to doing this.  Reimplement work_on_cpu() using system_wq\nwhich makes it simpler and way more efficient.\n\nstable: While this isn\u0027t a fix in itself, it\u0027s needed to fix a\n        workqueue related bug in cpufreq/powernow-k8.  AFAICS, this\n        shouldn\u0027t break other existing users.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: Jiri Kosina \u003cjkosina@suse.cz\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Bjorn Helgaas \u003cbhelgaas@google.com\u003e\nCc: Len Brown \u003clenb@kernel.org\u003e\nCc: Rafael J. Wysocki \u003crjw@sisk.pl\u003e\nCc: stable@vger.kernel.org\n"
    },
    {
      "commit": "b3f9f405a21a29c06c31fb2d6ab36ef9ba7c027b",
      "tree": "31ed49e9848c7595c734fc2eb83b54a6ced90a0c",
      "parents": [
        "3aa62497594430ea522050b75c033f71f2c60ee6"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 10:40:00 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 10:40:00 2012 -0700"
      },
      "message": "workqueue: remove @delayed from cwq_dec_nr_in_flight()\n\n@delayed is now always false for all callers, remove it.\n\ntj: Updated description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "3aa62497594430ea522050b75c033f71f2c60ee6",
      "tree": "ce26e616d6d40a7279ec37fe615d717d849c2532",
      "parents": [
        "a5b4e57d7cc07cb28ccf16de0876a4770ae84920"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 10:40:00 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 10:40:00 2012 -0700"
      },
"message": "workqueue: fix possible stall on try_to_grab_pending() of a delayed work item\n\nCurrently, when try_to_grab_pending() grabs a delayed work item, it\nleaves its linked work items alone on the delayed_works.  The linked\nwork items are always NO_COLOR and will cause future\ncwq_activate_first_delayed() to increase cwq-\u003enr_active incorrectly, and\nmay cause the whole cwq to stall.  For example,\n\nstate: cwq-\u003emax_active \u003d 1, cwq-\u003enr_active \u003d 1\n       one work in cwq-\u003epool, many in cwq-\u003edelayed_works.\n\nstep1: try_to_grab_pending() removes a work item from delayed_works\n       but leaves its NO_COLOR linked work items on it.\n\nstep2: Later on, cwq_activate_first_delayed() activates the linked\n       work item increasing -\u003enr_active.\n\nstep3: cwq-\u003enr_active \u003d 1, but all activated work items of the cwq are\n       NO_COLOR.  When they finish, cwq-\u003enr_active will not be\n       decreased due to NO_COLOR, and no further work items will be\n       activated from cwq-\u003edelayed_works.  The cwq stalls.\n\nFix it by ensuring the target work item is activated before stealing\nPENDING in try_to_grab_pending().  This ensures that all the linked\nwork items are activated without incorrectly bumping cwq-\u003enr_active.\n\ntj: Updated comment and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: stable@kernel.org\n"
    },
    {
      "commit": "a5b4e57d7cc07cb28ccf16de0876a4770ae84920",
      "tree": "b73c1549f50a5acc98783e3a29a6612ae4de28ba",
      "parents": [
        "9fdf9b73d61c87a9c16f101bb8bbe069d13046f5"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
"message": "workqueue: use hotcpu_notifier() for workqueue_cpu_down_callback()\n\nworkqueue_cpu_down_callback() is used only if HOTPLUG_CPU\u003dy, so\nhotcpu_notifier() fits better than cpu_notifier().\n\nWhen HOTPLUG_CPU\u003dy, hotcpu_notifier() and cpu_notifier() are the same.\n\nWhen HOTPLUG_CPU\u003dn, if we use cpu_notifier(),\nworkqueue_cpu_down_callback() will be called during boot to do\nnothing, and the memory of workqueue_cpu_down_callback() and\ngcwq_unbind_fn() will be discarded after boot.\n\nIf we use hotcpu_notifier(), we can avoid the no-op call of\nworkqueue_cpu_down_callback() and the memory of\nworkqueue_cpu_down_callback() and gcwq_unbind_fn() will be discarded at\nbuild time:\n\n$ ls -l kernel/workqueue.o.cpu_notifier kernel/workqueue.o.hotcpu_notifier\n-rw-rw-r-- 1 laijs laijs 484080 Sep 15 11:31 kernel/workqueue.o.cpu_notifier\n-rw-rw-r-- 1 laijs laijs 478240 Sep 15 11:31 kernel/workqueue.o.hotcpu_notifier\n\n$ size kernel/workqueue.o.cpu_notifier kernel/workqueue.o.hotcpu_notifier\n   text\t   data\t    bss\t    dec\t    hex\tfilename\n  18513\t   2387\t   1221\t  22121\t   5669\tkernel/workqueue.o.cpu_notifier\n  18082\t   2355\t   1221\t  21658\t   549a\tkernel/workqueue.o.hotcpu_notifier\n\ntj: Updated description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "9fdf9b73d61c87a9c16f101bb8bbe069d13046f5",
      "tree": "5b77daceffec6c330f66ac13a1687a986344c817",
      "parents": [
        "b2eb83d123c1cc9f96a8e452b26a6ebe631b3ad7"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
      "message": "workqueue: use __cpuinit instead of __devinit for cpu callbacks\n\nFor workqueue hotplug callbacks, it makes less sense to use __devinit\nwhich discards the memory after boot if !HOTPLUG.  __cpuinit, which\ndiscards the memory after boot if !HOTPLUG_CPU fits better.\n\ntj: Updated description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "b2eb83d123c1cc9f96a8e452b26a6ebe631b3ad7",
      "tree": "0be062bc42bc16e4de48fe1238e61eeb054bdef7",
      "parents": [
        "5f7dabfd5cb115937afb4649e4c73b02f927f6ae"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
      "message": "workqueue: rename manager_mutex to assoc_mutex\n\nNow that manager_mutex\u0027s role has changed from synchronizing manager\nrole to excluding hotplug against manager, the name is misleading.\n\nAs it is protecting the CPU-association of the gcwq now, rename it to\nassoc_mutex.\n\nThis patch is pure rename and doesn\u0027t introduce any functional change.\n\ntj: Updated comments and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "5f7dabfd5cb115937afb4649e4c73b02f927f6ae",
      "tree": "9b47bba67879363d70d68e3c9209debaf80aca0a",
      "parents": [
        "eab6d82843ee1df244f8847d1bf8bb89160ec4aa"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 09:59:23 2012 -0700"
      },
"message": "workqueue: WORKER_REBIND is no longer necessary for idle rebinding\n\nNow both worker destruction and idle rebinding remove the worker from\nthe idle list while it\u0027s still idle, so list_empty(\u0026worker-\u003eentry) can be\nused to test whether either is pending and WORKER_DIE to distinguish\nbetween the two instead, making WORKER_REBIND unnecessary.\n\nUse list_empty(\u0026worker-\u003eentry) to determine whether destruction or\nrebinding is pending.  This simplifies worker state transitions.\n\nWORKER_REBIND is not needed anymore.  Remove it.\n\ntj: Updated comments and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "eab6d82843ee1df244f8847d1bf8bb89160ec4aa",
      "tree": "86a43e6ae1734779fe54ea5e62408395e6d0b36a",
      "parents": [
        "ea1abd6197d5805655da1bb589929762f4b4aa08"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 09:59:22 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 09:59:22 2012 -0700"
      },
      "message": "workqueue: WORKER_REBIND is no longer necessary for busy rebinding\n\nBecause the old unbind/rebinding implementation wasn\u0027t atomic w.r.t.\nGCWQ_DISASSOCIATED manipulation which is protected by\nglobal_cwq-\u003elock, we had to use two flags, WORKER_UNBOUND and\nWORKER_REBIND, to avoid incorrectly losing all NOT_RUNNING bits with\nback-to-back CPU hotplug operations; otherwise, completion of\nrebinding while another unbinding is in progress could clear UNBIND\nprematurely.\n\nNow that both unbind/rebinding are atomic w.r.t. GCWQ_DISASSOCIATED,\nthere\u0027s no need to use two flags.  Just one is enough.  Don\u0027t use\nWORKER_REBIND for busy rebinding.\n\ntj: Updated description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "ea1abd6197d5805655da1bb589929762f4b4aa08",
      "tree": "6ba4ac400e9243622558b852583d1cdf3ef61b1c",
      "parents": [
        "6c1423ba5dbdab45bcd8c1bc3bc6e07fe3f6a470"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Tue Sep 18 09:59:22 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 18 09:59:22 2012 -0700"
      },
"message": "workqueue: reimplement idle worker rebinding\n\nCurrently rebind_workers() rebinds idle workers synchronously\nbefore proceeding to request busy workers to rebind.  This is\nnecessary because all workers on @worker_pool-\u003eidle_list must be bound\nbefore concurrency management local wake-ups from the busy workers\ntake place.\n\nUnfortunately, the synchronous idle rebinding is quite complicated.\nThis patch reimplements idle rebinding to simplify the code path.\n\nRather than trying to make all idle workers bound before rebinding\nbusy workers, we simply remove all to-be-bound idle workers from the\nidle list and let them add themselves back after completing rebinding\n(successful or not).\n\nAs only workers which finished rebinding can be on the idle worker\nlist, the idle worker list is guaranteed to have only bound workers\nunless CPU went down again and local wake-ups are safe.\n\nAfter the change, @worker_pool-\u003enr_idle may deviate from the actual\nnumber of idle workers on @worker_pool-\u003eidle_list.  More specifically,\nnr_idle may be non-zero while -\u003eidle_list is empty.  All users of\n-\u003enr_idle and -\u003eidle_list are audited.  The only affected one is\ntoo_many_workers() which is updated to return %false if -\u003eidle_list is\nempty regardless of -\u003enr_idle.\n\nAfter this patch, rebind_workers() no longer performs the nasty\nidle-rebind retries which require temporary release of gcwq-\u003elock, and\nboth unbinding and rebinding are atomic w.r.t. 
global_cwq-\u003elock.\n\nworker-\u003eidle_rebind and global_cwq-\u003erebind_hold are now unnecessary\nand removed along with the definition of struct idle_rebind.\n\nChanged from V1:\n\t1) remove unlikely from too_many_workers(), -\u003eidle_list can be empty\n\t   anytime, even before this patch, no reason to use unlikely.\n\t2) fix a small rebasing mistake.\n\t   (which is from rebasing the original fixing patch to for-next)\n\t3) add a lot of comments.\n\t4) clear WORKER_REBIND unconditionally in idle_worker_rebind()\n\ntj: Updated comments and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "6c1423ba5dbdab45bcd8c1bc3bc6e07fe3f6a470",
      "tree": "3c7899ba9eee94f408faf483622faa23cbdbfed2",
      "parents": [
        "136b5721d75a62a8f02c601c89122e32c1a85a84",
        "960bd11bf2daf669d0d910428fd9ef5a15c3d7cb"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Sep 17 16:07:34 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Sep 17 16:09:09 2012 -0700"
      },
      "message": "Merge branch \u0027for-3.6-fixes\u0027 of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq into for-3.7\n\nThis merge is necessary as Lai\u0027s CPU hotplug restructuring series\ndepends on the CPU hotplug bug fixes in for-3.6-fixes.\n\nThe merge creates one trivial conflict between the following two\ncommits.\n\n 96e65306b8 \"workqueue: UNBOUND -\u003e REBIND morphing in rebind_workers() should be atomic\"\n e2b6a6d570 \"workqueue: use system_highpri_wq for highpri workers in rebind_workers()\"\n\nBoth add local variable definitions to the same block and can be\nmerged in any order.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "960bd11bf2daf669d0d910428fd9ef5a15c3d7cb",
      "tree": "f2649d121f402be4a19a0432a1987615a2e45c09",
      "parents": [
        "ee378aa49b594da9bda6a2c768cc5b2ad585f911"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Mon Sep 17 15:42:31 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Sep 17 15:42:31 2012 -0700"
      },
      "message": "workqueue: always clear WORKER_REBIND in busy_worker_rebind_fn()\n\nbusy_worker_rebind_fn() didn\u0027t clear WORKER_REBIND if rebinding failed\n(CPU is down again).  This used to be okay because the flag wasn\u0027t\nused for anything else.\n\nHowever, after 25511a477 \"workqueue: reimplement CPU online rebinding\nto handle idle workers\", WORKER_REBIND is also used to command idle\nworkers to rebind.  If not cleared, the worker may confuse the next\nCPU_UP cycle by having REBIND spuriously set or oops / get stuck by\nprematurely calling idle_worker_rebind().\n\n  WARNING: at /work/os/wq/kernel/workqueue.c:1323 worker_thread+0x4cd/0x5\n 00()\n  Hardware name: Bochs\n  Modules linked in: test_wq(O-)\n  Pid: 33, comm: kworker/1:1 Tainted: G           O 3.6.0-rc1-work+ #3\n  Call Trace:\n   [\u003cffffffff8109039f\u003e] warn_slowpath_common+0x7f/0xc0\n   [\u003cffffffff810903fa\u003e] warn_slowpath_null+0x1a/0x20\n   [\u003cffffffff810b3f1d\u003e] worker_thread+0x4cd/0x500\n   [\u003cffffffff810bc16e\u003e] kthread+0xbe/0xd0\n   [\u003cffffffff81bd2664\u003e] kernel_thread_helper+0x4/0x10\n  ---[ end trace e977cf20f4661968 ]---\n  BUG: unable to handle kernel NULL pointer dereference at           (null)\n  IP: [\u003cffffffff810b3db0\u003e] worker_thread+0x360/0x500\n  PGD 0\n  Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC\n  Modules linked in: test_wq(O-)\n  CPU 0\n  Pid: 33, comm: kworker/1:1 Tainted: G        W  O 3.6.0-rc1-work+ #3 Bochs Bochs\n  RIP: 0010:[\u003cffffffff810b3db0\u003e]  [\u003cffffffff810b3db0\u003e] worker_thread+0x360/0x500\n  RSP: 0018:ffff88001e1c9de0  EFLAGS: 00010086\n  RAX: 0000000000000000 RBX: ffff88001e633e00 RCX: 0000000000004140\n  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000009\n  RBP: ffff88001e1c9ea0 R08: 0000000000000000 R09: 0000000000000001\n  R10: 0000000000000002 R11: 0000000000000000 R12: ffff88001fc8d580\n  R13: ffff88001fc8d590 R14: ffff88001e633e20 R15: ffff88001e1c6900\n  FS:  
0000000000000000(0000) GS:ffff88001fc00000(0000) knlGS:0000000000000000\n  CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b\n  CR2: 0000000000000000 CR3: 00000000130e8000 CR4: 00000000000006f0\n  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000\n  DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400\n  Process kworker/1:1 (pid: 33, threadinfo ffff88001e1c8000, task ffff88001e1c6900)\n  Stack:\n   ffff880000000000 ffff88001e1c9e40 0000000000000001 ffff88001e1c8010\n   ffff88001e519c78 ffff88001e1c9e58 ffff88001e1c6900 ffff88001e1c6900\n   ffff88001e1c6900 ffff88001e1c6900 ffff88001fc8d340 ffff88001fc8d340\n  Call Trace:\n   [\u003cffffffff810bc16e\u003e] kthread+0xbe/0xd0\n   [\u003cffffffff81bd2664\u003e] kernel_thread_helper+0x4/0x10\n  Code: b1 00 f6 43 48 02 0f 85 91 01 00 00 48 8b 43 38 48 89 df 48 8b 00 48 89 45 90 e8 ac f0 ff ff 3c 01 0f 85 60 01 00 00 48 8b 53 50 \u003c8b\u003e 02 83 e8 01 85 c0 89 02 0f 84 3b 01 00 00 48 8b 43 38 48 8b\n  RIP  [\u003cffffffff810b3db0\u003e] worker_thread+0x360/0x500\n   RSP \u003cffff88001e1c9de0\u003e\n  CR2: 0000000000000000\n\nThere was no reason to keep WORKER_REBIND on failure in the first\nplace - WORKER_UNBOUND is guaranteed to be set in such cases\npreventing incorrectly activating concurrency management.  Always\nclear WORKER_REBIND.\n\ntj: Updated comment and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "ee378aa49b594da9bda6a2c768cc5b2ad585f911",
      "tree": "33d73f93b93388e92fce1d4f4a5b3ae4100060ba",
      "parents": [
        "552a37e9360a293cd20e7f8ff1fb326a244c5f1e"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Mon Sep 10 10:03:44 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Sep 10 10:05:54 2012 -0700"
      },
"message": "workqueue: fix possible idle worker depletion across CPU hotplug\n\nTo simplify both normal and CPU hotplug paths, worker management is\nprevented while CPU hotplug is in progress.  This is achieved by CPU\nhotplug holding the same exclusion mechanism used by workers to ensure\nthere\u0027s only one manager per pool.\n\nIf someone else seems to be performing the manager role, workers\nproceed to execute work items.  CPU hotplug using the same mechanism\ncan lead to idle worker depletion because all workers could proceed to\nexecute work items while CPU hotplug is in progress and CPU hotplug\nitself wouldn\u0027t actually perform the worker management duty - it\ndoesn\u0027t guarantee that there\u0027s an idle worker left when it releases\nmanagement.\n\nThis idle worker depletion, under extreme circumstances, can break\nthe forward-progress guarantee and thus lead to deadlock.\n\nThis patch fixes the bug by using separate mechanisms for manager\nexclusion among workers and hotplug exclusion.  For manager exclusion,\nPOOL_MANAGING_WORKERS which was restored by the previous patch is\nused.  pool-\u003emanager_mutex is now only used for exclusion between the\nelected manager and CPU hotplug.  The elected manager won\u0027t proceed\nwithout holding pool-\u003emanager_mutex.\n\nThis ensures that the worker which won the manager position can\u0027t skip\nmanaging while CPU hotplug is in progress.  It will block on\nmanager_mutex and perform management after CPU hotplug is complete.\n\nNote that hotplug may happen while waiting for manager_mutex.  A\nmanager is on neither the idle nor the busy list and thus the hotplug\ncode can\u0027t unbind/rebind it.  Make the manager handle its own\nun/rebinding.\n\ntj: Updated comment and description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "552a37e9360a293cd20e7f8ff1fb326a244c5f1e",
      "tree": "00d89d5778d4ab8320f6bf24d81e33a290f9fcb1",
      "parents": [
        "ec58815ab0409a921a7c9744eb4ca44866b14d71"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Mon Sep 10 10:03:33 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Sep 10 10:04:54 2012 -0700"
      },
"message": "workqueue: restore POOL_MANAGING_WORKERS\n\nThis patch restores POOL_MANAGING_WORKERS which was replaced by\npool-\u003emanager_mutex by 6037315269 \"workqueue: use mutex for global_cwq\nmanager exclusion\".\n\nThere\u0027s a subtle idle worker depletion bug across CPU hotplug events\nand we need to distinguish between an actual manager and CPU hotplug\npreventing management.  POOL_MANAGING_WORKERS will be used for the\nformer and manager_mutex the latter.\n\nThis patch just lays POOL_MANAGING_WORKERS on top of the existing\nmanager_mutex and doesn\u0027t introduce any synchronization changes.  The\nnext patch will update it.\n\nNote that this patch fixes a non-critical anomaly where\ntoo_many_workers() may return %true spuriously while CPU hotplug is in\nprogress.  While the issue could schedule the idle timer spuriously, it\ndidn\u0027t trigger any actual misbehavior.\n\ntj: Rewrote patch description.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "ec58815ab0409a921a7c9744eb4ca44866b14d71",
      "tree": "228f1fb9035cc0b3f60fc14707614305608c96d1",
      "parents": [
        "90beca5de591e12482a812f23a7f10690962ed4a"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 04 23:16:32 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Sep 05 16:10:15 2012 -0700"
      },
      "message": "workqueue: fix possible deadlock in idle worker rebinding\n\nCurrently, rebind_workers() and idle_worker_rebind() are two-way\ninterlocked.  rebind_workers() waits for idle workers to finish\nrebinding and rebound idle workers wait for rebind_workers() to finish\nrebinding busy workers before proceeding.\n\nUnfortunately, this isn\u0027t enough.  The second wait from idle workers\nis implemented as follows.\n\n\twait_event(gcwq-\u003erebind_hold, !(worker-\u003eflags \u0026 WORKER_REBIND));\n\nrebind_workers() clears WORKER_REBIND, wakes up the idle workers and\nthen returns.  If CPU hotplug cycle happens again before one of the\nidle workers finishes the above wait_event(), rebind_workers() will\nrepeat the first part of the handshake - set WORKER_REBIND again and\nwait for the idle worker to finish rebinding - and this leads to\ndeadlock because the idle worker would be waiting for WORKER_REBIND to\nclear.\n\nThis is fixed by adding another interlocking step at the end -\nrebind_workers() now waits for all the idle workers to finish the\nabove WORKER_REBIND wait before returning.  This ensures that all\nrebinding steps are complete on all idle workers before the next\nhotplug cycle can happen.\n\nThis problem was diagnosed by Lai Jiangshan who also posted a patch to\nfix the issue, upon which this patch is based.\n\nThis is the minimal fix and further patches are scheduled for the next\nmerge window to simplify the CPU hotplug path.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nOriginal-patch-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nLKML-Reference: \u003c1346516916-1991-3-git-send-email-laijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "90beca5de591e12482a812f23a7f10690962ed4a",
      "tree": "39e6a6e4e22ba49908d5542b4de6c01fbff48744",
      "parents": [
        "96e65306b81351b656835c15931d1d237b252f27"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 04 23:12:33 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Sep 05 16:10:14 2012 -0700"
      },
      "message": "workqueue: move WORKER_REBIND clearing in rebind_workers() to the end of the function\n\nThis doesn\u0027t make any functional difference and is purely to help the\nnext patch to be simpler.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\n"
    },
    {
      "commit": "96e65306b81351b656835c15931d1d237b252f27",
      "tree": "af06187bebae44b48ca8e68a639a4ddc6b0a3509",
      "parents": [
        "0d7614f09c1ebdbaa1599a5aba7593f147bf96ee"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Sun Sep 02 00:28:19 2012 +0800"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Sep 04 17:04:45 2012 -0700"
      },
"message": "workqueue: UNBOUND -\u003e REBIND morphing in rebind_workers() should be atomic\n\nThe compiler may compile the following code into TWO write/modify\ninstructions.\n\n\tworker-\u003eflags \u0026\u003d ~WORKER_UNBOUND;\n\tworker-\u003eflags |\u003d WORKER_REBIND;\n\nso the other CPU may temporarily see worker-\u003eflags which doesn\u0027t have\neither WORKER_UNBOUND or WORKER_REBIND set and perform local wakeup\nprematurely.\n\nFix it by using a single explicit assignment via ACCESS_ONCE().\n\nBecause idle workers have another WORKER_NOT_RUNNING flag, this bug\ndoesn\u0027t exist for them; however, update it to use the same pattern for\nconsistency.\n\ntj: Applied the change to idle workers too and updated comments and\n    patch description a bit.\n\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: stable@vger.kernel.org\n"
    },
    {
      "commit": "57b30ae77bf00d2318df711ef9a4d2a9be0a3a2a",
      "tree": "d6e084bf0e2b82bb39302ee0e94e6f3f04762dbc",
      "parents": [
        "e7c2f967445dd2041f0f8e3179cca22bb8bb7f79"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Aug 21 13:18:24 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Aug 21 13:18:24 2012 -0700"
      },
      "message": "workqueue: reimplement cancel_delayed_work() using try_to_grab_pending()\n\ncancel_delayed_work() can\u0027t be called from IRQ handlers due to its use\nof del_timer_sync() and can\u0027t cancel work items which are already\ntransferred from timer to worklist.\n\nAlso, unlike other flush and cancel functions, a canceled delayed_work\nwould still point to the last associated cpu_workqueue.  If the\nworkqueue is destroyed afterwards and the work item is re-used on a\ndifferent workqueue, the queueing code can oops trying to dereference\nalready freed cpu_workqueue.\n\nThis patch reimplements cancel_delayed_work() using\ntry_to_grab_pending() and set_work_cpu_and_clear_pending().  This\nallows the function to be called from IRQ handlers and makes its\nbehavior consistent with other flush / cancel functions.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n"
    },
    {
      "commit": "e0aecdd874d78b7129a64b056c20e529e2c916df",
      "tree": "0eacde209b1f46beb5293537c85ab8217c7023f4",
      "parents": [
        "f991b318cc6627a493b0d317a565bb7c3271f36b"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Aug 21 13:18:24 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Aug 21 13:18:24 2012 -0700"
      },
      "message": "workqueue: use irqsafe timer for delayed_work\n\nUp to now, for delayed_works, try_to_grab_pending() couldn\u0027t be used\nfrom IRQ handlers because IRQs may happen while\ndelayed_work_timer_fn() is in progress leading to indefinite -EAGAIN.\n\nThis patch makes delayed_work use the new TIMER_IRQSAFE flag for\ndelayed_work-\u003etimer.  This makes try_to_grab_pending() and thus\nmod_delayed_work_on() safe to call from IRQ handlers.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "ae930e0f4e66fd540c6fbad9f1e2a7743d8b9afe",
      "tree": "88853ec727834081a79d56bb9829191ca6e243ec",
      "parents": [
        "606a5020b9bdceb20b4f43e11db0054afa349028"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 20 14:51:23 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 20 14:51:23 2012 -0700"
      },
      "message": "workqueue: gut system_nrt[_freezable]_wq()\n\nNow that all workqueues are non-reentrant, system[_freezable]_wq() are\nequivalent to system_nrt[_freezable]_wq().  Replace the latter with\nwrappers around system[_freezable]_wq().  The wrapping goes through\ninline functions so that __deprecated can be added easily.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "606a5020b9bdceb20b4f43e11db0054afa349028",
      "tree": "d5f65b7a94cd4c5987979a814178cc92cf4508d9",
      "parents": [
        "dbf2576e37da0fcc7aacbfbb9fd5d3de7888a3c1"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 20 14:51:23 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 20 14:51:23 2012 -0700"
      },
      "message": "workqueue: gut flush[_delayed]_work_sync()\n\nNow that all workqueues are non-reentrant, flush[_delayed]_work_sync()\nare equivalent to flush[_delayed]_work().  Drop the separate\nimplementation and make them thin wrappers around\nflush[_delayed]_work().\n\n* start_flush_work() no longer takes @wait_executing as the only left\n  user - flush_work() - always sets it to %true.\n\n* __cancel_work_timer() uses flush_work() instead of wait_on_work().\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "dbf2576e37da0fcc7aacbfbb9fd5d3de7888a3c1",
      "tree": "abbebfe5aa155bda6ea41ab00e7f2c417e1eee6b",
      "parents": [
        "044c782ce3a901fbd17cbe701c592f582381174d"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 20 14:51:23 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 20 14:51:23 2012 -0700"
      },
      "message": "workqueue: make all workqueues non-reentrant\n\nBy default, each per-cpu part of a bound workqueue operates separately\nand a work item may be executing concurrently on different CPUs.  The\nbehavior avoids some cross-cpu traffic but leads to subtle weirdities\nand not-so-subtle contortions in the API.\n\n* There\u0027s no sane usefulness in allowing a single work item to be\n  executed concurrently on multiple CPUs.  People just get the\n  behavior unintentionally and get surprised after learning about it.\n  Most either explicitly synchronize or use non-reentrant/ordered\n  workqueue but this is error-prone.\n\n* flush_work() can\u0027t wait for multiple instances of the same work item\n  on different CPUs.  If a work item is executing on cpu0 and then\n  queued on cpu1, flush_work() can only wait for the one on cpu1.\n\n  Unfortunately, work items can easily cross CPU boundaries\n  unintentionally when the queueing thread gets migrated.  This means\n  that if multiple queuers compete, flush_work() can\u0027t even guarantee\n  that the instance queued right before it is finished before\n  returning.\n\n* flush_work_sync() was added to work around some of the deficiencies\n  of flush_work().  In addition to the usual flushing, it ensures that\n  all currently executing instances are finished before returning.\n  This operation is expensive as it has to walk all CPUs and at the\n  same time fails to address the competing queuer case.\n\n  Incorrectly using flush_work() when flush_work_sync() is necessary\n  is an easy error to make and can lead to bugs which are difficult to\n  reproduce.\n\n* Similar problems exist for flush_delayed_work[_sync]().\n\nOther than the cross-cpu access concern, there\u0027s no benefit in\nallowing parallel execution and it\u0027s plain silly to have this level of\ncontortion for workqueue which is widely used from core code to\nextremely obscure drivers.\n\nThis patch makes all workqueues non-reentrant.  If a work item is\nexecuting on a different CPU when queueing is requested, it is always\nqueued to that CPU.  This guarantees that any given work item can be\nexecuting on one CPU at maximum and if a work item is queued and\nexecuting, both are on the same CPU.\n\nThe only behavior change which may affect workqueue users negatively\nis that non-reentrancy overrides the affinity specified by\nqueue_work_on().  On a reentrant workqueue, the affinity specified by\nqueue_work_on() is always followed.  Now, if the work item is\nexecuting on one of the CPUs, the work item will be queued there\nregardless of the requested affinity.  I\u0027ve reviewed all workqueue\nusers which request explicit affinity, and, fortunately, none seems to\nbe crazy enough to exploit parallel execution of the same work item.\n\nThis adds an additional busy_hash lookup if the work item was\npreviously queued on a different CPU.  This shouldn\u0027t be noticeable\nunder any sane workload.  Work item queueing isn\u0027t a very\nhigh-frequency operation and they don\u0027t jump across CPUs all the time.\nIn a micro benchmark to exaggerate this difference - measuring the\ntime it takes for two work items to repeatedly jump between two CPUs a\nnumber (10M) of times with busy_hash table densely populated, the\ndifference was around 3%.\n\nWhile the overhead is measurable, it is only visible in pathological\ncases and the difference isn\u0027t huge.  This change brings much needed\nsanity to workqueue and makes its behavior consistent with timer.  I\nthink this is the right tradeoff to make.\n\nThis enables significant simplification of workqueue API.\nSimplification patches will follow.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "044c782ce3a901fbd17cbe701c592f582381174d",
      "tree": "d876983bb2930219181f09b900ce42c61782e4e5",
      "parents": [
        "7635d2fd7f0fa63b6ec03050614c314d7139f14a"
      ],
      "author": {
        "name": "Valentin Ilie",
        "email": "valentin.ilie@gmail.com",
        "time": "Sun Aug 19 00:52:42 2012 +0300"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 20 13:37:07 2012 -0700"
      },
      "message": "workqueue: fix checkpatch issues\n\nFixed some checkpatch warnings.\n\ntj: adapted to wq/for-3.7 and massaged pr_xxx() format strings a bit.\n\nSigned-off-by: Valentin Ilie \u003cvalentin.ilie@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nLKML-Reference: \u003c1345326762-21747-1-git-send-email-valentin.ilie@gmail.com\u003e\n"
    },
    {
      "commit": "7635d2fd7f0fa63b6ec03050614c314d7139f14a",
      "tree": "8d8d0387e6f791ed67ad5d23dfe0fb93a1615337",
      "parents": [
        "e2b6a6d570f070aa90ac00d2d10b1488512f8520"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Wed Aug 15 23:25:41 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Aug 16 14:21:16 2012 -0700"
      },
      "message": "workqueue: use system_highpri_wq for unbind_work\n\nTo speed cpu down processing up, use system_highpri_wq.\nAs scheduling priority of workers on it is higher than system_wq and\nit is not contended by other normal works on this cpu, work on it\nis processed faster than system_wq.\n\ntj: CPU up/downs care quite a bit about latency these days.  This\n    shouldn\u0027t hurt anything and makes sense.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "e2b6a6d570f070aa90ac00d2d10b1488512f8520",
      "tree": "4f61d435e55f764b1bd990540d533fd68d9fbaa2",
      "parents": [
        "1aabe902ca3638d862bf0dad5a697d3a8e046b0a"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Wed Aug 15 23:25:40 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Aug 16 14:21:15 2012 -0700"
      },
      "message": "workqueue: use system_highpri_wq for highpri workers in rebind_workers()\n\nIn rebind_workers(), we insert a work item to rebind busy workers to the cpu.\nCurrently, in this case, we use only system_wq. This makes a possible\nerror situation as there is a mismatch between cwq-\u003epool and worker-\u003epool.\n\nTo prevent this, we should use system_highpri_wq for highpri workers\nto match these. This implements it.\n\ntj: Rephrased comment a bit.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "1aabe902ca3638d862bf0dad5a697d3a8e046b0a",
      "tree": "9cd5e0db2c42e83de8dfadb38e22b57d8bd24c4b",
      "parents": [
        "e42986de481238204f6e0b0f4434da428895c20b"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Wed Aug 15 23:25:39 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Aug 16 14:21:15 2012 -0700"
      },
      "message": "workqueue: introduce system_highpri_wq\n\nCommit 3270476a6c0ce322354df8679652f060d66526dc (\u0027workqueue: reimplement\nWQ_HIGHPRI using a separate worker_pool\u0027) introduced a separate worker pool\nfor HIGHPRI. When we handle busy workers for gcwq, each can be a normal worker\nor a highpri worker. But we don\u0027t consider this difference in rebind_workers();\nwe use just system_wq for highpri workers. This makes a mismatch between\ncwq-\u003epool and worker-\u003epool.\n\nIt doesn\u0027t cause an error in the current implementation, but could in the future.\nNow, we introduce system_highpri_wq to use the proper cwq for highpri workers\nin rebind_workers(). The following patch fixes this issue properly.\n\ntj: Even apart from rebinding, having system_highpri_wq generally\n    makes sense.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "e42986de481238204f6e0b0f4434da428895c20b",
      "tree": "613150adf76500507ca69d59d5448fa05affbc98",
      "parents": [
        "b75cac9368fa91636e17d0f7950b35d837154e14"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Wed Aug 15 23:25:38 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Aug 16 14:21:15 2012 -0700"
      },
      "message": "workqueue: change value of lcpu in __queue_delayed_work_on()\n\nWe assign a cpu id into the work struct\u0027s data field in __queue_delayed_work_on().\nIn the current implementation, when a work item comes in for the first time,\nthe currently running cpu id is assigned.\nIf we do __queue_delayed_work_on() with CPU A on CPU B,\n__queue_work() invoked in delayed_work_timer_fn() goes into\nthe following sub-optimal path in case of WQ_NON_REENTRANT.\n\n\tgcwq \u003d get_gcwq(cpu);\n\tif (wq-\u003eflags \u0026 WQ_NON_REENTRANT \u0026\u0026\n\t\t(last_gcwq \u003d get_work_gcwq(work)) \u0026\u0026 last_gcwq !\u003d gcwq) {\n\nChange lcpu to @cpu and rechange lcpu to the local cpu if lcpu is WORK_CPU_UNBOUND.\nThis is sufficient to prevent going into the sub-optimal path.\n\ntj: Slightly rephrased the comment.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "b75cac9368fa91636e17d0f7950b35d837154e14",
      "tree": "7f5fcdb6f57da351732c46d67b5d9d001fa54a0b",
      "parents": [
        "330dad5b9c9555632578c00e94e85c122561c5c7"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Wed Aug 15 23:25:37 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Aug 16 14:21:15 2012 -0700"
      },
      "message": "workqueue: correct req_cpu in trace_workqueue_queue_work()\n\nWhen we trace workqueue_queue_work(), it records the requested cpu.\nBut, if !(@wq-\u003eflag \u0026 WQ_UNBOUND) and @cpu is WORK_CPU_UNBOUND,\nthe requested cpu is changed to the local cpu.\nIn case of @wq-\u003eflag \u0026 WQ_UNBOUND, the above change does not occur,\ntherefore it is reasonable to correct it.\n\nUse a temporary local variable for storing the requested cpu.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "330dad5b9c9555632578c00e94e85c122561c5c7",
      "tree": "bc26f30c0d67f3a80cf5a7c97caaec64f88d9c6a",
      "parents": [
        "23657bb192f14b789e4c478def8f11ecc95b4f6c"
      ],
      "author": {
        "name": "Joonsoo Kim",
        "email": "js1304@gmail.com",
        "time": "Wed Aug 15 23:25:36 2012 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Thu Aug 16 14:21:15 2012 -0700"
      },
      "message": "workqueue: use enum value to set array size of pools in gcwq\n\nCommit 3270476a6c0ce322354df8679652f060d66526dc (\u0027workqueue: reimplement\nWQ_HIGHPRI using a separate worker_pool\u0027) introduced a separate worker_pool\nfor HIGHPRI. Although there is the NR_WORKER_POOLS enum value which represents\nthe size of the pools, the definition of worker_pool in gcwq doesn\u0027t use it.\nUsing it makes the code robust and prevents future mistakes.\nSo change the code to use this enum value.\n\nSigned-off-by: Joonsoo Kim \u003cjs1304@gmail.com\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "23657bb192f14b789e4c478def8f11ecc95b4f6c",
      "tree": "53d92c542c219d60ccef2ada045235f5e7076863",
      "parents": [
        "1265057fa02c7bed3b6d9ddc8a2048065a370364"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 13 17:08:19 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 13 17:08:19 2012 -0700"
      },
      "message": "workqueue: add missing wmb() in clear_work_data()\n\nAny operation which clears PENDING should be preceded by a wmb to\nguarantee that the next PENDING owner sees all the changes made before\nPENDING release.\n\nThere are only two places where PENDING is cleared -\nset_work_cpu_and_clear_pending() and clear_work_data().  The caller of\nthe former already does smp_wmb() but the latter doesn\u0027t have any.\n\nMove the wmb above set_work_cpu_and_clear_pending() into it and add\none to clear_work_data().\n\nThere hasn\u0027t been any report related to this issue, and, given how\nclear_work_data() is used, it is extremely unlikely to have caused any\nactual problems on any architecture.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Oleg Nesterov \u003coleg@redhat.com\u003e\n"
    },
    {
      "commit": "1265057fa02c7bed3b6d9ddc8a2048065a370364",
      "tree": "b10e631ca6157103fcc71188e972b06e18c3570f",
      "parents": [
        "41f63c5359d14ca995172b8f6eaffd93f60fec54"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Aug 08 09:38:42 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Mon Aug 13 16:27:55 2012 -0700"
      },
      "message": "workqueue: fix CPU binding of flush_delayed_work[_sync]()\n\ndelayed_work encodes the workqueue to use and the last CPU in\ndelayed_work-\u003ework.data while it\u0027s on timer.  The target CPU is\nimplicitly recorded as the CPU the timer is queued on and\ndelayed_work_timer_fn() queues delayed_work-\u003ework to the CPU it is\nrunning on.\n\nUnfortunately, this leaves flush_delayed_work[_sync]() no way to find\nout which CPU the delayed_work was queued for when they try to\nre-queue after killing the timer.  Currently, it chooses the local CPU\nflush is running on.  This can unexpectedly move a delayed_work queued\non a specific CPU to another CPU and lead to subtle errors.\n\nThere isn\u0027t much point in trying to save several bytes in struct\ndelayed_work, which is already close to a hundred bytes on 64bit with\nall debug options turned off.  This patch adds delayed_work-\u003ecpu to\nremember the CPU it\u0027s queued for.\n\nNote that if the timer is migrated during CPU down, the work item\ncould be queued to the downed global_cwq after this change.  As a\ndetached global_cwq behaves like an unbound one, this doesn\u0027t change\nmuch for the delayed_work.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\n"
    },
    {
      "commit": "8376fe22c7e79c7e90857d39f82aeae6cad6c4b8",
      "tree": "3a77fda11324a25abfe1ffe3ea0eba28a4fac03f",
      "parents": [
        "bbb68dfaba73e8338fe0f1dc711cc1d261daec87"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:47 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:47 2012 -0700"
      },
      "message": "workqueue: implement mod_delayed_work[_on]()\n\nWorkqueue was lacking a mechanism to modify the timeout of an already\npending delayed_work.  delayed_work users have been working around\nthis using several methods - using an explicit timer + work item,\nmessing directly with delayed_work-\u003etimer, and canceling before\nre-queueing, all of which are error-prone and/or ugly.\n\nThis patch implements mod_delayed_work[_on]() which behaves similarly\nto mod_timer() - if the delayed_work is idle, it\u0027s queued with the\ngiven delay; otherwise, its timeout is modified to the new value.\nZero @delay guarantees immediate execution.\n\nv2: Updated to reflect try_to_grab_pending() changes.  Now safe to be\n    called from bh context.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\n"
    },
    {
      "commit": "bbb68dfaba73e8338fe0f1dc711cc1d261daec87",
      "tree": "8cafa2786991ea8dc2b8da5005b2c1d92aa204ac",
      "parents": [
        "36e227d242f9ec7cb4a8e968561b3b26e3d8b5d1"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "message": "workqueue: mark a work item being canceled as such\n\nThere can be two reasons try_to_grab_pending() can fail with -EAGAIN.\nOne is when someone else is queueing or dequeueing the work item.  With\nthe previous patches, it is guaranteed that PENDING and queued state\nwill soon agree making it safe to busy-retry in this case.\n\nThe other is if multiple __cancel_work_timer() invocations are racing\none another.  __cancel_work_timer() grabs PENDING and then waits for\nrunning instances of the target work item on all CPUs while holding\nPENDING and !queued.  try_to_grab_pending() invoked from another task\nwill keep returning -EAGAIN while the current owner is waiting.\n\nNot distinguishing the two cases is okay because __cancel_work_timer()\nis the only user of try_to_grab_pending() and it invokes\nwait_on_work() whenever grabbing fails.  For the first case, busy\nlooping should be fine but wait_on_work() doesn\u0027t cause any critical\nproblem.  For the latter case, the new contender usually waits for the\nsame condition as the current owner, so no unnecessarily extended\nbusy-looping happens.  Combined, these make __cancel_work_timer()\ntechnically correct even without irq protection while grabbing PENDING\nor distinguishing the two different cases.\n\nWhile the current code is technically correct, not distinguishing the\ntwo cases makes it difficult to use try_to_grab_pending() for other\npurposes than canceling because it\u0027s impossible to tell whether it\u0027s\nsafe to busy-retry grabbing.\n\nThis patch adds a mechanism to mark a work item being canceled.\ntry_to_grab_pending() now disables irq on success and returns -EAGAIN\nto indicate that grabbing failed but PENDING and queued states are\ngonna agree soon and it\u0027s safe to busy-loop.  It returns -ENOENT if\nthe work item is being canceled and it may stay PENDING \u0026\u0026 !queued for\nan arbitrary amount of time.\n\n__cancel_work_timer() is modified to mark the work canceling with\nWORK_OFFQ_CANCELING after grabbing PENDING, thus making\ntry_to_grab_pending() fail with -ENOENT instead of -EAGAIN.  Also, it\ninvokes wait_on_work() iff grabbing failed with -ENOENT.  This isn\u0027t\nnecessary for correctness but makes it consistent with other future\nusers of try_to_grab_pending().\n\nv2: try_to_grab_pending() was testing preempt_count() to ensure that\n    the caller has disabled preemption.  This triggers spuriously if\n    !CONFIG_PREEMPT_COUNT.  Use preemptible() instead.  Reported by\n    Fengguang Wu.\n\nv3: Updated so that try_to_grab_pending() disables irq on success\n    rather than requiring preemption disabled by the caller.  This\n    makes busy-looping easier and will allow try_to_grab_pending() to\n    be used from bh/irq contexts.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\n"
    },
    {
      "commit": "36e227d242f9ec7cb4a8e968561b3b26e3d8b5d1",
      "tree": "a35f1711123a22e90e6c06217cead66933404a3d",
      "parents": [
        "7beb2edf44b4dea820c733046ad7666d092bb4b6"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "message": "workqueue: reorganize try_to_grab_pending() and __cancel_work_timer()\n\n* Use bool @is_dwork instead of @timer and let try_to_grab_pending()\n  use to_delayed_work() to determine the delayed_work address.\n\n* Move timer handling from __cancel_work_timer() to\n  try_to_grab_pending().\n\n* Make try_to_grab_pending() use -EAGAIN instead of -1 for\n  busy-looping and drop the ret local variable.\n\n* Add proper function comment to try_to_grab_pending().\n\nThis makes the code a bit easier to understand and will ease further\nchanges.  This patch doesn\u0027t make any functional change.\n\nv2: Use @is_dwork instead of @timer.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "7beb2edf44b4dea820c733046ad7666d092bb4b6",
      "tree": "ef264acb53bf3e0c2349792bceb6a19806d8867c",
      "parents": [
        "b5490077274482efde57a50b060b99bc839acd45"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "message": "workqueue: factor out __queue_delayed_work() from queue_delayed_work_on()\n\nThis is to prepare for mod_delayed_work[_on]() and doesn\u0027t cause any\nfunctional difference.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "b5490077274482efde57a50b060b99bc839acd45",
      "tree": "dae1b67fa7b1c18d116fe97765bad1e52786e7aa",
      "parents": [
        "bf4ede014ea886b71ef71368738da35b316cb7c0"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "message": "workqueue: introduce WORK_OFFQ_FLAG_*\n\nLow WORK_STRUCT_FLAG_BITS bits of work_struct-\u003edata contain\nWORK_STRUCT_FLAG_* and flush color.  If the work item is queued, the\nrest point to the cpu_workqueue with WORK_STRUCT_CWQ set; otherwise,\nWORK_STRUCT_CWQ is clear and the bits contain the last CPU number -\neither a real CPU number or one of WORK_CPU_*.\n\nScheduled addition of mod_delayed_work[_on]() requires an additional\nflag, which is used only while a work item is off queue.  There are\nmore than enough bits to represent off-queue CPU number on both 32 and\n64bits.  This patch introduces WORK_OFFQ_FLAG_* which occupy the lower\npart of the @work-\u003edata high bits while off queue.  This patch doesn\u0027t\ndefine any actual OFFQ flag yet.\n\nOff-queue CPU number is now shifted by WORK_OFFQ_CPU_SHIFT, which adds\nthe number of bits used by OFFQ flags to WORK_STRUCT_FLAG_SHIFT, to\nmake room for OFFQ flags.\n\nTo avoid shift width warning with large WORK_OFFQ_FLAG_BITS, ulong\ncast is added to WORK_STRUCT_NO_CPU and, just in case, BUILD_BUG_ON()\nto check that there are enough bits to accommodate off-queue CPU number\nis added.\n\nThis patch doesn\u0027t make any functional difference.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "bf4ede014ea886b71ef71368738da35b316cb7c0",
      "tree": "9ae8f14883406241c54e5d0febc3c27258f4b45a",
      "parents": [
        "715f1300802e6eaefa85f6cfc70ae99af3d5d497"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "message": "workqueue: move try_to_grab_pending() upwards\n\ntry_to_grab_pending() will be used by to-be-implemented\nmod_delayed_work[_on]().  Move try_to_grab_pending() and related\nfunctions above queueing functions.\n\nThis patch only moves functions around.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "715f1300802e6eaefa85f6cfc70ae99af3d5d497",
      "tree": "1348231ae08bcb722e860aebe2e46a0565a86fd7",
      "parents": [
        "57469821fd5c61f25f783827d7334063cff67d65"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:46 2012 -0700"
      },
      "message": "workqueue: fix zero @delay handling of queue_delayed_work_on()\n\nIf @delay is zero and the delayed_work is idle, queue_delayed_work()\nqueues it for immediate execution; however, queue_delayed_work_on()\nlacks this logic and always goes through timer regardless of @delay.\n\nThis patch moves 0 @delay handling logic from queue_delayed_work() to\nqueue_delayed_work_on() so that both functions behave the same.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "57469821fd5c61f25f783827d7334063cff67d65",
      "tree": "e77ead09d823125bc4dc9a9cd49864f9340ad363",
      "parents": [
        "d8e794dfd51c368ed3f686b7f4172830b60ae47b"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "message": "workqueue: unify local CPU queueing handling\n\nQueueing functions have been using different methods to determine the\nlocal CPU.\n\n* queue_work() superfluously uses get/put_cpu() to acquire and hold the\n  local CPU across queue_work_on().\n\n* delayed_work_timer_fn() uses smp_processor_id().\n\n* queue_delayed_work() calls queue_delayed_work_on() with -1 @cpu\n  which is interpreted as the local CPU.\n\n* flush_delayed_work[_sync]() were using raw_smp_processor_id().\n\n* __queue_work() interprets %WORK_CPU_UNBOUND as local CPU if the\n  target workqueue is a bound one but nobody uses this.\n\nThis patch converts all functions to uniformly use %WORK_CPU_UNBOUND\nto indicate local CPU and use the local binding feature of\n__queue_work().  unlikely() is dropped from %WORK_CPU_UNBOUND handling\nin __queue_work().\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "d8e794dfd51c368ed3f686b7f4172830b60ae47b",
      "tree": "72e930ab0a14bf50fa1dc6802722483247b72806",
      "parents": [
        "8930caba3dbdd8b86dd6934a5920bf61b53a931e"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "message": "workqueue: set delayed_work-\u003etimer function on initialization\n\ndelayed_work-\u003etimer.function is currently initialized during\nqueue_delayed_work_on().  Export delayed_work_timer_fn() and set\ndelayed_work timer function during delayed_work initialization\ntogether with other fields.\n\nThis ensures the timer function is always valid on an initialized\ndelayed_work.  This is to help mod_delayed_work() implementation.\n\nTo detect delayed_work users which diddle with the internal timer,\ntrigger WARN if timer function doesn\u0027t match on queue.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "8930caba3dbdd8b86dd6934a5920bf61b53a931e",
      "tree": "1ef91c823238ffe3e26af1d1d48678f299185058",
      "parents": [
        "959d1af8cffc8fd38ed53e8be1cf4ab8782f9c00"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "message": "workqueue: disable irq while manipulating PENDING\n\nQueueing operations use WORK_STRUCT_PENDING_BIT to synchronize access\nto the target work item.  They first try to claim the bit and proceed\nwith queueing only after that succeeds and there\u0027s a window between\nPENDING being set and the actual queueing where the task can be\ninterrupted or preempted.\n\nThere\u0027s also a similar window in process_one_work() when clearing\nPENDING.  A work item is dequeued, gcwq-\u003elock is released and then\nPENDING is cleared and the worker might get interrupted or preempted\nbetween releasing gcwq-\u003elock and clearing PENDING.\n\ncancel[_delayed]_work_sync() tries to claim or steal PENDING.  The\nfunction assumes that a work item with PENDING is either queued or in\nthe process of being [de]queued.  In the latter case, it busy-loops\nuntil either the work item loses PENDING or is queued.  If canceling\ncoincides with the above described interrupts or preemptions, the\ncanceling task will busy-loop while the queueing or executing task is\npreempted.\n\nThis patch keeps irq disabled across claiming PENDING and actual\nqueueing and moves PENDING clearing in process_one_work() inside\ngcwq-\u003elock so that busy looping from PENDING \u0026\u0026 !queued doesn\u0027t wait\nfor interrupted/preempted tasks.  Note that, in process_one_work(),\nsetting last CPU and clearing PENDING got merged into single\noperation.\n\nThis removes possible long busy-loops and will allow using\ntry_to_grab_pending() from bh and irq contexts.\n\nv2: __queue_work() was testing preempt_count() to ensure that the\n    caller has disabled preemption.  This triggers spuriously if\n    !CONFIG_PREEMPT_COUNT.  Use preemptible() instead.  Reported by\n    Fengguang Wu.\n\nv3: Disable irq instead of preemption.  IRQ will be disabled while\n    grabbing gcwq-\u003elock later anyway and this allows using\n    try_to_grab_pending() from bh and irq contexts.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Oleg Nesterov \u003coleg@redhat.com\u003e\nCc: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\n"
    },
    {
      "commit": "959d1af8cffc8fd38ed53e8be1cf4ab8782f9c00",
      "tree": "04ca9a7c88fe42f21fa4a6a209a2c16236615f45",
      "parents": [
        "d4283e9378619c14dc3826a6b0527eb5d967ffde"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:45 2012 -0700"
      },
      "message": "workqueue: add missing smp_wmb() in process_one_work()\n\nWORK_STRUCT_PENDING is used to claim ownership of a work item and\nprocess_one_work() releases it before starting execution.  When\nsomeone else grabs PENDING, all pre-release updates to the work item\nshould be visible and all updates made by the new owner should happen\nafterwards.\n\nGrabbing PENDING uses test_and_set_bit() and thus has a full barrier;\nhowever, clearing doesn\u0027t have a matching wmb.  Given the preceding\nspin_unlock and use of clear_bit, I don\u0027t believe this can be a\nproblem on an actual machine and there hasn\u0027t been any related report\nbut it still is theoretically possible for clear_pending to permeate\nupwards and happen before the work-\u003eentry update.\n\nAdd an explicit smp_wmb() before work_clear_pending().\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: Oleg Nesterov \u003coleg@redhat.com\u003e\nCc: stable@vger.kernel.org\n"
    },
    {
      "commit": "d4283e9378619c14dc3826a6b0527eb5d967ffde",
      "tree": "1b1e401e51021c90407fae58e000c183a0e6c0e2",
      "parents": [
        "0a13c00e9d4502b8e3fd9260ce781758ff2c3970"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:44 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:44 2012 -0700"
      },
      "message": "workqueue: make queueing functions return bool\n\nAll queueing functions return 1 on success, 0 if the work item was\nalready pending.  Update them to return bool instead.  This better\nsignifies that they don\u0027t return 0 / -errno.\n\nThis is cleanup and doesn\u0027t cause any functional difference.\n\nWhile at it, fix comment opening for schedule_work_on().\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "0a13c00e9d4502b8e3fd9260ce781758ff2c3970",
      "tree": "4233ef42fad89b4c42e00d8fed76112ce28390ba",
      "parents": [
        "0d7614f09c1ebdbaa1599a5aba7593f147bf96ee"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:44 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Fri Aug 03 10:30:44 2012 -0700"
      },
      "message": "workqueue: reorder queueing functions so that _on() variants are on top\n\nCurrently, queue/schedule[_delayed]_work_on() are located below the\ncounterpart without the _on postfix even though the latter is usually\nimplemented using the former.  Swap them.\n\nThis is cleanup and doesn\u0027t cause any functional difference.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n"
    },
    {
      "commit": "6fec10a1a5866dda3cd6a825a521fc7c2f226ba5",
      "tree": "f45c465a2d5f04e5052324efd114ac07cd668a41",
      "parents": [
        "46f3d976213452350f9d10b0c2780c2681f7075b"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Sun Jul 22 10:16:34 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Sun Jul 22 10:16:34 2012 -0700"
      },
      "message": "workqueue: fix spurious CPU locality WARN from process_one_work()\n\n25511a4776 \"workqueue: reimplement CPU online rebinding to handle idle\nworkers\" added CPU locality sanity check in process_one_work().  It\ntriggers if a worker is executing on a different CPU without UNBOUND\nor REBIND set.\n\nThis works for all normal workers but rescuers can trigger this\nspuriously when they\u0027re serving the unbound or a disassociated\nglobal_cwq - rescuers don\u0027t have either flag set and thus their\ngcwq-\u003ecpu can be a different value including %WORK_CPU_UNBOUND.\n\nFix it by additionally testing %GCWQ_DISASSOCIATED.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nReported-by: \"Paul E. McKenney\" \u003cpaulmck@linux.vnet.ibm.com\u003e\nLKML-Reference: \u003c20120721213656.GA7783@linux.vnet.ibm.com\u003e\n"
    },
    {
      "commit": "8db25e7891a47e03db6f04344a9c92be16e391bb",
      "tree": "e093119c71e655b54b159fed76b654a437b1ff30",
      "parents": [
        "628c78e7ea19d5b70d2b6a59030362168cdbe1ad"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:28 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:28 2012 -0700"
      },
      "message": "workqueue: simplify CPU hotplug code\n\nWith trustee gone, CPU hotplug code can be simplified.\n\n* gcwq_claim/release_management() now grab and release gcwq lock too\n  respectively and gained _and_lock and _and_unlock postfixes.\n\n* All CPU hotplug logic was implemented in workqueue_cpu_callback()\n  which was called by workqueue_cpu_up/down_callback() for the correct\n  priority.  This was because up and down paths shared a lot of logic,\n  which is no longer true.  Remove workqueue_cpu_callback() and move\n  all hotplug logic into the two actual callbacks.\n\nThis patch doesn\u0027t make any functional changes.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "628c78e7ea19d5b70d2b6a59030362168cdbe1ad",
      "tree": "7867a9f82aae3d31c40356f32ae24223ae0ddf0c",
      "parents": [
        "3ce63377305b694f53e7dd0c72907591c5344224"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "message": "workqueue: remove CPU offline trustee\n\nWith the previous changes, a disassociated global_cwq now can run as\nan unbound one on its own - it can create workers as necessary to\ndrain remaining works after the CPU has been brought down and manage\nthe number of workers using the usual idle timer mechanism, making\ntrustee completely redundant except for the actual unbinding\noperation.\n\nThis patch removes the trustee and lets a disassociated global_cwq\nmanage itself.  Unbinding is moved to a work item (for CPU affinity)\nwhich is scheduled and flushed from CPU_DOWN_PREPARE.\n\nThis patch moves nr_running clearing outside gcwq and manager locks to\nsimplify the code.  As nr_running is unused at that point, this is\nsafe.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "3ce63377305b694f53e7dd0c72907591c5344224",
      "tree": "bee43bee96418ebdff5f7ad678584628fd86c52e",
      "parents": [
        "25511a477657884d2164f338341fa89652610507"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "message": "workqueue: don\u0027t butcher idle workers on an offline CPU\n\nCurrently, during CPU offlining, after all pending work items are\ndrained, the trustee butchers all workers.  Also, on CPU onlining\nfailure, workqueue_cpu_callback() ensures that the first idle worker\nis destroyed.  Combined, these guarantee that an offline CPU doesn\u0027t\nhave any worker for it once all the lingering work items are finished.\n\nThis guarantee isn\u0027t really necessary and makes CPU on/offlining more\nexpensive than it needs to be, especially for platforms which use CPU\nhotplug for powersaving.\n\nThis patch removes idle worker butchering from the trustee and lets a\nCPU which failed onlining keep the created first worker.  The first\nworker is created if the CPU doesn\u0027t have any during CPU_DOWN_PREPARE\nand started right away.  If onlining succeeds, the rebind_workers()\ncall in CPU_ONLINE will rebind it like any other workers.  If onlining\nfails, the worker is left alone till the next try.\n\nThis makes CPU hotplugs cheaper by allowing global_cwqs to keep\nworkers across them and simplifies code.\n\nNote that trustee doesn\u0027t re-arm the idle timer when it\u0027s done and thus\nthe disassociated global_cwq will keep all workers until it comes back\nonline.  This will be improved by further patches.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "25511a477657884d2164f338341fa89652610507",
      "tree": "dbea343f762f154c28b6db423f0220f090d94d60",
      "parents": [
        "bc2ae0f5bb2f39e6db06a62f9d353e4601a332a1"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "message": "workqueue: reimplement CPU online rebinding to handle idle workers\n\nCurrently, if there are workers left when a CPU is being brought back\nonline, the trustee kills all idle workers and schedules rebind_work\nso that they re-bind to the CPU after the currently executing work is\nfinished.  This works for busy workers because concurrency management\ndoesn\u0027t try to wake them up from scheduler callbacks, which require\nthe target task to be on the local run queue.  The busy worker bumps\nthe concurrency counter appropriately as it clears WORKER_UNBOUND from\nthe rebind work item and it\u0027s bound to the CPU before returning to the\nidle state.\n\nTo reduce CPU on/offlining overhead (as many embedded systems use it\nfor powersaving) and simplify the code path, workqueue is planned to\nbe modified to retain idle workers across CPU on/offlining.  This\npatch reimplements CPU online rebinding such that it can also handle\nidle workers.\n\nAs noted earlier, due to the local wakeup requirement, rebinding idle\nworkers is tricky.  All idle workers must be re-bound before scheduler\ncallbacks are enabled.  This is achieved by interlocking idle\nre-binding.  Idle workers are requested to re-bind and then hold until\nall idle re-binding is complete so that no bound worker starts\nexecuting a work item.  Only after all idle workers are re-bound and\nparked, CPU_ONLINE proceeds to release them and queue the rebind work\nitem to busy workers, thus guaranteeing scheduler callbacks aren\u0027t\ninvoked until all idle workers are ready.\n\nworker_rebind_fn() is renamed to busy_worker_rebind_fn() and\nidle_worker_rebind() for idle workers is added.  Rebinding logic is\nmoved to rebind_workers() and is now called from CPU_ONLINE after\nflushing trustee.  While at it, add a CPU sanity check in\nworker_thread().\n\nNote that now a worker may become idle or the manager between trustee\nrelease and rebinding during CPU_ONLINE.  As the previous patch\nupdated create_worker() so that it can be used by the regular manager\nwhile unbound and this patch implements idle re-binding, this is safe.\n\nThis prepares for removal of trustee and keeping idle workers across\nCPU hotplugs.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "bc2ae0f5bb2f39e6db06a62f9d353e4601a332a1",
      "tree": "3f1aa1f72566ac67234799fdd811ba63297de33c",
      "parents": [
        "6037315269d62bf967286ae2670fdd6b6acedab9"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "message": "workqueue: drop @bind from create_worker()\n\nCurrently, create_worker()\u0027s callers are responsible for deciding\nwhether the newly created worker should be bound to the associated CPU\nand create_worker() sets WORKER_UNBOUND only for the workers for the\nunbound global_cwq.  Creation during normal operation is always via\nmaybe_create_worker() and @bind is true.  For workers created during\nhotplug, @bind is false.\n\nThe normal operation path is planned to be used even while the CPU is\ngoing through hotplug operations or offline and this static decision\nwon\u0027t work.\n\nDrop @bind from create_worker() and decide whether to bind by looking\nat GCWQ_DISASSOCIATED.  create_worker() will also set WORKER_UNBOUND\nautomatically if disassociated.  To avoid flipping GCWQ_DISASSOCIATED\nwhile create_worker() is in progress, the flag is now allowed to be\nchanged only while holding all manager_mutexes on the global_cwq.\n\nThis requires that GCWQ_DISASSOCIATED is not cleared behind trustee\u0027s\nback.  CPU_ONLINE no longer clears DISASSOCIATED before flushing\ntrustee, which clears DISASSOCIATED before rebinding remaining workers\nif asked to release.  For cases where trustee isn\u0027t around, CPU_ONLINE\nclears DISASSOCIATED after flushing trustee.  Also, now, first_idle\nhas UNBOUND set on creation which is explicitly cleared by CPU_ONLINE\nwhile binding it.  These convolutions will soon be removed by further\nsimplification of the CPU hotplug path.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "6037315269d62bf967286ae2670fdd6b6acedab9",
      "tree": "c476298b57c0a33aa7fe3c898d62ce17eb11d2ad",
      "parents": [
        "403c821d452c03be4ced571ac91339a9d3631b17"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "message": "workqueue: use mutex for global_cwq manager exclusion\n\nPOOL_MANAGING_WORKERS is used to ensure that at most one worker takes\nthe manager role at any given time on a given global_cwq.  Trustee\nlater hitched on it to assume the manager role, adding a blocking wait\nfor the bit.  As trustee already needed a custom wait mechanism,\nwaiting for MANAGING_WORKERS was rolled into the same mechanism.\n\nTrustee is scheduled to be removed.  This patch separates out the\nMANAGING_WORKERS wait into a per-pool mutex.  Workers use\nmutex_trylock() to test for the manager role and trustee uses\nmutex_lock() to claim manager roles.\n\ngcwq_claim/release_management() helpers are added to grab and release\nmanager roles of all pools on a global_cwq.  gcwq_claim_management()\nalways grabs pool manager mutexes in ascending pool index order and\nuses the pool index as the lockdep subclass.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "403c821d452c03be4ced571ac91339a9d3631b17",
      "tree": "022cf4ff47b9652ca550498dc896672c1cec8d05",
      "parents": [
        "f2d5a0ee06c1813f985bb9386f3ccc0d0315720f"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:27 2012 -0700"
      },
      "message": "workqueue: ROGUE workers are UNBOUND workers\n\nCurrently, WORKER_UNBOUND is used to mark workers for the unbound\nglobal_cwq and WORKER_ROGUE is used to mark workers for disassociated\nper-cpu global_cwqs.  Both are used to make the marked worker skip\nconcurrency management and the only place they make any difference is\nin worker_enter_idle() where WORKER_ROGUE is used to skip scheduling\nidle timer, which can easily be replaced with trustee state testing.\n\nThis patch replaces WORKER_ROGUE with WORKER_UNBOUND and drops\nWORKER_ROGUE.  This is to prepare for removing trustee and handling\ndisassociated global_cwqs as unbound.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "f2d5a0ee06c1813f985bb9386f3ccc0d0315720f",
      "tree": "4207975fe000f95931b0c6876657db5b13f92b73",
      "parents": [
        "6575820221f7a4dd6eadecf7bf83cdd154335eda"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:26 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:26 2012 -0700"
      },
      "message": "workqueue: drop CPU_DYING notifier operation\n\nWorkqueue used CPU_DYING notification to mark GCWQ_DISASSOCIATED.\nThis was necessary because workqueue\u0027s CPU_DOWN_PREPARE happened\nbefore other DOWN_PREPARE notifiers and workqueue needed to stay\nassociated across the rest of DOWN_PREPARE.\n\nAfter the previous patch, workqueue\u0027s DOWN_PREPARE happens after\nothers and can set GCWQ_DISASSOCIATED directly.  Drop CPU_DYING and\nlet the trustee set GCWQ_DISASSOCIATED after disabling concurrency\nmanagement.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    },
    {
      "commit": "6575820221f7a4dd6eadecf7bf83cdd154335eda",
      "tree": "2f9061b4eb1b6cf5a4b70acc45cb46a1a287066a",
      "parents": [
        "3270476a6c0ce322354df8679652f060d66526dc"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:26 2012 -0700"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Jul 17 12:39:26 2012 -0700"
      },
      "message": "workqueue: perform cpu down operations from low priority cpu_notifier()\n\nCurrently, all workqueue cpu hotplug operations run off\nCPU_PRI_WORKQUEUE which is higher than normal notifiers.  This is to\nensure that workqueue is up and running while bringing up a CPU before\nother notifiers try to use workqueue on the CPU.\n\nPer-cpu workqueues are supposed to remain working and bound to the CPU\nfor normal CPU_DOWN_PREPARE notifiers.  This holds mostly true even\nwith workqueue offlining running with higher priority because\nworkqueue CPU_DOWN_PREPARE only creates a bound trustee thread which\nruns the per-cpu workqueue without concurrency management, without\nexplicitly detaching the existing workers.\n\nHowever, if the trustee needs to create new workers, it creates\nunbound workers which may wander off to other CPUs while\nCPU_DOWN_PREPARE notifiers are in progress.  Furthermore, if the CPU\ndown is cancelled, the per-CPU workqueue may end up with workers which\naren\u0027t bound to the CPU.\n\nWhile reliably reproducible with a convoluted artificial test-case\ninvolving scheduling and flushing CPU burning work items from CPU down\nnotifiers, this isn\u0027t very likely to happen in the wild, and, even\nwhen it happens, the effects are likely to be hidden by the following\nsuccessful CPU down.\n\nFix it by using different priorities for up and down notifiers - high\npriority for up operations and low priority for down operations.\n\nWorkqueue cpu hotplug operations will soon go through further cleanup.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nCc: stable@vger.kernel.org\nAcked-by: \"Rafael J. Wysocki\" \u003crjw@sisk.pl\u003e\n"
    }
  ],
  "next": "3270476a6c0ce322354df8679652f060d66526dc"
}
