)]}'
{
  "log": [
    {
      "commit": "e818ba5bb46162d36e2d890df450f8e95017987b",
      "tree": "efe411f1be9baa61574ff87efc40ec770dc54c0d",
      "parents": [
        "7be1c67c5008758eff81eae27c84da6be101e634",
        "2220ac23c3c583321276c85cbfd7f6378abd8f94"
      ],
      "author": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:51:47 2016 -0600"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:51:47 2016 -0600"
      },
      "message": "Merge remote-tracking branch \u0027f2fs/linux-3.4.y\u0027 into HEAD\n"
    },
    {
      "commit": "908651f4fbf59e4d7d824a6c06d2b430f658865c",
      "tree": "4b91be5e171048da529a6a1f410d392a3dfffc9f",
      "parents": [
        "f238599ef7e2a111f2e714c9dfc2b378c1f24a9b"
      ],
      "author": {
        "name": "Jin Qian",
        "email": "jinqian@google.com",
        "time": "Mon Jul 20 11:33:07 2015 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:18:42 2016 -0600"
      },
      "message": "proc: fix build broken by proc inode per namespace patch\n\nChange-Id: I119e4f31584b4a7ab9d6825499947d59c1293f1b"
    },
    {
      "commit": "f238599ef7e2a111f2e714c9dfc2b378c1f24a9b",
      "tree": "16d19ce785a3b750ff009860725a15fd648c8207",
      "parents": [
        "2de4b42ba87cd1423af77780ca40e3cd1ad4f3f6"
      ],
      "author": {
        "name": "Paul Moore",
        "email": "paul@paul-moore.com",
        "time": "Tue Sep 13 12:41:08 2016 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:18:31 2016 -0600"
      },
      "message": "BACKPORT: audit: fix a double fetch in audit_log_single_execve_arg()\n\n(cherry picked from commit 43761473c254b45883a64441dd0bc85a42f3645c)\n\nThere is a double fetch problem in audit_log_single_execve_arg()\nwhere we first check the execve(2) argumnets for any \"bad\" characters\nwhich would require hex encoding and then re-fetch the arguments for\nlogging in the audit record[1].  Of course this leaves a window of\nopportunity for an unsavory application to munge with the data.\n\nThis patch reworks things by only fetching the argument data once[2]\ninto a buffer where it is scanned and logged into the audit\nrecords(s).  In addition to fixing the double fetch, this patch\nimproves on the original code in a few other ways: better handling\nof large arguments which require encoding, stricter record length\nchecking, and some performance improvements (completely unverified,\nbut we got rid of some strlen() calls, that\u0027s got to be a good\nthing).\n\nAs part of the development of this patch, I\u0027ve also created a basic\nregression test for the audit-testsuite, the test can be tracked on\nGitHub at the following link:\n\n * https://github.com/linux-audit/audit-testsuite/issues/25\n\n[1] If you pay careful attention, there is actually a triple fetch\nproblem due to a strnlen_user() call at the top of the function.\n\n[2] This is a tiny white lie, we do make a call to strnlen_user()\nprior to fetching the argument data.  I don\u0027t like it, but due to the\nway the audit record is structured we really have no choice unless we\ncopy the entire argument at once (which would require a rather\nwasteful allocation).  The good news is that with this patch the\nkernel no longer relies on this strnlen_user() value for anything\nbeyond recording it in the log, we also update it with a trustworthy\nvalue whenever possible.\n\nReported-by: Pengfei Wang \u003cwpengfeinudt@gmail.com\u003e\nCc: \u003cstable@vger.kernel.org\u003e\nSigned-off-by: Paul Moore \u003cpaul@paul-moore.com\u003e\nChange-Id: I10e979e94605e3cf8d461e3e521f8f9837228aa5\nBug: 30956807\n"
    },
    {
      "commit": "c0c3b188fcc1d19a11375ae820a4023e618003b3",
      "tree": "7d2be1470ab3d86704927cf95d852523724ff3f4",
      "parents": [
        "e885207ed04f4a9afaeb01100db38c40acf1f3f5"
      ],
      "author": {
        "name": "Peter Zijlstra",
        "email": "peterz@infradead.org",
        "time": "Tue Dec 15 13:49:05 2015 +0100"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:17:51 2016 -0600"
      },
      "message": "perf: Fix race in swevent hash\n\nThere\u0027s a race on CPU unplug where we free the swevent hash array\nwhile it can still have events on. This will result in a\nuse-after-free which is BAD.\n\nSimply do not free the hash array on unplug. This leaves the thing\naround and no use-after-free takes place.\n\nWhen the last swevent dies, we do a for_each_possible_cpu() iteration\nanyway to clean these up, at which time we\u0027ll free it, so no leakage\nwill occur.\n\nChange-Id: I8eac2ed635daffa1d1a14da13829fecb63e1d231\nReported-by: Sasha Levin \u003csasha.levin@oracle.com\u003e\nTested-by: Sasha Levin \u003csasha.levin@oracle.com\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@redhat.com\u003e\nCc: Frederic Weisbecker \u003cfweisbec@gmail.com\u003e\nCc: Jiri Olsa \u003cjolsa@redhat.com\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: Stephane Eranian \u003ceranian@google.com\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Vince Weaver \u003cvincent.weaver@maine.edu\u003e\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "0d73eaaeac1ca4f6bff788574fb8b99835527ae1",
      "tree": "5c8355ee7f6076c54df3cc83c09e8be9fc239b39",
      "parents": [
        "89648c1dbec95e2049a186237da7d7314885d864"
      ],
      "author": {
        "name": "Peter Zijlstra",
        "email": "peterz@infradead.org",
        "time": "Mon Aug 20 11:26:57 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:12:59 2016 -0600"
      },
      "message": "sched: Fix load avg vs cpu-hotplug\n\nRabik and Paul reported two different issues related to the same few\nlines of code.\n\nRabik\u0027s issue is that the nr_uninterruptible migration code is wrong in\nthat he sees artifacts due to this (Rabik please do expand in more\ndetail).\n\nPaul\u0027s issue is that this code as it stands relies on us using\nstop_machine() for unplug, we all would like to remove this assumption\nso that eventually we can remove this stop_machine() usage altogether.\n\nThe only reason we\u0027d have to migrate nr_uninterruptible is so that we\ncould use for_each_online_cpu() loops in favour of\nfor_each_possible_cpu() loops, however since nr_uninterruptible() is the\nonly such loop and its using possible lets not bother at all.\n\nThe problem Rabik sees is (probably) caused by the fact that by\nmigrating nr_uninterruptible we screw rq-\u003ecalc_load_active for both rqs\ninvolved.\n\nSo don\u0027t bother with fancy migration schemes (meaning we now have to\nkeep using for_each_possible_cpu()) and instead fold any nr_active delta\nafter we migrate all tasks away to make sure we don\u0027t have any skewed\nnr_active accounting.\n\nChange-Id: If72297a98d894c3a415c1499ddcd6b7618159fd4\nReported-by: Rakib Mullick \u003crakib.mullick@gmail.com\u003e\nReported-by: Paul E. McKenney \u003cpaulmck@linux.vnet.ibm.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/1345454817.23018.27.camel@twins\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "89648c1dbec95e2049a186237da7d7314885d864",
      "tree": "42961db1f5c1c9a4d3d84814c36cd742b9aa5944",
      "parents": [
        "58aa30ec9adea4760497ac0e4b245e8b40667752"
      ],
      "author": {
        "name": "Oleg Nesterov",
        "email": "oleg@redhat.com",
        "time": "Mon Aug 12 18:14:00 2013 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:12:34 2016 -0600"
      },
      "message": "sched: fix the theoretical signal_wake_up() vs schedule() race\n\ncommit e0acd0a68ec7dbf6b7a81a87a867ebd7ac9b76c4 upstream.\n\nThis is only theoretical, but after try_to_wake_up(p) was changed\nto check p-\u003estate under p-\u003epi_lock the code like\n\n\t__set_current_state(TASK_INTERRUPTIBLE);\n\tschedule();\n\ncan miss a signal. This is the special case of wait-for-condition,\nit relies on try_to_wake_up/schedule interaction and thus it does\nnot need mb() between __set_current_state() and if(signal_pending).\n\nHowever, this __set_current_state() can move into the critical\nsection protected by rq-\u003elock, now that try_to_wake_up() takes\nanother lock we need to ensure that it can\u0027t be reordered with\n\"if (signal_pending(current))\" check inside that section.\n\nThe patch is actually one-liner, it simply adds smp_wmb() before\nspin_lock_irq(rq-\u003elock). This is what try_to_wake_up() already\ndoes by the same reason.\n\nWe turn this wmb() into the new helper, smp_mb__before_spinlock(),\nfor better documentation and to allow the architectures to change\nthe default implementation.\n\nWhile at it, kill smp_mb__after_lock(), it has no callers.\n\nPerhaps we can also add smp_mb__before/after_spinunlock() for\nprepare_to_wait().\n\nChange-Id: Id679afb25581b6c9f30a6957b1f033fbf5b7a49c\nSigned-off-by: Oleg Nesterov \u003coleg@redhat.com\u003e\nAcked-by: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n"
    },
    {
      "commit": "a2333f57bd0d268a81a51a28eca3e535070f8cdb",
      "tree": "94ef3208322f5694ed86f583c08e91e6d28c4658",
      "parents": [
        "184d099950767dd590799e77aca19eff5fa54bf9"
      ],
      "author": {
        "name": "Jeff Vander Stoep",
        "email": "jeffv@google.com",
        "time": "Sun May 29 14:22:32 2016 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Dec 08 21:05:22 2016 -0600"
      },
      "message": "FROMLIST: security,perf: Allow further restriction of perf_event_open\n\nWhen kernel.perf_event_open is set to 3 (or greater), disallow all\naccess to performance events by users without CAP_SYS_ADMIN.\nAdd a Kconfig symbol CONFIG_SECURITY_PERF_EVENTS_RESTRICT that\nmakes this value the default.\n\nThis is based on a similar feature in grsecurity\n(CONFIG_GRKERNSEC_PERF_HARDEN).  This version doesn\u0027t include making\nthe variable read-only.  It also allows enabling further restriction\nat run-time regardless of whether the default is changed.\n\nhttps://lkml.org/lkml/2016/1/11/587\n\nSigned-off-by: Ben Hutchings \u003cben@decadent.org.uk\u003e\n\nBug: 29054680\nChange-Id: Iff5bff4fc1042e85866df9faa01bce8d04335ab8\n"
    },
    {
      "commit": "96ff6232bc4f04b49514cee57c3ee0721f38c69f",
      "tree": "fe4fb6fa16e6c99dc7a7df8f385310bac3d22d6b",
      "parents": [
        "326e97866c708643b6c3899f75d716a03619554c"
      ],
      "author": {
        "name": "Peter Zijlstra",
        "email": "peterz@infradead.org",
        "time": "Mon Nov 02 10:50:51 2015 +0100"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Sat Dec 03 10:57:16 2016 -0500"
      },
      "message": "perf: Fix inherited events vs. tracepoint filters\n\ncommit b71b437eedaed985062492565d9d421d975ae845 upstream.\n\nArnaldo reported that tracepoint filters seem to misbehave (ie. not\napply) on inherited events.\n\nThe fix is obvious; filters are only set on the actual (parent)\nevent, use the normal pattern of using this parent event for filters.\nThis is safe because each child event has a reference to it.\n\nReported-by: Arnaldo Carvalho de Melo \u003cacme@kernel.org\u003e\nTested-by: Arnaldo Carvalho de Melo \u003cacme@kernel.org\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Adrian Hunter \u003cadrian.hunter@intel.com\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@redhat.com\u003e\nCc: David Ahern \u003cdsahern@gmail.com\u003e\nCc: Frédéric Weisbecker \u003cfweisbec@gmail.com\u003e\nCc: Jiri Olsa \u003cjolsa@kernel.org\u003e\nCc: Jiri Olsa \u003cjolsa@redhat.com\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: Steven Rostedt \u003crostedt@goodmis.org\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Wang Nan \u003cwangnan0@huawei.com\u003e\nLink: http://lkml.kernel.org/r/20151102095051.GN17308@twins.programming.kicks-ass.net\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "0308b2032bd7936d9bca5bc059eaadd79670bae0",
      "tree": "3014d80a35c6be1c5790569a7573e77a266a08be",
      "parents": [
        "38abce8b7fd53c3c85092fa04102db6194de854f"
      ],
      "author": {
        "name": "Steven Rostedt (Red Hat)",
        "email": "rostedt@goodmis.org",
        "time": "Mon Nov 23 10:35:36 2015 -0500"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Sat Dec 03 10:50:24 2016 -0500"
      },
      "message": "ring-buffer: Update read stamp with first real commit on page\n\ncommit b81f472a208d3e2b4392faa6d17037a89442f4ce upstream.\n\nDo not update the read stamp after swapping out the reader page from the\nwrite buffer. If the reader page is swapped out of the buffer before an\nevent is written to it, then the read_stamp may get an out of date\ntimestamp, as the page timestamp is updated on the first commit to that\npage.\n\nrb_get_reader_page() only returns a page if it has an event on it, otherwise\nit will return NULL. At that point, check if the page being returned has\nevents and has not been read yet. Then at that point update the read_stamp\nto match the time stamp of the reader page.\n\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "3907ab6dc6df167f30d16854592767816f066e45",
      "tree": "ba276dc41f78aa21b1e88dd00080478cd6df722c",
      "parents": [
        "c1bbfed712b8404ddadb9203aff867fe2b5e6acc"
      ],
      "author": {
        "name": "Xunlei Pang",
        "email": "xlpang@redhat.com",
        "time": "Wed Dec 02 19:52:59 2015 +0800"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Sat Dec 03 10:46:29 2016 -0500"
      },
      "message": "sched/core: Clear the root_domain cpumasks in init_rootdomain()\n\ncommit 8295c69925ad53ec32ca54ac9fc194ff21bc40e2 upstream.\n\nroot_domain::rto_mask allocated through alloc_cpumask_var()\ncontains garbage data, this may cause problems. For instance,\nWhen doing pull_rt_task(), it may do useless iterations if\nrto_mask retains some extra garbage bits. Worse still, this\nviolates the isolated domain rule for clustered scheduling\nusing cpuset, because the tasks(with all the cpus allowed)\nbelongs to one root domain can be pulled away into another\nroot domain.\n\nThe patch cleans the garbage by using zalloc_cpumask_var()\ninstead of alloc_cpumask_var() for root_domain::rto_mask\nallocation, thereby addressing the issues.\n\nDo the same thing for root_domain\u0027s other cpumask memembers:\ndlo_mask, span, and online.\n\nSigned-off-by: Xunlei Pang \u003cxlpang@redhat.com\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Mike Galbraith \u003cefault@gmx.de\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: Steven Rostedt \u003crostedt@goodmis.org\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nLink: http://lkml.kernel.org/r/1449057179-29321-1-git-send-email-xlpang@redhat.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n[lizf: there\u0027s no rd-\u003edlo_mask, so remove the change to it]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "4f6e18bd30970c30533b624b50aae227a14364ca",
      "tree": "ce779021f6da307d9f3ed3e89de12b98ed8763f0",
      "parents": [
        "eb5f0071cc448489f22b2a34034fe04c5dfb4bba"
      ],
      "author": {
        "name": "Thomas Gleixner",
        "email": "tglx@linutronix.de",
        "time": "Sun Dec 13 18:12:30 2015 +0100"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Sat Dec 03 10:40:51 2016 -0500"
      },
      "message": "genirq: Prevent chip buslock deadlock\n\ncommit abc7e40c81d113ef4bacb556f0a77ca63ac81d85 upstream.\n\nIf a interrupt chip utilizes chip-\u003ebuslock then free_irq() can\ndeadlock in the following way:\n\nCPU0\t\t\t\tCPU1\n\t\t\t\tinterrupt(X) (Shared or spurious)\nfree_irq(X)\t\t\tinterrupt_thread(X)\nchip_bus_lock(X)\n\t\t\t\t   irq_finalize_oneshot(X)\n\t\t\t\t     chip_bus_lock(X)\nsynchronize_irq(X)\n\nsynchronize_irq() waits for the interrupt thread to complete,\ni.e. forever.\n\nSolution is simple: Drop chip_bus_lock() before calling\nsynchronize_irq() as we do with the irq_desc lock. There is nothing to\nbe protected after the point where irq_desc lock has been released.\n\nThis adds chip_bus_lock/unlock() to the remove_irq() code path, but\nthat\u0027s actually correct in the case where remove_irq() is called on\nsuch an interrupt. The current users of remove_irq() are not affected\nas none of those interrupts is on a chip which requires buslock.\n\nReported-by: Fredrik Markström \u003cfredrik.markstrom@gmail.com\u003e\nSigned-off-by: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "1e7c7c234a720db059d671335345a4a7f80cab99",
      "tree": "78638c46829a88b05bcf3b6ab098498132a2e42b",
      "parents": [
        "5cf19f594f093e4c1fc6c77f0257e486bc9579ce"
      ],
      "author": {
        "name": "John Stultz",
        "email": "john.stultz@linaro.org",
        "time": "Thu Jun 11 15:54:55 2015 -0700"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Sat Dec 03 10:18:23 2016 -0500"
      },
      "message": "time: Prevent early expiry of hrtimers[CLOCK_REALTIME] at the leap second edge\n\ncommit 833f32d763028c1bb371c64f457788b933773b3e upstream.\n\nCurrently, leapsecond adjustments are done at tick time. As a result,\nthe leapsecond was applied at the first timer tick *after* the\nleapsecond (~1-10ms late depending on HZ), rather then exactly on the\nsecond edge.\n\nThis was in part historical from back when we were always tick based,\nbut correcting this since has been avoided since it adds extra\nconditional checks in the gettime fastpath, which has performance\noverhead.\n\nHowever, it was recently pointed out that ABS_TIME CLOCK_REALTIME\ntimers set for right after the leapsecond could fire a second early,\nsince some timers may be expired before we trigger the timekeeping\ntimer, which then applies the leapsecond.\n\nThis isn\u0027t quite as bad as it sounds, since behaviorally it is similar\nto what is possible w/ ntpd made leapsecond adjustments done w/o using\nthe kernel discipline. Where due to latencies, timers may fire just\nprior to the settimeofday call. (Also, one should note that all\napplications using CLOCK_REALTIME timers should always be careful,\nsince they are prone to quirks from settimeofday() disturbances.)\n\nHowever, the purpose of having the kernel do the leap adjustment is to\navoid such latencies, so I think this is worth fixing.\n\nSo in order to properly keep those timers from firing a second early,\nthis patch modifies the ntp and timekeeping logic so that we keep\nenough state so that the update_base_offsets_now accessor, which\nprovides the hrtimer core the current time, can check and apply the\nleapsecond adjustment on the second edge. This prevents the hrtimer\ncore from expiring timers too early.\n\nThis patch does not modify any other time read path, so no additional\noverhead is incurred. However, this also means that the leap-second\ncontinues to be applied at tick time for all other read-paths.\n\nApologies to Richard Cochran, who pushed for similar changes years\nago, which I resisted due to the concerns about the performance\noverhead.\n\nWhile I suspect this isn\u0027t extremely critical, folks who care about\nstrict leap-second correctness will likely want to watch\nthis. Potentially a -stable candidate eventually.\n\nOriginally-suggested-by: Richard Cochran \u003crichardcochran@gmail.com\u003e\nReported-by: Daniel Bristot de Oliveira \u003cbristot@redhat.com\u003e\nReported-by: Prarit Bhargava \u003cprarit@redhat.com\u003e\nSigned-off-by: John Stultz \u003cjohn.stultz@linaro.org\u003e\nCc: Richard Cochran \u003crichardcochran@gmail.com\u003e\nCc: Jan Kara \u003cjack@suse.cz\u003e\nCc: Jiri Bohac \u003cjbohac@suse.cz\u003e\nCc: Shuah Khan \u003cshuahkh@osg.samsung.com\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nLink: http://lkml.kernel.org/r/1434063297-28657-4-git-send-email-john.stultz@linaro.org\nSigned-off-by: Thomas Gleixner \u003ctglx@linutronix.de\u003e\n[Yadi: Move do_adjtimex to timekeeping.c and solve context issues]\nSigned-off-by: Hu \u003cyadi.hu@windriver.com\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "ba5d3edb3a7d4408fac68850951beab627939c40",
      "tree": "7b5d38247e16d57564003cdf7d6f33f068a5729f",
      "parents": [
        "05100f7133eef5e13b42a8bc8ef5327abc29751e"
      ],
      "author": {
        "name": "Ruchi Kandoi",
        "email": "kandoiruchi@google.com",
        "time": "Fri Apr 17 16:33:29 2015 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Sat Sep 24 23:43:05 2016 -0500"
      },
      "message": "sched: cpufreq: Adds a field cpu_power in the task_struct\n\ncpu_power has been added to keep track of amount of power each task is\nconsuming. cpu_power is updated whenever stime and utime are updated for\na task. power is computed by taking into account the frequency at which\nthe current core was running and the current for cpu actively\nrunning at hat frequency.\n\nBug: 21498425\nChange-Id: Ic535941e7b339aab5cae9081a34049daeb44b248\nSigned-off-by: Ruchi Kandoi \u003ckandoiruchi@google.com\u003e\n"
    },
    {
      "commit": "9a95f91084ab59ac9e69bef69efe802611e3c6e7",
      "tree": "c347fb922ca155625a98fb902cebc6fb4ecfe618",
      "parents": [
        "729f4061b7d042bdf1632815238bf441937192c9"
      ],
      "author": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Fri Jun 15 03:01:42 2012 +0400"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Aug 24 21:00:53 2016 -0400"
      },
      "message": "get rid of kern_path_parent()\n\nall callers want the same thing, actually - a kinda-sorta analog of\nkern_path_create().  I.e. they want parent vfsmount/dentry (with\n-\u003ei_mutex held, to make sure the child dentry is still their child)\n+ the child dentry.\n\nSigned-off-by Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n\nChange-Id: I58cc7b0a087646516db9af69962447d27fb3ee8b\n"
    },
    {
      "commit": "bd188f9dddeb5cc026de8cbe93b02e31f96d6537",
      "tree": "013f601e46cf60f57dcf92ce1cb02ec654e808ce",
      "parents": [
        "1eca7dab267caf5443c574d2d0f2f2975de414a0"
      ],
      "author": {
        "name": "dcashman",
        "email": "dcashman@google.com",
        "time": "Tue Dec 29 14:24:39 2015 -0800"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Aug 24 12:39:35 2016 -0400"
      },
      "message": "FROMLIST: mm: mmap: Add new /proc tunable for mmap_base ASLR.\n\n(cherry picked from commit https://lkml.org/lkml/2015/12/21/337)\n\nASLR  only uses as few as 8 bits to generate the random offset for the\nmmap base address on 32 bit architectures. This value was chosen to\nprevent a poorly chosen value from dividing the address space in such\na way as to prevent large allocations. This may not be an issue on all\nplatforms. Allow the specification of a minimum number of bits so that\nplatforms desiring greater ASLR protection may determine where to place\nthe trade-off.\n\nBug: 24047224\nSigned-off-by: Daniel Cashman \u003cdcashman@android.com\u003e\nSigned-off-by: Daniel Cashman \u003cdcashman@google.com\u003e\nChange-Id: Ic74424e07710cd9ccb4a02871a829d14ef0cc4bc\n"
    },
    {
      "commit": "7ec9df1fdd1ebbca765d82eeed7dda11e052b294",
      "tree": "b8689f4e74d21ac78f42d0cc840a13a775d41d65",
      "parents": [
        "56a32fec794e088244241868de1c01f2f5bbc0ea"
      ],
      "author": {
        "name": "Kaushal Kumar",
        "email": "kaushalk@codeaurora.org",
        "time": "Thu May 15 19:19:04 2014 +0530"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:36:08 2016 -0500"
      },
      "message": "sched: Remove synchronize rcu/sched calls from _cpu_down\n\nThere is no need for sync_sched() in _cpu_down as stop_machine()\nprovides that barrier implicitly. Removing it also helps improve\nhot-unplug latency.\n\nThe sync_sched/rcu were earlier removed for the same reason by the\ncommit 9ee349ad6d32 (\"sched: Fix set_cpu_active() in cpu_down()\"),\nbut recently got added as part of commit 6acce3ef8452 (\"sched:\nRemove get_online_cpus() usage.\").\n\nCRs-Fixed: 667325\nChange-Id: I97763004454d082d3cc2d9d9dbef7da923608600\nSigned-off-by: Kaushal Kumar \u003ckaushalk@codeaurora.org\u003e\n[mattw@codeaurora.org: fix-up commit hashes in commit text]\nSigned-off-by: Matt Wagantall \u003cmattw@codeaurora.org\u003e\n"
    },
    {
      "commit": "56a32fec794e088244241868de1c01f2f5bbc0ea",
      "tree": "aaf9518752eebc8d333828590aa2e8401ee91e03",
      "parents": [
        "db7eb0fa96d255d6d5905f9777f2a85afcd99dee"
      ],
      "author": {
        "name": "Michael wang",
        "email": "wangyun@linux.vnet.ibm.com",
        "time": "Wed Nov 13 11:10:56 2013 +0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:36:02 2016 -0500"
      },
      "message": "sched: Fix endless sync_sched/rcu() loop inside _cpu_down()\n\nCommit 6acce3ef8:\n\n\tsched: Remove get_online_cpus() usage\n\ntries to do sync_sched/rcu() inside _cpu_down() but triggers:\n\n\tINFO: task swapper/0:1 blocked for more than 120 seconds.\n\t...\n\t[\u003cffffffff811263dc\u003e] synchronize_rcu+0x2c/0x30\n\t[\u003cffffffff81d1bd82\u003e] _cpu_down+0x2b2/0x340\n\t...\n\nIt was caused by that in the rcu boost case we rely on smpboot thread to\nfinish the rcu callback, which has already been parked before sync in here\nand leads to the endless sync_sched/rcu().\n\nThis patch exchanges the sequence of smpboot_park_threads() and\nsync_sched/rcu() to fix the bug.\n\nCRs-fixed: 647141\nChange-Id: Ia8b325d7c3c778f428d9fb51271977c849df32ad\nReported-by: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nTested-by: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nSigned-off-by: Michael Wang \u003cwangyun@linux.vnet.ibm.com\u003e\nSigned-off-by: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nLink: http://lkml.kernel.org/r/5282EDC0.6060003@linux.vnet.ibm.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nGit-commit: 106dd5afde3cd10db7e1370b6ddc77f0b2496a75\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\nSigned-off-by: Kaushal Kumar \u003ckaushalk@codeaurora.org\u003e\n"
    },
    {
      "commit": "db7eb0fa96d255d6d5905f9777f2a85afcd99dee",
      "tree": "ab9e548efc7ffd9641debf70490a5242d497eccf",
      "parents": [
        "efeac5e81fadd63d38142a555375265640d155cc"
      ],
      "author": {
        "name": "Michael wang",
        "email": "wangyun@linux.vnet.ibm.com",
        "time": "Mon Oct 28 10:50:22 2013 +0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:35:55 2016 -0500"
      },
      "message": "sched: Remove extra put_online_cpus() inside sched_setaffinity()\n\nCommit 6acce3ef8:\n\n\tsched: Remove get_online_cpus() usage\n\nhas left one extra put_online_cpus() inside sched_setaffinity(),\nremove it to fix the WARN:\n\n   ------------[ cut here ]------------\n   WARNING: CPU: 0 PID: 3166 at kernel/cpu.c:84 put_online_cpus+0x43/0x70()\n   ...\n   [\u003cffffffff810c3fef\u003e] put_online_cpus+0x43/0x70 [\n   [\u003cffffffff810efd59\u003e] sched_setaffinity+0x7d/0x1f9 [\n   ...\n\nCRs-fixed: 647141\nChange-Id: I33f799f30a963db3e9459832832e9c786931c8c2\nReported-by: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nTested-by: Fengguang Wu \u003cfengguang.wu@intel.com\u003e\nSigned-off-by: Michael Wang \u003cwangyun@linux.vnet.ibm.com\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nLink: http://lkml.kernel.org/r/526DD0EE.1090309@linux.vnet.ibm.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nGit-commit: ac9ff7997b6f2b31949dcd2495ac671fd9ddc990\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\nSigned-off-by: Kaushal Kumar \u003ckaushalk@codeaurora.org\u003e\n"
    },
    {
      "commit": "efeac5e81fadd63d38142a555375265640d155cc",
      "tree": "788bb9c6fac4cee222e58fe3a6cffd1384375ccc",
      "parents": [
        "106cf7e2a55ad474d05fe63bd4831821d35f42e2"
      ],
      "author": {
        "name": "Peter Zijlstra",
        "email": "peterz@infradead.org",
        "time": "Fri Oct 11 14:38:20 2013 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:35:49 2016 -0500"
      },
      "message": "sched: Remove get_online_cpus() usage\n\nRemove get_online_cpus() usage from the scheduler; there\u0027s 4 sites that\nuse it:\n\n - sched_init_smp(); where its completely superfluous since we\u0027re in\n   \u0027early\u0027 boot and there simply cannot be any hotplugging.\n\n - sched_getaffinity(); we already take a raw spinlock to protect the\n   task cpus_allowed mask, this disables preemption and therefore\n   also stabilizes cpu_online_mask as that\u0027s modified using\n   stop_machine. However switch to active mask for symmetry with\n   sched_setaffinity()/set_cpus_allowed_ptr(). We guarantee active\n   mask stability by inserting sync_rcu/sched() into _cpu_down.\n\n - sched_setaffinity(); we don\u0027t appear to need get_online_cpus()\n   either, there\u0027s two sites where hotplug appears relevant:\n    * cpuset_cpus_allowed(); for the !cpuset case we use possible_mask,\n      for the cpuset case we hold task_lock, which is a spinlock and\n      thus for mainline disables preemption (might cause pain on RT).\n    * set_cpus_allowed_ptr(); Holds all scheduler locks and thus has\n      preemption properly disabled; also it already deals with hotplug\n      races explicitly where it releases them.\n\n - migrate_swap(); we can make stop_two_cpus() do the heavy lifting for\n   us with a little trickery. By adding a sync_sched/rcu() after the\n   CPU_DOWN_PREPARE notifier we can provide preempt/rcu guarantees for\n   cpu_active_mask. Use these to validate that both our cpus are active\n   when queueing the stop work before we queue the stop_machine works\n   for take_cpu_down().\n\nCRs-fixed: 647141\nChange-Id: Id41e66659574f716de0e7c29f477e56a86db9404\nSigned-off-by: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: \"Srivatsa S. Bhat\" \u003csrivatsa.bhat@linux.vnet.ibm.com\u003e\nCc: Paul McKenney \u003cpaulmck@linux.vnet.ibm.com\u003e\nCc: Mel Gorman \u003cmgorman@suse.de\u003e\nCc: Rik van Riel \u003criel@redhat.com\u003e\nCc: Srikar Dronamraju \u003csrikar@linux.vnet.ibm.com\u003e\nCc: Andrea Arcangeli \u003caarcange@redhat.com\u003e\nCc: Johannes Weiner \u003channes@cmpxchg.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nCc: Steven Rostedt \u003crostedt@goodmis.org\u003e\nCc: Oleg Nesterov \u003coleg@redhat.com\u003e\nLink: http://lkml.kernel.org/r/20131011123820.GV3081@twins.programming.kicks-ass.net\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nGit-commit: 6acce3ef84520537f8a09a12c9ddbe814a584dd2\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\n[kaushalk@codeaurora.org: get_online_cpus has only 3 sites of usage in\n kernel/sched/core.c of msm-3.10 so migrate_swap changes are not\n applicable here. stop_two_cpus related change is not applicable to\n msm-3.10, so skip it.]\nSigned-off-by: Kaushal Kumar \u003ckaushalk@codeaurora.org\u003e\n"
    },
    {
      "commit": "ddeb06e48d6fcd9917b4db5a81f5181f727dc3cb",
      "tree": "28a2d248ea9a32873de165efccc69c316065f561",
      "parents": [
        "7f4c7bb0b2c9ae7a3163dfbc6d436d0239af7d3f"
      ],
      "author": {
        "name": "Zhao Wei Liew",
        "email": "zhaoweiliew@gmail.com",
        "time": "Thu Jul 28 16:54:15 2016 +0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:35:00 2016 -0500"
      },
      "message": "perf: Pass appropriate parameters when removing event\n\nSince commit 4b2cfc9508d9e509708f85748548e03db696dbcd, the parameters\nwe passed to perf_retry_remove were somewhat duplicated.\n\nPass the remove_event pointer instead of re-creating one\nin the perf_retry_remove function.\n\nChange-Id: I2895643488e87623d8982f2d27a1e5acb76e51fa\n"
    },
    {
      "commit": "7f4c7bb0b2c9ae7a3163dfbc6d436d0239af7d3f",
      "tree": "4c925eb1652de3bdaef89c1b5588732ea66410f5",
      "parents": [
        "a3927fa42d08cc1d029e0e52b1ff9ea658666083"
      ],
      "author": {
        "name": "Neil Leeder",
        "email": "nleeder@codeaurora.org",
        "time": "Fri Sep 20 12:15:38 2013 -0400"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:34:55 2016 -0500"
      },
      "message": "msm: perf: clean up duplicate constraint events\n\nEvents with a duplicate constraint are set to state\u003dOFF when detected,\nso that their duplicate counts are not read. However, they were not being\ncleaned up because the core code only cleaned up ACTIVE events.\nThis resulted in counters not being freed and eventually running out\nof resources.\n\nClean up the events with state\u003d\u003dOFF that were marked that way because of\nconstraint duplication.\nEnsure counts are not updated for OFF events.\n\nChange-Id: If532801c79e6ad6809869eb0a3063774f00c92c3\nSigned-off-by: Neil Leeder \u003cnleeder@codeaurora.org\u003e\n[zhaoweiliew: Merge the missing bits due to Linux 3.4.y mismerge]\nSigned-off-by: Zhao Wei Liew \u003czhaoweiliew@gmail.com\u003e\n"
    },
    {
      "commit": "2a41c3c44d04dd7805f0881e830f123dd4bb95fb",
      "tree": "35af8b677f980a68c7297ca50ecf393cc495a59c",
      "parents": [
        "2bf1c8f66507fde33516820264638a101420329b"
      ],
      "author": {
        "name": "Srivatsa S. Bhat",
        "email": "srivatsa.bhat@linux.vnet.ibm.com",
        "time": "Mon Oct 08 23:28:20 2012 +0000"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:34:29 2016 -0500"
      },
      "message": "CPU hotplug, debug: detect imbalance between get_online_cpus() and put_online_cpus()\n\nThe synchronization between CPU hotplug readers and writers is achieved\nby means of refcounting, safeguarded by the cpu_hotplug.lock.\n\nget_online_cpus() increments the refcount, whereas put_online_cpus()\ndecrements it.  If we ever hit an imbalance between the two, we end up\ncompromising the guarantees of the hotplug synchronization i.e, for\nexample, an extra call to put_online_cpus() can end up allowing a\nhotplug reader to execute concurrently with a hotplug writer.\n\nSo, add a WARN_ON() in put_online_cpus() to detect such cases where the\nrefcount can go negative, and also attempt to fix it up, so that we can\ncontinue to run.\n\nChange-Id: I144efeaa5899a2e8a3cddd21f010679cbaaa2459\nSigned-off-by: Srivatsa S. Bhat \u003csrivatsa.bhat@linux.vnet.ibm.com\u003e\nReviewed-by: Yasuaki Ishimatsu \u003cisimatu.yasuaki@jp.fujitsu.com\u003e\nCc: Jiri Kosina \u003cjkosina@suse.cz\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nGit-commit: 075663d19885eb3738fd2d7dbdb8947e12563b68\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\nSigned-off-by: Osvaldo Banuelos \u003cosvaldob@codeaurora.org\u003e\n"
    },
    {
      "commit": "4de4afc7eac758a72dc4b9c9113613676610757d",
      "tree": "448a9662e0802fa13767798fdc26d8bc830c2a41",
      "parents": [
        "6c4824b2b1050e7b247f74341c30db528e6556bb"
      ],
      "author": {
        "name": "Arun KS",
        "email": "arunks@codeaurora.org",
        "time": "Wed May 11 10:11:36 2016 +0530"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:34:13 2016 -0500"
      },
      "message": "msm: perf: Do not allocate new hw_event if event is duplicate.\n\nDuring a perf_event_enable, kernel/events/core.c calls pmu-\u003eadd() which\nis platform implementation(arch/arm/kernel/perf_event.c). Due to the\nduplicate constraints, arch/arm/mach-msm/perf_event_msm_krait_l2.c\ndrivers marks the event as OFF but returns TRUE to perf_event.c which\ngoes ahead and allocates the hw_event and enables it.\n\nSince event is marked OFF, kernel events core will try to enable this event\nagain during next perf_event_enable. Which results in same event enabled\non multiple hw_events. But during the perf_release, event struct is freed\nand only one hw_event is released. This results in dereferencing the\ninvalid pointer and hence the crash.\n\nFix this by returning error in case of constraint event duplicate. Hence\navoiding the same event programmed on multiple hw event counters.\n\nChange-Id: Ia3360be027dfe87ac753191ffe7e0bc947e72455\nSigned-off-by: Arun KS \u003carunks@codeaurora.org\u003e\n"
    },
    {
      "commit": "6c4824b2b1050e7b247f74341c30db528e6556bb",
      "tree": "95ff0ac54866282b26287815aba4c064b0cd2efb",
      "parents": [
        "58ca0c90e52a44d795ca422d190ce826e688156b"
      ],
      "author": {
        "name": "Kishor PK",
        "email": "kpbhat@codeaurora.org",
        "time": "Fri Jan 29 11:13:25 2016 +0530"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:34:07 2016 -0500"
      },
      "message": "trace: prevent NULL pointer dereference\n\nPrevent unintended NULL pointer dereference in trace_event_perf.\n\nChange-Id: I35151c460b4350ebd414b67c655684c2019f799f\nSigned-off-by: Kishor PK \u003ckpbhat@codeaurora.org\u003e\nSigned-off-by: Srinivasarao P \u003cspathi@codeaurora.org\u003e\n"
    },
    {
      "commit": "aa14be3e4510db2968e37fceeca1893bcec0913c",
      "tree": "5f9b795a1ca55d451782165b0eb54dc6c64cd0e4",
      "parents": [
        "5c50bf7a3e2af43b128ec018d9250e0c3002295b"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Sat Mar 29 16:56:45 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:33:16 2016 -0500"
      },
      "message": "sched: Introduce CONFIG_SCHED_FREQ_INPUT\n\nIntroduce a compile time flag to enable scheduler guidance of\nfrequency selection. This flag is also used to turn on or off\nwindow-based load stats feature.\n\nHaving a compile time flag will let some platforms avoid any\noverhead that may be present with this scheduler feature.\n\nChange-Id: Id8dec9839f90dcac82f58ef7e2bd0ccd0b6bd16c\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\n"
    },
    {
      "commit": "f906ba019ce0abdf3bfde99109c3d3b0ec1dcbcf",
      "tree": "b65bd6bb83ef6f2d44cb0597c71f7852671a3dd4",
      "parents": [
        "7054d49bbcde1124c494b9618bab0f202bcbcc5e"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Sat Mar 29 11:40:16 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:59 2016 -0500"
      },
      "message": "sched: window-based load stats improvements\n\nFollowing cleanups and improvements are made to window-based load\nstats feature:\n\n* Add sysctl to pick max, avg or most recent samples as task\u0027s\n  demand.\n\n* Fix overflow possibility in calculation of sum for average policy.\n\n* Use unscaled statistics when a task is running on a CPU which is\nthermally throttled.\n\nChange-Id: I8293565ca0c2a785dadf8adb6c67f579a445ed29\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\n"
    },
    {
      "commit": "7054d49bbcde1124c494b9618bab0f202bcbcc5e",
      "tree": "df5cd1d4184d4c21f9e625d572dcfb731cefa6fe",
      "parents": [
        "6d536c0d0b094a027ee3b7c68ce5b186b3f45698"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Tue Apr 01 10:57:59 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:51 2016 -0500"
      },
      "message": "sched: Add min_max_freq and rq-\u003emax_possible_freq\n\nrq-\u003emax_possible_freq represents the maximum frequency a cpu is\ncapable of attaining, while rq-\u003emax_freq represents the maximum\nfrequency a cpu can attain at a given instant. rq-\u003emax_freq includes\nconstraints imposed by user or thermal driver.\nrq-\u003emax_freq \u003c\u003d rq-\u003emax_possible_freq.\n\nmax_possible_freq is derived as max(rq-\u003emax_possible_freq) and\nrepresents the \"best\" cpu that can attain best possible frequency.\n\nmin_max_freq is derived as min(rq-\u003emax_possible_freq). For homogeneous\nsystems, max_possible_freq and min_max_freq will be same, while they\ncould be different on heterogeneous systems.\n\nChange-Id: Iec485fde35cfd33f55ebf2c2dce4864faa2083c5\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\n"
    },
    {
      "commit": "6d536c0d0b094a027ee3b7c68ce5b186b3f45698",
      "tree": "b4b9e2555ff84caec026dfaeaca87a74b3a0d8ec",
      "parents": [
        "bc6d706612791d4e83463f58228470cb4ab6c643"
      ],
      "author": {
        "name": "Steve Muckle",
        "email": "smuckle@codeaurora.org",
        "time": "Tue May 20 14:10:18 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:46 2016 -0500"
      },
      "message": "sched: move task load based functions\n\nThe task load based functions will need to make use of LOAD_AVG_MAX\nin a subsequent patch, so move them below the definition of that\nmacro.\n\nChange-Id: I02f18ba069b81033e611f8f8bba6dccd7cd81252\nSigned-off-by: Steve Muckle \u003csmuckle@codeaurora.org\u003e\n"
    },
    {
      "commit": "bc6d706612791d4e83463f58228470cb4ab6c643",
      "tree": "c7459dd55b3c8bdd71c05f7a3911856cb774f3fa",
      "parents": [
        "fac5e0e2237a9dea2f7fb85cca7a8b91f49a5ee2"
      ],
      "author": {
        "name": "Steve Muckle",
        "email": "smuckle@codeaurora.org",
        "time": "Mon Jul 14 03:19:29 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:39 2016 -0500"
      },
      "message": "sched: fix race between try_to_wake_up() and move_task()\n\nUntil a task\u0027s state has been seen as interruptible/uninterruptible\nand it is no longer on_cpu, it is possible that the task may move\nto another CPU (load balancing may cause this). Here is an example\nwhere the race condition results in incorrect operation:\n\n- cpu 0 calls put_prev_task on task A, task A\u0027s state is TASK_RUNNING\n- cpu 0 runs task B, which attempts to wake up A\n- cpu 0 begins try_to_wake_up(), recording src_cpu for task A as cpu 0\n- cpu 1 then pulls task A (perhaps due to idle balance)\n- cpu 1 runs task A, which then sleeps, becoming INTERRUPTIBLE\n- cpu 0 continues in try_to_wake_up(), thinking task A\u0027s previous\n  cpu is 0, where it is actually 1\n- if select_task_rq returns cpu 0, task A will be woken up on cpu 0\n  without properly updating its cpu to 0 in set_task_cpu()\n\nCRs-Fixed: 665958\nChange-Id: Icee004cb320bd8edfc772d9f74e670a9d4978a99\nAuthor: Steve Muckle \u003csmuckle@codeaurora.org\u003e\nSigned-off-by: Steve Muckle \u003csmuckle@codeaurora.org\u003e\n"
    },
    {
      "commit": "fac5e0e2237a9dea2f7fb85cca7a8b91f49a5ee2",
      "tree": "b7b7c07928eac82cef7e8f01c8f0ee2359e6dc65",
      "parents": [
        "8c7a411495d4823fa9c907eba7abf24cb4c7da8b"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Fri May 16 16:15:50 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:32 2016 -0500"
      },
      "message": "sched: Skip load update for idle task\n\nLoad statistics for idle tasks is not useful in any manner. Skip load\nupdate for such idle tasks.\n\nCRs-Fixed: 665706\nChange-Id: If3a908bad7fbb42dcb3d0a1d073a3750cf32fcf9\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\n"
    },
    {
      "commit": "8c7a411495d4823fa9c907eba7abf24cb4c7da8b",
      "tree": "bdd8e0d2f60c604b247821aa1b85d834667234cc",
      "parents": [
        "50c7d47074b390ca2f2cbab2f2b1d6bff122b481"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Thu May 15 19:06:56 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:22 2016 -0500"
      },
      "message": "sched: window-stats: Fix overflow bug\n\nMultiplication over-flow possibility exists in update_task_ravg() when\nupdating task\u0027s window_start. That would lead to incorrect accounting\nof task load. Fix the issue by using 64-bit arithmetic.\n\nCRs-Fixed: 665706\nChange-Id: I92651c41efa6121bb8fe102e495ae956127b237a\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\n"
    },
    {
      "commit": "50c7d47074b390ca2f2cbab2f2b1d6bff122b481",
      "tree": "3d2a8e29a4707398a3f116b4a31491dadc600cf3",
      "parents": [
        "fddadfa9239b2c1648cd91918991190d0f5f7348"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Sat Mar 29 11:40:16 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:17 2016 -0500"
      },
      "message": "sched: Window-based load stat improvements\n\nSome tasks can have a sporadic load pattern such that they can suddenly\nstart running for longer intervals of time after running for shorter\ndurations. To recognize such sharp increase in tasks\u0027 demands, max\nbetween the average of 5 window load samples and the most recent sample\nis chosen as the task demand.\n\nMake the window size (sched_ravg_window) configurable at boot up\ntime. To prevent users from setting inappropriate values for window\nsize, min and max limits are defined. As \u0027ravg\u0027 struct tracks load for\nboth real-time and non real-time tasks it is moved out of sched_entity\nstruct.\n\nIn order to prevent changing function signatures for move_tasks() and\nmove_one_task() per-cpu variables are defined to track the total load\nmoved. In case multiple tasks are selected to migrate in one load\nbalance operation, loads \u003e 100 could be sent through migration notifiers.\nPrevent this scenario by setting mnd.load to 100 in such cases.\n\nDefine wrapper functions to compute cpu demands for tasks and to change\nrq-\u003ecumulative_runnable_avg.\n\nChange-Id: I9abfbf3b5fe23ae615a6acd3db9580cfdeb515b4\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\nSigned-off-by: Rohit Gupta \u003crohgup@codeaurora.org\u003e\n"
    },
    {
      "commit": "fddadfa9239b2c1648cd91918991190d0f5f7348",
      "tree": "80598a4beb481a4e61d835fa640fad2ff29b4afe",
      "parents": [
        "1bd9e1f6d3cf6e9e74199045c5801e93796c8996"
      ],
      "author": {
        "name": "Rohit Gupta",
        "email": "rohgup@codeaurora.org",
        "time": "Tue Apr 15 19:30:53 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:10 2016 -0500"
      },
      "message": "sched: Disable wakeup hints for foreground tasks by default\n\nBy default sched_wakeup_load_threshold is set to 60 and therefore\nwakeup hints are sent out for those tasks whose loads are higher\nthat value. This might cause unnecessary wakeup boosts to happen\nwhen load based syncing is turned ON for cpu-boost.\nDisable the wake up hints by setting the sched_wakeup_load_threshold\nto a value higher than 100 so that wakeup boost doesnt happen unless\nit is explicitly turned ON from adb shell.\n\nChange-Id: I9b8a594c2bfdf2e092cc645e50c0c21efc514c2f\nSigned-off-by: Rohit Gupta \u003crohgup@codeaurora.org\u003e\n"
    },
    {
      "commit": "1bd9e1f6d3cf6e9e74199045c5801e93796c8996",
      "tree": "db9ccade7521bdfc32303f3f55828fb21af43fed",
      "parents": [
        "5ec9df8aed606e02404fe434d0f1b751e9d7158a"
      ],
      "author": {
        "name": "Rohit Gupta",
        "email": "rohgup@codeaurora.org",
        "time": "Thu Mar 13 20:54:47 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:32:04 2016 -0500"
      },
      "message": "sched: Call the notify_on_migrate notifier chain for wakeups as well\n\nAdd a change to send notify_on_migrate hints on wakeups of\nforeground tasks from scheduler if their load is above\nwakeup_load_thresholds (default value is 60).\nThese hints can be used to choose an appropriate CPU frequency\ncorresponding to the load of the task being woken up.\n\nChange-Id: Ieca413c1a8bd2b14a15a7591e8e15d22925c42ca\nSigned-off-by: Rohit Gupta \u003crohgup@codeaurora.org\u003e\n"
    },
    {
      "commit": "5ec9df8aed606e02404fe434d0f1b751e9d7158a",
      "tree": "8e1a7b0bbe294175fb5d0ad1ec3930757ff41c97",
      "parents": [
        "7b7fb127f290c789c87b3f1fa2ebaafcc0423211"
      ],
      "author": {
        "name": "Rohit Gupta",
        "email": "rohgup@codeaurora.org",
        "time": "Fri Mar 14 18:56:14 2014 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:31:57 2016 -0500"
      },
      "message": "cpufreq: cpu-boost: Introduce scheduler assisted load based syncs\n\nPreviously, on getting a migration notification cpu-boost changed\nthe scaling min of the destination frequency to match that of the\nsource frequency or sync_threshold whichever was minimum.\n\nIf the scheduler migration notification is extended with task load\n(cpu demand) information, the cpu boost driver can use this load to\ncompute a suitable frequency for the migrating task. The required\nfrequency for the task is calculated by taking the load percentage\nof the max frequency and no sync is performed if the load is less\nthan a particular value (migration_load_threshold).This change is\nbeneficial for both perf and power as demand of a task is taken into\nconsideration while making cpufreq decisions and unnecessary syncs\nfor lightweight tasks are avoided.\n\nThe task load information provided by scheduler comes from a\nwindow-based load collection mechanism which also normalizes the\nload collected by the scheduler to the max possible frequency\nacross all CPUs.\n\nChange-Id: Id2ba91cc4139c90602557f9b3801fb06b3c38992\nSigned-off-by: Rohit Gupta \u003crohgup@codeaurora.org\u003e\n"
    },
    {
      "commit": "889de8f2515046d8b897c5663e320e088a7ac1fc",
      "tree": "dd7660cd292f88e67ed1cc2d9adb6dfe99d41498",
      "parents": [
        "76173dbc06dd1a20b7b6b2c3afbf29ae935b95bd"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Mon Jan 06 16:24:48 2014 -0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:31:03 2016 -0500"
      },
      "message": "sched: window-based load stats for tasks\n\nProvide a metric per task that specifies how cpu bound a task is. Task\nexecution is monitored over several time windows and the fraction of\nthe window for which task was found to be executing or wanting to run\nis recorded as task\u0027s demand. Windows over which task was sleeping are\nignored. We track last 5 recent windows for every task and the maximum\ndemand seen in any of the previous 5 windows (where task had some\nactivity) drives freq demand for every task.\n\nA per-cpu metric (rq-\u003ecumulative_runnable_avg) is also provided which\nis an aggregation of cpu demand of all tasks currently enqueued on it.\nrq-\u003ecumulative_runnable_avg will be useful to know if cpu frequency\nwill need to be changed to match task demand.\n\nChange-Id: Ib83207b9ba8683cd3304ee8a2290695c34f08fe2\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\n"
    },
    {
      "commit": "76173dbc06dd1a20b7b6b2c3afbf29ae935b95bd",
      "tree": "01cb8e6f452ae972ff6c2e59fb9c60eb251b3167",
      "parents": [
        "5f81291fa85a9386c4135bc254513bf7ea1fd94a"
      ],
      "author": {
        "name": "Srivatsa Vaddagiri",
        "email": "vatsa@codeaurora.org",
        "time": "Thu Dec 12 17:06:11 2013 -0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:57 2016 -0500"
      },
      "message": "sched: Make scheduler aware of cpu frequency state\n\nCapacity of a cpu (how much performance it can deliver) is partly\ndetermined by its frequency (P) state, both current frequency as well\nas max frequency it can reach.  Knowing frequency state of cpus will\nhelp scheduler optimize various functions such as tracking every\ntask\u0027s cpu demand and placing tasks on various cpus.\n\nThis patch has scheduler registering for cpufreq notifications to\nbecome aware of cpu\u0027s frequency state. Subsequent patches will make\nuse of derived information for various purposes, such as task\u0027s scaled\nload (cpu demand) accounting and task placement.\n\nChange-Id: I376dffa1e7f3f47d0496cd7e6ef8b5642ab79016\nSigned-off-by: Srivatsa Vaddagiri \u003cvatsa@codeaurora.org\u003e\n"
    },
    {
      "commit": "5f81291fa85a9386c4135bc254513bf7ea1fd94a",
      "tree": "bf4ca1b369b10df81c6350e84e80725ed41ae0e3",
      "parents": [
        "6927616e84bbc0f249ff1c0a49c11baee2199aa8"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:32 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:52 2016 -0500"
      },
      "message": "sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking\n\nWhile per-entity load-tracking is generally useful, beyond computing shares\ndistribution, e.g. runnable based load-balance (in progress), governors,\npower-management, etc.\n\nThese facilities are not yet consumers of this data.  This may be trivially\nreverted when the information is required; but avoid paying the overhead for\ncalculations we will not use until then.\n\nChange-Id: I459d52082af636d1181edb7acb3af973e95714f9\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141507.422162369@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "6927616e84bbc0f249ff1c0a49c11baee2199aa8",
      "tree": "487948f457ad9181903219c153e6c0975e9aaa46",
      "parents": [
        "bdda1309760e8bf34af3ec721e49e6d2801e061f"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:32 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:44 2016 -0500"
      },
      "message": "sched: Make __update_entity_runnable_avg() fast\n\n__update_entity_runnable_avg forms the core of maintaining an entity\u0027s runnable\nload average.  In this function we charge the accumulated run-time since last\nupdate and handle appropriate decay.  In some cases, e.g. a waking task, this\ntime interval may be much larger than our period unit.\n\nFortunately we can exploit some properties of our series to perform decay for a\nblocked update in constant time and account the contribution for a running\nupdate in essentially-constant* time.\n\n[*]: For any running entity they should be performing updates at the tick which\ngives us a soft limit of 1 jiffy between updates, and we can compute up to a\n32 jiffy update in a single pass.\n\nC program to generate the magic constants in the arrays:\n\n  #include \u003cmath.h\u003e\n  #include \u003cstdio.h\u003e\n\n  #define N 32\n  #define WMULT_SHIFT 32\n\n  const long WMULT_CONST \u003d ((1UL \u003c\u003c N) - 1);\n  double y;\n\n  long runnable_avg_yN_inv[N];\n  void calc_mult_inv() {\n  \tint i;\n  \tdouble yn \u003d 0;\n\n  \tprintf(\"inverses\\n\");\n  \tfor (i \u003d 0; i \u003c N; i++) {\n  \t\tyn \u003d (double)WMULT_CONST * pow(y, i);\n  \t\trunnable_avg_yN_inv[i] \u003d yn;\n  \t\tprintf(\"%2d: 0x%8lx\\n\", i, runnable_avg_yN_inv[i]);\n  \t}\n  \tprintf(\"\\n\");\n  }\n\n  long mult_inv(long c, int n) {\n  \treturn (c * runnable_avg_yN_inv[n]) \u003e\u003e  WMULT_SHIFT;\n  }\n\n  void calc_yn_sum(int n)\n  {\n  \tint i;\n  \tdouble sum \u003d 0, sum_fl \u003d 0, diff \u003d 0;\n\n  \t/*\n  \t * We take the floored sum to ensure the sum of partial sums is never\n  \t * larger than the actual sum.\n  \t */\n  \tprintf(\"sum y^n\\n\");\n  \tprintf(\"   %8s  %8s %8s\\n\", \"exact\", \"floor\", \"error\");\n  \tfor (i \u003d 1; i \u003c\u003d n; i++) {\n  \t\tsum \u003d (y * sum + y * 1024);\n  \t\tsum_fl \u003d floor(y * sum_fl+ y * 1024);\n  \t\tprintf(\"%2d: %8.0f  %8.0f %8.0f\\n\", i, sum, 
sum_fl,\n  \t\t\tsum_fl - sum);\n  \t}\n  \tprintf(\"\\n\");\n  }\n\n  void calc_conv(long n) {\n  \tlong old_n;\n  \tint i \u003d -1;\n\n  \tprintf(\"convergence (LOAD_AVG_MAX, LOAD_AVG_MAX_N)\\n\");\n  \tdo {\n  \t\told_n \u003d n;\n  \t\tn \u003d mult_inv(n, 1) + 1024;\n  \t\ti++;\n  \t} while (n !\u003d old_n);\n  \tprintf(\"%d\u003e %ld\\n\", i - 1, n);\n  \tprintf(\"\\n\");\n  }\n\n  void main() {\n  \ty \u003d pow(0.5, 1/(double)N);\n  \tcalc_mult_inv();\n  \tcalc_conv(1024);\n  \tcalc_yn_sum(N);\n  }\n\n[ Compile with -lm ]\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141507.277808946@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n\nChange-Id: Ifc6dd06f234483d376bd752e536a49d3d4ca2115\n"
    },
    {
      "commit": "bdda1309760e8bf34af3ec721e49e6d2801e061f",
      "tree": "4c56ba978446e0d34f7bc812f88cd527e7c59ba3",
      "parents": [
        "9d87ebaf52832a894ce0f06ef2da10992f336f9b"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:31 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:38 2016 -0500"
      },
      "message": "sched: Update_cfs_shares at period edge\n\nNow that our measurement intervals are small (~1ms) we can amortize the posting\nof update_shares() to be about each period overflow.  This is a large cost\nsaving for frequently switching tasks.\n\nChange-Id: I46f9413adf39bd2033f5b84d5134b045d0bf1d4a\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141507.200772172@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "9d87ebaf52832a894ce0f06ef2da10992f336f9b",
      "tree": "508aae19781e216c8d5b61cc02f71e9bfbe5e3ad",
      "parents": [
        "3d608482e16e93a2b6a097e06dc5d223dcc44f98"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:31 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:32 2016 -0500"
      },
      "message": "sched: Refactor update_shares_cpu() -\u003e update_blocked_avgs()\n\nNow that running entities maintain their own load-averages the work we must do\nin update_shares() is largely restricted to the periodic decay of blocked\nentities.  This allows us to be a little less pessimistic regarding our\noccupancy on rq-\u003elock and the associated rq-\u003eclock updates required.\n\nChange-Id: Iecd8d3c194270fd5617c0473b257d9d6904386a5\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141507.133999170@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "3d608482e16e93a2b6a097e06dc5d223dcc44f98",
      "tree": "734f4f5367b4ac58e81b9a00e9e7f54704acf993",
      "parents": [
        "5c1cb8e63e058c28aa5128086a7c8baf644da39b"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:31 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:27 2016 -0500"
      },
      "message": "sched: Replace update_shares weight distribution with per-entity computation\n\nNow that the machinery in place is in place to compute contributed load in a\nbottom up fashion; replace the shares distribution code within update_shares()\naccordingly.\n\nChange-Id: Ib53b4ee8d1c1d3e862ce5174d3b7e6ca12276400\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141507.061208672@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "5c1cb8e63e058c28aa5128086a7c8baf644da39b",
      "tree": "0382ee082ce062ade3c6b8a13ec612895f3adef9",
      "parents": [
        "d36b71a03b77347a09f420851ebf61dd2d8a316b"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:31 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:21 2016 -0500"
      },
      "message": "sched: Maintain runnable averages across throttled periods\n\nWith bandwidth control tracked entities may cease execution according to user\nspecified bandwidth limits.  Charging this time as either throttled or blocked\nhowever, is incorrect and would falsely skew in either direction.\n\nWhat we actually want is for any throttled periods to be \"invisible\" to\nload-tracking as they are removed from the system for that interval and\ncontribute normally otherwise.\n\nDo this by moderating the progression of time to omit any periods in which the\nentity belonged to a throttled hierarchy.\n\nChange-Id: I9d2c2d8a0cc984064c37432ecb262178ce0b3d32\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.998912151@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "d36b71a03b77347a09f420851ebf61dd2d8a316b",
      "tree": "a5d2ad25f531be843fe86cbb8f8e1ae04b518f44",
      "parents": [
        "fdb1a24e4d446d86a8be696072cb76f296e1ad26"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:31 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:14 2016 -0500"
      },
      "message": "sched: Normalize tg load contributions against runnable time\n\nEntities of equal weight should receive equitable distribution of cpu time.\nThis is challenging in the case of a task_group\u0027s shares as execution may be\noccurring on multiple cpus simultaneously.\n\nTo handle this we divide up the shares into weights proportionate with the load\non each cfs_rq.  This does not however, account for the fact that the sum of\nthe parts may be less than one cpu and so we need to normalize:\n  load(tg) \u003d min(runnable_avg(tg), 1) * tg-\u003eshares\nWhere runnable_avg is the aggregate time in which the task_group had runnable\nchildren.\n\nChange-Id: I6f8d2ff1e61c1559c892ed9e2da670aed92dbbeb\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e.\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.930124292@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "fdb1a24e4d446d86a8be696072cb76f296e1ad26",
      "tree": "0456bc95326e13dad91425dcedf99ff143d7ad8a",
      "parents": [
        "679f45f82e88be237ce242d77d966e5f2aceef17"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:31 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:07 2016 -0500"
      },
      "message": "sched: Compute load contribution by a group entity\n\nUnlike task entities who have a fixed weight, group entities instead own a\nfraction of their parenting task_group\u0027s shares as their contributed weight.\n\nCompute this fraction so that we can correctly account hierarchies and shared\nentity nodes.\n\nChange-Id: If61e926bb72f56c543b5b85841355f9f6997c51d\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.855074415@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "679f45f82e88be237ce242d77d966e5f2aceef17",
      "tree": "c840d147c1f5cc7ff0ef27f4f12e2b176430c56a",
      "parents": [
        "0fe5038d2a7444df1aef6343bc6e52710da783df"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:30 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:30:01 2016 -0500"
      },
      "message": "sched: Aggregate total task_group load\n\nMaintain a global running sum of the average load seen on each cfs_rq belonging\nto each task group so that it may be used in calculating an appropriate\nshares:weight distribution.\n\nChange-Id: If2726cb0037ddffce651ea17c46cbc4924cca7a2\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.792901086@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "0fe5038d2a7444df1aef6343bc6e52710da783df",
      "tree": "5af38e0c6377251ba07bc2f608581d049734b7d7",
      "parents": [
        "635704dd72526deba57c08b2745edfc34ca251d9"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:30 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:29:56 2016 -0500"
      },
      "message": "sched: Account for blocked load waking back up\n\nWhen a running entity blocks we migrate its tracked load to\ncfs_rq-\u003eblocked_runnable_avg.  In the sleep case this occurs while holding\nrq-\u003elock and so is a natural transition.  Wake-ups however, are potentially\nasynchronous in the presence of migration and so special care must be taken.\n\nWe use an atomic counter to track such migrated load, taking care to match this\nwith the previously introduced decay counters so that we don\u0027t migrate too much\nload.\n\nChange-Id: Ied949a2a42f68e4d4d16275351fd1240939dd519\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.726077467@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "635704dd72526deba57c08b2745edfc34ca251d9",
      "tree": "ed9c3a6bb452424071e2ca83cad66165192d76e0",
      "parents": [
        "5f7c4c5219e97f83a952f1748c065df78fce45c6"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:30 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:29:46 2016 -0500"
      },
      "message": "sched: Add an rq migration call-back to sched_class\n\nSince we are now doing bottom up load accumulation we need explicit\nnotification when a task has been re-parented so that the old hierarchy can be\nupdated.\n\nAdds: migrate_task_rq(struct task_struct *p, int next_cpu)\n\n(The alternative is to do this out of __set_task_cpu, but it was suggested that\nthis would be a cleaner encapsulation.)\n\nChange-Id: Icc4111cd4159f804186dd7c05f0c36f4e368288e\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.660023400@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "5f7c4c5219e97f83a952f1748c065df78fce45c6",
      "tree": "4022e076f170b88667c57015bfddfcb648804a79",
      "parents": [
        "1c5465d1d1d5aea96eb71318c361d434e0a7cce7"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:30 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:29:40 2016 -0500"
      },
      "message": "sched: Maintain the load contribution of blocked entities\n\nWe are currently maintaining:\n\n  runnable_load(cfs_rq) \u003d \\Sum task_load(t)\n\nFor all running children t of cfs_rq.  While this can be naturally updated for\ntasks in a runnable state (as they are scheduled); this does not account for\nthe load contributed by blocked task entities.\n\nThis can be solved by introducing a separate accounting for blocked load:\n\n  blocked_load(cfs_rq) \u003d \\Sum runnable(b) * weight(b)\n\nObviously we do not want to iterate over all blocked entities to account for\ntheir decay, we instead observe that:\n\n  runnable_load(t) \u003d \\Sum p_i*y^i\n\nand that to account for an additional idle period we only need to compute:\n\n  y*runnable_load(t).\n\nThis means that we can compute all blocked entities at once by evaluating:\n\n  blocked_load(cfs_rq)` \u003d y * blocked_load(cfs_rq)\n\nFinally we maintain a decay counter so that when a sleeping entity re-awakens\nwe can determine how much of its load should be removed from the blocked sum.\n\nChange-Id: Ib0b29b3d1ca872a13d7236ed142156ef0a35dc9a\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.585389902@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "1c5465d1d1d5aea96eb71318c361d434e0a7cce7",
      "tree": "060d3fff882c4888cba9fb2fdd3c7c06afc9e53e",
      "parents": [
        "b9283adf8c4c2dcbe1fd5a6171c658bc7924bc65"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:30 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:29:34 2016 -0500"
      },
      "message": "sched: Aggregate load contributed by task entities on parenting cfs_rq\n\nFor a given task t, we can compute its contribution to load as:\n\n  task_load(t) \u003d runnable_avg(t) * weight(t)\n\nOn a parenting cfs_rq we can then aggregate:\n\n  runnable_load(cfs_rq) \u003d \\Sum task_load(t), for all runnable children t\n\nMaintain this bottom up, with task entities adding their contributed load to\nthe parenting cfs_rq sum.  When a task entity\u0027s load changes we add the same\ndelta to the maintained sum.\n\nChange-Id: If3b74d345354a340836bbc356002982e25ab3244\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.514678907@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "b9283adf8c4c2dcbe1fd5a6171c658bc7924bc65",
      "tree": "49340484990b18a85f0ec60eb473e070cbce2f65",
      "parents": [
        "e4079d2be87ac03bacdf2ad4076366b9099f1043"
      ],
      "author": {
        "name": "Ben Segall",
        "email": "bsegall@google.com",
        "time": "Thu Oct 04 12:51:20 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:29:29 2016 -0500"
      },
      "message": "sched: Maintain per-rq runnable averages\n\nSince runqueues do not have a corresponding sched_entity we instead embed a\nsched_avg structure directly.\n\nChange-Id: Ib2a80c77f6b4e017eaa72b31004630c7d95e4b94\nSigned-off-by: Ben Segall \u003cbsegall@google.com\u003e\nReviewed-by: Paul Turner \u003cpjt@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.442637130@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "e4079d2be87ac03bacdf2ad4076366b9099f1043",
      "tree": "7666d57c41be488b79b7de50fe65cc11d7acca2e",
      "parents": [
        "2f9d485e1a6c66b5a27495ca1e86b1df1dda2213"
      ],
      "author": {
        "name": "Paul Turner",
        "email": "pjt@google.com",
        "time": "Thu Oct 04 13:18:29 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:29:24 2016 -0500"
      },
      "message": "sched: Track the runnable average on a per-task entity basis\n\nInstead of tracking averaging the load parented by a cfs_rq, we can track\nentity load directly. With the load for a given cfs_rq then being the sum\nof its children.\n\nTo do this we represent the historical contribution to runnable average\nwithin each trailing 1024us of execution as the coefficients of a\ngeometric series.\n\nWe can express this for a given task t as:\n\n  runnable_sum(t) \u003d \\Sum u_i * y^i, runnable_avg_period(t) \u003d \\Sum 1024 * y^i\n  load(t) \u003d weight_t * runnable_sum(t) / runnable_avg_period(t)\n\nWhere: u_i is the usage in the last i`th 1024us period (approximately 1ms)\n~ms and y is chosen such that y^k \u003d 1/2.  We currently choose k to be 32 which\nroughly translates to about a sched period.\n\nChange-Id: Iafc090fe18ed1835cc501949e8ba1e4ed78c5de1\nSigned-off-by: Paul Turner \u003cpjt@google.com\u003e\nReviewed-by: Ben Segall \u003cbsegall@google.com\u003e\nSigned-off-by: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nLink: http://lkml.kernel.org/r/20120823141506.372695337@google.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n"
    },
    {
      "commit": "9775141e935cee4719ed4b0224e0a9c49d06e3b6",
      "tree": "f94e09e1771b7565e68e926462d42c2b990d4382",
      "parents": [
        "b1a98e2fc61371a8462a7f437028fb67f6b842a5"
      ],
      "author": {
        "name": "Thomas Gleixner",
        "email": "tglx@linutronix.de",
        "time": "Mon Jun 11 15:07:08 2012 +0200"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:29:03 2016 -0500"
      },
      "message": "smpboot: Remove leftover declaration\n\nChange-Id: Ibeb1176232101cf889cd3212b19d2173b2a8ee58\nSigned-off-by: Thomas Gleixner \u003ctglx@linutronix.de\u003e\n"
    },
    {
      "commit": "b1a98e2fc61371a8462a7f437028fb67f6b842a5",
      "tree": "2f7154589e951ba504753db27d344e3a5ac83322",
      "parents": [
        "f07da6943406f2fe2277d4d7ded70547f1ee7d02"
      ],
      "author": {
        "name": "Vignesh Radhakrishnan",
        "email": "vigneshr@codeaurora.org",
        "time": "Mon May 11 16:41:54 2015 +0530"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:28:58 2016 -0500"
      },
      "message": "smpboot: use kmemleak_not_leak for smpboot_thread_data\n\nKmemleak reports the following memory leak :\n\n    [\u003cffffffc0002faef8\u003e] create_object+0x140/0x274\n    [\u003cffffffc000cc3598\u003e] kmemleak_alloc+0x80/0xbc\n    [\u003cffffffc0002f707c\u003e] kmem_cache_alloc_trace+0x148/0x1d8\n    [\u003cffffffc00024504c\u003e] __smpboot_create_thread.part.2+0x2c/0xec\n    [\u003cffffffc0002452b4\u003e] smpboot_register_percpu_thread+0x90/0x118\n    [\u003cffffffc0016067c0\u003e] spawn_ksoftirqd+0x1c/0x30\n    [\u003cffffffc000200824\u003e] do_one_initcall+0xb0/0x14c\n    [\u003cffffffc001600820\u003e] kernel_init_freeable+0x84/0x1e0\n    [\u003cffffffc000cc273c\u003e] kernel_init+0x10/0xcc\n    [\u003cffffffc000203bbc\u003e] ret_from_fork+0xc/0x50\n\nThis memory allocated here points to smpboot_thread_data.\nData is used as an argument for this kthread.\n\nThis will be used when smpboot_thread_fn runs. Therefore,\nis not a leak.\n\nCall kmemleak_not_leak for smpboot_thread_data pointer\nto ensure that kmemleak doesn\u0027t report it as a memory\nleak.\n\nChange-Id: I02b0a7debea3907b606856e069d63d7991b67cd9\nSigned-off-by: Vignesh Radhakrishnan \u003cvigneshr@codeaurora.org\u003e\nSigned-off-by: Prasad Sodagudi \u003cpsodagud@codeaurora.org\u003e\n"
    },
    {
      "commit": "f07da6943406f2fe2277d4d7ded70547f1ee7d02",
      "tree": "41a5903b57957a12d0efd6129205db3facccd639",
      "parents": [
        "1dc72590e398501cd6e28f53b18b94a3eef0b1a6"
      ],
      "author": {
        "name": "Lai Jiangshan",
        "email": "laijs@cn.fujitsu.com",
        "time": "Thu Jul 31 11:30:17 2014 +0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:28:53 2016 -0500"
      },
      "message": "smpboot: Add missing get_online_cpus() in smpboot_register_percpu_thread()\n\ncommit 4bee96860a65c3a62d332edac331b3cf936ba3ad upstream.\n\nThe following race exists in the smpboot percpu threads management:\n\nCPU0\t      \t   \t     CPU1\ncpu_up(2)\n  get_online_cpus();\n  smpboot_create_threads(2);\n\t\t\t     smpboot_register_percpu_thread();\n\t\t\t     for_each_online_cpu();\n\t\t\t       __smpboot_create_thread();\n  __cpu_up(2);\n\nThis results in a missing per cpu thread for the newly onlined cpu2 and\nin a NULL pointer dereference on a consecutive offline of that cpu.\n\nProctect smpboot_register_percpu_thread() with get_online_cpus() to\nprevent that.\n\n[ tglx: Massaged changelog and removed the change in\n        smpboot_unregister_percpu_thread() because that\u0027s an\n        optimization and therefor not stable material. ]\n\nChange-Id: If9ee9e290f13ce909b3deafec0e3e0a805942320\nSigned-off-by: Lai Jiangshan \u003claijs@cn.fujitsu.com\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: Srivatsa S. Bhat \u003csrivatsa.bhat@linux.vnet.ibm.com\u003e\nCc: David Rientjes \u003crientjes@google.com\u003e\nLink: http://lkml.kernel.org/r/1406777421-12830-1-git-send-email-laijs@cn.fujitsu.com\nSigned-off-by: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n"
    },
    {
      "commit": "1dc72590e398501cd6e28f53b18b94a3eef0b1a6",
      "tree": "ef9609674478312b3b1f956fa142fa1c2495b468",
      "parents": [
        "f21320b9195694f87dfb530eec511a2d4436d71a"
      ],
      "author": {
        "name": "Paul E. McKenney",
        "email": "paulmck@linux.vnet.ibm.com",
        "time": "Wed Apr 15 12:45:41 2015 +0300"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:28:48 2016 -0500"
      },
      "message": "cpu: Handle smpboot_unpark_threads() uniformly\n\nCommit 8d33fe6 (cpu: Defer smpboot kthread unparking until CPU known\nto scheduler) put the online path\u0027s call to smpboot_unpark_threads()\ninto a CPU-hotplug notifier.  This commit places the offline-failure\npaths call into the same notifier for the sake of uniformity.\n\nNote that it is not currently possible to place the offline path\u0027s call to\nsmpboot_park_threads() into an existing notifier because the CPU_DYING\nnotifiers run in a restricted environment, and the CPU_UP_PREPARE\nnotifiers run too soon.\n\nChange-Id: I43546fdab4cb921f8210bad39da56e85e72a2122\nSigned-off-by: Paul E. McKenney \u003cpaulmck@linux.vnet.ibm.com\u003e\n"
    },
    {
      "commit": "f21320b9195694f87dfb530eec511a2d4436d71a",
      "tree": "d17b9d73d0ae425f917cee536d74672c11c5cb97",
      "parents": [
        "e0cdcf8dc3c926e44406624aaf70924b58d90f3c"
      ],
      "author": {
        "name": "Paul E. McKenney",
        "email": "paulmck@linux.vnet.ibm.com",
        "time": "Sun Apr 12 08:06:55 2015 -0700"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:28:42 2016 -0500"
      },
      "message": "cpu: Defer smpboot kthread unparking until CPU known to scheduler\n\nCurrently, smpboot_unpark_threads() is invoked before the incoming CPU\nhas been added to the scheduler\u0027s runqueue structures.  This might\npotentially cause the unparked kthread to run on the wrong CPU, since the\ncorrect CPU isn\u0027t fully set up yet.\n\nThat causes a sporadic, hard to debug boot crash triggering on some\nsystems, reported by Borislav Petkov, and bisected down to:\n\n  2a442c9c6453 (\"x86: Use common outgoing-CPU-notification code\")\n\nThis patch places smpboot_unpark_threads() in a CPU hotplug\nnotifier with priority set so that these kthreads are unparked just after\nthe CPU has been added to the runqueues.\n\nChange-Id: I8921987de9c2a2f475cc63dc82662d6ebf6e8725\nReported-and-tested-by: Borislav Petkov \u003cbp@suse.de\u003e\nSigned-off-by: Paul E. McKenney \u003cpaulmck@linux.vnet.ibm.com\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: linux-kernel@vger.kernel.org\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nGit-commit: 00df35f991914db6b8bde8cf09808e19a9cffc3d\nGit-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git\nSigned-off-by: Matt Wagantall \u003cmattw@codeaurora.org\u003e\n"
    },
    {
      "commit": "513d87b573e3e97222706ff429efa1d13d25d0aa",
      "tree": "1b187e4a8223e9a46fbde3e96e63d0cf4caa0053",
      "parents": [
        "8a56c4fff0de334cbade44e2dfe5ab4d0f71bac7"
      ],
      "author": {
        "name": "Andrey Vagin",
        "email": "avagin@openvz.org",
        "time": "Wed Feb 27 17:03:12 2013 -0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:28:08 2016 -0500"
      },
      "message": "BACKPORT: signal: allow to send any siginfo to itself\n\n(cherry picked from commit 66dd34ad31e5963d72a700ec3f2449291d322921)\n\nThe idea is simple.  We need to get the siginfo for each signal on\ncheckpointing dump, and then return it back on restore.\n\nThe first problem is that the kernel doesn\u0027t report complete siginfos to\nuserspace.  In a signal handler the kernel strips SI_CODE from siginfo.\nWhen a siginfo is received from signalfd, it has a different format with\nfixed sizes of fields.  The interface of signalfd was extended.  If a\nsignalfd is created with the flag SFD_RAW, it returns siginfo in a raw\nformat.\n\nrt_sigqueueinfo looks suitable for restoring signals, but it can\u0027t send\nsiginfo with a positive si_code, because these codes are reserved for\nthe kernel.  In the real world each person has right to do anything with\nhimself, so I think a process should able to send any siginfo to itself.\n\nThis patch:\n\nThe kernel prevents sending of siginfo with positive si_code, because\nthese codes are reserved for kernel.  I think we can allow a task to\nsend such a siginfo to itself.  This operation should not be dangerous.\n\nThis functionality is required for restoring signals in\ncheckpoint/restart.\n\nChange-Id: I40101d87eeb53ae05cfa0949439577a8f3f58f94\nSigned-off-by: Andrey Vagin \u003cavagin@openvz.org\u003e\nCc: Serge Hallyn \u003cserge.hallyn@canonical.com\u003e\nCc: \"Eric W. Biederman\" \u003cebiederm@xmission.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Michael Kerrisk \u003cmtk.manpages@gmail.com\u003e\nCc: Pavel Emelyanov \u003cxemul@parallels.com\u003e\nCc: Cyrill Gorcunov \u003cgorcunov@openvz.org\u003e\nCc: Michael Kerrisk \u003cmtk.manpages@gmail.com\u003e\nReviewed-by: Oleg Nesterov \u003coleg@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9c60b9ea998b7e70551371b554a3cc1c92af24b6",
      "tree": "3cdd0677674405ce59796bd3cb3005f67fd09fe5",
      "parents": [
        "accef775d1bdf690ffff927016d26da9859d543c"
      ],
      "author": {
        "name": "John Stultz",
        "email": "john.stultz@linaro.org",
        "time": "Tue Nov 17 08:35:54 2015 -0800"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Wed Aug 10 16:27:42 2016 -0500"
      },
      "message": "ANDROID: exec_domains: Disable request_module() call for personalities\n\n(cherry pick from commit a9ac1262ce80c287562e604f3bb24f232fcb686e)\n\nWith Android M, Android environments use a separate execution\ndomain for 32bit processes.\nSee:\nhttps://android-review.googlesource.com/#/c/122131/\n\nThis results in systems that use kernel modules to see selinux\naudit noise like:\n  type\u003d1400 audit(28.989:15): avc: denied { module_request } for\n  pid\u003d1622 comm\u003d\"app_process32\" kmod\u003d\"personality-8\"\n  scontext\u003du:r:zygote:s0 tcontext\u003du:r:kernel:s0 tclass\u003dsystem\n\nWhile using kernel modules is unadvised, some systems do require\nthem.\n\nThus to avoid developers adding sepolicy exceptions to allow for\nrequest_module calls, this patch disables the logic which tries\nto call request_module for the 32bit personality (ie:\npersonality-8), which doesn\u0027t actually exist.\n\nSigned-off-by: John Stultz \u003cjohn.stultz@linaro.org\u003e\nChange-Id: I9cb90bd1291f0a858befa7d347c85464346702db\n"
    },
    {
      "commit": "53b4f1f0348a4e335fe05906c13591c340d81a4e",
      "tree": "9142eaf20cc68108f3da6672774ccdf0b3f1b2ea",
      "parents": [
        "12cc41c1fae385eb8ceaff6964a155595c98a815"
      ],
      "author": {
        "name": "Willy Tarreau",
        "email": "w@1wt.eu",
        "time": "Mon Jan 18 16:36:09 2016 +0100"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Fri Aug 05 02:07:58 2016 -0500"
      },
      "message": "pipe: limit the per-user amount of pages allocated in pipes\n\nOn no-so-small systems, it is possible for a single process to cause an\nOOM condition by filling large pipes with data that are never read. A\ntypical process filling 4000 pipes with 1 MB of data will use 4 GB of\nmemory. On small systems it may be tricky to set the pipe max size to\nprevent this from happening.\n\nThis patch makes it possible to enforce a per-user soft limit above\nwhich new pipes will be limited to a single page, effectively limiting\nthem to 4 kB each, as well as a hard limit above which no new pipes may\nbe created for this user. This has the effect of protecting the system\nagainst memory abuse without hurting other users, and still allowing\npipes to work correctly though with less data at once.\n\nThe limit are controlled by two new sysctls : pipe-user-pages-soft, and\npipe-user-pages-hard. Both may be disabled by setting them to zero. The\ndefault soft limit allows the default number of FDs per process (1024)\nto create pipes of the default size (64kB), thus reaching a limit of 64MB\nbefore starting to create only smaller pipes. With 256 processes limited\nto 1024 FDs each, this results in 1024*64kB + (256*1024 - 1024) * 4kB \u003d\n1084 MB of memory allocated for a user. The hard limit is disabled by\ndefault to avoid breaking existing applications that make intensive use\nof pipes (eg: for splicing).\n\nReported-by: socketpair@gmail.com\nReported-by: Tetsuo Handa \u003cpenguin-kernel@I-love.SAKURA.ne.jp\u003e\nMitigates: CVE-2013-4312 (Linux 2.0+)\nSuggested-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Willy Tarreau \u003cw@1wt.eu\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n\nConflicts:\n\tDocumentation/sysctl/fs.txt\n\tfs/pipe.c\n\tinclude/linux/sched.h\n\nChange-Id: Ic7c678af18129943e16715fdaa64a97a7f0854be\nSigned-off-by: SteadyQuad \u003cSteadyQuad@gmail.com\u003e\n"
    },
    {
      "commit": "c731a27c67d5200f71ba78613b2e7ad87284d1e7",
      "tree": "dfd4e5af6eaa62f7878f60fcefa5ec845023694f",
      "parents": [
        "dbc29455e7c6c0cfd8f15d4586ef67acfd3e55ca"
      ],
      "author": {
        "name": "Felix Fietkau",
        "email": "nbd@openwrt.org",
        "time": "Fri May 04 21:08:33 2012 -0700"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Jul 13 07:02:28 2016 -0400"
      },
      "message": "timer: optimize apply_slack()\n\n__fls(mask) is equivalent to find_last_bit(\u0026mask, BITS_PER_LONG), but cheaper.\nfind_last_bit was showing up high on the list when I was profiling for stalls\non icache misses on a system with very small cache size (MIPS).\n\nSigned-off-by: Felix Fietkau \u003cnbd@openwrt.org\u003e\nSigned-off-by: edoko \u003cr_data@naver.com\u003e\n\nChange-Id: I8a5021a2fb2936c00ffd456663a76cb1b23e3100\n"
    },
    {
      "commit": "4cb0dcdcf19bb2a1928675b042a85cb91cc293b5",
      "tree": "0c3c993240191799272be738fe2594a399eab3d8",
      "parents": [
        "582bdf5b0c1ab084e9fc3b0ee9a42da92a242039",
        "4a3ed04b969fb3e062ab11a4ce0856744be1203b"
      ],
      "author": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu May 19 13:12:59 2016 -0500"
      },
      "committer": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu May 19 13:13:07 2016 -0500"
      },
      "message": "Merge remote-tracking branch \u0027caf/LA.AF.1.1_rb1.18\u0027 into HEAD\n\nChange-Id: I5ff7ee6a8875318a6bd8e9a7e3828f629c6a3d1c\n"
    },
    {
      "commit": "766ce4e5a952510f9f27511cbfecc884bf5147cd",
      "tree": "93ad1970e254fc3b1fb0650a4dc449d86ad5114e",
      "parents": [
        "3bc527393379fcd740cc66c700da808abdbf5a5d"
      ],
      "author": {
        "name": "Ivan Grinko",
        "email": "iivanich@gmail.com",
        "time": "Thu Apr 28 22:06:41 2016 +0300"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Tue May 17 08:03:16 2016 -0400"
      },
      "message": "Linux 3.4.112\n\nhttps://cdn.kernel.org/pub/linux/kernel/v3.x/ChangeLog-3.4.112\n\nChange-Id: Ic146bc84c10ebcfe256eb6bffa8ffef44c9a1d38\n"
    },
    {
      "commit": "0cf007169665bc6c8eeca0e9089ef0e805c2ac42",
      "tree": "82a470f9ef259808f9d063d24c860bfab5da3809",
      "parents": [
        "25bec49c55f487637b2f9550b6e04ffb51c5863f"
      ],
      "author": {
        "name": "Ivan Grinko",
        "email": "iivanich@gmail.com",
        "time": "Thu Mar 24 09:39:37 2016 +0200"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Tue May 17 07:52:15 2016 -0400"
      },
      "message": "Linux 3.4.111\n"
    },
    {
      "commit": "60489ce5993943a898684e790de90f2f684a50b0",
      "tree": "92d7e37d43a6f22ac211d1d4d3f97802099dfbf5",
      "parents": [
        "981442090d47101c6cc4f1146021c827ff13a4b4"
      ],
      "author": {
        "name": "Paul Reioux",
        "email": "reioux@gmail.com",
        "time": "Mon Jan 04 23:31:47 2016 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Tue May 17 07:50:24 2016 -0400"
      },
      "message": "Add IntelliPlug hotplug 3.8\n\nintelli_plug: intelligent hotplug cpu driver with eco mode\n\nChange-Id: Ic598c947232f98779cfe129f63bee24d2c8514ac\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: tweak for faster wakeup from suspend\n\nbump version to 1.1\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: increase cores on persistence\n\nalso replace hardcoded values with Macros for ease of updating later\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: make it gcc-4.6.x eabi compatible :p\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: use rq_stats to help detect artificial or constant loads\n\nbenchmarks often create fake / artificial loads at a constant rate.  These\ntype of loads are not detected corectly by run average algorithms. Use\nthe run queue stats to help detect these cases and bring up the cores in\ncorrespondence.\n\nbump version to 1.2\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: use mp_decision() algorithm for core 3 and 4\n\nThis replaces the simplistic check for run queue thresholds with a more\nsophisticate algorithm with time awareness\n\nbump version to 1.3\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: tweak mp_decsion parameters and remove unused logic\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: bump threshold slightly for better response\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nwip: intelli_plug: change logic for better benchmark performance\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: use mp_decision to reduce online persistence count for cores\n\nthis should bring down the cores faster when run queue is low\nalso optimized code a bit using local vars\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: disable by default\n\nlet the user enable via sysfs so it won\u0027t clash with 
mpdecision which is\nenabled by default for qualcomm phones\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nintelli_plug: slow down hotplug activity from 50ms to 200ms\n\nthis will reduce the hotplug chaos which may in turn save more power\n\nSigned-off-by: faux123 \u003creioux@gmail.com\u003e\n\nConflicts:\n\tarch/arm/mach-msm/intelli_plug.c\n\nintelli_plug: code clean up and minor bug fixes\n\nbump to version 1.6\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelliplug: replaced deprecated early_suspend driver with userspace interface\n\nearly_suspend has been deprecated, so move the original functionality to a sysfs\ninterface and have a userspace app replicate the early_suspend functionality\n\nbump version to 1.7\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add dynamic load sampling rate logic\n\nAdd dynamic sampling logic based on load. If load is high and requires more\nthan 2 cores, increase the sampling rate for a duration of 3 seconds to help\nmanage the load better during this time.  Once the duration expires, revert back\nto a lazier sampling rate for better battery performance.\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add new power_suspend PM driver\n\nbump version to 1.9\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add touch input logic\n\nchanged intelli_plug sampling rates and bump to version 2.0\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: use a context safe function call instead\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: performance tune-up continued...\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: switch to use dedicated high priority workqueue\n\nfrom the shared global workqueue.  
This should prevent hang-ups while the\nglobal workqueue is busy. Also cleaned up some logic issues.\n\ngive input boost its own workqueue\n\nbump to version 2.2\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: code review clean up\n\nset def sampling rate to something sane rather than zero (instantaneous\nrescheduling is not good if intelli_plug is disabled)\n\nmake touch input more generic rather than tied to a touchscreen driver\n\nremove unused code\n\nthanks to @dorimanx for the code reviews and suggestions\n\nbump to version 2.3\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nIntelli_plug: add wakeup cpufreq boost for quicker wakeup\n\nbump version to 2.4\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add screen off max controls\n\nbump to version 2.5\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add parameter to control touch boost on/off\n\nbump version to 2.6\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: refactor stats calculation code to be less intrusive\n\nthis is done for those kernels which do not have 100% source code available\nand must use existing closed source modules such as wifi drivers\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nIntelli_plug: kernel sched/core: add per cpu nr_running stats\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add profiles support and misc code optimization\n\nbump to version 3.0\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: use per cpu nr_running stats for unplugging cores\n\nbump version to 3.1\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: allow cpu_nr_running_threshold to be user adjustable\n\nbump version to 3.2\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: tweak cpu_nr_running threshold\n\nand fix minor logic issues\n\nSigned-off-by: Paul Reioux 
\u003creioux@gmail.com\u003e\n\nintelli_plug: remove legacy msm_rq based code\n\njust use the original algorithm based on nr_running_stats\n\nbump version to 3.3\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add nr_running_thresholds based on thread capacity of SOC type\n\ninstead of hard coded nr_running_thresholds, perform compile time calculation\nof thresholds based on thread capacity of SOC types.\n\nbump version to 3.4\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: misc minor code fixup post 3.4 update\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: add automatic dual-core initializations\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: deprecate eco mode. replaced by built-in profile\n\nbump version to 3.5\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: post 3.5 tweaks and code clean up\n\nbump to version 3.6\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: move to its own directory. It\u0027s been cross platform for a while\n\nintelli_plug driver has been working on OMAP44xx, TEGRA 3, Exynos and\nMSM Krait/Cortex multi-core SOCs.  
So it doesn\u0027t make sense to patch on a per\nSOC basis, move it to its own ARM platform independent folder so patches can\napply to all supported ARM platforms\n\nChange-Id: Icf778959cbaa59c73fefa2d27f682efa6d8adf93\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: unify powersuspend and earlysuspend drivers\n\nalso minor clean up on the persist logic\n\nbump to version 3.7\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: adjust thread capacity for Cortex A7 SOCs\n\nCortex A7 is much weaker than Krait processors\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: fix thread capacity threshold calculation\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: fix logic error for eco mode profiles\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: only apply suspend/resume logic if active\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: fix incorrect cpufreq API usage\n\nThis resolves the wakeup kick and screen off max issues\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: initialize ip_info struct element during driver init\n\nthis will eliminate race issues\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: use primary CPU\u0027s info data for non-boot cpu\u0027s settings\n\nAsync CPU design has caused quite a bit of grief for controlling cpu\nfrequencies\n\nbump to version 3.8\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nintelli_plug: older Qualcomm kernel compatibility fixup\n\nARGH... really angry at CAF code.\n\nsome cpufreq driver APIs are incredibly unstable with some of the older\nMSM kernels.  
The correct way is to fix the cpufreq drivers, but there are\ntons of variations out there, so rather than depending on a fix, make\nintelli_plug more universal by avoiding the troublesome APIs altogether\n\nSigned-off-by: Paul Reioux \u003creioux@gmail.com\u003e\n\nChange-Id: Ia530ec03b2b678fdb538b94b272599e96309133f\n"
    },
    {
      "commit": "0c02155162467f29bea0dfef925e6f27df1efc7f",
      "tree": "e882a5f9fb5e307b9b9009951d7ba17874101ace",
      "parents": [
        "3ce7216195cf6c0967508999d2cd75ce78044210"
      ],
      "author": {
        "name": "Yunlei He",
        "email": "heyunlei@huawei.com",
        "time": "Tue Feb 23 12:07:56 2016 +0800"
      },
      "committer": {
        "name": "Jaegeuk Kim",
        "email": "jaegeuk@kernel.org",
        "time": "Mon Mar 07 15:22:23 2016 -0800"
      },
      "message": "f2fs: avoid hungtask problem caused by losing wake_up\n\nThe D state of wait_on_all_pages_writeback should be waken by\nfunction f2fs_write_end_io when all writeback pages have been\nsuccesfully written to device. It\u0027s possible that wake_up comes\nbetween get_pages and io_schedule. Maybe in this case it will\nlost wake_up and still in D state even if all pages have been\nwrite back to device, and finally, the whole system will be into\nthe hungtask state.\n\n                if (!get_pages(sbi, F2FS_WRITEBACK))\n                         break;\n\t\t\t\t\t\u003c---------  wake_up\n                io_schedule();\n\nSigned-off-by: Yunlei He \u003cheyunlei@huawei.com\u003e\nSigned-off-by: Biao He \u003chebiao6@huawei.com\u003e\nSigned-off-by: Jaegeuk Kim \u003cjaegeuk@kernel.org\u003e\n"
    },
    {
      "commit": "5fa350ef605bd6a1b64540cad929a0d6d1ce3607",
      "tree": "eb019d8edd08c03b982d8fbcbefaf52ff75150d3",
      "parents": [
        "15c4ecb2b060da80621b36fa1dd27cc031de0d0c"
      ],
      "author": {
        "name": "Nick Reuter",
        "email": "nreuter85@gmail.com",
        "time": "Thu Feb 25 21:35:15 2016 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Fri Feb 26 05:08:24 2016 -0500"
      },
      "message": "kernel/hz.bc: ignore.\n\nChange-Id: I96e5465162eec64287b715b5e0df726a2636194d\nSigned-off-by: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "17b94bef87f4797675ce9716725c70d4828a3db8",
      "tree": "856de91bf03b73a12e878a470abc8324f46e9c43",
      "parents": [
        "9cc712efc708bf2d25b6a6c013c66c42f2cfccd0"
      ],
      "author": {
        "name": "shumash",
        "email": "shumashgeely@gmail.com",
        "time": "Sun Jan 10 16:44:51 2016 -0700"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Thu Feb 11 10:33:08 2016 -0500"
      },
      "message": "port 3.0 kernel power source in pieces\n source.c\n main.c\n\nChange-Id: Ifa25fab256d8bff11bede4e35236da5857e39d78\n"
    },
    {
      "commit": "9cc712efc708bf2d25b6a6c013c66c42f2cfccd0",
      "tree": "f4b7b9d2354b77f6a63eacded1c5b32282c18abc",
      "parents": [
        "64c363146fe8b4b26285d36fad0fc01b9c8c1285"
      ],
      "author": {
        "name": "José Adolfo Galdámez",
        "email": "josegalre@pac-rom.com",
        "time": "Wed Oct 21 21:52:13 2015 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:04:59 2016 -0500"
      },
      "message": "Merge tag \u0027v3.4.110\u0027 into mm-6.0\n\nChange-Id: I0afc69bce474139d1b70e062d72c0b8054529833\nSigned-off-by: José Adolfo Galdámez \u003cjosegalre@pac-rom.com\u003e\n"
    },
    {
      "commit": "64c363146fe8b4b26285d36fad0fc01b9c8c1285",
      "tree": "f4597aeccc6d37aadbf3a719dfefc62632e4ee10",
      "parents": [
        "900469d0b0c337db19908f77d172f4b17f4573ba"
      ],
      "author": {
        "name": "José Adolfo Galdámez",
        "email": "josegalre@pac-rom.com",
        "time": "Mon Sep 21 22:00:27 2015 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:03:50 2016 -0500"
      },
      "message": "Merge tag \u0027v3.4.109\u0027 into mm-6.0\n\nChange-Id: I93b29443377e338fc5d3b031b130da720f788879\nSigned-off-by: José Adolfo Galdámez \u003cjosegalre@pac-rom.com\u003e\n"
    },
    {
      "commit": "900469d0b0c337db19908f77d172f4b17f4573ba",
      "tree": "e7c8e6e70ad09ecc74c7385269f9a7a908489b88",
      "parents": [
        "3591a444f6b8cb82a9b88a49a4e67d8f4b61a6de"
      ],
      "author": {
        "name": "José Adolfo Galdámez",
        "email": "josegalre@pac-rom.com",
        "time": "Sat Jun 20 23:45:36 2015 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:02:51 2016 -0500"
      },
      "message": "Merge tag \u0027v3.4.108\u0027 into mm-6.0\n\nChange-Id: I5ee718e5c87c9647c6edf0926a887679e065a649\nSigned-off-by: José Adolfo Galdámez \u003cjosegalre@pac-rom.com\u003e\n"
    },
    {
      "commit": "698785c12d3c6da117152dc520e2fb9a46fa31f8",
      "tree": "c17d2e3eb1f6c587c4744835e526a403ca6da57a",
      "parents": [
        "22acfde0a291977c02b5f4461487dd81dd901da7"
      ],
      "author": {
        "name": "John Stultz",
        "email": "john.stultz@linaro.org",
        "time": "Mon Feb 09 23:30:36 2015 -0800"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:02:13 2016 -0500"
      },
      "message": "ntp: Fixup adjtimex freq validation on 32-bit systems\n\ncommit 29183a70b0b828500816bd794b3fe192fce89f73 upstream.\n\nAdditional validation of adjtimex freq values to avoid\npotential multiplication overflows were added in commit\n5e5aeb4367b (time: adjtimex: Validate the ADJ_FREQUENCY values)\n\nUnfortunately the patch used LONG_MAX/MIN instead of\nLLONG_MAX/MIN, which was fine on 64-bit systems, but being\nmuch smaller on 32-bit systems caused false positives\nresulting in most direct frequency adjustments to fail w/\nEINVAL.\n\nntpd only does direct frequency adjustments at startup, so\nthe issue was not as easily observed there, but other time\nsync applications like ptpd and chrony were more effected by\nthe bug.\n\nSee bugs:\n\n  https://bugzilla.kernel.org/show_bug.cgi?id\u003d92481\n  https://bugzilla.redhat.com/show_bug.cgi?id\u003d1188074\n\nThis patch changes the checks to use LLONG_MAX for\nclarity, and additionally the checks are disabled\non 32-bit systems since LLONG_MAX/PPM_SCALE is always\nlarger then the 32-bit long freq value, so multiplication\noverflows aren\u0027t possible there.\n\nReported-by: Josh Boyer \u003cjwboyer@fedoraproject.org\u003e\nReported-by: George Joseph \u003cgeorge.joseph@fairview5.com\u003e\nTested-by: George Joseph \u003cgeorge.joseph@fairview5.com\u003e\nSigned-off-by: John Stultz \u003cjohn.stultz@linaro.org\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nCc: Sasha Levin \u003csasha.levin@oracle.com\u003e\nLink: http://lkml.kernel.org/r/1423553436-29747-1-git-send-email-john.stultz@linaro.org\n[ Prettified the changelog and the comments a bit. ]\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "f39f5a4177f0aadec47f4c3b0c4958913ff2d279",
      "tree": "1f7a7399f750084756567c2cb64f762e5b170955",
      "parents": [
        "180e131b326585b5bc7661e1c7b477a2761b5642"
      ],
      "author": {
        "name": "Tim Chen",
        "email": "tim.c.chen@linux.intel.com",
        "time": "Fri Dec 12 15:38:12 2014 -0800"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:02:08 2016 -0500"
      },
      "message": "sched/rt: Reduce rq lock contention by eliminating locking of non-feasible target\n\ncommit 80e3d87b2c5582db0ab5e39610ce3707d97ba409 upstream.\n\nThis patch adds checks that prevens futile attempts to move rt tasks\nto a CPU with active tasks of equal or higher priority.\n\nThis reduces run queue lock contention and improves the performance of\na well known OLTP benchmark by 0.7%.\n\nSigned-off-by: Tim Chen \u003ctim.c.chen@linux.intel.com\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Shawn Bohrer \u003csbohrer@rgmadvisors.com\u003e\nCc: Suruchi Kadu \u003csuruchi.a.kadu@intel.com\u003e\nCc: Doug Nelson\u003cdoug.nelson@intel.com\u003e\nCc: Steven Rostedt \u003crostedt@goodmis.org\u003e\nCc: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nLink: http://lkml.kernel.org/r/1421430374.2399.27.camel@schen9-desk2.jf.intel.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "a49e6ce46b2174e740908332b4ec1796af9e029a",
      "tree": "abef367c23b20577a299f363a7d76154fd1e7c21",
      "parents": [
        "5aaf989a3f7a116df4a0dd9a6c537f13cb0e32d2"
      ],
      "author": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Sun Oct 26 19:19:16 2014 -0400"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:01:44 2016 -0500"
      },
      "message": "move d_rcu from overlapping d_child to overlapping d_alias\n\ncommit 946e51f2bf37f1656916eb75bd0742ba33983c28 upstream.\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n[bwh: Backported to 3.2:\n - Apply name changes in all the different places we use d_alias and d_child\n - Move the WARN_ON() in __d_free() to d_free() as we don\u0027t have dentry_free()]\nSigned-off-by: Ben Hutchings \u003cben@decadent.org.uk\u003e\n[lizf: Backported to 3.4:\n - adjust context\n - need one more name change in debugfs]\n"
    },
    {
      "commit": "eae81ccc389ac4ce8e6c8925aafecd49e44a5af4",
      "tree": "846f11277d55b986419ad6ad23c8f554dc40270c",
      "parents": [
        "70da5b358e7e43293fac0f78317c339d1eff96fa"
      ],
      "author": {
        "name": "Sasha Levin",
        "email": "sasha.levin@oracle.com",
        "time": "Wed Dec 03 19:25:05 2014 -0500"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:00:36 2016 -0500"
      },
      "message": "time: adjtimex: Validate the ADJ_FREQUENCY values\n\ncommit 5e5aeb4367b450a28f447f6d5ab57d8f2ab16a5f upstream.\n\nVerify that the frequency value from userspace is valid and makes sense.\n\nUnverified values can cause overflows later on.\n\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: Sasha Levin \u003csasha.levin@oracle.com\u003e\n[jstultz: Fix up bug for negative values and drop redunent cap check]\nSigned-off-by: John Stultz \u003cjohn.stultz@linaro.org\u003e\n[lizf: Backported to 3.4: adjust context]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "70da5b358e7e43293fac0f78317c339d1eff96fa",
      "tree": "b078039b8f0cf6296f2f348f8569289a5361acf6",
      "parents": [
        "631e8a57ea957b730f271a50049959b4afa6472b"
      ],
      "author": {
        "name": "Sasha Levin",
        "email": "sasha.levin@oracle.com",
        "time": "Wed Dec 03 19:22:48 2014 -0500"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:00:35 2016 -0500"
      },
      "message": "time: settimeofday: Validate the values of tv from user\n\ncommit 6ada1fc0e1c4775de0e043e1bd3ae9d065491aa5 upstream.\n\nAn unvalidated user input is multiplied by a constant, which can result in\nan undefined behaviour for large values. While this is validated later,\nwe should avoid triggering undefined behaviour.\n\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: Sasha Levin \u003csasha.levin@oracle.com\u003e\n[jstultz: include trivial milisecond-\u003emicrosecond correction noticed\nby Andy]\nSigned-off-by: John Stultz \u003cjohn.stultz@linaro.org\u003e\n[lizf: Backported to 3.4: adjust filename]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "20ddfdd45db0dfc1193c8c81cb20196befbf0743",
      "tree": "9b3336a9142e118971e302a83cf2b6ca7d42f55f",
      "parents": [
        "306afdf4e512bf760994face0363fc054ce05dc8"
      ],
      "author": {
        "name": "Thomas Gleixner",
        "email": "tglx@linutronix.de",
        "time": "Thu Dec 11 23:01:41 2014 +0100"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 20:00:11 2016 -0500"
      },
      "message": "genirq: Prevent proc race against freeing of irq descriptors\n\ncommit c291ee622165cb2c8d4e7af63fffd499354a23be upstream.\n\nSince the rework of the sparse interrupt code to actually free the\nunused interrupt descriptors there exists a race between the /proc\ninterfaces to the irq subsystem and the code which frees the interrupt\ndescriptor.\n\nCPU0\t\t\t\tCPU1\n\t\t\t\tshow_interrupts()\n\t\t\t\t  desc \u003d irq_to_desc(X);\nfree_desc(desc)\n  remove_from_radix_tree();\n  kfree(desc);\n\t\t\t\t  raw_spinlock_irq(\u0026desc-\u003elock);\n\n/proc/interrupts is the only interface which can actively corrupt\nkernel memory via the lock access. /proc/stat can only read from freed\nmemory. Extremly hard to trigger, but possible.\n\nThe interfaces in /proc/irq/N/ are not affected by this because the\nremoval of the proc file is serialized in procfs against concurrent\nreaders/writers. The removal happens before the descriptor is freed.\n\nFor architectures which have CONFIG_SPARSE_IRQ\u003dn this is a non issue\nas the descriptor is never freed. It\u0027s merely cleared out with the irq\ndescriptor lock held. So any concurrent proc access will either see\nthe old correct value or the cleared out ones.\n\nProtect the lookup and access to the irq descriptor in\nshow_interrupts() with the sparse_irq_lock.\n\nProvide kstat_irqs_usr() which is protecting the lookup and access\nwith sparse_irq_lock and switch /proc/stat to use it.\n\nDocument the existing kstat_irqs interfaces so it\u0027s clear that the\ncaller needs to take care about protection. 
The users of these\ninterfaces are either not affected due to SPARSE_IRQ\u003dn or already\nprotected against removal.\n\nFixes: 1f5a5b87f78f \"genirq: Implement a sane sparse_irq allocator\"\nSigned-off-by: Thomas Gleixner \u003ctglx@linutronix.de\u003e\n[lizf: Backported to 3.4:\n - define kstat_irqs() for CONFIG_GENERIC_HARDIRQS\n - add ifdef/endif CONFIG_SPARSE_IRQ]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "7f490b21c8edbd5f47320015c05c3086909d51a4",
      "tree": "36eb43eead2da446c7d4c1f5564585d75eaef071",
      "parents": [
        "157ecab851fd1f788fd6d3c7d76dc30dbc6278aa"
      ],
      "author": {
        "name": "shumash",
        "email": "shumashgeely@gmail.com",
        "time": "Tue Oct 06 09:49:52 2015 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 19:57:40 2016 -0500"
      },
      "message": "workqueues: add missing header file\n"
    },
    {
      "commit": "157ecab851fd1f788fd6d3c7d76dc30dbc6278aa",
      "tree": "c4d04c04ddc8195e8ddd1e3a596f65f81e95b41f",
      "parents": [
        "b2f60dfc8a5f68d24936f9cead661b8f7856567d"
      ],
      "author": {
        "name": "shumash",
        "email": "shumashgeely@gmail.com",
        "time": "Sat Jul 18 09:12:19 2015 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 19:57:39 2016 -0500"
      },
      "message": "workqueue: Add system wide power_efficient workqueues\n\nThis patch adds system wide workqueues aligned towards power saving. This is\ndone by allocating them with WQ_UNBOUND flag if \u0027wq_power_efficient\u0027 is set to\n\u0027true\u0027.\n\ntj: updated comments a bit.\n\nSigned-off-by: Viresh Kumar \u003cviresh.kumar@linaro.org\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n(cherry picked from commit 0668106ca3865ba945e155097fb042bf66d364d3)\nSigned-off-by: Mark Brown \u003cbroonie@linaro.org\u003e\n\nChange-Id: Id0614a3d7f96937fa0c396d2197e3580b8b8de80\n"
    },
    {
      "commit": "b2f60dfc8a5f68d24936f9cead661b8f7856567d",
      "tree": "501db0eaab2dfcbeac9fbe38b3ce03ad86660c29",
      "parents": [
        "e1210196448b3a6c20d21944e3016627b14acdb7"
      ],
      "author": {
        "name": "shumash",
        "email": "shumashgeely@gmail.com",
        "time": "Sat Jul 18 09:03:08 2015 -0600"
      },
      "committer": {
        "name": "William Bellavance",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Feb 10 19:57:38 2016 -0500"
      },
      "message": "workqueues: Introduce new flag WQ_POWER_EFFICIENT for power oriented workqueues\n\nWorkqueues can be performance or power-oriented. Currently, most workqueues are\nbound to the CPU they were created on. This gives good performance (due to cache\neffects) at the cost of potentially waking up otherwise idle cores (Idle from\nscheduler\u0027s perspective. Which may or may not be physically idle) just to\nprocess some work. To save power, we can allow the work to be rescheduled on a\ncore that is already awake.\n\nWorkqueues created with the WQ_UNBOUND flag will allow some power savings.\nHowever, we don\u0027t change the default behaviour of the system.  To enable\npower-saving behaviour, a new config option CONFIG_WQ_POWER_EFFICIENT needs to\nbe turned on. This option can also be overridden by the\nworkqueue.power_efficient boot parameter.\n\ntj: Updated config description and comments.  Renamed\n    CONFIG_WQ_POWER_EFFICIENT to CONFIG_WQ_POWER_EFFICIENT_DEFAULT.\n\nSigned-off-by: Viresh Kumar \u003cviresh.kumar@linaro.org\u003e\nReviewed-by: Amit Kucheria \u003camit.kucheria@linaro.org\u003e\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\n(cherry picked from commit cee22a15052faa817e3ec8985a28154d3fabc7aa)\nSigned-off-by: Mark Brown \u003cbroonie@linaro.org\u003e\n\nChange-Id: I5c2f656aaa266deba2dd0887dace9928540910ae\n"
    },
    {
      "commit": "bc591bdc20ee5c4cf94391b5e30c56bedc17e47e",
      "tree": "6ab0e1274bd695b2bdcc4f596f6ccb8466833074",
      "parents": [
        "c9ec5028049b974988c98f11f953c9fb5ef540ac"
      ],
      "author": {
        "name": "Mark Grondona",
        "email": "mgrondona@llnl.gov",
        "time": "Wed Sep 11 14:24:31 2013 -0700"
      },
      "committer": {
        "name": "flintman",
        "email": "flintman@flintmancomputers.com",
        "time": "Thu Dec 10 05:21:44 2015 -0500"
      },
      "message": "__ptrace_may_access() should not deny sub-threads\n\ncommit 73af963f9f3036dffed55c3a2898598186db1045 upstream.\n\n__ptrace_may_access() checks get_dumpable/ptrace_has_cap/etc if task !\u003d\ncurrent, this can can lead to surprising results.\n\nFor example, a sub-thread can\u0027t readlink(\"/proc/self/exe\") if the\nexecutable is not readable.  setup_new_exec()-\u003ewould_dump() notices that\ninode_permission(MAY_READ) fails and then it does\nset_dumpable(suid_dumpable).  After that get_dumpable() fails.\n\n(It is not clear why proc_pid_readlink() checks get_dumpable(), perhaps we\ncould add PTRACE_MODE_NODUMPABLE)\n\nChange __ptrace_may_access() to use same_thread_group() instead of \"task\n\u003d\u003d current\".  Any security check is pointless when the tasks share the\nsame -\u003emm.\n\nChange-Id: I0b871cf8e7c82a6042831f479c17f427fce66bc5\nSigned-off-by: Mark Grondona \u003cmgrondona@llnl.gov\u003e\nSigned-off-by: Ben Woodard \u003cwoodard@redhat.com\u003e\nSigned-off-by: Oleg Nesterov \u003coleg@redhat.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Greg Kroah-Hartman \u003cgregkh@linuxfoundation.org\u003e\n"
    },
    {
      "commit": "c207c4948630601928b3fd5b168a89734f148e76",
      "tree": "2501dbda07c35fc81536b876b97a67ed67e728d2",
      "parents": [
        "7ebabd77613ddc5b1841c788085c5ac8c6b2cd85"
      ],
      "author": {
        "name": "Steven Rostedt (Red Hat)",
        "email": "rostedt@goodmis.org",
        "time": "Thu Jun 25 18:10:09 2015 -0400"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Thu Oct 22 09:20:06 2015 +0800"
      },
      "message": "tracing/filter: Do not allow infix to exceed end of string\n\ncommit 6b88f44e161b9ee2a803e5b2b1fbcf4e20e8b980 upstream.\n\nWhile debugging a WARN_ON() for filtering, I found that it is possible\nfor the filter string to be referenced after its end. With the filter:\n\n # echo \u0027\u003e\u0027 \u003e /sys/kernel/debug/events/ext4/ext4_truncate_exit/filter\n\nThe filter_parse() function can call infix_get_op() which calls\ninfix_advance() that updates the infix filter pointers for the cnt\nand tail without checking if the filter is already at the end, which\nwill put the cnt to zero and the tail beyond the end. The loop then calls\ninfix_next() that has\n\n\tps-\u003einfix.cnt--;\n\treturn ps-\u003einfix.string[ps-\u003einfix.tail++];\n\nThe cnt will now be below zero, and the tail that is returned is\nalready passed the end of the filter string. So far the allocation\nof the filter string usually has some buffer that is zeroed out, but\nif the filter string is of the exact size of the allocated buffer\nthere\u0027s no guarantee that the charater after the nul terminating\ncharacter will be zero.\n\nLuckily, only root can write to the filter.\n\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "7ebabd77613ddc5b1841c788085c5ac8c6b2cd85",
      "tree": "e39de8a4ecf033b01f21d27eca26a68bddc1d2e1",
      "parents": [
        "800e58ae21796a472f39cd6d0601c87b297409af"
      ],
      "author": {
        "name": "Steven Rostedt (Red Hat)",
        "email": "rostedt@goodmis.org",
        "time": "Thu Jun 25 18:02:29 2015 -0400"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Thu Oct 22 09:20:06 2015 +0800"
      },
      "message": "tracing/filter: Do not WARN on operand count going below zero\n\ncommit b4875bbe7e68f139bd3383828ae8e994a0df6d28 upstream.\n\nWhen testing the fix for the trace filter, I could not come up with\na scenario where the operand count goes below zero, so I added a\nWARN_ON_ONCE(cnt \u003c 0) to the logic. But there is legitimate case\nthat it can happen (although the filter would be wrong).\n\n # echo \u0027\u003e\u0027 \u003e /sys/kernel/debug/events/ext4/ext4_truncate_exit/filter\n\nThat is, a single operation without any operands will hit the path\nwhere the WARN_ON_ONCE() can trigger. Although this is harmless,\nand the filter is reported as a error. But instead of spitting out\na warning to the kernel dmesg, just fail nicely and report it via\nthe proper channels.\n\nLink: http://lkml.kernel.org/r/558C6082.90608@oracle.com\n\nReported-by: Vince Weaver \u003cvincent.weaver@maine.edu\u003e\nReported-by: Sasha Levin \u003csasha.levin@oracle.com\u003e\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "d5ea436a754c3b117421d4896eda6bc7dddb2f4e",
      "tree": "38532e817696d37308a207fcd9710f209d6321b0",
      "parents": [
        "272bc28a42deac776c1c45a88a90559c93a015c7"
      ],
      "author": {
        "name": "Paul E. McKenney",
        "email": "paulmck@linux.vnet.ibm.com",
        "time": "Mon May 11 11:13:05 2015 -0700"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Thu Oct 22 09:20:02 2015 +0800"
      },
      "message": "rcu: Correctly handle non-empty Tiny RCU callback list with none ready\n\ncommit 6e91f8cb138625be96070b778d9ba71ce520ea7e upstream.\n\nIf, at the time __rcu_process_callbacks() is invoked,  there are callbacks\nin Tiny RCU\u0027s callback list, but none of them are ready to be invoked,\nthe current list-management code will knit the non-ready callbacks out\nof the list.  This can result in hangs and possibly worse.  This commit\ntherefore inserts a check for there being no callbacks that can be\ninvoked immediately.\n\nThis bug is unlikely to occur -- you have to get a new callback between\nthe time rcu_sched_qs() or rcu_bh_qs() was called, but before we get to\n__rcu_process_callbacks().  It was detected by the addition of RCU-bh\ntesting to rcutorture, which in turn was instigated by Iftekhar Ahmed\u0027s\nmutation testing.  Although this bug was made much more likely by\n915e8a4fe45e (rcu: Remove fastpath from __rcu_process_callbacks()), this\ndid not cause the bug, but rather made it much more probable.   That\nsaid, it takes more than 40 hours of rcutorture testing, on average,\nfor this bug to appear, so this fix cannot be considered an emergency.\n\nSigned-off-by: Paul E. McKenney \u003cpaulmck@linux.vnet.ibm.com\u003e\nReviewed-by: Josh Triplett \u003cjosh@joshtriplett.org\u003e\n[lizf: Backported to 3.4: adjust filename ]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "427841d9ea9213bd066e6b2bddba5a70bce90c6d",
      "tree": "2356fe62cd88e6ca294c94a9129653184bfb3bd4",
      "parents": [
        "4a55c0cfdd8a8b0c39eba5e696c36c33d0879684"
      ],
      "author": {
        "name": "Peter Zijlstra",
        "email": "peterz@infradead.org",
        "time": "Tue May 20 15:49:48 2014 +0200"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Thu Oct 22 09:20:01 2015 +0800"
      },
      "message": "hrtimer: Allow concurrent hrtimer_start() for self restarting timers\n\ncommit 5de2755c8c8b3a6b8414870e2c284914a2b42e4d upstream.\n\nBecause we drop cpu_base-\u003elock around calling hrtimer::function, it is\npossible for hrtimer_start() to come in between and enqueue the timer.\n\nIf hrtimer::function then returns HRTIMER_RESTART we\u0027ll hit the BUG_ON\nbecause HRTIMER_STATE_ENQUEUED will be set.\n\nSince the above is a perfectly valid scenario, remove the BUG_ON and\nmake the enqueue_hrtimer() call conditional on the timer not being\nenqueued already.\n\nNOTE: in that concurrent scenario its entirely common for both sites\nto want to modify the hrtimer, since hrtimers don\u0027t provide\nserialization themselves be sure to provide some such that the\nhrtimer::function and the hrtimer_start() caller don\u0027t both try and\nfudge the expiration state at the same time.\n\nTo that effect, add a WARN when someone tries to forward an already\nenqueued timer, the most common way to change the expiry of self\nrestarting timers. Ideally we\u0027d put the WARN in everything modifying\nthe expiry but most of that is inlines and we don\u0027t need the bloat.\n\nFixes: 2d44ae4d7135 (\"hrtimer: clean up cpu-\u003ebase locking tricks\")\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Ben Segall \u003cbsegall@google.com\u003e\nCc: Roman Gushchin \u003cklamm@yandex-team.ru\u003e\nCc: Paul Turner \u003cpjt@google.com\u003e\nLink: http://lkml.kernel.org/r/20150415113105.GT5029@twins.programming.kicks-ass.net\nSigned-off-by: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "75d9a22306c18844879cddcbf940e22a03918561",
      "tree": "f1cb1be07f5deee75c1f515b60caac68080b1b68",
      "parents": [
        "ba12817ba1b77ce1d8141b7f0f68419aa1ac42eb"
      ],
      "author": {
        "name": "Eric W. Biederman",
        "email": "ebiederm@xmission.com",
        "time": "Wed Jun 15 10:21:48 2011 -0700"
      },
      "committer": {
        "name": "flintman",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Oct 14 06:40:30 2015 -0400"
      },
      "message": "proc: Usable inode numbers for the namespace file descriptors.\n\nAssign a unique proc inode to each namespace, and use that\ninode number to ensure we only allocate at most one proc\ninode for every namespace in proc.\n\nA single proc inode per namespace allows userspace to test\nto see if two processes are in the same namespace.\n\nThis has been a long requested feature and only blocked because\na naive implementation would put the id in a global space and\nwould ultimately require having a namespace for the names of\nnamespaces, making migration and certain virtualization tricks\nimpossible.\n\nWe still don\u0027t have per superblock inode numbers for proc, which\nappears necessary for application unaware checkpoint/restart and\nmigrations (if the application is using namespace file descriptors)\nbut that is now allowed by the design if it becomes important.\n\nI have preallocated the ipc and uts initial proc inode numbers so\ntheir structures can be statically initialized.\n\nSigned-off-by: Eric W. Biederman \u003cebiederm@xmission.com\u003e\n(cherry picked from commit 98f842e675f96ffac96e6c50315790912b2812be)\n"
    },
    {
      "commit": "df3e15428d98ba3d5dde429e71c499da2c74f089",
      "tree": "8af558189fe5d5c00a99529436b88dfa2f114a6d",
      "parents": [
        "ab012bbe936b097f43af565ab2b5076be7022c83"
      ],
      "author": {
        "name": "Eric W. Biederman",
        "email": "ebiederm@xmission.com",
        "time": "Thu Jul 26 21:08:32 2012 -0700"
      },
      "committer": {
        "name": "flintman",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Oct 14 06:40:23 2015 -0400"
      },
      "message": "vfs: Add a user namespace reference from struct mnt_namespace\n\nThis will allow for support for unprivileged mounts in a new user namespace.\n\nAcked-by: \"Serge E. Hallyn\" \u003cserge@hallyn.com\u003e\nSigned-off-by: \"Eric W. Biederman\" \u003cebiederm@xmission.com\u003e\n(cherry picked from commit 771b1371686e0a63e938ada28de020b9a0040f55)\n"
    },
    {
      "commit": "210cff15b0ecfea14195b71ae5fc97f4191a11d0",
      "tree": "56a8fe1cab5feff4b1a2b7904b21d1e78ef04d68",
      "parents": [
        "0bcf8f91545bb3930fa24840dc9a96a94f7ae979"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Mon Jun 25 12:55:18 2012 +0100"
      },
      "committer": {
        "name": "flintman",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Oct 14 06:38:13 2015 -0400"
      },
      "message": "VFS: Make clone_mnt()/copy_tree()/collect_mounts() return errors\n\ncopy_tree() can theoretically fail in a case other than ENOMEM, but always\nreturns NULL which is interpreted by callers as -ENOMEM.  Change it to return\nan explicit error.\n\nAlso change clone_mnt() for consistency and because union mounts will add new\nerror cases.\n\nThanks to Andreas Gruenbacher \u003cagruen@suse.de\u003e for a bug fix.\n[AV: folded braino fix by Dan Carpenter]\n\nOriginal-author: Valerie Aurora \u003cvaurora@redhat.com\u003e\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\nCc: Valerie Aurora \u003cvalerie.aurora@gmail.com\u003e\nCc: Andreas Gruenbacher \u003cagruen@suse.de\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n(cherry picked from commit be34d1a3bc4b6f357a49acb55ae870c81337e4f0)\n"
    },
    {
      "commit": "9011bf7ee179e887dbe2ad2c9e81f1f465fe146b",
      "tree": "97d41fecda289d903c7ac4215704144f368396d4",
      "parents": [
        "bbd581b5bcc4f3651f636cf56eac2f419a933583"
      ],
      "author": {
        "name": "Andi Kleen",
        "email": "ak@linux.intel.com",
        "time": "Tue May 08 13:32:24 2012 +0930"
      },
      "committer": {
        "name": "flintman",
        "email": "flintman@flintmancomputers.com",
        "time": "Wed Oct 14 06:37:58 2015 -0400"
      },
      "message": "brlocks/lglocks: turn into functions\n\nlglocks and brlocks are currently generated with some complicated macros\nin lglock.h.  But there\u0027s no reason to not just use common utility\nfunctions and put all the data into a common data structure.\n\nSince there are at least two users it makes sense to share this code in a\nlibrary.  This is also easier maintainable than a macro forest.\n\nThis will also make it later possible to dynamically allocate lglocks and\nalso use them in modules (this would both still need some additional, but\nnow straightforward, code)\n\n[akpm@linux-foundation.org: checkpatch fixes]\nSigned-off-by: Andi Kleen \u003cak@linux.intel.com\u003e\nCc: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nCc: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Rusty Russell \u003crusty@rustcorp.com.au\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n\n(cherry picked from commit eea62f831b8030b0eeea8314eed73b6132d1de26)\n"
    },
    {
      "commit": "aaedb09057b05c7c9e213dc465bff5f70e708535",
      "tree": "2f2a0c645f970d5f6390ca39ee2e1fe0f2eea790",
      "parents": [
        "a39bf4a8e29c7336c0c72652b7d0dd1cd1b13c51"
      ],
      "author": {
        "name": "Thomas Gleixner",
        "email": "tglx@linutronix.de",
        "time": "Fri Feb 07 20:58:41 2014 +0100"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Sep 18 09:20:47 2015 +0800"
      },
      "message": "sched: Queue RT tasks to head when prio drops\n\ncommit 81a44c5441d7f7d2c3dc9105f4d65ad0d5818617 upstream.\n\nThe following scenario does not work correctly:\n\nRunqueue of CPUx contains two runnable and pinned tasks:\n\n T1: SCHED_FIFO, prio 80\n T2: SCHED_FIFO, prio 80\n\nT1 is on the cpu and executes the following syscalls (classic priority\nceiling scenario):\n\n sys_sched_setscheduler(pid(T1), SCHED_FIFO, .prio \u003d 90);\n ...\n sys_sched_setscheduler(pid(T1), SCHED_FIFO, .prio \u003d 80);\n ...\n\nNow T1 gets preempted by T3 (SCHED_FIFO, prio 95). After T3 goes back\nto sleep the scheduler picks T2. Surprise!\n\nThe same happens w/o actual preemption when T1 is forced into the\nscheduler due to a sporadic NEED_RESCHED event. The scheduler invokes\npick_next_task() which returns T2. So T1 gets preempted and scheduled\nout.\n\nThis happens because sched_setscheduler() dequeues T1 from the prio 90\nlist and then enqueues it on the tail of the prio 80 list behind T2.\nThis violates the POSIX spec and surprises user space which relies on\nthe guarantee that SCHED_FIFO tasks are not scheduled out unless they\ngive the CPU up voluntarily or are preempted by a higher priority\ntask. In the latter case the preempted task must get back on the CPU\nafter the preempting task schedules out again.\n\nWe fixed a similar issue already in commit 60db48c (sched: Queue a\ndeboosted task to the head of the RT prio queue). The same treatment\nis necessary for sched_setscheduler(). 
So enqueue to head of the prio\nbucket list if the priority of the task is lowered.\n\nIt might be possible that existing user space relies on the current\nbehaviour, but it can be considered highly unlikely due to the corner\ncase nature of the application scenario.\n\nSigned-off-by: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nSigned-off-by: Sebastian Andrzej Siewior \u003cbigeasy@linutronix.de\u003e\nSigned-off-by: Peter Zijlstra \u003cpeterz@infradead.org\u003e\nLink: http://lkml.kernel.org/r/1391803122-4425-6-git-send-email-bigeasy@linutronix.de\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "ea1e8ee07cdf7405111cfa9236935b3da1075f56",
      "tree": "92bdc9ed00dd2bb76a565de25b5e328aeade8a36",
      "parents": [
        "c0e3f102c50b6bab71d4fe4232e45bf5c67b8be0"
      ],
      "author": {
        "name": "Steven Rostedt",
        "email": "rostedt@goodmis.org",
        "time": "Mon Jun 15 17:50:25 2015 -0400"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Sep 18 09:20:45 2015 +0800"
      },
      "message": "tracing: Have filter check for balanced ops\n\ncommit 2cf30dc180cea808077f003c5116388183e54f9e upstream.\n\nWhen the following filter is used it causes a warning to trigger:\n\n # cd /sys/kernel/debug/tracing\n # echo \"((dev\u003d\u003d1)blocks\u003d\u003d2)\" \u003e events/ext4/ext4_truncate_exit/filter\n-bash: echo: write error: Invalid argument\n # cat events/ext4/ext4_truncate_exit/filter\n((dev\u003d\u003d1)blocks\u003d\u003d2)\n^\nparse_error: No error\n\n ------------[ cut here ]------------\n WARNING: CPU: 2 PID: 1223 at kernel/trace/trace_events_filter.c:1640 replace_preds+0x3c5/0x990()\n Modules linked in: bnep lockd grace bluetooth  ...\n CPU: 3 PID: 1223 Comm: bash Tainted: G        W       4.1.0-rc3-test+ #450\n Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012\n  0000000000000668 ffff8800c106bc98 ffffffff816ed4f9 ffff88011ead0cf0\n  0000000000000000 ffff8800c106bcd8 ffffffff8107fb07 ffffffff8136b46c\n  ffff8800c7d81d48 ffff8800d4c2bc00 ffff8800d4d4f920 00000000ffffffea\n Call Trace:\n  [\u003cffffffff816ed4f9\u003e] dump_stack+0x4c/0x6e\n  [\u003cffffffff8107fb07\u003e] warn_slowpath_common+0x97/0xe0\n  [\u003cffffffff8136b46c\u003e] ? _kstrtoull+0x2c/0x80\n  [\u003cffffffff8107fb6a\u003e] warn_slowpath_null+0x1a/0x20\n  [\u003cffffffff81159065\u003e] replace_preds+0x3c5/0x990\n  [\u003cffffffff811596b2\u003e] create_filter+0x82/0xb0\n  [\u003cffffffff81159944\u003e] apply_event_filter+0xd4/0x180\n  [\u003cffffffff81152bbf\u003e] event_filter_write+0x8f/0x120\n  [\u003cffffffff811db2a8\u003e] __vfs_write+0x28/0xe0\n  [\u003cffffffff811dda43\u003e] ? __sb_start_write+0x53/0xf0\n  [\u003cffffffff812e51e0\u003e] ? 
security_file_permission+0x30/0xc0\n  [\u003cffffffff811dc408\u003e] vfs_write+0xb8/0x1b0\n  [\u003cffffffff811dc72f\u003e] SyS_write+0x4f/0xb0\n  [\u003cffffffff816f5217\u003e] system_call_fastpath+0x12/0x6a\n ---[ end trace e11028bd95818dcd ]---\n\nWorse yet, reading the error message (the filter again) it says that\nthere was no error, when there clearly was. The issue is that the\ncode that checks the input does not check for balanced ops. That is,\nhaving an op between a closed parenthesis and the next token.\n\nThis would only cause a warning, and fail out before doing any real\nharm, but it should still not cause a warning, and the error reported\nshould work:\n\n # cd /sys/kernel/debug/tracing\n # echo \"((dev\u003d\u003d1)blocks\u003d\u003d2)\" \u003e events/ext4/ext4_truncate_exit/filter\n-bash: echo: write error: Invalid argument\n # cat events/ext4/ext4_truncate_exit/filter\n((dev\u003d\u003d1)blocks\u003d\u003d2)\n^\nparse_error: Meaningless filter expression\n\nAnd give no kernel warning.\n\nLink: http://lkml.kernel.org/r/20150615175025.7e809215@gandalf.local.home\n\nCc: Peter Zijlstra \u003ca.p.zijlstra@chello.nl\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@kernel.org\u003e\nReported-by: Vince Weaver \u003cvincent.weaver@maine.edu\u003e\nTested-by: Vince Weaver \u003cvincent.weaver@maine.edu\u003e\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\n[lizf: Backported to 3.4: remove the check for OP_NOT, as it\u0027s not supported.]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "c0e3f102c50b6bab71d4fe4232e45bf5c67b8be0",
      "tree": "adfffc6cdcc1bb5156b9123260bd3407849bc7c2",
      "parents": [
        "501e81d5d6b9434037851749c6194bf3a237b281"
      ],
      "author": {
        "name": "Wang Long",
        "email": "long.wanglong@huawei.com",
        "time": "Wed Jun 10 08:12:37 2015 +0000"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Sep 18 09:20:45 2015 +0800"
      },
      "message": "ring-buffer-benchmark: Fix the wrong sched_priority of producer\n\ncommit 108029323910c5dd1ef8fa2d10da1ce5fbce6e12 upstream.\n\nThe producer should be used producer_fifo as its sched_priority,\nso correct it.\n\nLink: http://lkml.kernel.org/r/1433923957-67842-1-git-send-email-long.wanglong@huawei.com\n\nSigned-off-by: Wang Long \u003clong.wanglong@huawei.com\u003e\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "a12cb100975637baf203b140ffc56057b29bdb86",
      "tree": "ac720f9155e5584738394629bdb7c2f77dd4617f",
      "parents": [
        "241cb82322f19f3194946cddfbb4a21c43f04e1b"
      ],
      "author": {
        "name": "Oleg Nesterov",
        "email": "oleg@redhat.com",
        "time": "Thu Apr 16 12:47:29 2015 -0700"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Sep 18 09:20:31 2015 +0800"
      },
      "message": "ptrace: fix race between ptrace_resume() and wait_task_stopped()\n\ncommit b72c186999e689cb0b055ab1c7b3cd8fffbeb5ed upstream.\n\nptrace_resume() is called when the tracee is still __TASK_TRACED.  We set\ntracee-\u003eexit_code and then wake_up_state() changes tracee-\u003estate.  If the\ntracer\u0027s sub-thread does wait() in between, task_stopped_code(ptrace \u003d\u003e T)\nwrongly looks like another report from tracee.\n\nThis confuses debugger, and since wait_task_stopped() clears -\u003eexit_code\nthe tracee can miss a signal.\n\nTest-case:\n\n\t#include \u003cstdio.h\u003e\n\t#include \u003cunistd.h\u003e\n\t#include \u003csys/wait.h\u003e\n\t#include \u003csys/ptrace.h\u003e\n\t#include \u003cpthread.h\u003e\n\t#include \u003cassert.h\u003e\n\n\tint pid;\n\n\tvoid *waiter(void *arg)\n\t{\n\t\tint stat;\n\n\t\tfor (;;) {\n\t\t\tassert(pid \u003d\u003d wait(\u0026stat));\n\t\t\tassert(WIFSTOPPED(stat));\n\t\t\tif (WSTOPSIG(stat) \u003d\u003d SIGHUP)\n\t\t\t\tcontinue;\n\n\t\t\tassert(WSTOPSIG(stat) \u003d\u003d SIGCONT);\n\t\t\tprintf(\"ERR! 
extra/wrong report:%x\\n\", stat);\n\t\t}\n\t}\n\n\tint main(void)\n\t{\n\t\tpthread_t thread;\n\n\t\tpid \u003d fork();\n\t\tif (!pid) {\n\t\t\tassert(ptrace(PTRACE_TRACEME, 0,0,0) \u003d\u003d 0);\n\t\t\tfor (;;)\n\t\t\t\tkill(getpid(), SIGHUP);\n\t\t}\n\n\t\tassert(pthread_create(\u0026thread, NULL, waiter, NULL) \u003d\u003d 0);\n\n\t\tfor (;;)\n\t\t\tptrace(PTRACE_CONT, pid, 0, SIGCONT);\n\n\t\treturn 0;\n\t}\n\nNote for stable: the bug is very old, but without 9899d11f6544 \"ptrace:\nensure arch_ptrace/ptrace_request can never race with SIGKILL\" the fix\nshould use lock_task_sighand(child).\n\nSigned-off-by: Oleg Nesterov \u003coleg@redhat.com\u003e\nReported-by: Pavel Labath \u003clabath@google.com\u003e\nTested-by: Pavel Labath \u003clabath@google.com\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "efbfa26cd86a6c305f039510df852e2cb7bb2674",
      "tree": "5b15e876ccf4bfa5d89269b30e04cee995f09ef4",
      "parents": [
        "853844c4c59e75183389c12471db5babc184e7d6"
      ],
      "author": {
        "name": "H. Peter Anvin",
        "email": "hpa@zytor.com",
        "time": "Thu Feb 14 15:13:55 2013 -0800"
      },
      "committer": {
        "name": "flintman",
        "email": "flintman@flintmancomputers.com",
        "time": "Thu Sep 17 16:52:06 2015 -0400"
      },
      "message": "kernel: Replace timeconst.pl with a bc script\n\nbc is the standard tool for multi-precision arithmetic.  We switched\nto Perl because akpm reported a hard-to-reproduce build hang, which\nwas very odd because affected and unaffected machines were all running\nthe same version of GNU bc.\n\nUnfortunately switching to Perl required a really ugly \"canning\"\nmechanism to support Perl \u003c 5.8 installations lacking the Math::BigInt\nmodule.\n\nIt was recently pointed out to me that some very old versions of GNU\nmake had problems with pipes in subshells, which was indeed the\nconstruct used in the Makefile rules in that version of the patch;\nPerl didn\u0027t need it so switching to Perl fixed the problem for\nunrelated reasons.  With the problem (hopefully) root-caused, we can\nswitch back to bc and do the arbitrary-precision arithmetic naturally.\n\nChange-Id: I048a7fb947f2fbd7b454e85b122c0e3601c02136\nSigned-off-by: H. Peter Anvin \u003chpa@zytor.com\u003e\nCc: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nAcked-by: Sam Ravnborg \u003csam@ravnborg.org\u003e\nSigned-off-by: Michal Marek \u003cmmarek@suse.cz\u003e\n"
    },
    {
      "commit": "b674b0adae623283de4f49e1734de675678c456f",
      "tree": "5d2fa2e8d481978046f29a12059d78e525df9352",
      "parents": [
        "8c9c6ffb188714b7d22261c029ec9fbc065bb5d1"
      ],
      "author": {
        "name": "Ben Greear",
        "email": "greearb@candelatech.com",
        "time": "Thu Jun 06 14:29:49 2013 -0700"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Jun 19 11:40:32 2015 +0800"
      },
      "message": "Fix lockup related to stop_machine being stuck in __do_softirq.\n\ncommit 34376a50fb1fa095b9d0636fa41ed2e73125f214 upstream.\n\nThe stop machine logic can lock up if all but one of the migration\nthreads make it through the disable-irq step and the one remaining\nthread gets stuck in __do_softirq.  The reason __do_softirq can hang is\nthat it has a bail-out based on jiffies timeout, but in the lockup case,\njiffies itself is not incremented.\n\nTo work around this, re-add the max_restart counter in __do_irq and stop\nprocessing irqs after 10 restarts.\n\nThanks to Tejun Heo and Rusty Russell and others for helping me track\nthis down.\n\nThis was introduced in 3.9 by commit c10d73671ad3 (\"softirq: reduce\nlatencies\").\n\nIt may be worth looking into ath9k to see if it has issues with its irq\nhandler at a later date.\n\nThe hang stack traces look something like this:\n\n    ------------[ cut here ]------------\n    WARNING: at kernel/watchdog.c:245 watchdog_overflow_callback+0x9c/0xa7()\n    Watchdog detected hard LOCKUP on cpu 2\n    Modules linked in: ath9k ath9k_common ath9k_hw ath mac80211 cfg80211 nfsv4 auth_rpcgss nfs fscache nf_nat_ipv4 nf_nat veth 8021q garp stp mrp llc pktgen lockd sunrpc]\n    Pid: 23, comm: migration/2 Tainted: G         C   3.9.4+ #11\n    Call Trace:\n     \u003cNMI\u003e   warn_slowpath_common+0x85/0x9f\n      warn_slowpath_fmt+0x46/0x48\n      watchdog_overflow_callback+0x9c/0xa7\n      __perf_event_overflow+0x137/0x1cb\n      perf_event_overflow+0x14/0x16\n      intel_pmu_handle_irq+0x2dc/0x359\n      perf_event_nmi_handler+0x19/0x1b\n      nmi_handle+0x7f/0xc2\n      do_nmi+0xbc/0x304\n      end_repeat_nmi+0x1e/0x2e\n     \u003c\u003cEOE\u003e\u003e\n      cpu_stopper_thread+0xae/0x162\n      smpboot_thread_fn+0x258/0x260\n      kthread+0xc7/0xcf\n      ret_from_fork+0x7c/0xb0\n    ---[ end trace 4947dfa9b0a4cec3 ]---\n    BUG: soft lockup - CPU#1 stuck for 22s! 
[migration/1:17]\n    Modules linked in: ath9k ath9k_common ath9k_hw ath mac80211 cfg80211 nfsv4 auth_rpcgss nfs fscache nf_nat_ipv4 nf_nat veth 8021q garp stp mrp llc pktgen lockd sunrpc]\n    irq event stamp: 835637905\n    hardirqs last  enabled at (835637904): __do_softirq+0x9f/0x257\n    hardirqs last disabled at (835637905): apic_timer_interrupt+0x6d/0x80\n    softirqs last  enabled at (5654720): __do_softirq+0x1ff/0x257\n    softirqs last disabled at (5654725): irq_exit+0x5f/0xbb\n    CPU 1\n    Pid: 17, comm: migration/1 Tainted: G        WC   3.9.4+ #11 To be filled by O.E.M. To be filled by O.E.M./To be filled by O.E.M.\n    RIP: tasklet_hi_action+0xf0/0xf0\n    Process migration/1\n    Call Trace:\n     \u003cIRQ\u003e\n      __do_softirq+0x117/0x257\n      irq_exit+0x5f/0xbb\n      smp_apic_timer_interrupt+0x8a/0x98\n      apic_timer_interrupt+0x72/0x80\n     \u003cEOI\u003e\n      printk+0x4d/0x4f\n      stop_machine_cpu_stop+0x22c/0x274\n      cpu_stopper_thread+0xae/0x162\n      smpboot_thread_fn+0x258/0x260\n      kthread+0xc7/0xcf\n      ret_from_fork+0x7c/0xb0\n\nSigned-off-by: Ben Greear \u003cgreearb@candelatech.com\u003e\nAcked-by: Tejun Heo \u003ctj@kernel.org\u003e\nAcked-by: Pekka Riikonen \u003cpriikone@iki.fi\u003e\nCc: Eric Dumazet \u003ceric.dumazet@gmail.com\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n[xr: Backported to 3.4: Adjust context]\nSigned-off-by: Rui Xiang \u003crui.xiang@huawei.com\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "8c9c6ffb188714b7d22261c029ec9fbc065bb5d1",
      "tree": "5ee033105e8257c1a26dcdd4a7df2a7bbc8f5525",
      "parents": [
        "b9909d5051722bf87a05895fd56517419914136e"
      ],
      "author": {
        "name": "Eric Dumazet",
        "email": "edumazet@google.com",
        "time": "Thu Jan 10 15:26:34 2013 -0800"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Jun 19 11:40:31 2015 +0800"
      },
      "message": "softirq: reduce latencies\n\ncommit c10d73671ad30f54692f7f69f0e09e75d3a8926a upstream.\n\nIn various network workloads, __do_softirq() latencies can be up\nto 20 ms if HZ\u003d1000, and 200 ms if HZ\u003d100.\n\nThis is because we iterate 10 times in the softirq dispatcher,\nand some actions can consume a lot of cycles.\n\nThis patch changes the fallback to ksoftirqd condition to :\n\n- A time limit of 2 ms.\n- need_resched() being set on current task\n\nWhen one of this condition is met, we wakeup ksoftirqd for further\nsoftirq processing if we still have pending softirqs.\n\nUsing need_resched() as the only condition can trigger RCU stalls,\nas we can keep BH disabled for too long.\n\nI ran several benchmarks and got no significant difference in\nthroughput, but a very significant reduction of latencies (one order\nof magnitude) :\n\nIn following bench, 200 antagonist \"netperf -t TCP_RR\" are started in\nbackground, using all available cpus.\n\nThen we start one \"netperf -t TCP_RR\", bound to the cpu handling the NIC\nIRQ (hard+soft)\n\nBefore patch :\n\n# netperf -H 7.7.7.84 -t TCP_RR -T2,2 -- -k\nRT_LATENCY,MIN_LATENCY,MAX_LATENCY,P50_LATENCY,P90_LATENCY,P99_LATENCY,MEAN_LATENCY,STDDEV_LATENCY\nMIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET\nto 7.7.7.84 () port 0 AF_INET : first burst 0 : cpu bind\nRT_LATENCY\u003d550110.424\nMIN_LATENCY\u003d146858\nMAX_LATENCY\u003d997109\nP50_LATENCY\u003d305000\nP90_LATENCY\u003d550000\nP99_LATENCY\u003d710000\nMEAN_LATENCY\u003d376989.12\nSTDDEV_LATENCY\u003d184046.92\n\nAfter patch :\n\n# netperf -H 7.7.7.84 -t TCP_RR -T2,2 -- -k\nRT_LATENCY,MIN_LATENCY,MAX_LATENCY,P50_LATENCY,P90_LATENCY,P99_LATENCY,MEAN_LATENCY,STDDEV_LATENCY\nMIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET\nto 7.7.7.84 () port 0 AF_INET : first burst 0 : cpu 
bind\nRT_LATENCY\u003d40545.492\nMIN_LATENCY\u003d9834\nMAX_LATENCY\u003d78366\nP50_LATENCY\u003d33583\nP90_LATENCY\u003d59000\nP99_LATENCY\u003d69000\nMEAN_LATENCY\u003d38364.67\nSTDDEV_LATENCY\u003d12865.26\n\nSigned-off-by: Eric Dumazet \u003cedumazet@google.com\u003e\nCc: David Miller \u003cdavem@davemloft.net\u003e\nCc: Tom Herbert \u003ctherbert@google.com\u003e\nCc: Ben Hutchings \u003cbhutchings@solarflare.com\u003e\nSigned-off-by: David S. Miller \u003cdavem@davemloft.net\u003e\n[xr: Backported to 3.4: Adjust context]\nSigned-off-by: Rui Xiang \u003crui.xiang@huawei.com\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "36cddaebe771b9476da10b724da435d5130bb0aa",
      "tree": "c095ee93018c05172cab85deef8749623536a49b",
      "parents": [
        "7afc45bbf2c761175211a41feb5766a56c2f189a"
      ],
      "author": {
        "name": "Brian Silverman",
        "email": "brian@peloton-tech.com",
        "time": "Wed Feb 18 16:23:56 2015 -0800"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Jun 19 11:40:29 2015 +0800"
      },
      "message": "sched: Fix RLIMIT_RTTIME when PI-boosting to RT\n\ncommit 746db9443ea57fd9c059f62c4bfbf41cf224fe13 upstream.\n\nWhen non-realtime tasks get priority-inheritance boosted to a realtime\nscheduling class, RLIMIT_RTTIME starts to apply to them. However, the\ncounter used for checking this (the same one used for SCHED_RR\ntimeslices) was not getting reset. This meant that tasks running with a\nnon-realtime scheduling class which are repeatedly boosted to a realtime\none, but never block while they are running realtime, eventually hit the\ntimeout without ever running for a time over the limit. This patch\nresets the realtime timeslice counter when un-PI-boosting from an RT to\na non-RT scheduling class.\n\nI have some test code with two threads and a shared PTHREAD_PRIO_INHERIT\nmutex which induces priority boosting and spins while boosted that gets\nkilled by a SIGXCPU on non-fixed kernels but doesn\u0027t with this patch\napplied. It happens much faster with a CONFIG_PREEMPT_RT kernel, and\ndoes happen eventually with PREEMPT_VOLUNTARY kernels.\n\nSigned-off-by: Brian Silverman \u003cbrian@peloton-tech.com\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: austin@peloton-tech.com\nLink: http://lkml.kernel.org/r/1424305436-6716-1-git-send-email-brian@peloton-tech.com\nSigned-off-by: Ingo Molnar \u003cmingo@kernel.org\u003e\n[lizf: Backported to 3.4: adjust context]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "7afc45bbf2c761175211a41feb5766a56c2f189a",
      "tree": "c061ab20787c9701731568d68401c32a27f39dac",
      "parents": [
        "e5b3d85e53f72d0b18908a05b7366aaea3f893f5"
      ],
      "author": {
        "name": "Peter Zijlstra",
        "email": "peterz@infradead.org",
        "time": "Thu Feb 19 18:03:11 2015 +0100"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Jun 19 11:40:28 2015 +0800"
      },
      "message": "perf: Fix irq_work \u0027tail\u0027 recursion\n\ncommit d525211f9d1be8b523ec7633f080f2116f5ea536 upstream.\n\nVince reported a watchdog lockup like:\n\n\t[\u003cffffffff8115e114\u003e] perf_tp_event+0xc4/0x210\n\t[\u003cffffffff810b4f8a\u003e] perf_trace_lock+0x12a/0x160\n\t[\u003cffffffff810b7f10\u003e] lock_release+0x130/0x260\n\t[\u003cffffffff816c7474\u003e] _raw_spin_unlock_irqrestore+0x24/0x40\n\t[\u003cffffffff8107bb4d\u003e] do_send_sig_info+0x5d/0x80\n\t[\u003cffffffff811f69df\u003e] send_sigio_to_task+0x12f/0x1a0\n\t[\u003cffffffff811f71ce\u003e] send_sigio+0xae/0x100\n\t[\u003cffffffff811f72b7\u003e] kill_fasync+0x97/0xf0\n\t[\u003cffffffff8115d0b4\u003e] perf_event_wakeup+0xd4/0xf0\n\t[\u003cffffffff8115d103\u003e] perf_pending_event+0x33/0x60\n\t[\u003cffffffff8114e3fc\u003e] irq_work_run_list+0x4c/0x80\n\t[\u003cffffffff8114e448\u003e] irq_work_run+0x18/0x40\n\t[\u003cffffffff810196af\u003e] smp_trace_irq_work_interrupt+0x3f/0xc0\n\t[\u003cffffffff816c99bd\u003e] trace_irq_work_interrupt+0x6d/0x80\n\nWhich is caused by an irq_work generating new irq_work and therefore\nnot allowing forward progress.\n\nThis happens because processing the perf irq_work triggers another\nperf event (tracepoint stuff) which in turn generates an irq_work ad\ninfinitum.\n\nAvoid this by raising the recursion counter in the irq_work -- which\neffectively disables all software events (including tracepoints) from\nactually triggering again.\n\nReported-by: Vince Weaver \u003cvincent.weaver@maine.edu\u003e\nTested-by: Vince Weaver \u003cvincent.weaver@maine.edu\u003e\nSigned-off-by: Peter Zijlstra (Intel) \u003cpeterz@infradead.org\u003e\nCc: Arnaldo Carvalho de Melo \u003cacme@kernel.org\u003e\nCc: Jiri Olsa \u003cjolsa@redhat.com\u003e\nCc: Paul Mackerras \u003cpaulus@samba.org\u003e\nCc: Steven Rostedt \u003crostedt@goodmis.org\u003e\nLink: http://lkml.kernel.org/r/20150219170311.GH21418@twins.programming.kicks-ass.net\nSigned-off-by: Ingo Molnar 
\u003cmingo@kernel.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "cf46e6e7354fb1b0d5c39797b60270a88778999e",
      "tree": "5a601d672910aca322acdf5672c272b00eec51b0",
      "parents": [
        "2d4293a85d30bd669f6bf7578689618cd454a2c8"
      ],
      "author": {
        "name": "Steven Rostedt (Red Hat)",
        "email": "rostedt@goodmis.org",
        "time": "Fri Mar 06 19:55:13 2015 -0500"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Jun 19 11:40:23 2015 +0800"
      },
      "message": "ftrace: Fix ftrace enable ordering of sysctl ftrace_enabled\n\ncommit 524a38682573b2e15ab6317ccfe50280441514be upstream.\n\nSome archs (specifically PowerPC), are sensitive with the ordering of\nthe enabling of the calls to function tracing and setting of the\nfunction to use to be traced.\n\nThat is, update_ftrace_function() sets what function the ftrace_caller\ntrampoline should call. Some archs require this to be set before\ncalling ftrace_run_update_code().\n\nAnother bug was discovered, that ftrace_startup_sysctl() called\nftrace_run_update_code() directly. If the function the ftrace_caller\ntrampoline changes, then it will not be updated. Instead a call\nto ftrace_startup_enable() should be called because it tests to see\nif the callback changed since the code was disabled, and will\ntell the arch to update appropriately. Most archs do not need this\nnotification, but PowerPC does.\n\nThe problem could be seen by the following commands:\n\n # echo 0 \u003e /proc/sys/kernel/ftrace_enabled\n # echo function \u003e /sys/kernel/debug/tracing/current_tracer\n # echo 1 \u003e /proc/sys/kernel/ftrace_enabled\n # cat /sys/kernel/debug/tracing/trace\n\nThe trace will show that function tracing was not active.\n\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    },
    {
      "commit": "2d4293a85d30bd669f6bf7578689618cd454a2c8",
      "tree": "21934564fdc2ac61cf35a031cc6eb05fcb0da98f",
      "parents": [
        "2932a0a1abaaab014a5698c26dc95956618b4286"
      ],
      "author": {
        "name": "Pratyush Anand",
        "email": "panand@redhat.com",
        "time": "Fri Mar 06 23:58:06 2015 +0530"
      },
      "committer": {
        "name": "Zefan Li",
        "email": "lizefan@huawei.com",
        "time": "Fri Jun 19 11:40:23 2015 +0800"
      },
      "message": "ftrace: Fix en(dis)able graph caller when en(dis)abling record via sysctl\n\ncommit 1619dc3f8f555ee1cdd3c75db3885d5715442b12 upstream.\n\nWhen ftrace is enabled globally through the proc interface, we must check if\nftrace_graph_active is set. If it is set, then we should also pass the\nFTRACE_START_FUNC_RET command to ftrace_run_update_code(). Similarly, when\nftrace is disabled globally through the proc interface, we must check if\nftrace_graph_active is set. If it is set, then we should also pass the\nFTRACE_STOP_FUNC_RET command to ftrace_run_update_code().\n\nConsider the following situation.\n\n # echo 0 \u003e /proc/sys/kernel/ftrace_enabled\n\nAfter this ftrace_enabled \u003d 0.\n\n # echo function_graph \u003e /sys/kernel/debug/tracing/current_tracer\n\nSince ftrace_enabled \u003d 0, ftrace_enable_ftrace_graph_caller() is never\ncalled.\n\n # echo 1 \u003e /proc/sys/kernel/ftrace_enabled\n\nNow ftrace_enabled will be set to true, but still\nftrace_enable_ftrace_graph_caller() will not be called, which is not\ndesired.\n\nFurther if we execute the following after this:\n  # echo nop \u003e /sys/kernel/debug/tracing/current_tracer\n\nNow since ftrace_enabled is set it will call\nftrace_disable_ftrace_graph_caller(), which causes a kernel warning on\nthe ARM platform.\n\nOn the ARM platform, when ftrace_enable_ftrace_graph_caller() is called,\nit checks whether the old instruction is a nop or not. If it\u0027s not a nop,\nthen it returns an error. If it is a nop then it replaces instruction at\nthat address with a branch to ftrace_graph_caller.\nftrace_disable_ftrace_graph_caller() behaves just the opposite. Therefore,\nif generic ftrace code ever calls either ftrace_enable_ftrace_graph_caller()\nor ftrace_disable_ftrace_graph_caller() consecutively two times in a row,\nthen it will return an error, which will cause the generic ftrace code to\nraise a warning.\n\nNote, x86 does not have an issue with this because the architecture\nspecific code for ftrace_enable_ftrace_graph_caller() and\nftrace_disable_ftrace_graph_caller() does not check the previous state,\nand calling either of these functions twice in a row has no ill effect.\n\nLink: http://lkml.kernel.org/r/e4fbe64cdac0dd0e86a3bf914b0f83c0b419f146.1425666454.git.panand@redhat.com\n\nSigned-off-by: Pratyush Anand \u003cpanand@redhat.com\u003e\n[\n  removed extra if (ftrace_start_up) and defined ftrace_graph_active as 0\n  if CONFIG_FUNCTION_GRAPH_TRACER is not set.\n]\nSigned-off-by: Steven Rostedt \u003crostedt@goodmis.org\u003e\n[lizf: Backported to 3.4: adjust context]\nSigned-off-by: Zefan Li \u003clizefan@huawei.com\u003e\n"
    }
  ],
  "next": "7ebae41be6d18aa63ea086f3522243d090a8fc8d"
}
