{
  "log": [
    {
      "commit": "7ac512aa8237c43331ffaf77a4fd8b8d684819ba",
      "tree": "0fe199f0364c5b54012691c9e4ff4a11767d1797",
      "parents": [
        "91af70814105f4c05e6e11b51c3269907b71794b"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Wed May 12 15:34:03 2010 +0100"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed May 12 18:23:58 2010 -0700"
      },
      "message": "CacheFiles: Fix error handling in cachefiles_determine_cache_security()\n\ncachefiles_determine_cache_security() is expected to return with a\nsecurity override in place.  However, if set_create_files_as() fails, we\nfail to do this.  In this case, we should just reinstate the security\noverride that was set by the caller.\n\nFurthermore, if set_create_files_as() fails, we should dispose of the\nnew credentials we were in the process of creating.\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "c61ea31dac0319ec64b33725917bda81fc293a25",
      "tree": "05a4f3011ea8b334795aae606d89bcf27e3e26c5",
      "parents": [
        "7d6fb7bd1919517937ec390f6ca2d7bcf4f89fb6"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Tue May 11 16:51:39 2010 +0100"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue May 11 10:07:53 2010 -0700"
      },
      "message": "CacheFiles: Fix occasional EIO on call to vfs_unlink()\n\nFix an occasional EIO returned by a call to vfs_unlink():\n\n\t[ 4868.465413] CacheFiles: I/O Error: Unlink failed\n\t[ 4868.465444] FS-Cache: Cache cachefiles stopped due to I/O error\n\t[ 4947.320011] CacheFiles: File cache on md3 unregistering\n\t[ 4947.320041] FS-Cache: Withdrawing cache \"mycache\"\n\t[ 5127.348683] FS-Cache: Cache \"mycache\" added (type cachefiles)\n\t[ 5127.348716] CacheFiles: File cache on md3 registered\n\t[ 7076.871081] CacheFiles: I/O Error: Unlink failed\n\t[ 7076.871130] FS-Cache: Cache cachefiles stopped due to I/O error\n\t[ 7116.780891] CacheFiles: File cache on md3 unregistering\n\t[ 7116.780937] FS-Cache: Withdrawing cache \"mycache\"\n\t[ 7296.813394] FS-Cache: Cache \"mycache\" added (type cachefiles)\n\t[ 7296.813432] CacheFiles: File cache on md3 registered\n\nWhat happens is this:\n\n (1) A cached NFS file is seen to have become out of date, so NFS retires the\n     object and immediately acquires a new object with the same key.\n\n (2) Retirement of the old object is done asynchronously - so the lookup/create\n     to generate the new object may be done first.\n\n     This can be a problem as the old object and the new object must exist at\n     the same point in the backing filesystem (i.e. they must have the same\n     pathname).\n\n (3) The lookup for the new object sees that a backing file already exists,\n     checks to see whether it is valid and sees that it isn\u0027t.  It then deletes\n     that file and creates a new one on disk.\n\n (4) The retirement phase for the old file is then performed.  
It tries to\n     delete the dentry it has, but ext4_unlink() returns -EIO because the inode\n     attached to that dentry no longer matches the inode number associated with\n     the filename in the parent directory.\n\nThe trace below shows this quite well.\n\n\t[md5sum] \u003d\u003d\u003e __fscache_relinquish_cookie(ffff88002d12fb58{NFS.fh,ffff88002ce62100},1)\n\t[md5sum] \u003d\u003d\u003e __fscache_acquire_cookie({NFS.server},{NFS.fh},ffff88002ce62100)\n\nNFS has retired the old cookie and asked for a new one.\n\n\t[kslowd] \u003d\u003d\u003e fscache_object_state_machine({OBJ52,OBJECT_ACTIVE,24})\n\t[kslowd] \u003c\u003d\u003d fscache_object_state_machine() [-\u003eOBJECT_DYING]\n\t[kslowd] \u003d\u003d\u003e fscache_object_state_machine({OBJ53,OBJECT_INIT,0})\n\t[kslowd] \u003c\u003d\u003d fscache_object_state_machine() [-\u003eOBJECT_LOOKING_UP]\n\t[kslowd] \u003d\u003d\u003e fscache_object_state_machine({OBJ52,OBJECT_DYING,24})\n\t[kslowd] \u003c\u003d\u003d fscache_object_state_machine() [-\u003eOBJECT_RECYCLING]\n\nThe old object (OBJ52) is going through the terminal states to get rid of it,\nwhilst the new object - (OBJ53) - is coming into being.\n\n\t[kslowd] \u003d\u003d\u003e fscache_object_state_machine({OBJ53,OBJECT_LOOKING_UP,0})\n\t[kslowd] \u003d\u003d\u003e cachefiles_walk_to_object({ffff88003029d8b8},OBJ53,@68,)\n\t[kslowd] lookup \u0027@68\u0027\n\t[kslowd] next -\u003e ffff88002ce41bd0 positive\n\t[kslowd] advance\n\t[kslowd] lookup \u0027Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA\u0027\n\t[kslowd] next -\u003e ffff8800369faac8 positive\n\nThe new object has looked up the subdir in which the file would be in (getting\ndentry ffff88002ce41bd0) and then looked up the file itself (getting dentry\nffff8800369faac8).\n\n\t[kslowd] validate \u0027Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA\u0027\n\t[kslowd] \u003d\u003d\u003e cachefiles_bury_object(,\u0027@68\u0027,\u0027Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA\u0027)\n\t[kslowd] remove 
ffff8800369faac8 from ffff88002ce41bd0\n\t[kslowd] unlink stale object\n\t[kslowd] \u003c\u003d\u003d cachefiles_bury_object() \u003d 0\n\nIt then checks the file\u0027s xattrs to see if it\u0027s valid.  NFS says that the\nauxiliary data indicate the file is out of date (obvious to us - that\u0027s why NFS\nditched the old version and got a new one).  CacheFiles then deletes the old\nfile (dentry ffff8800369faac8).\n\n\t[kslowd] redo lookup\n\t[kslowd] lookup \u0027Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA\u0027\n\t[kslowd] next -\u003e ffff88002cd94288 negative\n\t[kslowd] create -\u003e ffff88002cd94288{ffff88002cdaf238{ino\u003d148247}}\n\nCacheFiles then redoes the lookup and gets a negative result in a new dentry\n(ffff88002cd94288) which it then creates a file for.\n\n\t[kslowd] \u003d\u003d\u003e cachefiles_mark_object_active(,OBJ53)\n\t[kslowd] \u003c\u003d\u003d cachefiles_mark_object_active() \u003d 0\n\t[kslowd] \u003d\u003d\u003d OBTAINED_OBJECT \u003d\u003d\u003d\n\t[kslowd] \u003c\u003d\u003d cachefiles_walk_to_object() \u003d 0 [148247]\n\t[kslowd] \u003c\u003d\u003d fscache_object_state_machine() [-\u003eOBJECT_AVAILABLE]\n\nThe new object is then marked active and the state machine moves to the\navailable state - at which point NFS can start filling the object.\n\n\t[kslowd] \u003d\u003d\u003e fscache_object_state_machine({OBJ52,OBJECT_RECYCLING,20})\n\t[kslowd] \u003d\u003d\u003e fscache_release_object()\n\t[kslowd] \u003d\u003d\u003e cachefiles_drop_object({OBJ52,2})\n\t[kslowd] \u003d\u003d\u003e cachefiles_delete_object(,OBJ52{ffff8800369faac8})\n\nThe old object, meanwhile, goes on with being retired.  
If allocation occurs\nfirst, cachefiles_delete_object() has to wait for dir-\u003ed_inode-\u003ei_mutex to\nbecome available before it can continue.\n\n\t[kslowd] \u003d\u003d\u003e cachefiles_bury_object(,\u0027@68\u0027,\u0027Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA\u0027)\n\t[kslowd] remove ffff8800369faac8 from ffff88002ce41bd0\n\t[kslowd] unlink stale object\n\tEXT4-fs warning (device sda6): ext4_unlink: Inode number mismatch in unlink (148247!\u003d148193)\n\tCacheFiles: I/O Error: Unlink failed\n\tFS-Cache: Cache cachefiles stopped due to I/O error\n\nCacheFiles then tries to delete the file for the old object, but the dentry it\nhas (ffff8800369faac8) no longer points to a valid inode for that directory\nentry, and so ext4_unlink() returns -EIO when de-\u003einode does not match i_ino.\n\n\t[kslowd] \u003c\u003d\u003d cachefiles_bury_object() \u003d -5\n\t[kslowd] \u003c\u003d\u003d cachefiles_delete_object() \u003d -5\n\t[kslowd] \u003c\u003d\u003d fscache_object_state_machine() [-\u003eOBJECT_DEAD]\n\t[kslowd] \u003d\u003d\u003e fscache_object_state_machine({OBJ53,OBJECT_AVAILABLE,0})\n\t[kslowd] \u003c\u003d\u003d fscache_object_state_machine() [-\u003eOBJECT_ACTIVE]\n\n(Note that the above trace includes extra information beyond that produced by\nthe upstream code).\n\nThe fix is to note when an object that is being retired has had its object\ndeleted preemptively by a replacement object that is being created, and to\nskip the second removal attempt in such a case.\n\nReported-by: Greg M \u003cgregm@servu.net.au\u003e\nReported-by: Mark Moseley \u003cmoseleymark@gmail.com\u003e\nReported-by: Romain DEGEZ \u003cromain.degez@smartjog.com\u003e\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "5a0e3ad6af8660be21ca98a971cd00f331318c05",
      "tree": "5bfb7be11a03176a87296a43ac6647975c00a1d1",
      "parents": [
        "ed391f4ebf8f701d3566423ce8f17e614cde9806"
      ],
      "author": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Wed Mar 24 17:04:11 2010 +0900"
      },
      "committer": {
        "name": "Tejun Heo",
        "email": "tj@kernel.org",
        "time": "Tue Mar 30 22:02:32 2010 +0900"
      },
      "message": "include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h\n\npercpu.h is included by sched.h and module.h and thus ends up being\nincluded when building most .c files.  percpu.h includes slab.h which\nin turn includes gfp.h making everything defined by the two files\nuniversally available and complicating inclusion dependencies.\n\npercpu.h -\u003e slab.h dependency is about to be removed.  Prepare for\nthis change by updating users of gfp and slab facilities include those\nheaders directly instead of assuming availability.  As this conversion\nneeds to touch large number of source files, the following script is\nused as the basis of conversion.\n\n  http://userweb.kernel.org/~tj/misc/slabh-sweep.py\n\nThe script does the followings.\n\n* Scan files for gfp and slab usages and update includes such that\n  only the necessary includes are there.  ie. if only gfp is used,\n  gfp.h, if slab is used, slab.h.\n\n* When the script inserts a new include, it looks at the include\n  blocks and try to put the new include such that its order conforms\n  to its surrounding.  It\u0027s put in the include block which contains\n  core kernel includes, in the same order that the rest are ordered -\n  alphabetical, Christmas tree, rev-Xmas-tree or at the end if there\n  doesn\u0027t seem to be any matching order.\n\n* If the script can\u0027t find a place to put a new include (mostly\n  because the file doesn\u0027t have fitting include block), it prints out\n  an error message indicating which .h file needs to be added to the\n  file.\n\nThe conversion was done in the following steps.\n\n1. The initial automatic conversion of all .c files updated slightly\n   over 4000 files, deleting around 700 includes and adding ~480 gfp.h\n   and ~3000 slab.h inclusions.  The script emitted errors for ~400\n   files.\n\n2. Each error was manually checked.  
Some didn\u0027t need the inclusion,\n   some needed manual addition while adding it to implementation .h or\n   embedding .c file was more appropriate for others.  This step added\n   inclusions to around 150 files.\n\n3. The script was run again and the output was compared to the edits\n   from #2 to make sure no file was left behind.\n\n4. Several build tests were done and a couple of problems were fixed.\n   e.g. lib/decompress_*.c used malloc/free() wrappers around slab\n   APIs requiring slab.h to be added manually.\n\n5. The script was run on all .h files but without automatically\n   editing them as sprinkling gfp.h and slab.h inclusions around .h\n   files could easily lead to inclusion dependency hell.  Most gfp.h\n   inclusion directives were ignored as stuff from gfp.h was usually\n   wildly available and often used in preprocessor macros.  Each\n   slab.h inclusion directive was examined and added manually as\n   necessary.\n\n6. percpu.h was updated not to include slab.h.\n\n7. Build test were done on the following configurations and failures\n   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my\n   distributed build env didn\u0027t work with gcov compiles) and a few\n   more options had to be turned off depending on archs to make things\n   build (like ipr on powerpc/64 which failed due to missing writeq).\n\n   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.\n   * powerpc and powerpc64 SMP allmodconfig\n   * sparc and sparc64 SMP allmodconfig\n   * ia64 SMP allmodconfig\n   * s390 SMP allmodconfig\n   * alpha SMP allmodconfig\n   * um on x86_64 SMP allmodconfig\n\n8. 
percpu.h modifications were reverted so that it could be applied as\n   a separate patch and serve as bisection point.\n\nGiven the fact that I had only a couple of failures from tests on step\n6, I\u0027m fairly confident about the coverage of this conversion patch.\nIf there is a breakage, it\u0027s likely to be something in one of the arch\nheaders which should be easily discoverable easily on most builds of\nthe specific arch.\n\nSigned-off-by: Tejun Heo \u003ctj@kernel.org\u003e\nGuess-its-ok-by: Christoph Lameter \u003ccl@linux-foundation.org\u003e\nCc: Ingo Molnar \u003cmingo@redhat.com\u003e\nCc: Lee Schermerhorn \u003cLee.Schermerhorn@hp.com\u003e\n"
    },
    {
      "commit": "8f9941aeccc318f243ab3fa55aaa17f4c1cb33f9",
      "tree": "a18890c2ace7ffde0682e29c58230ffc1fcdde15",
      "parents": [
        "aeaa5ccd6421fbf9e7ded0ac67b12ea2b9fcf51e"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Fri Feb 19 18:14:21 2010 +0000"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Sat Feb 20 10:06:35 2010 -0500"
      },
      "message": "CacheFiles: Fix a race in cachefiles_delete_object() vs rename\n\ncachefiles_delete_object() can race with rename.  It gets the parent directory\nof the object it\u0027s asked to delete, then locks it - but rename may have changed\nthe object\u0027s parent between the get and the completion of the lock.\n\nHowever, if such a circumstance is detected, we abandon our attempt to delete\nthe object - since it\u0027s no longer in the index key path, it won\u0027t be seen\nagain by lookups of that key.  The assumption is that cachefilesd may have\nculled it by renaming it to the graveyard for later destruction.\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "b65a9cfc2c38eebc33533280b8ad5841caee8b6e",
      "tree": "d6e5b713615cc5e65c900162ab09235ae4847909",
      "parents": [
        "0552f879d45cecc35d8e372a591fc5ed863bca58"
      ],
      "author": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Wed Dec 16 06:27:40 2009 -0500"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Wed Dec 16 12:16:47 2009 -0500"
      },
      "message": "Untangling ima mess, part 2: deal with counters\n\n* do ima_get_count() in __dentry_open()\n* stop doing that in followups\n* move ima_path_check() to right after nameidata_to_filp()\n* don\u0027t bump counters on it\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "b0446be4be44768c7c7e919fadda98e1315fad09",
      "tree": "b51d19d5a33774573f4edf0a1a3394726cf9b00f",
      "parents": [
        "306bb73d12f13684ffcd735838c3e6f7515ab626"
      ],
      "author": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Sun Aug 09 02:03:00 2009 +0400"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Wed Dec 16 12:16:44 2009 -0500"
      },
      "message": "switch cachefiles to kern_path()\n\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "e7d2860b690d4f3bed6824757c540579638e3d1e",
      "tree": "84268ee28893256fd6a6a7e1d4474f61dbee74e7",
      "parents": [
        "84c95c9acf088c99d8793d78036b67faa5d0b851"
      ],
      "author": {
        "name": "André Goddard Rosa",
        "email": "andre.goddard@gmail.com",
        "time": "Mon Dec 14 18:01:06 2009 -0800"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Dec 15 08:53:32 2009 -0800"
      },
      "message": "tree-wide: convert open calls to remove spaces to skip_spaces() lib function\n\nMakes use of skip_spaces() defined in lib/string.c for removing leading\nspaces from strings all over the tree.\n\nIt decreases lib.a code size by 47 bytes and reuses the function tree-wide:\n   text    data     bss     dec     hex filename\n  64688     584     592   65864   10148 (TOTALS-BEFORE)\n  64641     584     592   65817   10119 (TOTALS-AFTER)\n\nAlso, while at it, if we see (*str \u0026\u0026 isspace(*str)), we can be sure to\nremove the first condition (*str) as the second one (isspace(*str)) also\nevaluates to 0 whenever *str \u003d\u003d 0, making it redundant. In other words,\n\"a char equals zero is never a space\".\n\nJulia Lawall tried the semantic patch (http://coccinelle.lip6.fr) below,\nand found occurrences of this pattern on 3 more files:\n    drivers/leds/led-class.c\n    drivers/leds/ledtrig-timer.c\n    drivers/video/output.c\n\n@@\nexpression str;\n@@\n\n( // ignore skip_spaces cases\nwhile (*str \u0026\u0026  isspace(*str)) { \\(str++;\\|++str;\\) }\n|\n- *str \u0026\u0026\nisspace(*str)\n)\n\nSigned-off-by: André Goddard Rosa \u003candre.goddard@gmail.com\u003e\nCc: Julia Lawall \u003cjulia@diku.dk\u003e\nCc: Martin Schwidefsky \u003cschwidefsky@de.ibm.com\u003e\nCc: Jeff Dike \u003cjdike@addtoit.com\u003e\nCc: Ingo Molnar \u003cmingo@elte.hu\u003e\nCc: Thomas Gleixner \u003ctglx@linutronix.de\u003e\nCc: \"H. 
Peter Anvin\" \u003chpa@zytor.com\u003e\nCc: Richard Purdie \u003crpurdie@rpsys.net\u003e\nCc: Neil Brown \u003cneilb@suse.de\u003e\nCc: Kyle McMartin \u003ckyle@mcmartin.ca\u003e\nCc: Henrique de Moraes Holschuh \u003chmh@hmh.eng.br\u003e\nCc: David Howells \u003cdhowells@redhat.com\u003e\nCc: \u003clinux-ext4@vger.kernel.org\u003e\nCc: Samuel Ortiz \u003csamuel@sortiz.org\u003e\nCc: Patrick McHardy \u003ckaber@trash.net\u003e\nCc: Takashi Iwai \u003ctiwai@suse.de\u003e\nSigned-off-by: Andrew Morton \u003cakpm@linux-foundation.org\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "3350b2acdd39d23db52710045536b943fe38a35c",
      "tree": "cafa0bd2883411209fd99aeddb92550802510298",
      "parents": [
        "fa1dae4906982b5d896c07613b1fe42456133b1c"
      ],
      "author": {
        "name": "Marc Dionne",
        "email": "marc.c.dionne@gmail.com",
        "time": "Tue Dec 01 14:09:24 2009 +0000"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Tue Dec 01 07:35:11 2009 -0800"
      },
      "message": "CacheFiles: Update IMA counters when using dentry_open\n\nWhen IMA is active, using dentry_open without updating the\nIMA counters will result in free/open imbalance errors when\nfput is eventually called.\n\nSigned-off-by: Marc Dionne \u003cmarc.c.dionne@gmail.com\u003e\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "14e69647c868459bcb910f771851ca7c699efd21",
      "tree": "eaf14450c1dd6894ff7db727b6e8afe179cae6a0",
      "parents": [
        "fee096deb4f33897937b974cb2c5168bab7935be"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:12:08 2009 +0000"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:12:08 2009 +0000"
      },
      "message": "CacheFiles: Don\u0027t log lookup/create failing with ENOBUFS\n\nDon\u0027t log the CacheFiles lookup/create object routined failing with ENOBUFS as\nunder high memory load or high cache load they can do this quite a lot.  This\nerror simply means that the requested object cannot be created on disk due to\nlack of space, or due to failure of the backing filesystem to find sufficient\nresources.\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\n"
    },
    {
      "commit": "fee096deb4f33897937b974cb2c5168bab7935be",
      "tree": "c86e5ed5b3435ff0f0266f343b19f8cc7be63340",
      "parents": [
        "d0e27b7808dc667f3015be0b6888f6d680e222c8"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:12:05 2009 +0000"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:12:05 2009 +0000"
      },
      "message": "CacheFiles: Catch an overly long wait for an old active object\n\nCatch an overly long wait for an old, dying active object when we want to\nreplace it with a new one.  The probability is that all the slow-work threads\nare hogged, and the delete can\u0027t get a look in.\n\nWhat we do instead is:\n\n (1) if there\u0027s nothing in the slow work queue, we sleep until either the dying\n     object has finished dying or there is something in the slow work queue\n     behind which we can queue our object.\n\n (2) if there is something in the slow work queue, we return ETIMEDOUT to\n     fscache_lookup_object(), which then puts us back on the slow work queue,\n     presumably behind the deletion that we\u0027re blocked by.  We are then\n     deferred for a while until we work our way back through the queue -\n     without blocking a slow-work thread unnecessarily.\n\nA backtrace similar to the following may appear in the log without this patch:\n\n\tINFO: task kslowd004:5711 blocked for more than 120 seconds.\n\t\"echo 0 \u003e /proc/sys/kernel/hung_task_timeout_secs\" disables this message.\n\tkslowd004     D 0000000000000000     0  5711      2 0x00000080\n\t ffff88000340bb80 0000000000000046 ffff88002550d000 0000000000000000\n\t ffff88002550d000 0000000000000007 ffff88000340bfd8 ffff88002550d2a8\n\t 000000000000ddf0 00000000000118c0 00000000000118c0 ffff88002550d2a8\n\tCall Trace:\n\t [\u003cffffffff81058e21\u003e] ? trace_hardirqs_on+0xd/0xf\n\t [\u003cffffffffa011c4d8\u003e] ? cachefiles_wait_bit+0x0/0xd [cachefiles]\n\t [\u003cffffffffa011c4e1\u003e] cachefiles_wait_bit+0x9/0xd [cachefiles]\n\t [\u003cffffffff81353153\u003e] __wait_on_bit+0x43/0x76\n\t [\u003cffffffff8111ae39\u003e] ? ext3_xattr_get+0x1ec/0x270\n\t [\u003cffffffff813531ef\u003e] out_of_line_wait_on_bit+0x69/0x74\n\t [\u003cffffffffa011c4d8\u003e] ? cachefiles_wait_bit+0x0/0xd [cachefiles]\n\t [\u003cffffffff8104c125\u003e] ? 
wake_bit_function+0x0/0x2e\n\t [\u003cffffffffa011bc79\u003e] cachefiles_mark_object_active+0x203/0x23b [cachefiles]\n\t [\u003cffffffffa011c209\u003e] cachefiles_walk_to_object+0x558/0x827 [cachefiles]\n\t [\u003cffffffffa011a429\u003e] cachefiles_lookup_object+0xac/0x12a [cachefiles]\n\t [\u003cffffffffa00aa1e9\u003e] fscache_lookup_object+0x1c7/0x214 [fscache]\n\t [\u003cffffffffa00aafc5\u003e] fscache_object_state_machine+0xa5/0x52d [fscache]\n\t [\u003cffffffffa00ab4ac\u003e] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]\n\t [\u003cffffffff81082093\u003e] slow_work_execute+0x18f/0x2d1\n\t [\u003cffffffff8108239a\u003e] slow_work_thread+0x1c5/0x308\n\t [\u003cffffffff8104c0f1\u003e] ? autoremove_wake_function+0x0/0x34\n\t [\u003cffffffff810821d5\u003e] ? slow_work_thread+0x0/0x308\n\t [\u003cffffffff8104be91\u003e] kthread+0x7a/0x82\n\t [\u003cffffffff8100beda\u003e] child_rip+0xa/0x20\n\t [\u003cffffffff8100b87c\u003e] ? restore_args+0x0/0x30\n\t [\u003cffffffff8104be17\u003e] ? kthread+0x0/0x82\n\t [\u003cffffffff8100bed0\u003e] ? child_rip+0x0/0x20\n\t1 lock held by kslowd004/5711:\n\t #0:  (\u0026sb-\u003es_type-\u003ei_mutex_key#7/1){+.+.+.}, at: [\u003cffffffffa011be64\u003e] cachefiles_walk_to_object+0x1b3/0x827 [cachefiles]\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\n"
    },
    {
      "commit": "d0e27b7808dc667f3015be0b6888f6d680e222c8",
      "tree": "bf8451f0d9a95db14ed1ebda50d701f4f387c0d8",
      "parents": [
        "6511de33c877a53b3df545bc06c29e0f272837ff"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:12:02 2009 +0000"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:12:02 2009 +0000"
      },
      "message": "CacheFiles: Better showing of debugging information in active object problems\n\nShow more debugging information if cachefiles_mark_object_active() is asked to\nactivate an active object.\n\nThis may happen, for instance, if the netfs tries to register an object with\nthe same key multiple times.\n\nThe code is changed to (a) get the appropriate object lock to protect the\ncookie pointer whilst we dereference it, and (b) get and display the cookie key\nif available.\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\n"
    },
    {
      "commit": "6511de33c877a53b3df545bc06c29e0f272837ff",
      "tree": "f5588cf0edcdc5412ab3ca8af655423b2346fd31",
      "parents": [
        "5e929b33c3935ecb029b3e495356b2b8af432efa"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:58 2009 +0000"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:58 2009 +0000"
      },
      "message": "CacheFiles: Mark parent directory locks as I_MUTEX_PARENT to keep lockdep happy\n\nMark parent directory locks as I_MUTEX_PARENT in the callers of\ncachefiles_bury_object() so that lockdep doesn\u0027t complain when that invokes\nvfs_unlink():\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n[ INFO: possible recursive locking detected ]\n2.6.32-rc6-cachefs #47\n---------------------------------------------\nkslowd002/3089 is trying to acquire lock:\n (\u0026sb-\u003es_type-\u003ei_mutex_key#7){+.+.+.}, at: [\u003cffffffff810bbf72\u003e] vfs_unlink+0x8b/0x128\n\nbut task is already holding lock:\n (\u0026sb-\u003es_type-\u003ei_mutex_key#7){+.+.+.}, at: [\u003cffffffffa00e4e61\u003e] cachefiles_walk_to_object+0x1b0/0x831 [cachefiles]\n\nother info that might help us debug this:\n1 lock held by kslowd002/3089:\n #0:  (\u0026sb-\u003es_type-\u003ei_mutex_key#7){+.+.+.}, at: [\u003cffffffffa00e4e61\u003e] cachefiles_walk_to_object+0x1b0/0x831 [cachefiles]\n\nstack backtrace:\nPid: 3089, comm: kslowd002 Not tainted 2.6.32-rc6-cachefs #47\nCall Trace:\n [\u003cffffffff8105ad7b\u003e] __lock_acquire+0x1649/0x16e3\n [\u003cffffffff8118170e\u003e] ? inode_has_perm+0x5f/0x61\n [\u003cffffffff8105ae6c\u003e] lock_acquire+0x57/0x6d\n [\u003cffffffff810bbf72\u003e] ? vfs_unlink+0x8b/0x128\n [\u003cffffffff81353ac3\u003e] mutex_lock_nested+0x54/0x292\n [\u003cffffffff810bbf72\u003e] ? vfs_unlink+0x8b/0x128\n [\u003cffffffff8118179e\u003e] ? selinux_inode_permission+0x8e/0x90\n [\u003cffffffff8117e271\u003e] ? security_inode_permission+0x1c/0x1e\n [\u003cffffffff810bb4fb\u003e] ? inode_permission+0x99/0xa5\n [\u003cffffffff810bbf72\u003e] vfs_unlink+0x8b/0x128\n [\u003cffffffff810adb19\u003e] ? 
kfree+0xed/0xf9\n [\u003cffffffffa00e3f00\u003e] cachefiles_bury_object+0xb6/0x420 [cachefiles]\n [\u003cffffffff81058e21\u003e] ? trace_hardirqs_on+0xd/0xf\n [\u003cffffffffa00e7e24\u003e] ? cachefiles_check_object_xattr+0x233/0x293 [cachefiles]\n [\u003cffffffffa00e51b0\u003e] cachefiles_walk_to_object+0x4ff/0x831 [cachefiles]\n [\u003cffffffff81032238\u003e] ? finish_task_switch+0x0/0xb2\n [\u003cffffffffa00e3429\u003e] cachefiles_lookup_object+0xac/0x12a [cachefiles]\n [\u003cffffffffa00741e9\u003e] fscache_lookup_object+0x1c7/0x214 [fscache]\n [\u003cffffffffa0074fc5\u003e] fscache_object_state_machine+0xa5/0x52d [fscache]\n [\u003cffffffffa00754ac\u003e] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]\n [\u003cffffffff81082093\u003e] slow_work_execute+0x18f/0x2d1\n [\u003cffffffff8108239a\u003e] slow_work_thread+0x1c5/0x308\n [\u003cffffffff8104c0f1\u003e] ? autoremove_wake_function+0x0/0x34\n [\u003cffffffff810821d5\u003e] ? slow_work_thread+0x0/0x308\n [\u003cffffffff8104be91\u003e] kthread+0x7a/0x82\n [\u003cffffffff8100beda\u003e] child_rip+0xa/0x20\n [\u003cffffffff8100b87c\u003e] ? restore_args+0x0/0x30\n [\u003cffffffff8104be17\u003e] ? kthread+0x0/0x82\n [\u003cffffffff8100bed0\u003e] ? child_rip+0x0/0x20\n\nSigned-off-by: Daivd Howells \u003cdhowells@redhat.com\u003e\n"
    },
    {
      "commit": "5e929b33c3935ecb029b3e495356b2b8af432efa",
      "tree": "99f892f4ea926d94b441856e27f1e08814ab1c75",
      "parents": [
        "a17754fb8c28af19cd70dcbec6d5b0773b94e0c1"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:55 2009 +0000"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:55 2009 +0000"
      },
      "message": "CacheFiles: Handle truncate unlocking the page we\u0027re reading\n\nHandle truncate unlocking the page we\u0027re attempting to read from the backing\ndevice before the read has completed.\n\nThis was causing reports like the following to occur:\n\n\tPid: 4765, comm: kslowd Not tainted 2.6.30.1 #1\n\tCall Trace:\n\t [\u003cffffffffa0331d7a\u003e] ? cachefiles_read_waiter+0xd9/0x147 [cachefiles]\n\t [\u003cffffffff804b74bd\u003e] ? __wait_on_bit+0x60/0x6f\n\t [\u003cffffffff8022bbbb\u003e] ? __wake_up_common+0x3f/0x71\n\t [\u003cffffffff8022cc32\u003e] ? __wake_up+0x30/0x44\n\t [\u003cffffffff8024a41f\u003e] ? __wake_up_bit+0x28/0x2d\n\t [\u003cffffffffa003a793\u003e] ? ext3_truncate+0x4d7/0x8ed [ext3]\n\t [\u003cffffffff80281f90\u003e] ? pagevec_lookup+0x17/0x1f\n\t [\u003cffffffff8028c2ff\u003e] ? unmap_mapping_range+0x59/0x1ff\n\t [\u003cffffffff8022cc32\u003e] ? __wake_up+0x30/0x44\n\t [\u003cffffffff8028e286\u003e] ? vmtruncate+0xc2/0xe2\n\t [\u003cffffffff802b82cf\u003e] ? inode_setattr+0x22/0x10a\n\t [\u003cffffffffa003baa5\u003e] ? ext3_setattr+0x17b/0x1e6 [ext3]\n\t [\u003cffffffff802b853d\u003e] ? notify_change+0x186/0x2c9\n\t [\u003cffffffffa032d9de\u003e] ? cachefiles_attr_changed+0x133/0x1cd [cachefiles]\n\t [\u003cffffffffa032df7f\u003e] ? cachefiles_lookup_object+0xcf/0x12a [cachefiles]\n\t [\u003cffffffffa0318165\u003e] ? fscache_lookup_object+0x110/0x122 [fscache]\n\t [\u003cffffffffa03188c3\u003e] ? fscache_object_slow_work_execute+0x590/0x6bc\n\t[fscache]\n\t [\u003cffffffff80278f82\u003e] ? slow_work_thread+0x285/0x43a\n\t [\u003cffffffff8024a446\u003e] ? autoremove_wake_function+0x0/0x2e\n\t [\u003cffffffff80278cfd\u003e] ? slow_work_thread+0x0/0x43a\n\t [\u003cffffffff8024a317\u003e] ? kthread+0x54/0x81\n\t [\u003cffffffff8020c93a\u003e] ? child_rip+0xa/0x20\n\t [\u003cffffffff8024a2c3\u003e] ? kthread+0x0/0x81\n\t [\u003cffffffff8020c930\u003e] ? 
child_rip+0x0/0x20\n\tCacheFiles: I/O Error: Readpage failed on backing file 200000000000810\n\tFS-Cache: Cache cachefiles stopped due to I/O error\n\nReported-by: Christian Kujau \u003clists@nerdbynature.de\u003e\nReported-by: Takashi Iwai \u003ctiwai@suse.de\u003e\nReported-by: Duc Le Minh \u003cduclm.vn@gmail.com\u003e\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\n"
    },
    {
      "commit": "a17754fb8c28af19cd70dcbec6d5b0773b94e0c1",
      "tree": "d7c25b217c684153eadbac78ab9b1bbff08b75f6",
      "parents": [
        "868411be3f445a83fafbd734f3e426400138add5"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:52 2009 +0000"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:52 2009 +0000"
      },
      "message": "CacheFiles: Don\u0027t write a full page if there\u0027s only a partial page to cache\n\ncachefiles_write_page() writes a full page to the backing file for the last\npage of the netfs file, even if the netfs file\u0027s last page is only a partial\npage.\n\nThis causes the EOF on the backing file to be extended beyond the EOF of the\nnetfs, and thus the backing file will be truncated by cachefiles_attr_changed()\ncalled from cachefiles_lookup_object().\n\nSo we need to limit the write we make to the backing file on that last page\nsuch that it doesn\u0027t push the EOF too far.\n\nAlso, if a backing file that has a partial page at the end is expanded, we\ndiscard the partial page and refetch it on the basis that we then have a hole\nin the file with invalid data, and should the power go out...  A better way to\ndeal with this could be to record a note that the partial page contains invalid\ndata until the correct data is written into it.\n\nThis isn\u0027t a problem for netfs\u0027s that discard the whole backing file if the\nfile size changes (such as NFS).\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\n"
    },
    {
      "commit": "4fbf4291aa15926cd4fdca0ffe9122e89d0459db",
      "tree": "ec2195c39ef8117acea598af4a5c20c77f67aa0b",
      "parents": [
        "440f0affe247e9990c8f8778f1861da4fd7d5e50"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:04 2009 +0000"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Thu Nov 19 18:11:04 2009 +0000"
      },
      "message": "FS-Cache: Allow the current state of all objects to be dumped\n\nAllow the current state of all fscache objects to be dumped by doing:\n\n\tcat /proc/fs/fscache/objects\n\nBy default, all objects and all fields will be shown.  This can be restricted\nby adding a suitable key to one of the caller\u0027s keyrings (such as the session\nkeyring):\n\n\tkeyctl add user fscache:objlist \"\u003crestrictions\u003e\" @s\n\nThe \u003crestrictions\u003e are:\n\n\tK\tShow hexdump of object key (don\u0027t show if not given)\n\tA\tShow hexdump of object aux data (don\u0027t show if not given)\n\nAnd paired restrictions:\n\n\tC\tShow objects that have a cookie\n\tc\tShow objects that don\u0027t have a cookie\n\tB\tShow objects that are busy\n\tb\tShow objects that aren\u0027t busy\n\tW\tShow objects that have pending writes\n\tw\tShow objects that don\u0027t have pending writes\n\tR\tShow objects that have outstanding reads\n\tr\tShow objects that don\u0027t have outstanding reads\n\tS\tShow objects that have slow work queued\n\ts\tShow objects that don\u0027t have slow work queued\n\nIf neither side of a restriction pair is given, then both are implied.  For\nexample:\n\n\tkeyctl add user fscache:objlist KB @s\n\nshows objects that are busy, and lists their object keys, but does not dump\ntheir auxiliary data.  It also implies \"CcWwRrSs\", but as \u0027B\u0027 is given, \u0027b\u0027 is\nnot implied.\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\n"
    },
    {
      "commit": "5af7926ff33b68b3ba46531471c6e0564b285efc",
      "tree": "a25266f9db482ce9dd8e663148ffb0f1a524bd83",
      "parents": [
        "e5004753388dcf5e1b8a52ac0ab807d232340fbb"
      ],
      "author": {
        "name": "Christoph Hellwig",
        "email": "hch@lst.de",
        "time": "Tue May 05 15:41:25 2009 +0200"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Thu Jun 11 21:36:06 2009 -0400"
      },
      "message": "enforce -\u003esync_fs is only called for rw superblock\n\nMake sure a superblock really is writeable by checking MS_RDONLY\nunder s_umount.  sync_filesystems needed some re-arragement for\nthat, but all but one sync_filesystem caller had the correct locking\nalready so that we could add that check there.  cachefiles grew\ns_umount locking.\n\nI\u0027ve also added a WARN_ON to sync_filesystem to assert this for\nfuture callers.\n\nSigned-off-by: Christoph Hellwig \u003chch@lst.de\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "60b0680fa236ac4e17ce31a50048c9d75f9ec831",
      "tree": "c8ca34340a173326694247eab779e713c57202c2",
      "parents": [
        "c15c54f5f056ee4819da9fde59a5f2cd45445f23"
      ],
      "author": {
        "name": "Jan Kara",
        "email": "jack@suse.cz",
        "time": "Mon Apr 27 16:43:53 2009 +0200"
      },
      "committer": {
        "name": "Al Viro",
        "email": "viro@zeniv.linux.org.uk",
        "time": "Thu Jun 11 21:36:04 2009 -0400"
      },
      "message": "vfs: Rename fsync_super() to sync_filesystem() (version 4)\n\nRename the function so that it better describe what it really does. Also\nremove the unnecessary include of buffer_head.h.\n\nSigned-off-by: Jan Kara \u003cjack@suse.cz\u003e\nSigned-off-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\n"
    },
    {
      "commit": "911e690e70540f009125bacd16c017eb1a7b1916",
      "tree": "c43f99a9f3cd1e1f12d54628f9fa7d02c1bf4685",
      "parents": [
        "348ca1029e8bae6e0c49097ad25439b17c5326f4"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Wed May 27 15:46:55 2009 +0100"
      },
      "committer": {
        "name": "Linus Torvalds",
        "email": "torvalds@linux-foundation.org",
        "time": "Wed May 27 10:20:13 2009 -0700"
      },
      "message": "CacheFiles: Fixup renamed filenames in comments in internal.h\n\nFix up renamed filenames in comments in fs/cachefiles/internal.h.\n\nOriginally, the files were all called cf-xxx.c, but they got renamed to\njust xxx.c.\n\nSigned-off-by: David Howells \u003cdhowells@redhat.com\u003e\nSigned-off-by: Linus Torvalds \u003ctorvalds@linux-foundation.org\u003e\n"
    },
    {
      "commit": "9ae326a69004dea8af2dae4fde58de27db700a8d",
      "tree": "3a1d88a6e297989bfbd17648b398c7aa5ef9bf30",
      "parents": [
        "800a964787faef3509d194fa33268628c3d1daa9"
      ],
      "author": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Fri Apr 03 16:42:41 2009 +0100"
      },
      "committer": {
        "name": "David Howells",
        "email": "dhowells@redhat.com",
        "time": "Fri Apr 03 16:42:41 2009 +0100"
      },
      "message": "CacheFiles: A cache that backs onto a mounted filesystem\n\nAdd an FS-Cache cache-backend that permits a mounted filesystem to be used as a\nbacking store for the cache.\n\nCacheFiles uses a userspace daemon to do some of the cache management - such as\nreaping stale nodes and culling.  This is called cachefilesd and lives in\n/sbin.  The source for the daemon can be downloaded from:\n\n\thttp://people.redhat.com/~dhowells/cachefs/cachefilesd.c\n\nAnd an example configuration from:\n\n\thttp://people.redhat.com/~dhowells/cachefs/cachefilesd.conf\n\nThe filesystem and data integrity of the cache are only as good as those of the\nfilesystem providing the backing services.  Note that CacheFiles does not\nattempt to journal anything since the journalling interfaces of the various\nfilesystems are very specific in nature.\n\nCacheFiles creates a misc character device - \"/dev/cachefiles\" - that is used\nto communication with the daemon.  Only one thing may have this open at once,\nand whilst it is open, a cache is at least partially in existence.  The daemon\nopens this and sends commands down it to control the cache.\n\nCacheFiles is currently limited to a single cache.\n\nCacheFiles attempts to maintain at least a certain percentage of free space on\nthe filesystem, shrinking the cache by culling the objects it contains to make\nspace if necessary - see the \"Cache Culling\" section.  
This means it can be\nplaced on the same medium as a live set of data, and will expand to make use of\nspare space and automatically contract when the set of data requires more\nspace.\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nREQUIREMENTS\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nThe use of CacheFiles and its daemon requires the following features to be\navailable in the system and in the cache filesystem:\n\n\t- dnotify.\n\n\t- extended attributes (xattrs).\n\n\t- openat() and friends.\n\n\t- bmap() support on files in the filesystem (FIBMAP ioctl).\n\n\t- The use of bmap() to detect a partial page at the end of the file.\n\nIt is strongly recommended that the \"dir_index\" option is enabled on Ext3\nfilesystems being used as a cache.\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nCONFIGURATION\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nThe cache is configured by a script in /etc/cachefilesd.conf.  These commands\nset up the cache ready for use.  The following script commands are available:\n\n (*) brun \u003cN\u003e%\n (*) bcull \u003cN\u003e%\n (*) bstop \u003cN\u003e%\n (*) frun \u003cN\u003e%\n (*) fcull \u003cN\u003e%\n (*) fstop \u003cN\u003e%\n\n\tConfigure the culling limits.  Optional.  See the section on culling.\n\tThe defaults are 7% (run), 5% (cull) and 1% (stop) respectively.\n\n\tThe commands beginning with a \u0027b\u0027 are file space (block) limits, those\n\tbeginning with an \u0027f\u0027 are file count limits.\n\n (*) dir \u003cpath\u003e\n\n\tSpecify the directory containing the root of the cache.  Mandatory.\n\n (*) tag \u003cname\u003e\n\n\tSpecify a tag to FS-Cache to use in distinguishing multiple caches.\n\tOptional.  The default is \"CacheFiles\".\n\n (*) debug \u003cmask\u003e\n\n\tSpecify a numeric bitmask to control debugging in the kernel module.\n\tOptional.  The default is zero (all off).  
The following values can be\n\tOR\u0027d into the mask to collect various information:\n\n\t\t1\tTurn on trace of function entry (_enter() macros)\n\t\t2\tTurn on trace of function exit (_leave() macros)\n\t\t4\tTurn on trace of internal debug points (_debug())\n\n\tThis mask can also be set through sysfs, eg:\n\n\t\techo 5 \u003e/sys/modules/cachefiles/parameters/debug\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nSTARTING THE CACHE\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nThe cache is started by running the daemon.  The daemon opens the cache device,\nconfigures the cache and tells it to begin caching.  At that point the cache\nbinds to fscache and the cache becomes live.\n\nThe daemon is run as follows:\n\n\t/sbin/cachefilesd [-d]* [-s] [-n] [-f \u003cconfigfile\u003e]\n\nThe flags are:\n\n (*) -d\n\n\tIncrease the debugging level.  This can be specified multiple times and\n\tis cumulative with itself.\n\n (*) -s\n\n\tSend messages to stderr instead of syslog.\n\n (*) -n\n\n\tDon\u0027t daemonise and go into background.\n\n (*) -f \u003cconfigfile\u003e\n\n\tUse an alternative configuration file rather than the default one.\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nTHINGS TO AVOID\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nDo not mount other things within the cache as this will cause problems.  
The\nkernel module contains its own very cut-down path walking facility that ignores\nmountpoints, but the daemon can\u0027t avoid them.\n\nDo not create, rename or unlink files and directories in the cache whilst the\ncache is active, as this may cause the state to become uncertain.\n\nRenaming files in the cache might make objects appear to be other objects (the\nfilename is part of the lookup key).\n\nDo not change or remove the extended attributes attached to cache files by the\ncache as this will cause the cache state management to get confused.\n\nDo not create files or directories in the cache, lest the cache get confused or\nserve incorrect data.\n\nDo not chmod files in the cache.  The module creates things with minimal\npermissions to prevent random users being able to access them directly.\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nCACHE CULLING\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nThe cache may need culling occasionally to make space.  This involves\ndiscarding objects from the cache that have been used less recently than\nanything else.  Culling is based on the access time of data objects.  Empty\ndirectories are culled if not in use.\n\nCache culling is done on the basis of the percentage of blocks and the\npercentage of files available in the underlying filesystem.  
There are six\n\"limits\":\n\n (*) brun\n (*) frun\n\n     If the amount of free space and the number of available files in the cache\n     rises above both these limits, then culling is turned off.\n\n (*) bcull\n (*) fcull\n\n     If the amount of available space or the number of available files in the\n     cache falls below either of these limits, then culling is started.\n\n (*) bstop\n (*) fstop\n\n     If the amount of available space or the number of available files in the\n     cache falls below either of these limits, then no further allocation of\n     disk space or files is permitted until culling has raised things above\n     these limits again.\n\nThese must be configured thusly:\n\n\t0 \u003c\u003d bstop \u003c bcull \u003c brun \u003c 100\n\t0 \u003c\u003d fstop \u003c fcull \u003c frun \u003c 100\n\nNote that these are percentages of available space and available files, and do\n_not_ appear as 100 minus the percentage displayed by the \"df\" program.\n\nThe userspace daemon scans the cache to build up a table of cullable objects.\nThese are then culled in least recently used order.  A new scan of the cache is\nstarted as soon as space is made in the table.  Objects will be skipped if\ntheir atimes have changed or if the kernel module says it is still using them.\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nCACHE STRUCTURE\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nThe CacheFiles module will create two directories in the directory it was\ngiven:\n\n (*) cache/\n\n (*) graveyard/\n\nThe active cache objects all reside in the first directory.  
The CacheFiles\nkernel module moves any retired or culled objects that it can\u0027t simply unlink\nto the graveyard from which the daemon will actually delete them.\n\nThe daemon uses dnotify to monitor the graveyard directory, and will delete\nanything that appears therein.\n\nThe module represents index objects as directories with the filename \"I...\" or\n\"J...\".  Note that the \"cache/\" directory is itself a special index.\n\nData objects are represented as files if they have no children, or directories\nif they do.  Their filenames all begin \"D...\" or \"E...\".  If represented as a\ndirectory, data objects will have a file in the directory called \"data\" that\nactually holds the data.\n\nSpecial objects are similar to data objects, except their filenames begin\n\"S...\" or \"T...\".\n\nIf an object has children, then it will be represented as a directory.\nImmediately in the representative directory are a collection of directories\nnamed for hash values of the child object keys with an \u0027@\u0027 prepended.  
Into\nthis directory, if possible, will be placed the representations of the child\nobjects:\n\n\tINDEX     INDEX      INDEX                             DATA FILES\n\t\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d \u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d \u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d \u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\tcache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400\n\tcache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...DB1ry\n\tcache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...N22ry\n\tcache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...FP1ry\n\nIf the key is so long that it exceeds NAME_MAX with the decorations added on to\nit, then it will be cut into pieces, the first few of which will be used to\nmake a nest of directories, and the last one of which will be the objects\ninside the last directory.  The names of the intermediate directories will have\n\u0027+\u0027 prepended:\n\n\tJ1223/@23/+xy...z/+kl...m/Epqr\n\nNote that keys are raw data, and not only may they exceed NAME_MAX in size,\nthey may also contain things like \u0027/\u0027 and NUL characters, and so they may not\nbe suitable for turning directly into a filename.\n\nTo handle this, CacheFiles will use a suitably printable filename directly and\n\"base-64\" encode ones that aren\u0027t directly suitable.  
The two versions of\nobject filenames indicate the encoding:\n\n\tOBJECT TYPE\tPRINTABLE\tENCODED\n\t\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\t\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\t\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\tIndex\t\t\"I...\"\t\t\"J...\"\n\tData\t\t\"D...\"\t\t\"E...\"\n\tSpecial\t\t\"S...\"\t\t\"T...\"\n\nIntermediate directories are always \"@\" or \"+\" as appropriate.\n\nEach object in the cache has an extended attribute label that holds the object\ntype ID (required to distinguish special objects) and the auxiliary data from\nthe netfs.  The latter is used to detect stale objects in the cache and update\nor retire them.\n\nNote that CacheFiles will erase from the cache any file it doesn\u0027t recognise or\nany file of an incorrect type (such as a FIFO file or a device file).\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nSECURITY MODEL AND SELINUX\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nCacheFiles is implemented to deal properly with the LSM security features of\nthe Linux kernel and the SELinux facility.\n\nOne of the problems that CacheFiles faces is that it is generally acting on\nbehalf of a process, and running in that process\u0027s context, and that includes a\nsecurity context that is not appropriate for accessing the cache - either\nbecause the files in the cache are inaccessible to that process, or because if\nthe process creates a file in the cache, that file may be inaccessible to other\nprocesses.\n\nThe way CacheFiles works is to temporarily change the security context (fsuid,\nfsgid and actor security label) that the process acts as - without 
changing the\nsecurity context of the process when it is the target of an operation performed by\nsome other process (so signalling and suchlike still work correctly).\n\nWhen the CacheFiles module is asked to bind to its cache, it:\n\n (1) Finds the security label attached to the root cache directory and uses\n     that as the security label with which it will create files.  By default,\n     this is:\n\n\tcachefiles_var_t\n\n (2) Finds the security label of the process which issued the bind request\n     (presumed to be the cachefilesd daemon), which by default will be:\n\n\tcachefilesd_t\n\n     and asks LSM to supply a security ID as which it should act given the\n     daemon\u0027s label.  By default, this will be:\n\n\tcachefiles_kernel_t\n\n     SELinux transitions the daemon\u0027s security ID to the module\u0027s security ID\n     based on a rule of this form in the policy.\n\n\ttype_transition \u003cdaemon\u0027s-ID\u003e kernel_t : process \u003cmodule\u0027s-ID\u003e;\n\n     For instance:\n\n\ttype_transition cachefilesd_t kernel_t : process cachefiles_kernel_t;\n\nThe module\u0027s security ID gives it permission to create, move and remove files\nand directories in the cache, to find and access directories and files in the\ncache, to set and access extended attributes on cache objects, and to read and\nwrite files in the cache.\n\nThe daemon\u0027s security ID gives it only a very restricted set of permissions: it\nmay scan directories, stat files and erase files and directories.  It may\nnot read or write files in the cache, and so it is precluded from accessing the\ndata cached therein; nor is it permitted to create new files in the cache.\n\nThere are policy source files available in:\n\n\thttp://people.redhat.com/~dhowells/fscache/cachefilesd-0.8.tar.bz2\n\nand later versions.  
In that tarball, see the files:\n\n\tcachefilesd.te\n\tcachefilesd.fc\n\tcachefilesd.if\n\nThey are built and installed directly by the RPM.\n\nIf a non-RPM based system is being used, then copy the above files to their own\ndirectory and run:\n\n\tmake -f /usr/share/selinux/devel/Makefile\n\tsemodule -i cachefilesd.pp\n\nYou will need checkpolicy and selinux-policy-devel installed prior to the\nbuild.\n\nBy default, the cache is located in /var/fscache, but if it is desirable that\nit should be elsewhere, then either the above policy files must be altered, or\nan auxiliary policy must be installed to label the alternate location of the\ncache.\n\nFor instructions on how to add an auxiliary policy to enable the cache to be\nlocated elsewhere when SELinux is in enforcing mode, please see:\n\n\t/usr/share/doc/cachefilesd-*/move-cache.txt\n\nWhen the cachefilesd rpm is installed; alternatively, the document can be found\nin the sources.\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\nA NOTE ON SECURITY\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n\nCacheFiles makes use of the split security in the task_struct.  It allocates\nits own task_security structure, and redirects current-\u003eact_as to point to it\nwhen it acts on behalf of another process, in that process\u0027s context.\n\nThe reason it does this is that it calls vfs_mkdir() and suchlike rather than\nbypassing security and calling inode ops directly.  
Therefore the VFS and LSM\nmay deny CacheFiles access to the cache data because under some\ncircumstances the caching code is running in the security context of whatever\nprocess issued the original syscall on the netfs.\n\nFurthermore, should CacheFiles create a file or directory, the security\nparameters with which that object is created (UID, GID, security label) would be\nderived from the process that issued the system call, thus potentially\npreventing other processes from accessing the cache - including CacheFiles\u0027s\ncache management daemon (cachefilesd).\n\nWhat is required is to temporarily override the security of the process that\nissued the system call.  We can\u0027t, however, just do an in-place change of the\nsecurity data as that affects the process as an object, not just as a subject.\nThis means it may lose signals or ptrace events for example, and affects what\nthe process looks like in /proc.\n\nSo CacheFiles makes use of a logical split in the security between the\nobjective security (task-\u003esec) and the subjective security (task-\u003eact_as).  The\nobjective security holds the intrinsic security properties of a process and is\nnever overridden.  This is what appears in /proc, and is what is used when a\nprocess is the target of an operation by some other process (SIGKILL for\nexample).\n\nThe subjective security holds the active security properties of a process, and\nmay be overridden.  
This is not seen externally, and is used when a process\nacts upon another object, for example SIGKILLing another process or opening a\nfile.\n\nLSM hooks exist that allow SELinux (or Smack or whatever) to reject a request\nfor CacheFiles to run in a context of a specific security label, or to create\nfiles and directories with another security label.\n\nThis documentation is added by the patch to:\n\n\tDocumentation/filesystems/caching/cachefiles.txt\n\nSigned-Off-By: David Howells \u003cdhowells@redhat.com\u003e\nAcked-by: Steve Dickson \u003csteved@redhat.com\u003e\nAcked-by: Trond Myklebust \u003cTrond.Myklebust@netapp.com\u003e\nAcked-by: Al Viro \u003cviro@zeniv.linux.org.uk\u003e\nTested-by: Daire Byrne \u003cDaire.Byrne@framestore.com\u003e\n"
    }
  ]
}
