{
  "log": [
    {
      "commit": "c8b978188c9a0fd3d535c13debd19d522b726f1f",
      "tree": "873628723fb82fe2a7c77adc65fa93eca1d61c0c",
      "parents": [
        "26ce34a9c47334ff7984769e4661b2f1883594ff"
      ],
      "author": {
        "name": "Chris Mason",
        "email": "chris.mason@oracle.com",
        "time": "Wed Oct 29 14:49:59 2008 -0400"
      },
      "committer": {
        "name": "Chris Mason",
        "email": "chris.mason@oracle.com",
        "time": "Wed Oct 29 14:49:59 2008 -0400"
      },
      "message": "Btrfs: Add zlib compression support\n\nThis is a large change for adding compression on reading and writing,\nboth for inline and regular extents.  It does some fairly large\nsurgery to the writeback paths.\n\nCompression is off by default and enabled by mount -o compress.  Even\nwhen the -o compress mount option is not used, it is possible to read\ncompressed extents off the disk.\n\nIf compression for a given set of pages fails to make them smaller, the\nfile is flagged to avoid future compression attempts later.\n\n* While finding delalloc extents, the pages are locked before being sent down\nto the delalloc handler.  This allows the delalloc handler to do complex things\nsuch as cleaning the pages, marking them writeback and starting IO on their\nbehalf.\n\n* Inline extents are inserted at delalloc time now.  This allows us to compress\nthe data before inserting the inline extent, and it allows us to insert\nan inline extent that spans multiple pages.\n\n* All of the in-memory extent representations (extent_map.c, ordered-data.c etc)\nare changed to record both an in-memory size and an on disk size, as well\nas a flag for compression.\n\nFrom a disk format point of view, the extent pointers in the file are changed\nto record the on disk size of a given extent and some encoding flags.\nSpace in the disk format is allocated for compression encoding, as well\nas encryption and a generic \u0027other\u0027 field.  Neither the encryption or the\n\u0027other\u0027 field are currently used.\n\nIn order to limit the amount of data read for a single random read in the\nfile, the size of a compressed extent is limited to 128k.  This is a\nsoftware only limit, the disk format supports u64 sized compressed extents.\n\nIn order to limit the ram consumed while processing extents, the uncompressed\nsize of a compressed extent is limited to 256k.  This is a software only limit\nand will be subject to tuning later.\n\nChecksumming is still done on compressed extents, and it is done on the\nuncompressed version of the data.  This way additional encodings can be\nlayered on without having to figure out which encoding to checksum.\n\nCompression happens at delalloc time, which is basically singled threaded because\nit is usually done by a single pdflush thread.  This makes it tricky to\nspread the compression load across all the cpus on the box.  We\u0027ll have to\nlook at parallel pdflush walks of dirty inodes at a later time.\n\nDecompression is hooked into readpages and it does spread across CPUs nicely.\n\nSigned-off-by: Chris Mason \u003cchris.mason@oracle.com\u003e\n"
    }
  ]
}
