				    inotify
	     a powerful yet simple file change notification system



Document started 15 Mar 2005 by Robert Love <rml@novell.com>


(i) User Interface

Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.

The first step in using inotify is to initialise an inotify instance:

	int fd = inotify_init ();

Each instance is associated with a unique, ordered queue.

Change events are managed by "watches".  A watch is an (object,mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive.  See <linux/inotify.h>
for valid events.  A watch is referenced by a watch descriptor, or wd.

Watches are added via a path to the file.

Watches on a directory will return events on any files inside of the directory.

Adding a watch is simple:

	int wd = inotify_add_watch (fd, path, mask);

Where "fd" is the return value from inotify_init(), path is the path to the
object to watch, and mask is the watch mask (see <linux/inotify.h>).

You can update an existing watch in the same manner, by passing in a new mask.
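
For example, a hypothetical watch on /tmp for creation and deletion events,
later updated to watch only modifications, might look like this (IN_CREATE,
IN_DELETE, and IN_MODIFY are among the event bits defined in
<linux/inotify.h>):

	/* watch /tmp for newly created or deleted entries */
	int wd = inotify_add_watch (fd, "/tmp", IN_CREATE | IN_DELETE);

	/* a second call on the same path replaces the mask */
	wd = inotify_add_watch (fd, "/tmp", IN_MODIFY);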

An existing watch is removed via

	int ret = inotify_rm_watch (fd, wd);

Events are provided in the form of an inotify_event structure that is read(2)
from a given inotify instance.  The filename is of dynamic length and follows
the struct.  It is of size len.  The filename is padded with null bytes to
ensure proper alignment.  This padding is reflected in len.
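
For reference, the event structure defined in <linux/inotify.h> looks roughly
like the following; the header remains the authoritative definition:

	struct inotify_event {
		__s32	wd;		/* watch descriptor */
		__u32	mask;		/* mask of events that occurred */
		__u32	cookie;		/* cookie to synchronize related events */
		__u32	len;		/* length, including nulls, of name */
		char	name[0];	/* stub for possible filename */
	};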

You can slurp multiple events by passing a large buffer, for example

	ssize_t len = read (fd, buf, BUF_LEN);

Where "buf" is a pointer to an array of "inotify_event" structures at least
BUF_LEN bytes in size.  The above example will return as many events as are
available and fit in BUF_LEN.
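
A minimal sketch of draining such a buffer might look like this, reusing the
"buf" and BUF_LEN from above and assuming buf is suitably aligned for
struct inotify_event:

	ssize_t len, i = 0;

	len = read (fd, buf, BUF_LEN);
	while (i < len) {
		struct inotify_event *event;

		event = (struct inotify_event *) &buf[i];
		printf ("wd=%d mask=%u cookie=%u len=%u\n",
			event->wd, event->mask, event->cookie, event->len);
		if (event->len)
			printf ("name=%s\n", event->name);

		/* step past this event, including its trailing filename */
		i += sizeof (struct inotify_event) + event->len;
	}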

Each inotify instance fd is also select()- and poll()-able.
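
For instance, an application might block with poll(2) until at least one
event is queued:

	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	int ret = poll (&pfd, 1, -1);	/* block indefinitely */
	if (ret > 0 && (pfd.revents & POLLIN)) {
		/* read(2) on fd will now return at least one event */
	}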

You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().
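
For example, the pending byte count can be queried like so:

	int queue_len;

	if (ioctl (fd, FIONREAD, &queue_len) == 0)
		printf ("%d bytes of events are pending\n", queue_len);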

All watches are destroyed and cleaned up on close.


(ii) Prototypes

	int inotify_init (void);
	int inotify_add_watch (int fd, const char *path, __u32 mask);
	int inotify_rm_watch (int fd, __u32 wd);


(iii) Internal Kernel Implementation

Each inotify instance is associated with an inotify_device structure.

Each watch is associated with an inotify_watch structure.  Watches are chained
off of each associated device and each associated inode.

See fs/inotify.c for the locking and lifetime rules.


(iv) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
the watched object?

A: Watches are associated with an open inotify device, not an open file.
This solves the primary problem with dnotify: keeping the file open pins
the file and thus, worse, pins the mount.  Dnotify is therefore infeasible
for use on a desktop system with removable media as the media cannot be
unmounted.  Watching a file should not require that it be open.

Q: What is the design decision behind using an fd-per-instance as opposed to
an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
more fd's than are feasible to manage, and more fd's than are optimally
select()-able.  Yes, root can bump the per-process fd limit and yes, users
can use epoll, but requiring both is a silly and extraneous requirement.
A watch consumes less memory than an open file; separating the number
spaces is thus sensible.  The current design is what user-space developers
want: Users initialize inotify, once, and add n watches, requiring but one
fd and no twiddling with fd limits.  Initializing an inotify instance two
thousand times is silly.  If we can implement user-space's preferences
cleanly--and we can, the idr layer makes stuff like this trivial--then we
should.

There are other good arguments.  With a single fd, there is a single
item to block on, which is mapped to a single queue of events.  The single
fd returns all watch events and also any potential out-of-band data.  If
every fd were a separate watch,

  - There would be no way to get event ordering.  Events on file foo and
    file bar would pop poll() on both fd's, but there would be no way to tell
    which happened first.  A single queue trivially gives you ordering.  Such
    ordering is crucial to existing applications such as Beagle.  Imagine
    "mv a b ; mv b a" events without ordering.

  - We'd have to maintain n fd's and n internal queues with state,
    versus just one.  It is a lot messier in the kernel.  A single, linear
    queue is the data structure that makes sense.

  - User-space developers prefer the current API.  The Beagle guys, for
    example, love it.  Trust me, I asked.  It is not a surprise: Who'd want
    to manage and block on 1000 fd's via select?

  - There would be no way to get out-of-band data.

  - 1024 is still too low.  ;-)

When you talk about designing a file change notification system that
scales to 1000s of directories, juggling 1000s of fd's just does not seem
the right interface.  It is too heavy.

Additionally, it _is_ possible to have more than one instance and
juggle more than one queue and thus more than one associated fd.  There
need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
process can easily want more than one queue.

Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
Signals are a terrible, terrible interface for file notification.  Or for
anything, for that matter.  The ideal solution, from all perspectives, is a
file descriptor-based one that allows basic file I/O and poll/select.
Obtaining the fd and managing the watches could have been done either via a
device file or a family of new system calls.  We decided to implement a
family of system calls because that is the preferred approach for new kernel
interfaces.  The only real difference was whether we wanted to use open(2)
and ioctl(2) or a couple of new system calls.  System calls beat ioctls.