#ifndef _RAID5_H
#define _RAID5_H

#include <linux/raid/xor.h>
#include <linux/dmaengine.h>

/*
 *
 * Each stripe contains one buffer per device.  Each buffer can be in
 * one of a number of states stored in "flags".  Changes between
 * these states happen *almost* exclusively under the protection of the
 * STRIPE_ACTIVE flag.  Some very specific changes can happen in bi_end_io, and
 * these are not protected by STRIPE_ACTIVE.
 *
 * The flag bits that are used to represent these states are:
 *   R5_UPTODATE and R5_LOCKED
 *
 * State Empty == !UPTODATE, !LOCK
 *        We have no data, and there is no active request
 * State Want == !UPTODATE, LOCK
 *        A read request is being submitted for this block
 * State Dirty == UPTODATE, LOCK
 *        Some new data is in this buffer, and it is being written out
 * State Clean == UPTODATE, !LOCK
 *        We have valid data which is the same as on disc
 *
 * The possible state transitions are:
 *
 *  Empty -> Want   - on read or write to get old data for parity calc
 *  Empty -> Dirty  - on compute_parity to satisfy write/sync request.
 *  Empty -> Clean  - on compute_block when computing a block for failed drive
 *  Want  -> Empty  - on failed read
 *  Want  -> Clean  - on successful completion of read request
 *  Dirty -> Clean  - on successful completion of write request
 *  Dirty -> Clean  - on failed write
 *  Clean -> Dirty  - on compute_parity to satisfy write/sync (RECONSTRUCT or RMW)
 *
 * (An illustrative helper that decodes these four states from the flag
 * bits follows enum r5dev_flags below.)
 *
 * The Want->Empty, Want->Clean and Dirty->Clean transitions
 * all happen in b_end_io at interrupt time.
 * Each sets the Uptodate bit before releasing the Lock bit.
 * This leaves one multi-stage transition:
 *    Want->Dirty->Clean
 * This is safe because thinking that a Clean buffer is actually dirty
 * will at worst delay some action, and the stripe will be scheduled
 * for attention after the transition is complete.
 *
 * There is one possibility that is not covered by these states.  That
 * is if one drive has failed and there is a spare being rebuilt.  We
 * can't distinguish between a clean block that has been generated
 * from parity calculations, and a clean block that has been
 * successfully written to the spare (or to parity when resyncing).
 * To distinguish these states we have a stripe bit STRIPE_INSYNC that
 * is set whenever a write is scheduled to the spare, or to the parity
 * disc if there is no spare.  A sync request clears this bit, and
 * when we find it set with no buffers locked, we know the sync is
 * complete.
 *
 * Buffers for the md device that arrive via make_request are attached
 * to the appropriate stripe in one of two lists linked on b_reqnext.
 * One list (bh_read) is for read requests, one (bh_write) for writes.
 * There should never be more than one buffer on the two lists
 * together, but as we are not guaranteed of that, we allow for more.
 *
 * If a buffer is on the read list when the associated cache buffer is
 * Uptodate, the data is copied into the read buffer and its b_end_io
 * routine is called.  This may happen in the end_request routine only
 * if the buffer has just successfully been read.  end_request should
 * remove the buffers from the list and then set the Uptodate bit on
 * the buffer.  Other threads may do this only if they first check
 * that the Uptodate bit is set.  Once they have checked that they may
 * take buffers off the read queue.
 *
 * When a buffer on the write list is committed for write it is copied
 * into the cache buffer, which is then marked dirty, and moved onto a
 * third list, the written list (bh_written).  Once both the parity
 * block and the cached buffer are successfully written, any buffer on
 * a written list can be returned with b_end_io.
 *
 * The write list and read list both act as fifos.  The read list,
 * write list and written list are protected by the device_lock.
 * The device_lock is only for list manipulations and will only be
 * held for a very short time.  It can be claimed from interrupts.
 *
 *
 * Stripes in the stripe cache can be on one of two lists (or on
 * neither).  The "inactive_list" contains stripes which are not
 * currently being used for any request.  They can freely be reused
 * for another stripe.  The "handle_list" contains stripes that need
 * to be handled in some way.  Both of these are fifo queues.  Each
 * stripe is also (potentially) linked to a hash bucket in the hash
 * table so that it can be found by sector number.  Stripes that are
 * not hashed must be on the inactive_list, and will normally be at
 * the front.  All stripes start life this way.
 *
 * The inactive_list, handle_list and hash bucket lists are all protected by the
 * device_lock.
 *  - stripes have a reference counter. If count==0, they are on a list.
 *  - If a stripe might need handling, STRIPE_HANDLE is set.
 *  - When refcount reaches zero, then if STRIPE_HANDLE it is put on
 *    handle_list else inactive_list
 *
 * This, combined with the fact that STRIPE_HANDLE is only ever
 * cleared while a stripe has a non-zero count, means that if the
 * refcount is 0 and STRIPE_HANDLE is set, then it is on the
 * handle_list, and if the refcount is 0 and STRIPE_HANDLE is not set, then
 * the stripe is on the inactive_list.
 *
 * The possible transitions are:
 *  activate an unhashed/inactive stripe (get_active_stripe())
 *     lockdev check-hash unlink-stripe cnt++ clean-stripe hash-stripe unlockdev
 *  activate a hashed, possibly active stripe (get_active_stripe())
 *     lockdev check-hash if(!cnt++)unlink-stripe unlockdev
 *  attach a request to an active stripe (add_stripe_bh())
 *     lockdev attach-buffer unlockdev
 *  handle a stripe (handle_stripe())
 *     setSTRIPE_ACTIVE, clrSTRIPE_HANDLE ...
 *		(lockdev check-buffers unlockdev) ..
 *		change-state ..
 *		record io/ops needed clearSTRIPE_ACTIVE schedule io/ops
 *  release an active stripe (release_stripe())
 *     lockdev if (!--cnt) { if STRIPE_HANDLE, add to handle_list else add to inactive-list } unlockdev
 *
 * The refcount counts each thread that has activated the stripe,
 * plus raid5d if it is handling it, plus one for each active request
 * on a cached buffer, and plus one if the stripe is undergoing stripe
 * operations.
 *
 * The stripe operations are:
 * -copying data between the stripe cache and user application buffers
 * -computing blocks to save a disk access, or to recover a missing block
 * -updating the parity on a write operation (reconstruct write and
 *  read-modify-write)
 * -checking parity correctness
 * -running i/o to disk
 * These operations are carried out by raid5_run_ops, which uses the async_tx
 * api to (optionally) offload operations to dedicated hardware engines.
 * When requesting an operation handle_stripe sets the pending bit for the
 * operation and increments the count.  raid5_run_ops is then run whenever
 * the count is non-zero.
 * There are some critical dependencies between the operations that prevent some
 * from being requested while another is in flight.
 * 1/ Parity check operations destroy the in-cache version of the parity block,
 *    so we prevent parity dependent operations like writes and compute_blocks
 *    from starting while a check is in progress.  Some dma engines can perform
 *    the check without damaging the parity block; in these cases the parity
 *    block is re-marked up to date (assuming the check was successful) and is
 *    not re-read from disk.
 * 2/ When a write operation is requested we immediately lock the affected
 *    blocks, and mark them as not up to date.  This causes new read requests
 *    to be held off, as well as parity checks and compute block operations.
 * 3/ Once a compute block operation has been requested handle_stripe treats
 *    that block as if it is up to date.  raid5_run_ops guarantees that any
 *    operation that is dependent on the compute block result is initiated after
 *    the compute block completes.
 */

/*
 * Operations state - intermediate states that are visible outside of
 *   STRIPE_ACTIVE.
 * In general _idle indicates nothing is running, _run indicates a data
 * processing operation is active, and _result means the data processing result
 * is stable and can be acted upon.  For simple operations like biofill and
 * compute that only have an _idle and _run state, these are indicated with
 * sh->state flags (STRIPE_BIOFILL_RUN and STRIPE_COMPUTE_RUN).
 */
/**
 * enum check_states - handles syncing / repairing a stripe
 * @check_state_idle - check operations are quiesced
 * @check_state_run - xor parity check operation is running
 * @check_state_run_q - q-parity check operation is running
 * @check_state_run_pq - pq dual parity check operation is running
 * @check_state_check_result - set outside lock when check result is valid
 * @check_state_compute_run - check failed and we are repairing
 * @check_state_compute_result - set outside lock when compute result is valid
 */
enum check_states {
	check_state_idle = 0,
	check_state_run, /* xor parity check */
	check_state_run_q, /* q-parity check */
	check_state_run_pq, /* pq dual parity check */
	check_state_check_result,
	check_state_compute_run, /* parity repair */
	check_state_compute_result,
};

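/*
 * Illustrative sketch (not the driver's handle_parity_checks5()/6()
 * logic): how the check state machine above typically advances once a
 * result becomes valid.  "parity_ok" is a hypothetical flag standing
 * in for a zero xor-sum / zero syndrome result.
 */
static inline enum check_states
example_check_next_state(enum check_states cur, int parity_ok)
{
	switch (cur) {
	case check_state_check_result:
		/* check finished: good parity means we are done;
		 * bad parity means recompute it (repair) */
		return parity_ok ? check_state_idle
				 : check_state_compute_run;
	case check_state_compute_result:
		/* repaired parity block is stable; back to idle */
		return check_state_idle;
	default:
		return cur;
	}
}
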
/**
 * enum reconstruct_states - handles writing or expanding a stripe
 */
enum reconstruct_states {
	reconstruct_state_idle = 0,
	reconstruct_state_prexor_drain_run,	/* prexor-write */
	reconstruct_state_drain_run,		/* write */
	reconstruct_state_run,			/* expand */
	reconstruct_state_prexor_drain_result,
	reconstruct_state_drain_result,
	reconstruct_state_result,
};

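/*
 * Illustrative sketch: the prexor states above correspond to
 * read-modify-write (old data xor-ed out of the parity), while plain
 * drain corresponds to reconstruct-write (parity rebuilt from all data
 * blocks).  A hypothetical helper showing the usual read-cost
 * comparison between the two; the driver's real decision in
 * handle_stripe() also depends on which blocks are already up to date.
 */
static inline int example_prefer_rmw(int blocks_to_write, int data_disks)
{
	/* rmw reads the old copies of the blocks being written plus
	 * the old parity; rcw reads every data block not being written */
	int rmw_reads = blocks_to_write + 1;
	int rcw_reads = data_disks - blocks_to_write;

	return rmw_reads < rcw_reads;
}
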
struct stripe_head {
	struct hlist_node	hash;
	struct list_head	lru;	      /* inactive_list or handle_list */
	struct r5conf		*raid_conf;
	short			generation;	/* increments with every
						 * reshape */
	sector_t		sector;		/* sector of this row */
	short			pd_idx;		/* parity disk index */
	short			qd_idx;		/* 'Q' disk index for raid6 */
	short			ddf_layout;/* use DDF ordering to calculate Q */
	unsigned long		state;		/* state flags */
	atomic_t		count;	      /* nr of active thread/requests */
	int			bm_seq;	/* sequence number for bitmap flushes */
	int			disks;		/* disks in stripe */
	enum check_states	check_state;
	enum reconstruct_states reconstruct_state;
	spinlock_t		stripe_lock;
	/**
	 * struct stripe_operations
	 * @target - STRIPE_OP_COMPUTE_BLK target
	 * @target2 - 2nd compute target in the raid6 case
	 * @zero_sum_result - P and Q verification flags
	 * @request - async service request flags for raid_run_ops
	 */
	struct stripe_operations {
		int		     target, target2;
		enum sum_check_flags zero_sum_result;
		#ifdef CONFIG_MULTICORE_RAID456
		unsigned long	     request;
		wait_queue_head_t    wait_for_ops;
		#endif
	} ops;
	struct r5dev {
		/* rreq and rvec are used for the replacement device when
		 * writing data to both devices.
		 */
		struct bio	req, rreq;
		struct bio_vec	vec, rvec;
		struct page	*page;
		struct bio	*toread, *read, *towrite, *written;
		sector_t	sector;			/* sector of this page */
		unsigned long	flags;
	} dev[1]; /* allocated with extra space depending on RAID geometry */
};

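/*
 * Illustrative sketch: dev[1] above is a variable-length trailing
 * array, so stripe_heads are allocated with room for one r5dev per
 * member disk.  A hypothetical size helper (example_ prefix, not a
 * driver API) showing the arithmetic the slab cache is sized with:
 */
static inline size_t example_stripe_head_size(int disks)
{
	return sizeof(struct stripe_head) +
	       (disks - 1) * sizeof(struct r5dev);
}
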
/* stripe_head_state - collects and tracks the dynamic state of a stripe_head
 *     for handle_stripe.
 */
struct stripe_head_state {
	/* 'syncing' means that we need to read all devices, either
	 * to check/correct parity, or to reconstruct a missing device.
	 * 'replacing' means we are replacing one or more drives and
	 * the source is valid at this point so we don't need to
	 * read all devices, just the replacement targets.
	 */
	int syncing, expanding, expanded, replacing;
	int locked, uptodate, to_read, to_write, failed, written;
	int to_fill, compute, req_compute, non_overwrite;
	int failed_num[2];
	int p_failed, q_failed;
	int dec_preread_active;
	unsigned long ops_request;

	struct bio *return_bi;
	struct md_rdev *blocked_rdev;
	int handle_bad_blocks;
};

/* Flags for struct r5dev.flags */
enum r5dev_flags {
	R5_UPTODATE,	/* page contains current data */
	R5_LOCKED,	/* IO has been submitted on "req" */
	R5_DOUBLE_LOCKED,/* Cannot clear R5_LOCKED until 2 writes complete */
	R5_OVERWRITE,	/* towrite covers whole page */
/* and some that are internal to handle_stripe */
	R5_Insync,	/* rdev && rdev->in_sync at start */
	R5_Wantread,	/* want to schedule a read */
	R5_Wantwrite,
	R5_Overlap,	/* There is a pending overlapping request
			 * on this block */
	R5_ReadNoMerge, /* prevent bio from merging in block-layer */
	R5_ReadError,	/* seen a read error here recently */
	R5_ReWrite,	/* have tried to over-write the read error */

	R5_Expanded,	/* This block now has post-expand data */
	R5_Wantcompute,	/* compute_block in progress; treat as
			 * uptodate
			 */
	R5_Wantfill,	/* dev->toread contains a bio that needs
			 * filling
			 */
	R5_Wantdrain,	/* dev->towrite needs to be drained */
	R5_WantFUA,	/* Write should be FUA */
	R5_SyncIO,	/* The IO is sync */
	R5_WriteError,	/* got a write error - need to record it */
	R5_MadeGood,	/* A bad block has been fixed by writing to it */
	R5_ReadRepl,	/* Will/did read from replacement rather than orig */
	R5_MadeGoodRepl,/* A bad block on the replacement device has been
			 * fixed by writing to it */
	R5_NeedReplace,	/* This device has a replacement which is not
			 * up-to-date at this stripe. */
	R5_WantReplace, /* We need to update the replacement, we have read
			 * data in, and now is a good time to write it out.
			 */
	R5_Discard,	/* Discard the stripe */
};

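/*
 * Illustrative sketch (not part of the driver API): decoding the four
 * buffer states described in the comment at the top of this file
 * (Empty/Want/Dirty/Clean) from a dev->flags word.  It assumes only
 * that R5_UPTODATE and R5_LOCKED are bit numbers within the flags
 * word, as defined above.
 */
static inline const char *example_r5dev_state(unsigned long flags)
{
	int uptodate = test_bit(R5_UPTODATE, &flags);
	int locked = test_bit(R5_LOCKED, &flags);

	if (uptodate)
		return locked ? "Dirty" : "Clean";	/* write in flight / stable */
	else
		return locked ? "Want" : "Empty";	/* read in flight / no data */
}
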
/*
 * Stripe state
 */
enum {
	STRIPE_ACTIVE,
	STRIPE_HANDLE,
	STRIPE_SYNC_REQUESTED,
	STRIPE_SYNCING,
	STRIPE_INSYNC,
	STRIPE_PREREAD_ACTIVE,
	STRIPE_DELAYED,
	STRIPE_DEGRADED,
	STRIPE_BIT_DELAY,
	STRIPE_EXPANDING,
	STRIPE_EXPAND_SOURCE,
	STRIPE_EXPAND_READY,
	STRIPE_IO_STARTED,	/* do not count towards 'bypass_count' */
	STRIPE_FULL_WRITE,	/* all blocks are set to be overwritten */
	STRIPE_BIOFILL_RUN,
	STRIPE_COMPUTE_RUN,
	STRIPE_OPS_REQ_PENDING,
	STRIPE_ON_UNPLUG_LIST,
};

/*
 * Operation request flags
 */
enum {
	STRIPE_OP_BIOFILL,
	STRIPE_OP_COMPUTE_BLK,
	STRIPE_OP_PREXOR,
	STRIPE_OP_BIODRAIN,
	STRIPE_OP_RECONSTRUCT,
	STRIPE_OP_CHECK,
};
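
/*
 * Illustrative sketch: per the comment at the top of this file,
 * handle_stripe() requests asynchronous work by setting one of the
 * STRIPE_OP_* bits in stripe_head_state.ops_request, which
 * raid5_run_ops then services.  A minimal hypothetical helper:
 */
static inline void example_request_op(struct stripe_head_state *s, int op)
{
	/* e.g. op == STRIPE_OP_COMPUTE_BLK to rebuild a missing block */
	set_bit(op, &s->ops_request);
}
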
/*
 * Plugging:
 *
 * To improve write throughput, we need to delay the handling of some
 * stripes until there has been a chance that several write requests
 * for the one stripe have all been collected.
 * In particular, any write request that would require pre-reading
 * is put on a "delayed" queue until there are no stripes currently
 * in a pre-read phase.  Further, if the "delayed" queue is empty when
 * a stripe is put on it then we "plug" the queue and do not process it
 * until an unplug call is made (i.e. until unplug_io_fn() is called).
 *
 * When preread is initiated on a stripe, we set PREREAD_ACTIVE and add
 * it to the count of prereading stripes.
 * When a write is initiated, or the stripe refcnt == 0 (just in case), we
 * clear the PREREAD_ACTIVE flag and decrement the count.
 * Whenever the 'handle' queue is empty and the device is not plugged, we
 * move any stripes from delayed to handle and clear the DELAYED flag and set
 * PREREAD_ACTIVE.
 * In stripe_handle, if we find pre-reading is necessary, we do it if
 * PREREAD_ACTIVE is set, else we set DELAYED which will send it to the
 * delayed queue.
 * HANDLE gets cleared if stripe_handle leaves nothing locked.
 */


struct disk_info {
	struct md_rdev	*rdev, *replacement;
};

struct r5conf {
	struct hlist_head	*stripe_hashtbl;
	struct mddev		*mddev;
	int			chunk_sectors;
	int			level, algorithm;
	int			max_degraded;
	int			raid_disks;
	int			max_nr_stripes;

	/* reshape_progress is the leading edge of a 'reshape'.
	 * It has value MaxSector when no reshape is happening.
	 * If delta_disks < 0, it is the last sector we started work on,
	 * else it is the next sector to work on.
	 */
	sector_t		reshape_progress;
	/* reshape_safe is the trailing edge of a reshape.  We know that
	 * before (or after) this address, all reshape has completed.
	 */
	sector_t		reshape_safe;
	int			previous_raid_disks;
	int			prev_chunk_sectors;
	int			prev_algo;
	short			generation; /* increments with every reshape */
	unsigned long		reshape_checkpoint; /* Time we last updated
						     * metadata */
	long long		min_offset_diff; /* minimum difference between
						  * data_offset and
						  * new_data_offset across all
						  * devices.  May be negative,
						  * but is closest to zero.
						  */

	struct list_head	handle_list; /* stripes needing handling */
	struct list_head	hold_list; /* preread ready stripes */
	struct list_head	delayed_list; /* stripes that have plugged requests */
	struct list_head	bitmap_list; /* stripes delaying awaiting bitmap update */
	struct bio		*retry_read_aligned; /* currently retrying aligned bios */
	struct bio		*retry_read_aligned_list; /* aligned bios retry list */
	atomic_t		preread_active_stripes; /* stripes with scheduled io */
	atomic_t		active_aligned_reads;
	atomic_t		pending_full_writes; /* full write backlog */
	int			bypass_count; /* bypassed prereads */
	int			bypass_threshold; /* preread nice */
	struct list_head	*last_hold; /* detect hold_list promotions */

	atomic_t		reshape_stripes; /* stripes with pending writes for reshape */
	/* unfortunately we need two cache names as we temporarily have
	 * two caches.
	 */
	int			active_name;
	char			cache_name[2][32];
	struct kmem_cache	*slab_cache; /* for allocating stripes */

	int			seq_flush, seq_write;
	int			quiesce;

	int			fullsync;  /* set to 1 if a full sync is needed
					    * (fresh device added).
					    * Cleared when a sync completes.
					    */
	int			recovery_disabled;
	/* per cpu variables */
	struct raid5_percpu {
		struct page	*spare_page; /* Used when checking P/Q in raid6 */
		void		*scribble;   /* space for constructing buffer
					      * lists and performing address
					      * conversions
					      */
	} __percpu *percpu;
	size_t			scribble_len; /* size of scribble region must be
					       * associated with conf to handle
					       * cpu hotplug while reshaping
					       */
#ifdef CONFIG_HOTPLUG_CPU
	struct notifier_block	cpu_notify;
#endif

	/*
	 * Free stripes pool
	 */
	atomic_t		active_stripes;
	struct list_head	inactive_list;
	wait_queue_head_t	wait_for_stripe;
	wait_queue_head_t	wait_for_overlap;
	int			inactive_blocked;	/* release of inactive stripes blocked,
							 * waiting for 25% to be free
							 */
	int			pool_size; /* number of disks in stripeheads in pool */
	spinlock_t		device_lock;
	struct disk_info	*disks;

	/* When taking over an array from a different personality, we store
	 * the new thread here until we fully activate the array.
	 */
	struct md_thread	*thread;
};

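/*
 * Illustrative sketch (simplified; not the driver's release_stripe()):
 * the refcount/list invariant documented at the top of this file.
 * Dropping the last reference under device_lock files the stripe on
 * handle_list when STRIPE_HANDLE is set, else on inactive_list.  The
 * real code additionally handles the delayed/bitmap lists and wakeups.
 */
static inline void example_release_stripe(struct r5conf *conf,
					  struct stripe_head *sh)
{
	unsigned long flags;

	/* device_lock can be claimed from interrupts, hence irqsave */
	spin_lock_irqsave(&conf->device_lock, flags);
	if (atomic_dec_and_test(&sh->count)) {
		if (test_bit(STRIPE_HANDLE, &sh->state))
			list_add_tail(&sh->lru, &conf->handle_list);
		else
			list_add(&sh->lru, &conf->inactive_list);
	}
	spin_unlock_irqrestore(&conf->device_lock, flags);
}
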
/*
 * Our supported algorithms
 */
#define ALGORITHM_LEFT_ASYMMETRIC	0 /* Rotating Parity N with Data Restart */
#define ALGORITHM_RIGHT_ASYMMETRIC	1 /* Rotating Parity 0 with Data Restart */
#define ALGORITHM_LEFT_SYMMETRIC	2 /* Rotating Parity N with Data Continuation */
#define ALGORITHM_RIGHT_SYMMETRIC	3 /* Rotating Parity 0 with Data Continuation */

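/*
 * Illustrative sketch (not the driver's raid5_compute_sector()): how
 * the parity disk rotates across stripes for the four classic layouts
 * above.  Simplified to a plain modulo; the real code uses sector_div()
 * on sector_t arithmetic.
 */
static inline int example_raid5_pd_idx(int layout, unsigned long stripe,
				       int raid_disks)
{
	switch (layout) {
	case ALGORITHM_LEFT_ASYMMETRIC:
	case ALGORITHM_LEFT_SYMMETRIC:
		/* "Parity N": rotate downwards from the last disk */
		return raid_disks - 1 - (int)(stripe % raid_disks);
	case ALGORITHM_RIGHT_ASYMMETRIC:
	case ALGORITHM_RIGHT_SYMMETRIC:
		/* "Parity 0": rotate upwards from disk 0 */
		return (int)(stripe % raid_disks);
	default:
		return -1;	/* non-rotating / DDF layouts not shown */
	}
}
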
/* Define non-rotating (raid4) algorithms.  These allow
 * conversion of raid4 to raid5.
 */
#define ALGORITHM_PARITY_0		4 /* P or P,Q are initial devices */
#define ALGORITHM_PARITY_N		5 /* P or P,Q are final devices. */

/* DDF RAID6 layouts differ from md/raid6 layouts in two ways.
 * Firstly, the exact positioning of the parity block is slightly
 * different between the 'LEFT_*' modes of md and the "_N_*" modes
 * of DDF.
 * Secondly, the order of data blocks over which the Q syndrome is computed
 * is different.
 * Consequently we have different layouts for DDF/raid6 than md/raid6.
 * These layouts are from the DDFv1.2 spec.
 * Interestingly DDFv1.2-Errata-A does not specify N_CONTINUE but
 * leaves RLQ=3 as 'Vendor Specific'.
 */

#define ALGORITHM_ROTATING_ZERO_RESTART	8 /* DDF PRL=6 RLQ=1 */
#define ALGORITHM_ROTATING_N_RESTART	9 /* DDF PRL=6 RLQ=2 */
#define ALGORITHM_ROTATING_N_CONTINUE	10 /* DDF PRL=6 RLQ=3 */


/* For every RAID5 algorithm we define a RAID6 algorithm
 * with exactly the same layout for data and parity, and
 * with the Q block always on the last device (N-1).
 * This allows trivial conversion from RAID5 to RAID6.
 */
#define ALGORITHM_LEFT_ASYMMETRIC_6	16
#define ALGORITHM_RIGHT_ASYMMETRIC_6	17
#define ALGORITHM_LEFT_SYMMETRIC_6	18
#define ALGORITHM_RIGHT_SYMMETRIC_6	19
#define ALGORITHM_PARITY_0_6		20
#define ALGORITHM_PARITY_N_6		ALGORITHM_PARITY_N

static inline int algorithm_valid_raid5(int layout)
{
	return (layout >= 0) &&
		(layout <= 5);
}
static inline int algorithm_valid_raid6(int layout)
{
	return (layout >= 0 && layout <= 5)
		||
		(layout >= 8 && layout <= 10)
		||
		(layout >= 16 && layout <= 20);
}

static inline int algorithm_is_DDF(int layout)
{
	return layout >= 8 && layout <= 10;
}

extern int md_raid5_congested(struct mddev *mddev, int bits);
extern void md_raid5_kick_device(struct r5conf *conf);
extern int raid5_set_cache_size(struct mddev *mddev, int size);
#endif