/*
 * Copyright (c) International Business Machines Corp., 2006
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
 * the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 * Authors: Artem Bityutskiy (Битюцкий Артём), Thomas Gleixner
 */

/*
 * UBI wear-leveling sub-system.
 *
 * This sub-system is responsible for wear-leveling. It works in terms of
 * physical eraseblocks and erase counters and knows nothing about logical
 * eraseblocks, volumes, etc. From this sub-system's perspective all physical
 * eraseblocks are of two types - used and free. Used physical eraseblocks are
 * those that were obtained via the 'ubi_wl_get_peb()' function, and free
 * physical eraseblocks are those that were put via the 'ubi_wl_put_peb()'
 * function.
 *
 * Physical eraseblocks returned by 'ubi_wl_get_peb()' have only the erase
 * counter header. The rest of the physical eraseblock contains only %0xFF
 * bytes.
 *
 * When physical eraseblocks are returned to the WL sub-system by means of the
 * 'ubi_wl_put_peb()' function, they are scheduled for erasure. The erasure is
 * done asynchronously in the context of the per-UBI device background thread,
 * which is also managed by the WL sub-system.
 *
 * The wear-leveling is ensured by means of moving the contents of used
 * physical eraseblocks with low erase counter to free physical eraseblocks
 * with high erase counter.
 *
 * The 'ubi_wl_get_peb()' function accepts data type hints which help to pick
 * an "optimal" physical eraseblock. For example, when it is known that the
 * physical eraseblock will be "put" soon because it contains short-term data,
 * the WL sub-system may pick a free physical eraseblock with low erase
 * counter, and so forth.
 *
 * If the WL sub-system fails to erase a physical eraseblock, it marks it as
 * bad.
 *
 * This sub-system is also responsible for scrubbing. If a bit-flip is detected
 * in a physical eraseblock, it has to be moved. Technically this is the same
 * as moving it for wear-leveling reasons.
 *
 * As mentioned above, for the WL sub-system all physical eraseblocks are
 * either "free" or "used". Free eraseblocks are kept in the @wl->free RB-tree,
 * while used eraseblocks are kept in the @wl->used or @wl->scrub RB-trees, or
 * (temporarily) in the @wl->pq queue.
 *
 * When the WL sub-system returns a physical eraseblock, the physical
 * eraseblock is protected from being moved for some "time". For this reason,
 * the physical eraseblock is not directly moved from the @wl->free tree to the
 * @wl->used tree. There is a protection queue in between where this
 * physical eraseblock is temporarily stored (@wl->pq).
 *
 * All this protection stuff is needed because:
 *  o we don't want to move physical eraseblocks just after we have given them
 *    to the user; instead, we first want to let users fill them up with data;
 *
 *  o there is a chance that the user will put the physical eraseblock very
 *    soon, so it makes sense not to move it for some time, but wait; this is
 *    especially important in case of "short term" physical eraseblocks.
 *
 * Physical eraseblocks stay protected only for a limited time. But the "time"
 * is measured in erase cycles in this case. This is implemented with the help
 * of the protection queue. Eraseblocks are put to the tail of this queue when
 * they are returned by 'ubi_wl_get_peb()', and eraseblocks are removed from
 * the head of the queue on each erase operation (for any eraseblock). So the
 * length of the queue defines how many (global) erase cycles PEBs are
 * protected.
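 *
 * For example, assuming %UBI_PROT_QUEUE_LEN is 10, a PEB handed out by
 * 'ubi_wl_get_peb()' stays protected until roughly 10 further erase
 * operations (on any PEBs) have completed.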
 *
 * To put it differently, each physical eraseblock has 2 main states: free and
 * used. The former state corresponds to the @wl->free tree. The latter state
 * is split into several sub-states:
 * o the WL movement is allowed (@wl->used tree);
 * o the WL movement is temporarily prohibited (@wl->pq queue);
 * o scrubbing is needed (@wl->scrub tree).
 *
 * Depending on the sub-state, wear-leveling entries of the used physical
 * eraseblocks may be kept in one of those structures.
 *
 * Note, in this implementation, we keep a small in-RAM object for each physical
 * eraseblock. This is surely not a scalable solution. But it appears to be good
 * enough for moderately large flashes and it is simple. In future, one may
 * re-work this sub-system and make it more scalable.
 *
 * At the moment this sub-system does not utilize the sequence number, which
 * was introduced relatively recently. But it would be wise to do this because
 * the sequence number of a logical eraseblock characterizes how old it is. For
 * example, when we move a PEB with low erase counter, and we need to pick the
 * target PEB, we pick a PEB with the highest EC if our PEB is "old" and we
 * pick a target PEB with an average EC if our PEB is not very "old". This is
 * room for future re-work of the WL sub-system.
 */

#include <linux/slab.h>
#include <linux/crc32.h>
#include <linux/freezer.h>
#include <linux/kthread.h>
#include "ubi.h"

/* Number of physical eraseblocks reserved for wear-leveling purposes */
#define WL_RESERVED_PEBS 1

/*
 * Maximum difference between two erase counters. If this threshold is
 * exceeded, the WL sub-system starts moving data from used physical
 * eraseblocks with low erase counter to free physical eraseblocks with high
 * erase counter.
 */
#define UBI_WL_THRESHOLD CONFIG_MTD_UBI_WL_THRESHOLD

/*
 * When a physical eraseblock is moved, the WL sub-system has to pick the target
 * physical eraseblock to move to. The simplest way would be just to pick the
 * one with the highest erase counter. But in certain workloads this could lead
 * to an unlimited wear of one or a few physical eraseblocks. Indeed, imagine a
 * situation when the picked physical eraseblock is constantly erased after the
 * data is written to it. So, we have a constant which limits the highest erase
 * counter of the free physical eraseblock to pick. Namely, the WL sub-system
 * does not pick eraseblocks with erase counter greater than the lowest erase
 * counter plus %WL_FREE_MAX_DIFF.
 */
#define WL_FREE_MAX_DIFF (2*UBI_WL_THRESHOLD)
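
/*
 * For example, assuming the default %CONFIG_MTD_UBI_WL_THRESHOLD of 4096,
 * free PEBs whose erase counter exceeds the lowest erase counter by more
 * than 8192 are never picked as wear-leveling targets.
 */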

/*
 * Maximum number of consecutive background thread failures which is enough to
 * switch to read-only mode.
 */
#define WL_MAX_FAILURES 32

/**
 * struct ubi_work - UBI work description data structure.
 * @list: a link in the list of pending works
 * @func: worker function
 * @e: physical eraseblock to erase
 * @torture: if the physical eraseblock has to be tortured
 *
 * The @func pointer points to the worker function. If the @cancel argument is
 * not zero, the worker has to free the resources and exit immediately. The
 * worker has to return zero in case of success and a negative error code in
 * case of failure.
 */
struct ubi_work {
	struct list_head list;
	int (*func)(struct ubi_device *ubi, struct ubi_work *wrk, int cancel);
	/* The below fields are only relevant to erasure works */
	struct ubi_wl_entry *e;
	int torture;
};

#ifdef CONFIG_MTD_UBI_DEBUG_PARANOID
static int paranoid_check_ec(struct ubi_device *ubi, int pnum, int ec);
static int paranoid_check_in_wl_tree(struct ubi_wl_entry *e,
				     struct rb_root *root);
static int paranoid_check_in_pq(struct ubi_device *ubi, struct ubi_wl_entry *e);
#else
#define paranoid_check_ec(ubi, pnum, ec) 0
#define paranoid_check_in_wl_tree(e, root)
#define paranoid_check_in_pq(ubi, e) 0
#endif

/**
 * wl_tree_add - add a wear-leveling entry to a WL RB-tree.
 * @e: the wear-leveling entry to add
 * @root: the root of the tree
 *
 * Note, we use (erase counter, physical eraseblock number) pairs as keys in
 * the @ubi->used and @ubi->free RB-trees.
 */
static void wl_tree_add(struct ubi_wl_entry *e, struct rb_root *root)
{
	struct rb_node **p, *parent = NULL;

	p = &root->rb_node;
	while (*p) {
		struct ubi_wl_entry *e1;

		parent = *p;
		e1 = rb_entry(parent, struct ubi_wl_entry, u.rb);

		if (e->ec < e1->ec)
			p = &(*p)->rb_left;
		else if (e->ec > e1->ec)
			p = &(*p)->rb_right;
		else {
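			/* Equal erase counters - order by PEB number */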
			ubi_assert(e->pnum != e1->pnum);
			if (e->pnum < e1->pnum)
				p = &(*p)->rb_left;
			else
				p = &(*p)->rb_right;
		}
	}

	rb_link_node(&e->u.rb, parent, p);
	rb_insert_color(&e->u.rb, root);
}

/**
 * do_work - do one pending work.
 * @ubi: UBI device description object
 *
 * This function returns zero in case of success and a negative error code in
 * case of failure.
 */
static int do_work(struct ubi_device *ubi)
{
	int err;
	struct ubi_work *wrk;

	cond_resched();

	/*
	 * @ubi->work_sem is used to synchronize with the workers. Workers take
	 * it in read mode, so many of them may be doing works at a time. But
	 * the queue flush code has to be sure the whole queue of works is
	 * done, and it takes it in write mode.
	 */
	down_read(&ubi->work_sem);
	spin_lock(&ubi->wl_lock);
	if (list_empty(&ubi->works)) {
		spin_unlock(&ubi->wl_lock);
		up_read(&ubi->work_sem);
		return 0;
	}

	wrk = list_entry(ubi->works.next, struct ubi_work, list);
	list_del(&wrk->list);
	ubi->works_count -= 1;
	ubi_assert(ubi->works_count >= 0);
	spin_unlock(&ubi->wl_lock);

	/*
	 * Call the worker function. Do not touch the work structure
	 * after this call as it will have been freed or reused by that
	 * time by the worker function.
	 */
	err = wrk->func(ubi, wrk, 0);
	if (err)
		ubi_err("work failed with error code %d", err);
	up_read(&ubi->work_sem);

	return err;
}

/**
 * produce_free_peb - produce a free physical eraseblock.
 * @ubi: UBI device description object
 *
 * This function tries to make a free PEB by means of synchronous execution of
 * pending works. This may be needed if, for example, the background thread is
 * disabled. Returns zero in case of success and a negative error code in case
 * of failure.
 */
static int produce_free_peb(struct ubi_device *ubi)
{
	int err;

	spin_lock(&ubi->wl_lock);
	while (!ubi->free.rb_node) {
		spin_unlock(&ubi->wl_lock);

		dbg_wl("do one work synchronously");
		err = do_work(ubi);
		if (err)
			return err;

		spin_lock(&ubi->wl_lock);
	}
	spin_unlock(&ubi->wl_lock);

	return 0;
}

/**
 * in_wl_tree - check if wear-leveling entry is present in a WL RB-tree.
 * @e: the wear-leveling entry to check
 * @root: the root of the tree
 *
 * This function returns non-zero if @e is in the @root RB-tree and zero if it
 * is not.
 */
static int in_wl_tree(struct ubi_wl_entry *e, struct rb_root *root)
{
	struct rb_node *p;

	p = root->rb_node;
	while (p) {
		struct ubi_wl_entry *e1;

		e1 = rb_entry(p, struct ubi_wl_entry, u.rb);

		if (e->pnum == e1->pnum) {
			ubi_assert(e == e1);
			return 1;
		}

		if (e->ec < e1->ec)
			p = p->rb_left;
		else if (e->ec > e1->ec)
			p = p->rb_right;
		else {
			ubi_assert(e->pnum != e1->pnum);
			if (e->pnum < e1->pnum)
				p = p->rb_left;
			else
				p = p->rb_right;
		}
	}

	return 0;
}

/**
 * prot_queue_add - add physical eraseblock to the protection queue.
 * @ubi: UBI device description object
 * @e: the physical eraseblock to add
 *
 * This function adds @e to the tail of the protection queue @ubi->pq, where
 * @e will stay for %UBI_PROT_QUEUE_LEN erase operations and will be
 * temporarily protected from the wear-leveling worker. Note, @ubi->wl_lock
 * has to be held.
 */
static void prot_queue_add(struct ubi_device *ubi, struct ubi_wl_entry *e)
{
	int pq_tail = ubi->pq_head - 1;

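	/* @ubi->pq is used as a circular buffer - wrap the tail index around */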
	if (pq_tail < 0)
		pq_tail = UBI_PROT_QUEUE_LEN - 1;
	ubi_assert(pq_tail >= 0 && pq_tail < UBI_PROT_QUEUE_LEN);
	list_add_tail(&e->u.list, &ubi->pq[pq_tail]);
	dbg_wl("added PEB %d EC %d to the protection queue", e->pnum, e->ec);
}

/**
 * find_wl_entry - find wear-leveling entry closest to certain erase counter.
 * @root: the RB-tree where to look for
 * @max: highest possible erase counter, relative to the lowest erase counter
 *       in @root
 *
 * This function looks for a wear-leveling entry with erase counter closest
 * to, but less than, the lowest erase counter in @root plus @max.
 */
static struct ubi_wl_entry *find_wl_entry(struct rb_root *root, int max)
{
	struct rb_node *p;
	struct ubi_wl_entry *e;

	e = rb_entry(rb_first(root), struct ubi_wl_entry, u.rb);
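	/* rb_first() yields the entry with the lowest EC; make @max absolute */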
	max += e->ec;

	p = root->rb_node;
	while (p) {
		struct ubi_wl_entry *e1;

		e1 = rb_entry(p, struct ubi_wl_entry, u.rb);
		if (e1->ec >= max)
			p = p->rb_left;
		else {
			p = p->rb_right;
			e = e1;
		}
	}

	return e;
}

/**
 * ubi_wl_get_peb - get a physical eraseblock.
 * @ubi: UBI device description object
 * @dtype: type of data which will be stored in this physical eraseblock
 *
 * This function returns a physical eraseblock in case of success and a
 * negative error code in case of failure. Might sleep.
 */
int ubi_wl_get_peb(struct ubi_device *ubi, int dtype)
{
	int err, medium_ec;
	struct ubi_wl_entry *e, *first, *last;

	ubi_assert(dtype == UBI_LONGTERM || dtype == UBI_SHORTTERM ||
		   dtype == UBI_UNKNOWN);

retry:
	spin_lock(&ubi->wl_lock);
	if (!ubi->free.rb_node) {
		if (ubi->works_count == 0) {
			ubi_assert(list_empty(&ubi->works));
			ubi_err("no free eraseblocks");
			spin_unlock(&ubi->wl_lock);
			return -ENOSPC;
		}
		spin_unlock(&ubi->wl_lock);

		err = produce_free_peb(ubi);
		if (err < 0)
			return err;
		goto retry;
	}

	switch (dtype) {
	case UBI_LONGTERM:
		/*
		 * For long term data we pick a physical eraseblock with high
		 * erase counter. But the highest erase counter we can pick is
		 * bounded by the lowest erase counter plus
		 * %WL_FREE_MAX_DIFF.
		 */
		e = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF);
		break;
	case UBI_UNKNOWN:
		/*
		 * For unknown data we pick a physical eraseblock with medium
		 * erase counter. But we by no means can pick a physical
		 * eraseblock with erase counter greater than or equal to the
		 * lowest erase counter plus %WL_FREE_MAX_DIFF.
		 */
		first = rb_entry(rb_first(&ubi->free), struct ubi_wl_entry,
					u.rb);
		last = rb_entry(rb_last(&ubi->free), struct ubi_wl_entry, u.rb);

		if (last->ec - first->ec < WL_FREE_MAX_DIFF)
			e = rb_entry(ubi->free.rb_node,
					struct ubi_wl_entry, u.rb);
		else {
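			/* Pick an entry with a roughly medium erase counter */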
			medium_ec = (first->ec + WL_FREE_MAX_DIFF)/2;
			e = find_wl_entry(&ubi->free, medium_ec);
		}
		break;
	case UBI_SHORTTERM:
		/*
		 * For short term data we pick a physical eraseblock with the
		 * lowest erase counter as we expect it will be erased soon.
		 */
		e = rb_entry(rb_first(&ubi->free), struct ubi_wl_entry, u.rb);
		break;
	default:
		BUG();
	}

	paranoid_check_in_wl_tree(e, &ubi->free);

	/*
	 * Move the physical eraseblock to the protection queue where it will
	 * be protected from being moved for some time.
	 */
	rb_erase(&e->u.rb, &ubi->free);
	dbg_wl("PEB %d EC %d", e->pnum, e->ec);
	prot_queue_add(ubi, e);
	spin_unlock(&ubi->wl_lock);
	return e->pnum;
}

/**
 * prot_queue_del - remove a physical eraseblock from the protection queue.
 * @ubi: UBI device description object
 * @pnum: the physical eraseblock to remove
 *
 * This function deletes PEB @pnum from the protection queue and returns zero
 * in case of success and %-ENODEV if the PEB was not found.
 */
static int prot_queue_del(struct ubi_device *ubi, int pnum)
{
	struct ubi_wl_entry *e;

	e = ubi->lookuptbl[pnum];
	if (!e)
		return -ENODEV;

	if (paranoid_check_in_pq(ubi, e))
		return -ENODEV;

	list_del(&e->u.list);
	dbg_wl("deleted PEB %d from the protection queue", e->pnum);
	return 0;
}

/**
 * sync_erase - synchronously erase a physical eraseblock.
 * @ubi: UBI device description object
 * @e: the physical eraseblock to erase
 * @torture: if the physical eraseblock has to be tortured
 *
 * This function returns zero in case of success and a negative error code in
 * case of failure.
 */
static int sync_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
		      int torture)
{
	int err;
	struct ubi_ec_hdr *ec_hdr;
	unsigned long long ec = e->ec;

	dbg_wl("erase PEB %d, old EC %llu", e->pnum, ec);

	err = paranoid_check_ec(ubi, e->pnum, e->ec);
	if (err > 0)
		return -EINVAL;

	ec_hdr = kzalloc(ubi->ec_hdr_alsize, GFP_NOFS);
	if (!ec_hdr)
		return -ENOMEM;

	err = ubi_io_sync_erase(ubi, e->pnum, torture);
	if (err < 0)
		goto out_free;

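	/*
	 * On success, 'ubi_io_sync_erase()' returns the number of erase
	 * operations it performed (more than one if the PEB was tortured),
	 * so account for all of them.
	 */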
	ec += err;
	if (ec > UBI_MAX_ERASECOUNTER) {
		/*
		 * Erase counter overflow. Upgrade UBI and use 64-bit
		 * erase counters internally.
		 */
		ubi_err("erase counter overflow at PEB %d, EC %llu",
			e->pnum, ec);
		err = -EINVAL;
		goto out_free;
	}

	dbg_wl("erased PEB %d, new EC %llu", e->pnum, ec);

	ec_hdr->ec = cpu_to_be64(ec);

	err = ubi_io_write_ec_hdr(ubi, e->pnum, ec_hdr);
	if (err)
		goto out_free;

	e->ec = ec;
	spin_lock(&ubi->wl_lock);
	if (e->ec > ubi->max_ec)
		ubi->max_ec = e->ec;
	spin_unlock(&ubi->wl_lock);

out_free:
	kfree(ec_hdr);
	return err;
}

/**
 * serve_prot_queue - check if it is time to stop protecting PEBs.
 * @ubi: UBI device description object
 *
 * This function is called after each erase operation and removes PEBs from the
 * tail of the protection queue. These PEBs have been protected for long enough
 * and should be moved to the used tree.
 */
static void serve_prot_queue(struct ubi_device *ubi)
{
	struct ubi_wl_entry *e, *tmp;
	int count;

	/*
	 * There may be several protected physical eraseblocks to remove,
	 * process them all.
	 */
repeat:
	count = 0;
	spin_lock(&ubi->wl_lock);
	list_for_each_entry_safe(e, tmp, &ubi->pq[ubi->pq_head], u.list) {
		dbg_wl("PEB %d EC %d protection over, move to used tree",
			e->pnum, e->ec);

		list_del(&e->u.list);
		wl_tree_add(e, &ubi->used);
		if (count++ > 32) {
			/*
			 * Let's be nice and avoid holding the spinlock for
			 * too long.
			 */
			spin_unlock(&ubi->wl_lock);
			cond_resched();
			goto repeat;
		}
	}

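	/* Advance the head: the PEBs in this slot have served their term */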
	ubi->pq_head += 1;
	if (ubi->pq_head == UBI_PROT_QUEUE_LEN)
		ubi->pq_head = 0;
	ubi_assert(ubi->pq_head >= 0 && ubi->pq_head < UBI_PROT_QUEUE_LEN);
	spin_unlock(&ubi->wl_lock);
}

/**
 * schedule_ubi_work - schedule a work.
 * @ubi: UBI device description object
 * @wrk: the work to schedule
 *
 * This function adds a work defined by @wrk to the tail of the pending works
 * list.
 */
static void schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk)
{
	spin_lock(&ubi->wl_lock);
	list_add_tail(&wrk->list, &ubi->works);
	ubi_assert(ubi->works_count >= 0);
	ubi->works_count += 1;
	if (ubi->thread_enabled)
		wake_up_process(ubi->bgt_thread);
	spin_unlock(&ubi->wl_lock);
}

static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
			int cancel);

/**
 * schedule_erase - schedule an erase work.
 * @ubi: UBI device description object
 * @e: the WL entry of the physical eraseblock to erase
 * @torture: if the physical eraseblock has to be tortured
 *
 * This function returns zero in case of success and %-ENOMEM in case of
 * failure.
 */
static int schedule_erase(struct ubi_device *ubi, struct ubi_wl_entry *e,
			  int torture)
{
	struct ubi_work *wl_wrk;

	dbg_wl("schedule erasure of PEB %d, EC %d, torture %d",
	       e->pnum, e->ec, torture);

	wl_wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
	if (!wl_wrk)
		return -ENOMEM;

	wl_wrk->func = &erase_worker;
	wl_wrk->e = e;
	wl_wrk->torture = torture;

	schedule_ubi_work(ubi, wl_wrk);
	return 0;
}

/**
 * wear_leveling_worker - wear-leveling worker function.
 * @ubi: UBI device description object
 * @wrk: the work object
 * @cancel: non-zero if the worker has to free memory and exit
 *
 * This function copies a more worn out physical eraseblock to a less worn out
 * one. Returns zero in case of success and a negative error code in case of
 * failure.
 */
static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
				int cancel)
{
	int err, scrubbing = 0, torture = 0;
	struct ubi_wl_entry *e1, *e2;
	struct ubi_vid_hdr *vid_hdr;

	kfree(wrk);
	if (cancel)
		return 0;

	vid_hdr = ubi_zalloc_vid_hdr(ubi, GFP_NOFS);
	if (!vid_hdr)
		return -ENOMEM;

	mutex_lock(&ubi->move_mutex);
	spin_lock(&ubi->wl_lock);
	ubi_assert(!ubi->move_from && !ubi->move_to);
	ubi_assert(!ubi->move_to_put);

	if (!ubi->free.rb_node ||
	    (!ubi->used.rb_node && !ubi->scrub.rb_node)) {
		/*
		 * No free physical eraseblocks? Well, they must be waiting in
		 * the queue to be erased. Cancel movement - it will be
		 * triggered again when a free physical eraseblock appears.
		 *
		 * No used physical eraseblocks? They must be temporarily
		 * protected from being moved. They will be moved to the
		 * @ubi->used tree later and the wear-leveling will be
		 * triggered again.
		 */
		dbg_wl("cancel WL, a list is empty: free %d, used %d",
		       !ubi->free.rb_node, !ubi->used.rb_node);
		goto out_cancel;
	}

	if (!ubi->scrub.rb_node) {
		/*
		 * Now pick the least worn-out used physical eraseblock and a
		 * highly worn-out free physical eraseblock. If the erase
		 * counters differ much enough, start wear-leveling.
		 */
		e1 = rb_entry(rb_first(&ubi->used), struct ubi_wl_entry, u.rb);
		e2 = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF);

		if (!(e2->ec - e1->ec >= UBI_WL_THRESHOLD)) {
			dbg_wl("no WL needed: min used EC %d, max free EC %d",
			       e1->ec, e2->ec);
			goto out_cancel;
		}
		paranoid_check_in_wl_tree(e1, &ubi->used);
		rb_erase(&e1->u.rb, &ubi->used);
		dbg_wl("move PEB %d EC %d to PEB %d EC %d",
		       e1->pnum, e1->ec, e2->pnum, e2->ec);
	} else {
		/* Perform scrubbing */
		scrubbing = 1;
		e1 = rb_entry(rb_first(&ubi->scrub), struct ubi_wl_entry, u.rb);
		e2 = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF);
		paranoid_check_in_wl_tree(e1, &ubi->scrub);
		rb_erase(&e1->u.rb, &ubi->scrub);
		dbg_wl("scrub PEB %d to PEB %d", e1->pnum, e2->pnum);
	}

	paranoid_check_in_wl_tree(e2, &ubi->free);
	rb_erase(&e2->u.rb, &ubi->free);
	ubi->move_from = e1;
	ubi->move_to = e2;
	spin_unlock(&ubi->wl_lock);

	/*
	 * Now we are going to copy physical eraseblock @e1->pnum to @e2->pnum.
	 * We so far do not know which logical eraseblock our physical
	 * eraseblock (@e1) belongs to. We have to read the volume identifier
	 * header first.
	 *
	 * Note, we are protected from this PEB being unmapped and erased.
	 * 'ubi_wl_put_peb()' would wait for the move to finish if the PEB
	 * which is being moved were unmapped.
	 */

	err = ubi_io_read_vid_hdr(ubi, e1->pnum, vid_hdr, 0);
	if (err && err != UBI_IO_BITFLIPS) {
		if (err == UBI_IO_PEB_FREE) {
			/*
			 * We are trying to move a PEB without a VID header.
			 * UBI always writes a VID header shortly after a PEB
			 * is given out, so the holder probably just did not
			 * have a chance to write it before being preempted.
			 * Just re-schedule the work, so that next time it
			 * will likely have the VID header in place.
			 */
			dbg_wl("PEB %d has no VID header", e1->pnum);
			goto out_not_moved;
		}

		ubi_err("error %d while reading VID header from PEB %d",
			err, e1->pnum);
		if (err > 0)
			err = -EIO;
		goto out_error;
	}

	err = ubi_eba_copy_leb(ubi, e1->pnum, e2->pnum, vid_hdr);
	if (err) {
		if (err == -EAGAIN)
			goto out_not_moved;
		if (err < 0)
			goto out_error;
		if (err == 2) {
			/* Target PEB write error, torture it */
			torture = 1;
			goto out_not_moved;
		}

		/*
		 * The LEB has not been moved because the volume is being
		 * deleted or the PEB has been put meanwhile. We should prevent
		 * this PEB from being selected for wear-leveling movement
		 * again, so put it to the protection queue.
		 */

		dbg_wl("canceled moving PEB %d", e1->pnum);
		ubi_assert(err == 1);

		ubi_free_vid_hdr(ubi, vid_hdr);
		vid_hdr = NULL;

		spin_lock(&ubi->wl_lock);
		prot_queue_add(ubi, e1);
		ubi_assert(!ubi->move_to_put);
		ubi->move_from = ubi->move_to = NULL;
		ubi->wl_scheduled = 0;
		spin_unlock(&ubi->wl_lock);

		e1 = NULL;
		err = schedule_erase(ubi, e2, 0);
		if (err)
			goto out_error;
		mutex_unlock(&ubi->move_mutex);
		return 0;
	}

	/* The PEB has been successfully moved */
	ubi_free_vid_hdr(ubi, vid_hdr);
	vid_hdr = NULL;
	if (scrubbing)
		ubi_msg("scrubbed PEB %d, data moved to PEB %d",
			e1->pnum, e2->pnum);

	spin_lock(&ubi->wl_lock);
	if (!ubi->move_to_put) {
		wl_tree_add(e2, &ubi->used);
		e2 = NULL;
	}
	ubi->move_from = ubi->move_to = NULL;
	ubi->move_to_put = ubi->wl_scheduled = 0;
	spin_unlock(&ubi->wl_lock);

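	/* The data has been moved off @e1, so schedule the old PEB for erasure */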
| Artem Bityutskiy | 6a8f483 | 2008-12-05 12:23:48 +0200 | [diff] [blame] | 813 | 	err = schedule_erase(ubi, e1, 0); | 
| Artem Bityutskiy | 3c98b0a | 2008-12-05 12:42:45 +0200 | [diff] [blame] | 814 | 	if (err) { | 
 | 815 | 		e1 = NULL; | 
| Artem Bityutskiy | 6a8f483 | 2008-12-05 12:23:48 +0200 | [diff] [blame] | 816 | 		goto out_error; | 
| Artem Bityutskiy | 3c98b0a | 2008-12-05 12:42:45 +0200 | [diff] [blame] | 817 | 	} | 
| Artem Bityutskiy | 6a8f483 | 2008-12-05 12:23:48 +0200 | [diff] [blame] | 818 |  | 
| Artem Bityutskiy | 3c98b0a | 2008-12-05 12:42:45 +0200 | [diff] [blame] | 819 | 	if (e2) { | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 820 | 		/* | 
 | 821 | 		 * Well, the target PEB was put meanwhile, schedule it for | 
 | 822 | 		 * erasure. | 
 | 823 | 		 */ | 
 | 824 | 		dbg_wl("PEB %d was put meanwhile, erase", e2->pnum); | 
 | 825 | 		err = schedule_erase(ubi, e2, 0); | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 826 | 		if (err) | 
 | 827 | 			goto out_error; | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 828 | 	} | 
 | 829 |  | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 830 | 	dbg_wl("done"); | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 831 | 	mutex_unlock(&ubi->move_mutex); | 
 | 832 | 	return 0; | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 833 |  | 
 | 834 | 	/* | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 835 | 	 * For some reasons the LEB was not moved, might be an error, might be | 
 | 836 | 	 * something else. @e1 was not changed, so return it back. @e2 might | 
| Artem Bityutskiy | 6fa6f5b | 2008-12-05 13:37:02 +0200 | [diff] [blame] | 837 | 	 * have been changed, schedule it for erasure. | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 838 | 	 */ | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 839 | out_not_moved: | 
| Artem Bityutskiy | 6fa6f5b | 2008-12-05 13:37:02 +0200 | [diff] [blame] | 840 | 	dbg_wl("canceled moving PEB %d", e1->pnum); | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 841 | 	ubi_free_vid_hdr(ubi, vid_hdr); | 
| Artem Bityutskiy | 3c98b0a | 2008-12-05 12:42:45 +0200 | [diff] [blame] | 842 | 	vid_hdr = NULL; | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 843 | 	spin_lock(&ubi->wl_lock); | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 844 | 	if (scrubbing) | 
 | 845 | 		wl_tree_add(e1, &ubi->scrub); | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 846 | 	else | 
| Artem Bityutskiy | 5abde38 | 2007-09-13 14:48:20 +0300 | [diff] [blame] | 847 | 		wl_tree_add(e1, &ubi->used); | 
| Artem Bityutskiy | 6fa6f5b | 2008-12-05 13:37:02 +0200 | [diff] [blame] | 848 | 	ubi_assert(!ubi->move_to_put); | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 849 | 	ubi->move_from = ubi->move_to = NULL; | 
| Artem Bityutskiy | 6fa6f5b | 2008-12-05 13:37:02 +0200 | [diff] [blame] | 850 | 	ubi->wl_scheduled = 0; | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 851 | 	spin_unlock(&ubi->wl_lock); | 
 | 852 |  | 
| Artem Bityutskiy | 3c98b0a | 2008-12-05 12:42:45 +0200 | [diff] [blame] | 853 | 	e1 = NULL; | 
| Artem Bityutskiy | 6fa6f5b | 2008-12-05 13:37:02 +0200 | [diff] [blame] | 854 | 	err = schedule_erase(ubi, e2, torture); | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 855 | 	if (err) | 
 | 856 | 		goto out_error; | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 857 |  | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 858 | 	mutex_unlock(&ubi->move_mutex); | 
 | 859 | 	return 0; | 
 | 860 |  | 
 | 861 | out_error: | 
 | 862 | 	ubi_err("error %d while moving PEB %d to PEB %d", | 
 | 863 | 		err, e1->pnum, e2->pnum); | 
 | 864 |  | 
 | 865 | 	ubi_free_vid_hdr(ubi, vid_hdr); | 
 | 866 | 	spin_lock(&ubi->wl_lock); | 
 | 867 | 	ubi->move_from = ubi->move_to = NULL; | 
 | 868 | 	ubi->move_to_put = ubi->wl_scheduled = 0; | 
 | 869 | 	spin_unlock(&ubi->wl_lock); | 
 | 870 |  | 
| Artem Bityutskiy | 3c98b0a | 2008-12-05 12:42:45 +0200 | [diff] [blame] | 871 | 	if (e1) | 
 | 872 | 		kmem_cache_free(ubi_wl_entry_slab, e1); | 
 | 873 | 	if (e2) | 
 | 874 | 		kmem_cache_free(ubi_wl_entry_slab, e2); | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 875 | 	ubi_ro_mode(ubi); | 
 | 876 |  | 
 | 877 | 	mutex_unlock(&ubi->move_mutex); | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 878 | 	return err; | 
| Artem Bityutskiy | 43f9b25 | 2007-12-18 15:06:55 +0200 | [diff] [blame] | 879 |  | 
 | 880 | out_cancel: | 
 | 881 | 	ubi->wl_scheduled = 0; | 
 | 882 | 	spin_unlock(&ubi->wl_lock); | 
 | 883 | 	mutex_unlock(&ubi->move_mutex); | 
 | 884 | 	ubi_free_vid_hdr(ubi, vid_hdr); | 
 | 885 | 	return 0; | 
| Artem B. Bityutskiy | 801c135 | 2006-06-27 12:22:22 +0400 | [diff] [blame] | 886 | } | 
 | 887 |  | 
 | 888 | /** | 
 | 889 |  * ensure_wear_leveling - schedule wear-leveling if it is needed. | 
 | 890 |  * @ubi: UBI device description object | 
 | 891 |  * | 
 | 892 |  * This function checks if it is time to start wear-leveling and schedules it | 
 | 893 |  * if yes. This function returns zero in case of success and a negative error | 
 | 894 |  * code in case of failure. | 
 | 895 |  */ | 
 | 896 | static int ensure_wear_leveling(struct ubi_device *ubi) | 
 | 897 | { | 
 | 898 | 	int err = 0; | 
 | 899 | 	struct ubi_wl_entry *e1; | 
 | 900 | 	struct ubi_wl_entry *e2; | 
 | 901 | 	struct ubi_work *wrk; | 
 | 902 |  | 
 | 903 | 	spin_lock(&ubi->wl_lock); | 
 | 904 | 	if (ubi->wl_scheduled) | 
 | 905 | 		/* Wear-leveling is already in the work queue */ | 
 | 906 | 		goto out_unlock; | 
 | 907 |  | 
 | 908 | 	/* | 
 | 909 | 	 * If the ubi->scrub tree is not empty, scrubbing is needed, and the | 
 | 910 | 	 * the WL worker has to be scheduled anyway. | 
 | 911 | 	 */ | 
	if (!ubi->scrub.rb_node) {
		if (!ubi->used.rb_node || !ubi->free.rb_node)
			/* No physical eraseblocks - no deal */
			goto out_unlock;

		/*
		 * We schedule wear-leveling only if the difference between the
		 * lowest erase counter of used physical eraseblocks and a high
		 * erase counter of free physical eraseblocks is at least
		 * %UBI_WL_THRESHOLD.
		 */
		e1 = rb_entry(rb_first(&ubi->used), struct ubi_wl_entry, u.rb);
		e2 = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF);

		if (e2->ec - e1->ec < UBI_WL_THRESHOLD)
			goto out_unlock;
		dbg_wl("schedule wear-leveling");
	} else
		dbg_wl("schedule scrubbing");

	ubi->wl_scheduled = 1;
	spin_unlock(&ubi->wl_lock);

	wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
	if (!wrk) {
		err = -ENOMEM;
		goto out_cancel;
	}

	wrk->func = &wear_leveling_worker;
	schedule_ubi_work(ubi, wrk);
	return err;

out_cancel:
	spin_lock(&ubi->wl_lock);
	ubi->wl_scheduled = 0;
out_unlock:
	spin_unlock(&ubi->wl_lock);
	return err;
}
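
/*
 * A worked example of the threshold check above (illustrative numbers,
 * assuming the usual default UBI_WL_THRESHOLD of 4096): if the
 * least-worn used PEB has EC 100, wear-leveling is scheduled only once
 * find_wl_entry() can return a free PEB with EC 4196 or higher; a free
 * PEB with EC 150 leaves both trees untouched.
 */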

/**
 * erase_worker - physical eraseblock erase worker function.
 * @ubi: UBI device description object
 * @wl_wrk: the work object
 * @cancel: non-zero if the worker has to free memory and exit
 *
 * This function erases a physical eraseblock and performs torture testing if
 * needed. It also takes care of marking the physical eraseblock bad if
 * needed. Returns zero in case of success and a negative error code in case
 * of failure.
 */
static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk,
			int cancel)
{
	struct ubi_wl_entry *e = wl_wrk->e;
	int pnum = e->pnum, err, need;

	if (cancel) {
		dbg_wl("cancel erasure of PEB %d EC %d", pnum, e->ec);
		kfree(wl_wrk);
		kmem_cache_free(ubi_wl_entry_slab, e);
		return 0;
	}

	dbg_wl("erase PEB %d EC %d", pnum, e->ec);

	err = sync_erase(ubi, e, wl_wrk->torture);
	if (!err) {
		/* Fine, we've erased it successfully */
		kfree(wl_wrk);

		spin_lock(&ubi->wl_lock);
		wl_tree_add(e, &ubi->free);
		spin_unlock(&ubi->wl_lock);

		/*
		 * One more erase operation has happened, take care of the
		 * protected physical eraseblocks.
		 */
		serve_prot_queue(ubi);

		/* And take care of wear-leveling */
		err = ensure_wear_leveling(ubi);
		return err;
	}

	ubi_err("failed to erase PEB %d, error %d", pnum, err);
	kfree(wl_wrk);
	kmem_cache_free(ubi_wl_entry_slab, e);

	if (err == -EINTR || err == -ENOMEM || err == -EAGAIN ||
	    err == -EBUSY) {
		int err1;

		/* Re-schedule the PEB for erasure */
		err1 = schedule_erase(ubi, e, 0);
		if (err1) {
			err = err1;
			goto out_ro;
		}
		return err;
	} else if (err != -EIO) {
		/*
		 * If this is not %-EIO, we have no idea what to do. Scheduling
		 * this physical eraseblock for erasure again would cause
		 * errors again and again. Well, let's switch to RO mode.
		 */
		goto out_ro;
	}

	/* It is %-EIO, the PEB went bad */

	if (!ubi->bad_allowed) {
		ubi_err("bad physical eraseblock %d detected", pnum);
		goto out_ro;
	}

	spin_lock(&ubi->volumes_lock);
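	/*
	 * The "+ 1" below accounts for the reserved PEB which is consumed
	 * when @pnum is marked bad further down, so the pool ends up back
	 * at @beb_rsvd_level.
	 */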
	need = ubi->beb_rsvd_level - ubi->beb_rsvd_pebs + 1;
	if (need > 0) {
		need = ubi->avail_pebs >= need ? need : ubi->avail_pebs;
		ubi->avail_pebs -= need;
		ubi->rsvd_pebs += need;
		ubi->beb_rsvd_pebs += need;
		if (need > 0)
			ubi_msg("reserve more %d PEBs", need);
	}

	if (ubi->beb_rsvd_pebs == 0) {
		spin_unlock(&ubi->volumes_lock);
		ubi_err("no reserved physical eraseblocks");
		goto out_ro;
	}

	spin_unlock(&ubi->volumes_lock);
	ubi_msg("mark PEB %d as bad", pnum);

	err = ubi_io_mark_bad(ubi, pnum);
	if (err)
		goto out_ro;

	spin_lock(&ubi->volumes_lock);
	ubi->beb_rsvd_pebs -= 1;
	ubi->bad_peb_count += 1;
	ubi->good_peb_count -= 1;
	ubi_calculate_reserved(ubi);
	if (ubi->beb_rsvd_pebs == 0)
		ubi_warn("last PEB from the reserved pool was used");
	spin_unlock(&ubi->volumes_lock);

	return err;

out_ro:
	ubi_ro_mode(ubi);
	return err;
}

/**
 * ubi_wl_put_peb - return a PEB to the wear-leveling sub-system.
 * @ubi: UBI device description object
 * @pnum: physical eraseblock to return
 * @torture: if this physical eraseblock has to be tortured
 *
 * This function is called to return physical eraseblock @pnum to the pool of
 * free physical eraseblocks. The @torture flag has to be set if an I/O error
 * occurred on this PEB and it has to be tested. This function returns zero
 * in case of success, and a negative error code in case of failure.
 */
int ubi_wl_put_peb(struct ubi_device *ubi, int pnum, int torture)
{
	int err;
	struct ubi_wl_entry *e;

	dbg_wl("PEB %d", pnum);
	ubi_assert(pnum >= 0);
	ubi_assert(pnum < ubi->peb_count);

retry:
	spin_lock(&ubi->wl_lock);
	e = ubi->lookuptbl[pnum];
	if (e == ubi->move_from) {
		/*
		 * User is putting the physical eraseblock which was selected to
		 * be moved. It will be scheduled for erasure in the
		 * wear-leveling worker.
		 */
		dbg_wl("PEB %d is being moved, wait", pnum);
		spin_unlock(&ubi->wl_lock);

		/* Wait for the WL worker by taking the @ubi->move_mutex */
		mutex_lock(&ubi->move_mutex);
		mutex_unlock(&ubi->move_mutex);
		goto retry;
	} else if (e == ubi->move_to) {
		/*
		 * User is putting the physical eraseblock which was selected
		 * as the target the data is moved to. It may happen if the EBA
		 * sub-system already re-mapped the LEB in 'ubi_eba_copy_leb()'
		 * but the WL sub-system has not yet put the PEB to the "used"
		 * tree, although it is about to do so. So we just set a flag
		 * which tells the WL worker that the PEB is not needed anymore
		 * and should be scheduled for erasure.
		 */
		dbg_wl("PEB %d is the target of data moving", pnum);
		ubi_assert(!ubi->move_to_put);
		ubi->move_to_put = 1;
		spin_unlock(&ubi->wl_lock);
		return 0;
	} else {
		if (in_wl_tree(e, &ubi->used)) {
			paranoid_check_in_wl_tree(e, &ubi->used);
			rb_erase(&e->u.rb, &ubi->used);
		} else if (in_wl_tree(e, &ubi->scrub)) {
			paranoid_check_in_wl_tree(e, &ubi->scrub);
			rb_erase(&e->u.rb, &ubi->scrub);
		} else {
			err = prot_queue_del(ubi, e->pnum);
			if (err) {
				ubi_err("PEB %d not found", pnum);
				ubi_ro_mode(ubi);
				spin_unlock(&ubi->wl_lock);
				return err;
			}
		}
	}
	spin_unlock(&ubi->wl_lock);

	err = schedule_erase(ubi, e, torture);
	if (err) {
		spin_lock(&ubi->wl_lock);
		wl_tree_add(e, &ubi->used);
		spin_unlock(&ubi->wl_lock);
	}

	return err;
}
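
/*
 * An illustrative call sketch (hypothetical call site; the real callers
 * live elsewhere, e.g. in the EBA sub-system): returning PEB 13 after a
 * clean unmap, with no torturing requested:
 *
 *	err = ubi_wl_put_peb(ubi, 13, 0);
 *	if (err)
 *		return err;
 */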

/**
 * ubi_wl_scrub_peb - schedule a physical eraseblock for scrubbing.
 * @ubi: UBI device description object
 * @pnum: the physical eraseblock to schedule
 *
 * If a bit-flip in a physical eraseblock is detected, this physical eraseblock
 * needs scrubbing. This function schedules a physical eraseblock for
 * scrubbing which is done in the background. This function returns zero in
 * case of success and a negative error code in case of failure.
 */
int ubi_wl_scrub_peb(struct ubi_device *ubi, int pnum)
{
	struct ubi_wl_entry *e;

	dbg_msg("schedule PEB %d for scrubbing", pnum);

retry:
	spin_lock(&ubi->wl_lock);
	e = ubi->lookuptbl[pnum];
	if (e == ubi->move_from || in_wl_tree(e, &ubi->scrub)) {
		spin_unlock(&ubi->wl_lock);
		return 0;
	}

	if (e == ubi->move_to) {
		/*
		 * This physical eraseblock was used to move data to. The data
		 * was moved but the PEB was not yet inserted to the proper
		 * tree. We should just wait a little and let the WL worker
		 * proceed.
		 */
		spin_unlock(&ubi->wl_lock);
		dbg_wl("the PEB %d is not in proper tree, retry", pnum);
		yield();
		goto retry;
	}

	if (in_wl_tree(e, &ubi->used)) {
		paranoid_check_in_wl_tree(e, &ubi->used);
		rb_erase(&e->u.rb, &ubi->used);
	} else {
		int err;

		err = prot_queue_del(ubi, e->pnum);
		if (err) {
			ubi_err("PEB %d not found", pnum);
			ubi_ro_mode(ubi);
			spin_unlock(&ubi->wl_lock);
			return err;
		}
	}

	wl_tree_add(e, &ubi->scrub);
	spin_unlock(&ubi->wl_lock);

	/*
	 * Technically scrubbing is the same as wear-leveling, so it is done
	 * by the WL worker.
	 */
	return ensure_wear_leveling(ubi);
}
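
/*
 * An illustrative call sketch (hypothetical call site): when a read
 * reports %UBI_IO_BITFLIPS for PEB @pnum, the I/O path can simply do
 *
 *	err = ubi_wl_scrub_peb(ubi, pnum);
 *
 * and keep using the corrected data - the actual scrubbing happens
 * later, in the background thread.
 */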

/**
 * ubi_wl_flush - flush all pending works.
 * @ubi: UBI device description object
 *
 * This function returns zero in case of success and a negative error code in
 * case of failure.
 */
int ubi_wl_flush(struct ubi_device *ubi)
{
	int err;

	/*
	 * Process pending works until the works queue becomes empty.
	 */
	dbg_wl("flush (%d pending works)", ubi->works_count);
	while (ubi->works_count) {
		err = do_work(ubi);
		if (err)
			return err;
	}

	/*
	 * Make sure all the works which have been done in parallel are
	 * finished.
	 */
	down_write(&ubi->work_sem);
	up_write(&ubi->work_sem);
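
	/*
	 * The empty down_write()/up_write() pair above acts as a barrier:
	 * works are executed with @work_sem held for reading, so the write
	 * lock cannot be taken until every work that was already running
	 * has finished.
	 */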

	/*
	 * And in case the last work was the WL worker which canceled the
	 * LEB movement, flush again.
	 */
	while (ubi->works_count) {
		dbg_wl("flush more (%d pending works)", ubi->works_count);
		err = do_work(ubi);
		if (err)
			return err;
	}

	return 0;
}

/**
 * tree_destroy - destroy an RB-tree.
 * @root: the root of the tree to destroy
 */
static void tree_destroy(struct rb_root *root)
{
	struct rb_node *rb;
	struct ubi_wl_entry *e;

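	/*
	 * Walk the tree without recursion: keep descending to a leaf, free
	 * it, and clear the link in its parent, so that the parent itself
	 * eventually becomes a leaf too.
	 */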
	rb = root->rb_node;
	while (rb) {
		if (rb->rb_left)
			rb = rb->rb_left;
		else if (rb->rb_right)
			rb = rb->rb_right;
		else {
			e = rb_entry(rb, struct ubi_wl_entry, u.rb);

			rb = rb_parent(rb);
			if (rb) {
				if (rb->rb_left == &e->u.rb)
					rb->rb_left = NULL;
				else
					rb->rb_right = NULL;
			}

			kmem_cache_free(ubi_wl_entry_slab, e);
		}
	}
}

/**
 * ubi_thread - UBI background thread.
 * @u: the UBI device description object pointer
 */
int ubi_thread(void *u)
{
	int failures = 0;
	struct ubi_device *ubi = u;

	ubi_msg("background thread \"%s\" started, PID %d",
		ubi->bgt_name, task_pid_nr(current));

	set_freezable();
	for (;;) {
		int err;

		if (kthread_should_stop())
			break;

		if (try_to_freeze())
			continue;

		spin_lock(&ubi->wl_lock);
		if (list_empty(&ubi->works) || ubi->ro_mode ||
		    !ubi->thread_enabled) {
			set_current_state(TASK_INTERRUPTIBLE);
			spin_unlock(&ubi->wl_lock);
			schedule();
			continue;
		}
		spin_unlock(&ubi->wl_lock);

		err = do_work(ubi);
		if (err) {
			ubi_err("%s: work failed with error code %d",
				ubi->bgt_name, err);
			if (failures++ > WL_MAX_FAILURES) {
				/*
				 * Too many failures, disable the thread and
				 * switch to read-only mode.
				 */
				ubi_msg("%s: %d consecutive failures",
					ubi->bgt_name, WL_MAX_FAILURES);
				ubi_ro_mode(ubi);
				ubi->thread_enabled = 0;
				continue;
			}
		} else
			failures = 0;

		cond_resched();
	}

	dbg_wl("background thread \"%s\" is killed", ubi->bgt_name);
	return 0;
}
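
/*
 * A minimal start-up sketch (hypothetical; in UBI the thread is
 * actually created during device attach, outside of this function):
 *
 *	struct task_struct *t;
 *
 *	t = kthread_create(ubi_thread, ubi, ubi->bgt_name);
 *	if (IS_ERR(t))
 *		return PTR_ERR(t);
 *	wake_up_process(t);
 */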

/**
 * cancel_pending - cancel all pending works.
 * @ubi: UBI device description object
 */
static void cancel_pending(struct ubi_device *ubi)
{
	while (!list_empty(&ubi->works)) {
		struct ubi_work *wrk;

		wrk = list_entry(ubi->works.next, struct ubi_work, list);
		list_del(&wrk->list);
		wrk->func(ubi, wrk, 1);
		ubi->works_count -= 1;
		ubi_assert(ubi->works_count >= 0);
	}
}

/**
 * ubi_wl_init_scan - initialize the WL sub-system using scanning information.
 * @ubi: UBI device description object
 * @si: scanning information
 *
 * This function returns zero in case of success, and a negative error code in
 * case of failure.
 */
int ubi_wl_init_scan(struct ubi_device *ubi, struct ubi_scan_info *si)
{
	int err, i;
	struct rb_node *rb1, *rb2;
	struct ubi_scan_volume *sv;
	struct ubi_scan_leb *seb, *tmp;
	struct ubi_wl_entry *e;

	ubi->used = ubi->free = ubi->scrub = RB_ROOT;
	spin_lock_init(&ubi->wl_lock);
	mutex_init(&ubi->move_mutex);
	init_rwsem(&ubi->work_sem);
	ubi->max_ec = si->max_ec;
	INIT_LIST_HEAD(&ubi->works);

	sprintf(ubi->bgt_name, UBI_BGT_NAME_PATTERN, ubi->ubi_num);

	err = -ENOMEM;
	ubi->lookuptbl = kzalloc(ubi->peb_count * sizeof(void *), GFP_KERNEL);
	if (!ubi->lookuptbl)
		return err;

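	/*
	 * The protection queue is a rotating array of lists: protected PEBs
	 * sit in @pq and are moved back to the used tree, one list per
	 * erase operation, by serve_prot_queue().
	 */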
	for (i = 0; i < UBI_PROT_QUEUE_LEN; i++)
		INIT_LIST_HEAD(&ubi->pq[i]);
	ubi->pq_head = 0;

	list_for_each_entry_safe(seb, tmp, &si->erase, u.list) {
		cond_resched();

		e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
		if (!e)
			goto out_free;

		e->pnum = seb->pnum;
		e->ec = seb->ec;
		ubi->lookuptbl[e->pnum] = e;
		if (schedule_erase(ubi, e, 0)) {
			kmem_cache_free(ubi_wl_entry_slab, e);
			goto out_free;
		}
	}

	list_for_each_entry(seb, &si->free, u.list) {
		cond_resched();

		e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
		if (!e)
			goto out_free;

		e->pnum = seb->pnum;
		e->ec = seb->ec;
		ubi_assert(e->ec >= 0);
		wl_tree_add(e, &ubi->free);
		ubi->lookuptbl[e->pnum] = e;
	}

	list_for_each_entry(seb, &si->corr, u.list) {
		cond_resched();

		e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
		if (!e)
			goto out_free;

		e->pnum = seb->pnum;
		e->ec = seb->ec;
		ubi->lookuptbl[e->pnum] = e;
		if (schedule_erase(ubi, e, 0)) {
			kmem_cache_free(ubi_wl_entry_slab, e);
			goto out_free;
		}
	}

	ubi_rb_for_each_entry(rb1, sv, &si->volumes, rb) {
		ubi_rb_for_each_entry(rb2, seb, &sv->root, u.rb) {
			cond_resched();

			e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
			if (!e)
				goto out_free;

			e->pnum = seb->pnum;
			e->ec = seb->ec;
			ubi->lookuptbl[e->pnum] = e;
			if (!seb->scrub) {
				dbg_wl("add PEB %d EC %d to the used tree",
				       e->pnum, e->ec);
				wl_tree_add(e, &ubi->used);
			} else {
				dbg_wl("add PEB %d EC %d to the scrub tree",
				       e->pnum, e->ec);
				wl_tree_add(e, &ubi->scrub);
			}
		}
	}

	if (ubi->avail_pebs < WL_RESERVED_PEBS) {
		ubi_err("not enough physical eraseblocks (%d, need %d)",
			ubi->avail_pebs, WL_RESERVED_PEBS);
		goto out_free;
	}
	ubi->avail_pebs -= WL_RESERVED_PEBS;
	ubi->rsvd_pebs += WL_RESERVED_PEBS;

	/* Schedule wear-leveling if needed */
	err = ensure_wear_leveling(ubi);
	if (err)
		goto out_free;

	return 0;

out_free:
	cancel_pending(ubi);
	tree_destroy(&ubi->used);
	tree_destroy(&ubi->free);
	tree_destroy(&ubi->scrub);
	kfree(ubi->lookuptbl);
	return err;
}

/**
 * protection_queue_destroy - destroy the protection queue.
 * @ubi: UBI device description object
 */
static void protection_queue_destroy(struct ubi_device *ubi)
{
	int i;
	struct ubi_wl_entry *e, *tmp;

	for (i = 0; i < UBI_PROT_QUEUE_LEN; ++i) {
		list_for_each_entry_safe(e, tmp, &ubi->pq[i], u.list) {
			list_del(&e->u.list);
			kmem_cache_free(ubi_wl_entry_slab, e);
		}
	}
}

/**
 * ubi_wl_close - close the wear-leveling sub-system.
 * @ubi: UBI device description object
 */
void ubi_wl_close(struct ubi_device *ubi)
{
	dbg_wl("close the WL sub-system");
	cancel_pending(ubi);
	protection_queue_destroy(ubi);
	tree_destroy(&ubi->used);
	tree_destroy(&ubi->free);
	tree_destroy(&ubi->scrub);
	kfree(ubi->lookuptbl);
}

#ifdef CONFIG_MTD_UBI_DEBUG_PARANOID

/**
 * paranoid_check_ec - make sure that the erase counter of a PEB is correct.
 * @ubi: UBI device description object
 * @pnum: the physical eraseblock number to check
 * @ec: the erase counter to check
 *
 * This function returns zero if the erase counter of physical eraseblock @pnum
 * is equivalent to @ec, %1 if not, and a negative error code if an error
 * occurred.
 */
static int paranoid_check_ec(struct ubi_device *ubi, int pnum, int ec)
{
	int err;
	long long read_ec;
	struct ubi_ec_hdr *ec_hdr;

	ec_hdr = kzalloc(ubi->ec_hdr_alsize, GFP_NOFS);
	if (!ec_hdr)
		return -ENOMEM;

	err = ubi_io_read_ec_hdr(ubi, pnum, ec_hdr, 0);
	if (err && err != UBI_IO_BITFLIPS) {
		/* The header does not have to exist */
		err = 0;
		goto out_free;
	}

	read_ec = be64_to_cpu(ec_hdr->ec);
	if (ec != read_ec) {
		ubi_err("paranoid check failed for PEB %d", pnum);
		ubi_err("read EC is %lld, should be %d", read_ec, ec);
		ubi_dbg_dump_stack();
		err = 1;
	} else
		err = 0;

out_free:
	kfree(ec_hdr);
	return err;
}

/**
 * paranoid_check_in_wl_tree - check that wear-leveling entry is in WL RB-tree.
 * @e: the wear-leveling entry to check
 * @root: the root of the tree
 *
 * This function returns zero if @e is in the @root RB-tree and %1 if it is
 * not.
 */
static int paranoid_check_in_wl_tree(struct ubi_wl_entry *e,
				     struct rb_root *root)
{
	if (in_wl_tree(e, root))
		return 0;

	ubi_err("paranoid check failed for PEB %d, EC %d, RB-tree %p ",
		e->pnum, e->ec, root);
	ubi_dbg_dump_stack();
	return 1;
}

/**
 * paranoid_check_in_pq - check if wear-leveling entry is in the protection
 *                        queue.
 * @ubi: UBI device description object
 * @e: the wear-leveling entry to check
 *
 * This function returns zero if @e is in @ubi->pq and %1 if it is not.
 */
static int paranoid_check_in_pq(struct ubi_device *ubi, struct ubi_wl_entry *e)
{
	struct ubi_wl_entry *p;
	int i;

	for (i = 0; i < UBI_PROT_QUEUE_LEN; ++i)
		list_for_each_entry(p, &ubi->pq[i], u.list)
			if (p == e)
				return 0;

	ubi_err("paranoid check failed for PEB %d, EC %d, Protect queue",
		e->pnum, e->ec);
	ubi_dbg_dump_stack();
	return 1;
}
#endif /* CONFIG_MTD_UBI_DEBUG_PARANOID */