	Reference for various scheduler-related methods in the O(1) scheduler
		Robert Love <rml@tech9.net>, MontaVista Software


Note most of these methods are local to kernel/sched.c - this is by design.
The scheduler is meant to be self-contained and abstracted away.  This document
is primarily for understanding the scheduler, not for interfacing to it.  Some
of the discussed interfaces, however, are general process/scheduling methods.
They are typically defined in include/linux/sched.h.


Main Scheduling Methods
-----------------------

void load_balance(runqueue_t *this_rq, int idle)
	Attempts to pull tasks from one cpu to another to balance cpu usage,
	if needed.  This method is called explicitly if the runqueues are
	imbalanced, or periodically by the timer tick.  Prior to calling,
	the current runqueue must be locked and interrupts disabled.

void schedule()
	The main scheduling function.  Upon return, the highest priority
	runnable process will be active.
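To make the schedule() contract concrete, here is a minimal sketch of the
classic sleep/wake idiom built on top of it.  The wait queue name, the wake-up
condition, and the helper function are purely illustrative, not symbols from
kernel/sched.c:

	/* Sketch only: a task sleeps until some condition becomes true.
	 * my_wait, my_condition and wait_for_condition() are hypothetical. */
	#include <linux/sched.h>
	#include <linux/wait.h>

	static DECLARE_WAIT_QUEUE_HEAD(my_wait);
	static int my_condition;

	static void wait_for_condition(void)
	{
		DECLARE_WAITQUEUE(wait, current);

		add_wait_queue(&my_wait, &wait);
		for (;;) {
			/* Mark ourselves non-runnable before testing the
			 * condition, so a wake-up cannot be lost in between. */
			set_current_state(TASK_INTERRUPTIBLE);
			if (my_condition)
				break;
			schedule();	/* highest priority runnable task runs */
		}
		set_current_state(TASK_RUNNING);
		remove_wait_queue(&my_wait, &wait);
	}

When another context later calls wake_up() on the same wait queue, the sleeper
becomes runnable again and a subsequent schedule() may select it.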


Locking
-------

Each runqueue has its own lock, rq->lock.  When multiple runqueues need
to be locked, lock acquires must be ordered by ascending &runqueue value.

A specific runqueue is locked via

	task_rq_lock(task_t *p, unsigned long *flags)

which disables preemption, disables interrupts, and locks the runqueue the
given task is running on, returning a pointer to that runqueue.  Likewise,

	task_rq_unlock(runqueue_t *rq, unsigned long *flags)

unlocks the given runqueue, restores interrupts to their previous
state, and reenables preemption.
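Put together, a routine inside kernel/sched.c that needs to examine a task
under its runqueue lock might look roughly like the sketch below.  The helper
name is hypothetical, and the exact lock/unlock signatures are internal and
may differ between kernel versions:

	/* Sketch only: inspect a task while holding its runqueue lock.
	 * inspect_task() is a hypothetical helper, local to kernel/sched.c. */
	static void inspect_task(task_t *p)
	{
		unsigned long flags;
		runqueue_t *rq;

		rq = task_rq_lock(p, &flags);	/* preemption and IRQs now off */

		/* ... examine or modify p's runqueue-related state here ... */

		task_rq_unlock(rq, &flags);	/* IRQ state, preemption restored */
	}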

The routines

	double_rq_lock(runqueue_t *rq1, runqueue_t *rq2)

and

	double_rq_unlock(runqueue_t *rq1, runqueue_t *rq2)

safely lock and unlock, respectively, the two specified runqueues.  They do
not, however, disable and restore interrupts.  Callers are required to do so
manually before and after these calls.
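A sketch of a caller that honours this requirement is shown below; the
function name and the work done between the locks are illustrative only:

	/* Sketch only: operate on two runqueues at once, e.g. when migrating
	 * tasks.  double_rq_lock() orders the two acquires internally, but
	 * the caller is responsible for the interrupt state. */
	static void operate_on_two_rqs(runqueue_t *rq1, runqueue_t *rq2)
	{
		unsigned long flags;

		local_irq_save(flags);		/* interrupts off, state saved */
		double_rq_lock(rq1, rq2);

		/* ... move tasks between rq1 and rq2 here ... */

		double_rq_unlock(rq1, rq2);
		local_irq_restore(flags);	/* previous interrupt state back */
	}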


Values
------

MAX_PRIO
	The maximum priority of the system, stored in the task as task->prio.
	A lower value means a higher priority.  Normal (non-RT) priorities
	range from MAX_RT_PRIO to (MAX_PRIO - 1).
MAX_RT_PRIO
	The maximum real-time priority of the system.  Valid RT priorities
	range from 0 to (MAX_RT_PRIO - 1).
MAX_USER_RT_PRIO
	The maximum real-time priority that is exported to user-space.  Should
	always be equal to or less than MAX_RT_PRIO.  Setting it less allows
	kernel threads to have higher priorities than any user-space task.
MIN_TIMESLICE
MAX_TIMESLICE
	Respectively, the minimum and maximum timeslices (quanta) of a process.
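The relationships between these ranges can be made concrete with a small
sketch; it only restates the ranges above (lower value = higher priority), and
the helper names are hypothetical:

	/* Sketch: classifying a task->prio value against the ranges above. */
	static int prio_is_realtime(int prio)
	{
		return prio >= 0 && prio < MAX_RT_PRIO;	/* 0 .. MAX_RT_PRIO-1 */
	}

	static int prio_is_normal(int prio)
	{
		return prio >= MAX_RT_PRIO && prio < MAX_PRIO;	/* normal range */
	}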

Data
----

struct runqueue
	The main per-CPU runqueue data structure.
struct task_struct
	The main per-process data structure.


General Methods
---------------

cpu_rq(cpu)
	Returns the runqueue of the specified cpu.
this_rq()
	Returns the runqueue of the current cpu.
task_rq(task)
	Returns the runqueue of the cpu on which the given task resides.
cpu_curr(cpu)
	Returns the task currently running on the given cpu.
rt_task(task)
	Returns true if the given task is real-time, false if not.
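These accessors compose naturally.  The debugging helper below is a
hypothetical sketch (the helper name and printk are not from kernel/sched.c),
shown only to illustrate how the macros fit together:

	/* Sketch only: report where a task lives and what a cpu is running.
	 * These macros are local to kernel/sched.c, so this would live there. */
	static void show_placement(task_t *p, int cpu)
	{
		runqueue_t *rq = task_rq(p);		/* runqueue p belongs to */
		task_t *curr = cpu_curr(cpu);		/* task running on cpu */

		printk(KERN_DEBUG "%s%s is on rq %p; cpu %d (rq %p) runs %s\n",
		       p->comm, rt_task(p) ? " [rt]" : "", rq,
		       cpu, cpu_rq(cpu), curr->comm);
	}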


Process Control Methods
-----------------------

void set_user_nice(task_t *p, long nice)
	Sets the "nice" value of task p to the given value.
int setscheduler(pid_t pid, int policy, struct sched_param *param)
	Sets the scheduling policy and parameters for the given pid.
int set_cpus_allowed(task_t *p, unsigned long new_mask)
	Sets a given task's CPU affinity and migrates it to a proper cpu.
	Callers must have a valid reference to the task and ensure the
	task does not exit prematurely.  No locks can be held during the
	call.  (A usage sketch appears at the end of this section.)
set_task_state(tsk, state_value)
	Sets the given task's state to the given value.
set_current_state(state_value)
	Sets the current task's state to the given value.
void set_tsk_need_resched(struct task_struct *tsk)
	Sets need_resched in the given task.
void clear_tsk_need_resched(struct task_struct *tsk)
	Clears need_resched in the given task.
void set_need_resched()
	Sets need_resched in the current task.
void clear_need_resched()
	Clears need_resched in the current task.
int need_resched()
	Returns true if need_resched is set in the current task, false
	otherwise.
yield()
	Places the current process at the end of its runqueue and calls
	schedule().
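Tying several of these together, a kernel thread might pin itself to a cpu,
lower its priority, and give the processor back when a reschedule is pending.
The sketch below is illustrative only: the thread function, its stop condition
and its work item are hypothetical, and the cpu mask is just an example value:

	/* Sketch only: my_thread(), my_should_stop() and do_work() are
	 * hypothetical; the pattern shows the calls documented above. */
	static int my_thread(void *unused)
	{
		set_cpus_allowed(current, 1UL << 0);	/* run on cpu 0 only */
		set_user_nice(current, 10);		/* below default priority */

		while (!my_should_stop()) {
			do_work();
			if (need_resched())
				schedule();	/* give the cpu back if asked */
		}
		return 0;
	}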