/*P:010
 * A hypervisor allows multiple Operating Systems to run on a single machine.
 * To quote David Wheeler: "Any problem in computer science can be solved with
 * another layer of indirection."
 *
 * We keep things simple in two ways.  First, we start with a normal Linux
 * kernel and insert a module (lg.ko) which allows us to run other Linux
 * kernels the same way we'd run processes.  We call the first kernel the Host,
 * and the others the Guests.  The program which sets up and configures Guests
 * (such as the example in Documentation/lguest/lguest.c) is called the
 * Launcher.
 *
 * Secondly, we only run specially modified Guests, not normal kernels: setting
 * CONFIG_LGUEST_GUEST to "y" compiles this file into the kernel so it knows
 * how to be a Guest at boot time.  This means that you can use the same kernel
 * you boot normally (ie. as a Host) as a Guest.
 *
 * These Guests know that they cannot do privileged operations, such as disable
 * interrupts, and that they have to ask the Host to do such things explicitly.
 * This file consists of all the replacements for such low-level native
 * hardware operations: these special Guest versions call the Host.
 *
 * So how does the kernel know it's a Guest?  We'll see that later, but let's
 * just say that we end up here where we replace the native functions in the
 * various "paravirt" structures with our Guest versions, then boot like normal.
:*/

/*
 * Copyright (C) 2006, Rusty Russell <rusty@rustcorp.com.au> IBM Corporation.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
 * NON INFRINGEMENT.  See the GNU General Public License for more
 * details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */
#include <linux/kernel.h>
#include <linux/start_kernel.h>
#include <linux/string.h>
#include <linux/console.h>
#include <linux/screen_info.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/clocksource.h>
#include <linux/clockchips.h>
#include <linux/lguest.h>
#include <linux/lguest_launcher.h>
#include <linux/virtio_console.h>
#include <linux/pm.h>
#include <asm/apic.h>
#include <asm/lguest.h>
#include <asm/paravirt.h>
#include <asm/param.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/desc.h>
#include <asm/setup.h>
#include <asm/e820.h>
#include <asm/mce.h>
#include <asm/io.h>
#include <asm/i387.h>
#include <asm/stackprotector.h>
#include <asm/reboot.h>		/* for struct machine_ops */

/*G:010 Welcome to the Guest!
 *
 * The Guest in our tale is a simple creature: identical to the Host but
 * behaving in simplified but equivalent ways.  In particular, the Guest is the
 * same kernel as the Host (or at least, built from the same source code).
:*/

struct lguest_data lguest_data = {
	.hcall_status = { [0 ... LHCALL_RING_SIZE-1] = 0xFF },
	.noirq_start = (u32)lguest_noirq_start,
	.noirq_end = (u32)lguest_noirq_end,
	.kernel_address = PAGE_OFFSET,
	.blocked_interrupts = { 1 }, /* Block timer interrupts */
	.syscall_vec = SYSCALL_VECTOR,
};

/*G:037
 * async_hcall() is pretty simple: I'm quite proud of it really.  We have a
 * ring buffer of stored hypercalls which the Host will run through next time
 * we do a normal hypercall.  Each entry in the ring has 5 slots for the
 * hypercall arguments, and a "hcall_status" word which is 0 if the call is
 * ready to go, and 255 once the Host has finished with it.
 *
 * If we come around to a slot which hasn't been finished, then the table is
 * full and we just make the hypercall directly.  This has the nice side
 * effect of causing the Host to run all the stored calls in the ring buffer
 * which empties it for next time!
 */
static void async_hcall(unsigned long call, unsigned long arg1,
			unsigned long arg2, unsigned long arg3,
			unsigned long arg4)
{
	/* Note: This code assumes we're uniprocessor. */
	static unsigned int next_call;
	unsigned long flags;

	/*
	 * Disable interrupts if not already disabled: we don't want an
	 * interrupt handler making a hypercall while we're already doing
	 * one!
	 */
	local_irq_save(flags);
	if (lguest_data.hcall_status[next_call] != 0xFF) {
		/* Table full, so do normal hcall which will flush table. */
		hcall(call, arg1, arg2, arg3, arg4);
	} else {
		lguest_data.hcalls[next_call].arg0 = call;
		lguest_data.hcalls[next_call].arg1 = arg1;
		lguest_data.hcalls[next_call].arg2 = arg2;
		lguest_data.hcalls[next_call].arg3 = arg3;
		lguest_data.hcalls[next_call].arg4 = arg4;
		/* Arguments must all be written before we mark it to go */
		wmb();
		lguest_data.hcall_status[next_call] = 0;
		if (++next_call == LHCALL_RING_SIZE)
			next_call = 0;
	}
	local_irq_restore(flags);
}
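
/*
 * For illustration only (this is the Guest's half; the matching Host-side
 * code lives in drivers/lguest/, not here): when the Host next gets a real
 * hypercall it walks this same ring and, for every slot whose status word is
 * 0, performs the stored call and writes 0xFF back, roughly:
 *
 *	for (i = 0; i < LHCALL_RING_SIZE; i++)
 *		if (hcall_status[i] == 0) {
 *			perform hcalls[i];
 *			hcall_status[i] = 0xFF;
 *		}
 */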

/*G:035
 * Notice the lazy_hcall() above, rather than hcall().  This is our first real
 * optimization trick!
 *
 * When lazy_mode is set, it means we're allowed to defer all hypercalls and do
 * them as a batch when lazy_mode is eventually turned off.  Because hypercalls
 * are reasonably expensive, batching them up makes sense.  For example, a
 * large munmap might update dozens of page table entries: that code calls
 * paravirt_enter_lazy_mmu(), does the dozen updates, then calls
 * lguest_leave_lazy_mode().
 *
 * So, when we're in lazy mode, we call async_hcall() to store the call for
 * future processing:
 */
static void lazy_hcall1(unsigned long call, unsigned long arg1)
{
	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE)
		hcall(call, arg1, 0, 0, 0);
	else
		async_hcall(call, arg1, 0, 0, 0);
}

/* You can imagine what lazy_hcall2, 3 and 4 look like. :*/
static void lazy_hcall2(unsigned long call,
			unsigned long arg1,
			unsigned long arg2)
{
	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE)
		hcall(call, arg1, arg2, 0, 0);
	else
		async_hcall(call, arg1, arg2, 0, 0);
}

static void lazy_hcall3(unsigned long call,
			unsigned long arg1,
			unsigned long arg2,
			unsigned long arg3)
{
	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE)
		hcall(call, arg1, arg2, arg3, 0);
	else
		async_hcall(call, arg1, arg2, arg3, 0);
}

#ifdef CONFIG_X86_PAE
static void lazy_hcall4(unsigned long call,
			unsigned long arg1,
			unsigned long arg2,
			unsigned long arg3,
			unsigned long arg4)
{
	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE)
		hcall(call, arg1, arg2, arg3, arg4);
	else
		async_hcall(call, arg1, arg2, arg3, arg4);
}
#endif
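
/*
 * To illustrate the batching (a sketch of the caller's side, using the
 * generic lazy-MMU helpers; nothing below is specific to lguest):
 *
 *	arch_enter_lazy_mmu_mode();
 *	for (addr = start; addr < end; addr += PAGE_SIZE)
 *		set_pte_at(mm, addr, ptep++, pteval);	(each one is queued)
 *	arch_leave_lazy_mmu_mode();			(flushes the batch)
 *
 * Every set_pte_at() above becomes a lazy_hcall*() stored in the ring, and
 * the flush at the end sends them all to the Host in one go.
 */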

/*G:036
 * When lazy mode is turned off, reset the per-cpu lazy mode variable and then
 * issue the do-nothing hypercall to flush any stored calls.
:*/
static void lguest_leave_lazy_mmu_mode(void)
{
	hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0, 0);
	paravirt_leave_lazy_mmu();
}

static void lguest_end_context_switch(struct task_struct *next)
{
	hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0, 0);
	paravirt_end_context_switch(next);
}

/*G:032
 * After that diversion we return to our first native-instruction
 * replacements: four functions for interrupt control.
 *
 * The simplest way of implementing these would be to have "turn interrupts
 * off" and "turn interrupts on" hypercalls.  Unfortunately, this is too slow:
 * these are by far the most commonly called functions of those we override.
 *
 * So instead we keep an "irq_enabled" field inside our "struct lguest_data",
 * which the Guest can update with a single instruction.  The Host knows to
 * check there before it tries to deliver an interrupt.
 */

/*
 * save_flags() is expected to return the processor state (ie. "flags").  The
 * flags word contains all kinds of stuff, but in practice Linux only cares
 * about the interrupt flag.  Our "save_flags()" just returns that.
 */
static unsigned long save_fl(void)
{
	return lguest_data.irq_enabled;
}

/* Interrupts go off... */
static void irq_disable(void)
{
	lguest_data.irq_enabled = 0;
}

/*
 * Let's pause a moment.  Remember how I said these are called so often?
 * Jeremy Fitzhardinge optimized them so hard early in 2009 that he had to
 * break some rules.  In particular, these functions are assumed to save their
 * own registers if they need to: normal C functions assume they can trash the
 * eax register.  To use normal C functions, we use
 * PV_CALLEE_SAVE_REGS_THUNK(), which pushes %eax onto the stack, calls the
 * C function, then restores it.
 */
PV_CALLEE_SAVE_REGS_THUNK(save_fl);
PV_CALLEE_SAVE_REGS_THUNK(irq_disable);
/*:*/

/* These are in i386_head.S */
extern void lg_irq_enable(void);
extern void lg_restore_fl(unsigned long flags);

/*M:003
 * We could be more efficient in our checking of outstanding interrupts, rather
 * than using a branch.  One way would be to put the "irq_enabled" field in a
 * page by itself, and have the Host write-protect it when an interrupt comes
 * in when irqs are disabled.  There will then be a page fault as soon as
 * interrupts are re-enabled.
 *
 * A better method is to implement soft interrupt disable generally for x86:
 * instead of disabling interrupts, we set a flag.  If an interrupt does come
 * in, we then disable them for real.  This is uncommon, so we could simply use
 * a hypercall for interrupt control and not worry about efficiency.
:*/
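
/*
 * A sketch of that "soft disable" idea, purely for illustration (nothing in
 * this file implements it, and the names below are made up):
 *
 *	soft_irqs_disabled = 1;			(our "cli")
 *	... critical section ...
 *	soft_irqs_disabled = 0;			(our "sti")
 *	if (an interrupt arrived meanwhile)	(the interrupt path noted it
 *		deal with it now;		 and disabled irqs for real)
 */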

/*G:034
 * The Interrupt Descriptor Table (IDT).
 *
 * The IDT tells the processor what to do when an interrupt comes in.  Each
 * entry in the table is a 64-bit descriptor: this holds the privilege level,
 * address of the handler, and... well, who cares?  The Guest just asks the
 * Host to make the change anyway, because the Host controls the real IDT.
 */
static void lguest_write_idt_entry(gate_desc *dt,
				   int entrynum, const gate_desc *g)
{
	/*
	 * The gate_desc structure is 8 bytes long: we hand it to the Host in
	 * two 32-bit chunks.  The whole 32-bit kernel used to hand descriptors
	 * around like this; typesafety wasn't a big concern in Linux's early
	 * years.
	 */
	u32 *desc = (u32 *)g;
	/* Keep the local copy up to date. */
	native_write_idt_entry(dt, entrynum, g);
	/* Tell Host about this new entry. */
	hcall(LHCALL_LOAD_IDT_ENTRY, entrynum, desc[0], desc[1], 0);
}

/*
 * Changing to a different IDT is very rare: we keep the IDT up-to-date every
 * time it is written, so we can simply loop through all entries and tell the
 * Host about them.
 */
static void lguest_load_idt(const struct desc_ptr *desc)
{
	unsigned int i;
	struct desc_struct *idt = (void *)desc->address;

	for (i = 0; i < (desc->size+1)/8; i++)
		hcall(LHCALL_LOAD_IDT_ENTRY, i, idt[i].a, idt[i].b, 0);
}

/*
 * The Global Descriptor Table.
 *
 * The Intel architecture defines another table, called the Global Descriptor
 * Table (GDT).  You tell the CPU where it is (and its size) using the "lgdt"
 * instruction, and then several other instructions refer to entries in the
 * table.  There are three entries which the Switcher needs, so the Host simply
 * controls the entire thing and the Guest asks it to make changes using the
 * LOAD_GDT hypercall.
 *
 * This is exactly like the IDT code.
 */
static void lguest_load_gdt(const struct desc_ptr *desc)
{
	unsigned int i;
	struct desc_struct *gdt = (void *)desc->address;

	for (i = 0; i < (desc->size+1)/8; i++)
		hcall(LHCALL_LOAD_GDT_ENTRY, i, gdt[i].a, gdt[i].b, 0);
}

/*
 * For a single GDT entry which changes, we simply change our copy and
 * then tell the host about it.
 */
static void lguest_write_gdt_entry(struct desc_struct *dt, int entrynum,
				   const void *desc, int type)
{
	native_write_gdt_entry(dt, entrynum, desc, type);
	/* Tell Host about this new entry. */
	hcall(LHCALL_LOAD_GDT_ENTRY, entrynum,
	      dt[entrynum].a, dt[entrynum].b, 0);
}

/*
 * There are three "thread local storage" GDT entries which change
 * on every context switch (these three entries are how glibc implements
 * __thread variables).  As an optimization, we have a hypercall
 * specifically for this case.
 *
 * Wouldn't it be nicer to have a general LOAD_GDT_ENTRIES hypercall
 * which took a range of entries?
 */
static void lguest_load_tls(struct thread_struct *t, unsigned int cpu)
{
	/*
	 * There's one problem which normal hardware doesn't have: the Host
	 * can't handle us removing entries we're currently using.  So we clear
	 * the GS register here: if it's needed it'll be reloaded anyway.
	 */
	lazy_load_gs(0);
	lazy_hcall2(LHCALL_LOAD_TLS, __pa(&t->tls_array), cpu);
}

/*G:038
 * That's enough excitement for now, back to ploughing through each of the
 * different pv_ops structures (we're about 1/3 of the way through).
 *
 * This is the Local Descriptor Table, another weird Intel thingy.  Linux only
 * uses this for some strange applications like Wine.  We don't do anything
 * here, so they'll get an informative and friendly Segmentation Fault.
 */
static void lguest_set_ldt(const void *addr, unsigned entries)
{
}

/*
 * This loads a GDT entry into the "Task Register": that entry points to a
 * structure called the Task State Segment.  Some comments scattered through
 * the kernel code indicate that this was used for task switching in ages past,
 * along with blood sacrifice and astrology.
 *
 * Now there's nothing interesting in here that we don't get told elsewhere.
 * But the native version uses the "ltr" instruction, which makes the Host
 * complain to the Guest about a Segmentation Fault and it'll oops.  So we
 * override the native version with a do-nothing version.
 */
static void lguest_load_tr_desc(void)
{
}

/*
 * The "cpuid" instruction is a way of querying both the CPU identity
 * (manufacturer, model, etc) and its features.  It was introduced before the
 * Pentium in 1993 and keeps getting extended by Intel, AMD and others.
 * As you might imagine, after a decade and a half of this treatment, it is
 * now a giant ball of hair.  Its entry in the current Intel manual runs to
 * 28 pages.
 *
 * This instruction even has its own Wikipedia entry.  The Wikipedia entry
 * has been translated into 5 languages.  I am not making this up!
 *
 * We could get funky here and identify ourselves as "GenuineLguest", but
 * instead we just use the real "cpuid" instruction.  Then I pretty much turned
 * off feature bits until the Guest booted.  (Don't say that: you'll damage
 * lguest sales!)  Shut up, inner voice!  (Hey, just pointing out that this is
 * hardly future proof.)  No one's listening!  They don't like you anyway,
 * parenthetic weirdo!
 *
 * Replacing the cpuid so we can turn features off is great for the kernel, but
 * anyone (including userspace) can just use the raw "cpuid" instruction and
 * the Host won't even notice since it isn't privileged.  So we try not to get
 * too worked up about it.
 */
static void lguest_cpuid(unsigned int *ax, unsigned int *bx,
			 unsigned int *cx, unsigned int *dx)
{
	int function = *ax;

	native_cpuid(ax, bx, cx, dx);
	switch (function) {
	/*
	 * CPUID 0 gives the highest legal CPUID number (and the ID string).
	 * We futureproof our code a little by sticking to known CPUID values.
	 */
	case 0:
		if (*ax > 5)
			*ax = 5;
		break;

	/*
	 * CPUID 1 is a basic feature request.
	 *
	 * CX: we only allow kernel to see SSE3, CMPXCHG16B and SSSE3
	 * DX: SSE, SSE2, FXSR, MMX, CMOV, CMPXCHG8B, TSC, FPU and PAE.
	 */
	case 1:
		*cx &= 0x00002201;
		*dx &= 0x07808151;
		/*
		 * The Host can do a nice optimization if it knows that the
		 * kernel mappings (addresses above 0xC0000000 or whatever
		 * PAGE_OFFSET is set to) haven't changed.  But Linux calls
		 * flush_tlb_user() for both user and kernel mappings unless
		 * the Page Global Enable (PGE) feature bit is set.
		 */
		*dx |= 0x00002000;
		/*
		 * We also lie, and say we're family id 5.  6 or greater
		 * leads to a rdmsr in early_init_intel which we can't handle.
		 * Family ID is returned as bits 8-12 in ax.
		 */
		*ax &= 0xFFFFF0FF;
		*ax |= 0x00000500;
		break;
	/*
	 * 0x80000000 returns the highest Extended Function, so we futureproof
	 * like we do above by limiting it to known fields.
	 */
	case 0x80000000:
		if (*ax > 0x80000008)
			*ax = 0x80000008;
		break;

	/*
	 * PAE systems can mark pages as non-executable.  Linux calls this the
	 * NX bit.  Intel calls it XD (eXecute Disable), AMD EVP (Enhanced
	 * Virus Protection).  We just turn it off here, since we don't
	 * support it.
	 */
	case 0x80000001:
		*dx &= ~(1 << 20);
		break;
	}
}

/*
 * Intel has four control registers, imaginatively named cr0, cr2, cr3 and cr4.
 * I assume there's a cr1, but it hasn't bothered us yet, so we'll not bother
 * it.  The Host needs to know when the Guest wants to change them, so we have
 * a whole series of functions like read_cr0() and write_cr0().
 *
 * We start with cr0.  cr0 allows you to turn on and off all kinds of basic
 * features, but Linux only really cares about one: the horrifically-named Task
 * Switched (TS) bit at bit 3 (ie. 8)
 *
 * What does the TS bit do?  Well, it causes the CPU to trap (interrupt 7) if
 * the floating point unit is used.  Which allows us to restore FPU state
 * lazily after a task switch, and Linux uses that gratefully, but wouldn't a
 * name like "FPUTRAP bit" be a little less cryptic?
 *
 * We store cr0 locally because the Host never changes it.  The Guest sometimes
 * wants to read it and we'd prefer not to bother the Host unnecessarily.
 */
static unsigned long current_cr0;
static void lguest_write_cr0(unsigned long val)
{
	lazy_hcall1(LHCALL_TS, val & X86_CR0_TS);
	current_cr0 = val;
}

static unsigned long lguest_read_cr0(void)
{
	return current_cr0;
}

/*
 * Intel provided a special instruction to clear the TS bit for people too cool
 * to use write_cr0() to do it.  This "clts" instruction is faster, because all
 * the vowels have been optimized out.
 */
static void lguest_clts(void)
{
	lazy_hcall1(LHCALL_TS, 0);
	current_cr0 &= ~X86_CR0_TS;
}

/*
 * cr2 is the virtual address of the last page fault, which the Guest only ever
 * reads.  The Host kindly writes this into our "struct lguest_data", so we
 * just read it out of there.
 */
static unsigned long lguest_read_cr2(void)
{
	return lguest_data.cr2;
}

/* See lguest_set_pte() below. */
static bool cr3_changed = false;

/*
 * cr3 is the current toplevel pagetable page: the principle is the same as
 * cr0.  Keep a local copy, and tell the Host when it changes.  The only
 * difference is that our local copy is in lguest_data because the Host needs
 * to set it upon our initial hypercall.
 */
static void lguest_write_cr3(unsigned long cr3)
{
	lguest_data.pgdir = cr3;
	lazy_hcall1(LHCALL_NEW_PGTABLE, cr3);

	/* These two page tables are simple, linear, and used during boot */
	if (cr3 != __pa(swapper_pg_dir) && cr3 != __pa(initial_page_table))
		cr3_changed = true;
}

static unsigned long lguest_read_cr3(void)
{
	return lguest_data.pgdir;
}

/* cr4 is used to enable and disable PGE, but we don't care. */
static unsigned long lguest_read_cr4(void)
{
	return 0;
}

static void lguest_write_cr4(unsigned long val)
{
}

/*
 * Page Table Handling.
 *
 * Now would be a good time to take a rest and grab a coffee or similarly
 * relaxing stimulant.  The easy parts are behind us, and the trek gradually
 * winds uphill from here.
 *
 * Quick refresher: memory is divided into "pages" of 4096 bytes each.  The CPU
 * maps virtual addresses to physical addresses using "page tables".  We could
 * use one huge index of 1 million entries: each address is 4 bytes, so that's
 * 1024 pages just to hold the page tables.  But since most virtual addresses
 * are unused, we use a two level index which saves space.  The cr3 register
 * contains the physical address of the top level "page directory" page, which
 * contains physical addresses of up to 1024 second-level pages.  Each of these
 * second level pages contains up to 1024 physical addresses of actual pages,
 * or Page Table Entries (PTEs).
 *
 * Here's a diagram, where arrows indicate physical addresses:
 *
 * cr3 ---> +---------+
 *	    |  	   --------->+---------+
 *	    |	      |	     | PADDR1  |
 *	  Mid-level   |	     | PADDR2  |
 *	  (PMD) page  |	     | 	       |
 *	    |	      |	   Lower-level |
 *	    |	      |	   (PTE) page  |
 *	    |	      |	     |	       |
 *	      ....    	     	 ....
 *
 * So to convert a virtual address to a physical address, we look up the top
 * level, which points us to the second level, which gives us the physical
 * address of that page.  If the top level entry was not present, or the second
 * level entry was not present, then the virtual address is invalid (we
 * say "the page was not mapped").
 *
 * Put another way, a 32-bit virtual address is divided up like so:
 *
 *  1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 * |<---- 10 bits ---->|<---- 10 bits ---->|<------ 12 bits ------>|
 *    Index into top     Index into second      Offset within page
 *  page directory page    pagetable page
 *
 * Now, unfortunately, this isn't the whole story: Intel added Physical Address
 * Extension (PAE) to allow 32 bit systems to use 64GB of memory (ie. 36 bits).
 * These are held in 64-bit page table entries, so we can now only fit 512
 * entries in a page, and the neat three-level tree breaks down.
 *
 * The result is a four level page table:
 *
 * cr3 --> [ 4 Upper  ]
 *	   [   Level  ]
 *	   [  Entries ]
 *	   [(PUD Page)]---> +---------+
 *	 		    |  	   --------->+---------+
 *	 		    |	      |	     | PADDR1  |
 *	 		  Mid-level   |	     | PADDR2  |
 *	 		  (PMD) page  |	     | 	       |
 *	 		    |	      |	   Lower-level |
 *	 		    |	      |	   (PTE) page  |
 *	 		    |	      |	     |	       |
 *	 		      ....    	     	 ....
 *
 *
 * And the virtual address is decoded as:
 *
 *         1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 *      |<-2->|<--- 9 bits ---->|<---- 9 bits --->|<------ 12 bits ------>|
 * Index into    Index into mid    Index into lower    Offset within page
 * top entries   directory page     pagetable page
 *
 * It's too hard to switch between these two formats at runtime, so Linux only
 * supports one or the other depending on whether CONFIG_X86_PAE is set.  Many
 * distributions turn it on, and not just for people with silly amounts of
 * memory: the larger PTE entries allow room for the NX bit, which lets the
 * kernel disable execution of pages and increase security.
 *
 * This was a problem for lguest, which couldn't run on these distributions;
 * then Matias Zabaljauregui figured it all out and implemented it, and only a
 * handful of puppies were crushed in the process!
 *
 * Back to our point: the kernel spends a lot of time changing both the
 * top-level page directory and lower-level pagetable pages.  The Guest doesn't
 * know physical addresses, so while it maintains these page tables exactly
 * like normal, it also needs to keep the Host informed whenever it makes a
 * change: the Host will create the real page tables based on the Guests'.
 */
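
/*
 * A quick worked example (illustration only, nothing below depends on it):
 * take the virtual address 0xC0101234.  Without PAE, the 10/10/12 split gives
 *
 *	top-level index    = 0xC0101234 >> 22		 = 768
 *	second-level index = (0xC0101234 >> 12) & 0x3FF = 257
 *	offset in page	   = 0xC0101234 & 0xFFF	 = 0x234
 *
 * With PAE the same address splits 2/9/9/12 into indices 3, 0 and 257, with
 * the same 0x234 offset.
 */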

/*
 * The Guest calls this after it has set a second-level entry (pte), ie. to map
 * a page into a process' address space.  We tell the Host the toplevel and
 * address this corresponds to.  The Guest uses one pagetable per process, so
 * we need to tell the Host which one we're changing (mm->pgd).
 */
static void lguest_pte_update(struct mm_struct *mm, unsigned long addr,
			       pte_t *ptep)
{
#ifdef CONFIG_X86_PAE
	/* PAE needs to hand a 64 bit page table entry, so it uses two args. */
	lazy_hcall4(LHCALL_SET_PTE, __pa(mm->pgd), addr,
		    ptep->pte_low, ptep->pte_high);
#else
	lazy_hcall3(LHCALL_SET_PTE, __pa(mm->pgd), addr, ptep->pte_low);
#endif
}

/* This is the "set and update" combo-meal-deal version. */
static void lguest_set_pte_at(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, pte_t pteval)
{
	native_set_pte(ptep, pteval);
	lguest_pte_update(mm, addr, ptep);
}

/*
 * The Guest calls lguest_set_pud to set a top-level entry and lguest_set_pmd
 * to set a middle-level entry when PAE is activated.
 *
 * Again, we set the entry then tell the Host which page we changed,
 * and the index of the entry we changed.
 */
#ifdef CONFIG_X86_PAE
static void lguest_set_pud(pud_t *pudp, pud_t pudval)
{
	native_set_pud(pudp, pudval);

	/* 32 bytes aligned pdpt address and the index. */
	lazy_hcall2(LHCALL_SET_PGD, __pa(pudp) & 0xFFFFFFE0,
		   (__pa(pudp) & 0x1F) / sizeof(pud_t));
}

static void lguest_set_pmd(pmd_t *pmdp, pmd_t pmdval)
{
	native_set_pmd(pmdp, pmdval);
	lazy_hcall2(LHCALL_SET_PMD, __pa(pmdp) & PAGE_MASK,
		   (__pa(pmdp) & (PAGE_SIZE - 1)) / sizeof(pmd_t));
}
#else

/* The Guest calls lguest_set_pmd to set a top-level entry when !PAE. */
static void lguest_set_pmd(pmd_t *pmdp, pmd_t pmdval)
{
	native_set_pmd(pmdp, pmdval);
	lazy_hcall2(LHCALL_SET_PGD, __pa(pmdp) & PAGE_MASK,
		   (__pa(pmdp) & (PAGE_SIZE - 1)) / sizeof(pmd_t));
}
#endif

/*
 * There are a couple of legacy places where the kernel sets a PTE, but we
 * don't know the top level any more.  This is useless for us, since we don't
 * know which pagetable is changing or what address, so we just tell the Host
 * to forget all of them.  Fortunately, this is very rare.
 *
 * ... except in early boot when the kernel sets up the initial pagetables,
 * which makes booting astonishingly slow: 48 seconds!  So we don't even tell
 * the Host anything changed until we've done the first real page table switch,
 * which brings boot back to 4.3 seconds.
 */
static void lguest_set_pte(pte_t *ptep, pte_t pteval)
{
	native_set_pte(ptep, pteval);
	if (cr3_changed)
		lazy_hcall1(LHCALL_FLUSH_TLB, 1);
}

#ifdef CONFIG_X86_PAE
/*
 * With 64-bit PTE values, we need to be careful setting them: if we set 32
 * bits at a time, the hardware could see a weird half-set entry.  These
 * versions ensure we update all 64 bits at once.
 */
static void lguest_set_pte_atomic(pte_t *ptep, pte_t pte)
{
	native_set_pte_atomic(ptep, pte);
	if (cr3_changed)
		lazy_hcall1(LHCALL_FLUSH_TLB, 1);
}

static void lguest_pte_clear(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep)
{
	native_pte_clear(mm, addr, ptep);
	lguest_pte_update(mm, addr, ptep);
}

static void lguest_pmd_clear(pmd_t *pmdp)
{
	lguest_set_pmd(pmdp, __pmd(0));
}
#endif
 | 745 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 746 | /* | 
 | 747 |  * Unfortunately for Lguest, the pv_mmu_ops for page tables were based on | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 748 |  * native page table operations.  On native hardware you can set a new page | 
 | 749 |  * table entry whenever you want, but if you want to remove one you have to do | 
 | 750 |  * a TLB flush (a TLB is a little cache of page table entries kept by the CPU). | 
 | 751 |  * | 
 | 752 |  * So the lguest_set_pte_at() and lguest_set_pmd() functions above are only | 
 | 753 |  * called when a valid entry is written, not when it's removed (ie. marked not | 
 | 754 |  * present).  Instead, this is where we come when the Guest wants to remove a | 
 | 755 |  * page table entry: we tell the Host to set that entry to 0 (ie. the present | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 756 |  * bit is zero). | 
 | 757 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 758 | static void lguest_flush_tlb_single(unsigned long addr) | 
 | 759 | { | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 760 | 	/* Simply set it to zero: if the entry was in use, it will fault back in. */ | 
| Matias Zabaljauregui | 4cd8b5e | 2009-03-14 13:37:52 -0200 | [diff] [blame] | 761 | 	lazy_hcall3(LHCALL_SET_PTE, lguest_data.pgdir, addr, 0); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 762 | } | 
 | 763 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 764 | /* | 
 | 765 |  * This is what happens after the Guest has removed a large number of entries. | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 766 |  * This tells the Host that any of the page table entries for userspace might | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 767 |  * have changed, ie. virtual addresses below PAGE_OFFSET. | 
 | 768 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 769 | static void lguest_flush_tlb_user(void) | 
 | 770 | { | 
| Matias Zabaljauregui | 4cd8b5e | 2009-03-14 13:37:52 -0200 | [diff] [blame] | 771 | 	lazy_hcall1(LHCALL_FLUSH_TLB, 0); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 772 | } | 
 | 773 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 774 | /* | 
 | 775 |  * This is called when the kernel page tables have changed.  That's not very | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 776 |  * common (unless the Guest is using highmem, which makes the Guest extremely | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 777 |  * slow), so it's worth separating this from the user flushing above. | 
 | 778 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 779 | static void lguest_flush_tlb_kernel(void) | 
 | 780 | { | 
| Matias Zabaljauregui | 4cd8b5e | 2009-03-14 13:37:52 -0200 | [diff] [blame] | 781 | 	lazy_hcall1(LHCALL_FLUSH_TLB, 1); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 782 | } | 
 | 783 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 784 | /* | 
 | 785 |  * The Unadvanced Programmable Interrupt Controller. | 
 | 786 |  * | 
 | 787 |  * This is an attempt to implement the simplest possible interrupt controller. | 
 | 788 |  * I spent some time looking through routines like set_irq_chip_and_handler, | 
 | 789 |  * set_irq_chip_and_handler_name, set_irq_chip_data and set_phasers_to_stun and | 
 | 790 |  * I *think* this is as simple as it gets. | 
 | 791 |  * | 
 | 792 |  * We can tell the Host what interrupts we want blocked ready for using the | 
 | 793 |  * lguest_data.interrupts bitmap, so disabling (aka "masking") them is as | 
 | 794 |  * simple as setting a bit.  We don't actually "ack" interrupts as such, we | 
 | 795 |  * just mask and unmask them.  I wonder if we should be cleverer? | 
 | 796 |  */ | 
| Thomas Gleixner | fe25c7f | 2010-09-28 14:57:24 +0200 | [diff] [blame] | 797 | static void disable_lguest_irq(struct irq_data *data) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 798 | { | 
| Thomas Gleixner | fe25c7f | 2010-09-28 14:57:24 +0200 | [diff] [blame] | 799 | 	set_bit(data->irq, lguest_data.blocked_interrupts); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 800 | } | 
 | 801 |  | 
| Thomas Gleixner | fe25c7f | 2010-09-28 14:57:24 +0200 | [diff] [blame] | 802 | static void enable_lguest_irq(struct irq_data *data) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 803 | { | 
| Thomas Gleixner | fe25c7f | 2010-09-28 14:57:24 +0200 | [diff] [blame] | 804 | 	clear_bit(data->irq, lguest_data.blocked_interrupts); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 805 | } | 
 | 806 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 807 | /* This structure describes the lguest IRQ controller. */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 808 | static struct irq_chip lguest_irq_controller = { | 
 | 809 | 	.name		= "lguest", | 
| Thomas Gleixner | fe25c7f | 2010-09-28 14:57:24 +0200 | [diff] [blame] | 810 | 	.irq_mask	= disable_lguest_irq, | 
 | 811 | 	.irq_mask_ack	= disable_lguest_irq, | 
 | 812 | 	.irq_unmask	= enable_lguest_irq, | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 813 | }; | 
 | 814 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 815 | /* | 
 | 816 |  * This sets up the Interrupt Descriptor Table (IDT) entry for each hardware | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 817 |  * interrupt (except 128, which is used for system calls), and then tells the | 
 | 818 |  * Linux infrastructure that each interrupt is controlled by our level-based | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 819 |  * lguest interrupt controller. | 
 | 820 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 821 | static void __init lguest_init_IRQ(void) | 
 | 822 | { | 
 | 823 | 	unsigned int i; | 
 | 824 |  | 
| Rusty Russell | 1028375 | 2009-06-12 22:26:59 -0600 | [diff] [blame] | 825 | 	for (i = FIRST_EXTERNAL_VECTOR; i < NR_VECTORS; i++) { | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 826 | 		/* Some systems map "vectors" to interrupts weirdly.  Not us! */ | 
| Rusty Russell | ced05dd | 2011-01-20 21:37:29 -0600 | [diff] [blame] | 827 | 		__this_cpu_write(vector_irq[i], i - FIRST_EXTERNAL_VECTOR); | 
| Rusty Russell | 1028375 | 2009-06-12 22:26:59 -0600 | [diff] [blame] | 828 | 		if (i != SYSCALL_VECTOR) | 
 | 829 | 			set_intr_gate(i, interrupt[i - FIRST_EXTERNAL_VECTOR]); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 830 | 	} | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 831 |  | 
 | 832 | 	/* | 
 | 833 | 	 * This call is required to set up for 4k stacks, where we have | 
 | 834 | 	 * separate stacks for hard and soft interrupts. | 
 | 835 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 836 | 	irq_ctx_init(smp_processor_id()); | 
 | 837 | } | 
 | 838 |  | 
| Rusty Russell | a91d74a | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 839 | /* | 
 | 840 |  * With CONFIG_SPARSE_IRQ, interrupt descriptors are allocated as-needed, so | 
 | 841 |  * rather than set them in lguest_init_IRQ we are called here every time an | 
 | 842 |  * lguest device needs an interrupt. | 
 | 843 |  * | 
| Thomas Gleixner | c2f31c3 | 2010-09-30 12:19:03 +0200 | [diff] [blame] | 844 |  * FIXME: irq_alloc_desc_at() can fail due to lack of memory, we should | 
| Rusty Russell | a91d74a | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 845 |  * pass that up! | 
 | 846 |  */ | 
| Rusty Russell | 6db6a5f | 2009-03-09 10:06:28 -0600 | [diff] [blame] | 847 | void lguest_setup_irq(unsigned int irq) | 
 | 848 | { | 
| Thomas Gleixner | c2f31c3 | 2010-09-30 12:19:03 +0200 | [diff] [blame] | 849 | 	irq_alloc_desc_at(irq, 0); | 
| Thomas Gleixner | 2c77865 | 2011-03-12 12:20:43 +0100 | [diff] [blame] | 850 | 	irq_set_chip_and_handler_name(irq, &lguest_irq_controller, | 
| Rusty Russell | 6db6a5f | 2009-03-09 10:06:28 -0600 | [diff] [blame] | 851 | 				      handle_level_irq, "level"); | 
 | 852 | } | 
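As a hedged usage sketch (example_isr and "example-lg-dev" are made-up names, and <linux/interrupt.h> is assumed to be included): a virtual device that has been told its interrupt number by the Launcher would set the descriptor up and then claim it roughly like this:

	static irqreturn_t example_isr(int irq, void *dev_id)
	{
		/* A real driver would acknowledge its device here. */
		return IRQ_HANDLED;
	}

	static int example_claim_irq(unsigned int irq, void *dev_id)
	{
		/* Make sure the descriptor exists and uses our controller... */
		lguest_setup_irq(irq);
		/* ...then hook our handler up to it. */
		return request_irq(irq, example_isr, IRQF_SHARED,
				   "example-lg-dev", dev_id);
	}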
 | 853 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 854 | /* | 
 | 855 |  * Time. | 
 | 856 |  * | 
 | 857 |  * It would be far better for everyone if the Guest had its own clock, but | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 858 |  * until then the Host gives us the time on every interrupt. | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 859 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 860 | static unsigned long lguest_get_wallclock(void) | 
 | 861 | { | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 862 | 	return lguest_data.time.tv_sec; | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 863 | } | 
 | 864 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 865 | /* | 
 | 866 |  * The TSC is an Intel thing called the Time Stamp Counter.  The Host tells us | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 867 |  * what speed it runs at, or 0 if it's unusable as a reliable clock source. | 
 | 868 |  * This matches what we want here: if we return 0 from this function, the x86 | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 869 |  * TSC clock will give up and not register itself. | 
 | 870 |  */ | 
| Alok Kataria | e93ef94 | 2008-07-01 11:43:36 -0700 | [diff] [blame] | 871 | static unsigned long lguest_tsc_khz(void) | 
| Rusty Russell | 3fabc55 | 2008-03-11 09:35:56 -0500 | [diff] [blame] | 872 | { | 
 | 873 | 	return lguest_data.tsc_khz; | 
 | 874 | } | 
 | 875 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 876 | /* | 
 | 877 |  * If we can't use the TSC, the kernel falls back to our lower-priority | 
 | 878 |  * "lguest_clock", where we read the time value given to us by the Host. | 
 | 879 |  */ | 
| Magnus Damm | 8e19608 | 2009-04-21 12:24:00 -0700 | [diff] [blame] | 880 | static cycle_t lguest_clock_read(struct clocksource *cs) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 881 | { | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 882 | 	unsigned long sec, nsec; | 
 | 883 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 884 | 	/* | 
 | 885 | 	 * Since the time is in two parts (seconds and nanoseconds), we risk | 
| Rusty Russell | 3fabc55 | 2008-03-11 09:35:56 -0500 | [diff] [blame] | 886 | 	 * reading it just as it's changing from 99 & 0.999999999 to 100 and 0, | 
 | 887 | 	 * and getting 99 and 0.  As Linux tends to come apart under the stress | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 888 | 	 * of time travel, we must be careful: | 
 | 889 | 	 */ | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 890 | 	do { | 
 | 891 | 		/* First we read the seconds part. */ | 
 | 892 | 		sec = lguest_data.time.tv_sec; | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 893 | 		/* | 
 | 894 | 		 * This read memory barrier tells the compiler and the CPU that | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 895 | 		 * this can't be reordered: we have to complete the above | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 896 | 		 * before going on. | 
 | 897 | 		 */ | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 898 | 		rmb(); | 
 | 899 | 		/* Now we read the nanoseconds part. */ | 
 | 900 | 		nsec = lguest_data.time.tv_nsec; | 
 | 901 | 		/* Make sure we've done that. */ | 
 | 902 | 		rmb(); | 
 | 903 | 		/* Now if the seconds part has changed, try again. */ | 
 | 904 | 	} while (unlikely(lguest_data.time.tv_sec != sec)); | 
 | 905 |  | 
| Rusty Russell | 3fabc55 | 2008-03-11 09:35:56 -0500 | [diff] [blame] | 906 | 	/* Our lguest clock is in real nanoseconds. */ | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 907 | 	return sec*1000000000ULL + nsec; | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 908 | } | 
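For contrast, a hedged sketch of the naive read the retry loop above guards against: if the Host ticks the clock between the two loads, this can pair the old seconds (99) with the new nanoseconds (0) and report a time almost a second in the past.

	/* Sketch only: do not use, it can jump backwards across a second boundary. */
	static cycle_t racy_clock_read_example(void)
	{
		return (u64)lguest_data.time.tv_sec * 1000000000ULL
			+ lguest_data.time.tv_nsec;
	}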
 | 909 |  | 
| Rusty Russell | 3fabc55 | 2008-03-11 09:35:56 -0500 | [diff] [blame] | 910 | /* This is the fallback clocksource: lower priority than the TSC clocksource. */ | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 911 | static struct clocksource lguest_clock = { | 
 | 912 | 	.name		= "lguest", | 
| Rusty Russell | 3fabc55 | 2008-03-11 09:35:56 -0500 | [diff] [blame] | 913 | 	.rating		= 200, | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 914 | 	.read		= lguest_clock_read, | 
| Rusty Russell | 6c8dca5 | 2007-07-27 13:42:52 +1000 | [diff] [blame] | 915 | 	.mask		= CLOCKSOURCE_MASK(64), | 
| Rusty Russell | 3725009 | 2007-08-09 20:52:35 +1000 | [diff] [blame] | 916 | 	.mult		= 1 << 22, | 
 | 917 | 	.shift		= 22, | 
| Tony Breeds | 05aa026 | 2007-10-22 10:56:25 +1000 | [diff] [blame] | 918 | 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS, | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 919 | }; | 
 | 920 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 921 | /* | 
 | 922 |  * We also need a "struct clock_event_device": Linux asks us to set it to go | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 923 |  * off some time in the future.  Actually, James Morris figured all this out, I | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 924 |  * just applied the patch. | 
 | 925 |  */ | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 926 | static int lguest_clockevent_set_next_event(unsigned long delta, | 
 | 927 |                                            struct clock_event_device *evt) | 
 | 928 | { | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 929 | 	/* FIXME: I don't think this can ever happen, but James tells me he had | 
 | 930 | 	 * to put this code in.  Maybe we should remove it now.  Anyone? */ | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 931 | 	if (delta < LG_CLOCK_MIN_DELTA) { | 
 | 932 | 		if (printk_ratelimit()) | 
 | 933 | 			printk(KERN_DEBUG "%s: small delta %lu ns\n", | 
| Harvey Harrison | 77bf90e | 2008-03-03 11:37:23 -0800 | [diff] [blame] | 934 | 			       __func__, delta); | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 935 | 		return -ETIME; | 
 | 936 | 	} | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 937 |  | 
 | 938 | 	/* Please wake us this far in the future. */ | 
| Rusty Russell | 091ebf0 | 2010-04-14 21:43:54 -0600 | [diff] [blame] | 939 | 	hcall(LHCALL_SET_CLOCKEVENT, delta, 0, 0, 0); | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 940 | 	return 0; | 
 | 941 | } | 
 | 942 |  | 
 | 943 | static void lguest_clockevent_set_mode(enum clock_event_mode mode, | 
 | 944 |                                       struct clock_event_device *evt) | 
 | 945 | { | 
 | 946 | 	switch (mode) { | 
 | 947 | 	case CLOCK_EVT_MODE_UNUSED: | 
 | 948 | 	case CLOCK_EVT_MODE_SHUTDOWN: | 
 | 949 | 		/* A 0 argument shuts the clock down. */ | 
| Rusty Russell | 091ebf0 | 2010-04-14 21:43:54 -0600 | [diff] [blame] | 950 | 		hcall(LHCALL_SET_CLOCKEVENT, 0, 0, 0, 0); | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 951 | 		break; | 
 | 952 | 	case CLOCK_EVT_MODE_ONESHOT: | 
 | 953 | 		/* This is what we expect. */ | 
 | 954 | 		break; | 
 | 955 | 	case CLOCK_EVT_MODE_PERIODIC: | 
 | 956 | 		BUG(); | 
| Thomas Gleixner | 18de5bc | 2007-07-21 04:37:34 -0700 | [diff] [blame] | 957 | 	case CLOCK_EVT_MODE_RESUME: | 
 | 958 | 		break; | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 959 | 	} | 
 | 960 | } | 
 | 961 |  | 
 | 962 | /* This describes our primitive timer chip. */ | 
 | 963 | static struct clock_event_device lguest_clockevent = { | 
 | 964 | 	.name                   = "lguest", | 
 | 965 | 	.features               = CLOCK_EVT_FEAT_ONESHOT, | 
 | 966 | 	.set_next_event         = lguest_clockevent_set_next_event, | 
 | 967 | 	.set_mode               = lguest_clockevent_set_mode, | 
 | 968 | 	.rating                 = INT_MAX, | 
 | 969 | 	.mult                   = 1, | 
 | 970 | 	.shift                  = 0, | 
 | 971 | 	.min_delta_ns           = LG_CLOCK_MIN_DELTA, | 
 | 972 | 	.max_delta_ns           = LG_CLOCK_MAX_DELTA, | 
 | 973 | }; | 
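A note on the odd-looking .mult = 1, .shift = 0 pair above: before calling set_next_event() the clockevents core converts the requested nanosecond delta with roughly ((u64)delta_ns * mult) >> shift, so with these values the conversion is the identity and the delta handed to LHCALL_SET_CLOCKEVENT is plain nanoseconds.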
 | 974 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 975 | /* | 
 | 976 |  * This is the Guest timer interrupt handler (hardware interrupt 0).  We just | 
 | 977 |  * call the clockevent infrastructure and it does whatever needs doing. | 
 | 978 |  */ | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 979 | static void lguest_time_irq(unsigned int irq, struct irq_desc *desc) | 
 | 980 | { | 
 | 981 | 	unsigned long flags; | 
 | 982 |  | 
 | 983 | 	/* Don't interrupt us while this is running. */ | 
 | 984 | 	local_irq_save(flags); | 
 | 985 | 	lguest_clockevent.event_handler(&lguest_clockevent); | 
 | 986 | 	local_irq_restore(flags); | 
 | 987 | } | 
 | 988 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 989 | /* | 
 | 990 |  * At some point in the boot process, we get asked to set up our timing | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 991 |  * infrastructure.  The kernel doesn't expect timer interrupts before this, but | 
 | 992 |  * we cleverly initialized the "blocked_interrupts" field of "struct | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 993 |  * lguest_data" so that timer interrupts were blocked until now. | 
 | 994 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 995 | static void lguest_time_init(void) | 
 | 996 | { | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 997 | 	/* Set up the timer interrupt (0) to go to our simple timer routine */ | 
| Thomas Gleixner | 2c77865 | 2011-03-12 12:20:43 +0100 | [diff] [blame] | 998 | 	irq_set_handler(0, lguest_time_irq); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 999 |  | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 1000 | 	clocksource_register(&lguest_clock); | 
 | 1001 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1002 | 	/* We can't set cpumask in the initializer: damn C limitations!  Set it | 
 | 1003 | 	 * here and register our timer device. */ | 
| Rusty Russell | 320ab2b | 2008-12-13 21:20:26 +1030 | [diff] [blame] | 1004 | 	lguest_clockevent.cpumask = cpumask_of(0); | 
| Rusty Russell | d7e28ff | 2007-07-19 01:49:23 -0700 | [diff] [blame] | 1005 | 	clockevents_register_device(&lguest_clockevent); | 
 | 1006 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1007 | 	/* Finally, we unblock the timer interrupt. */ | 
| Rusty Russell | bb6f1d9 | 2010-12-16 17:03:13 -0600 | [diff] [blame] | 1008 | 	clear_bit(0, lguest_data.blocked_interrupts); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1009 | } | 
 | 1010 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1011 | /* | 
 | 1012 |  * Miscellaneous bits and pieces. | 
 | 1013 |  * | 
 | 1014 |  * Here is an oddball collection of functions which the Guest needs for things | 
 | 1015 |  * to work.  They're pretty simple. | 
 | 1016 |  */ | 
 | 1017 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1018 | /* | 
 | 1019 |  * The Guest needs to tell the Host what stack it expects traps to use.  For | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1020 |  * native hardware, this is part of the Task State Segment mentioned above in | 
 | 1021 |  * lguest_load_tr_desc(), but to help hypervisors there's this special call. | 
 | 1022 |  * | 
 | 1023 |  * We tell the Host the segment we want to use (__KERNEL_DS is the kernel data | 
 | 1024 |  * segment), the privilege level (we're privilege level 1, the Host is 0 and | 
 | 1025 |  * will not tolerate us trying to use that), the stack pointer, and the number | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1026 |  * of pages in the stack. | 
 | 1027 |  */ | 
| H. Peter Anvin | faca622 | 2008-01-30 13:31:02 +0100 | [diff] [blame] | 1028 | static void lguest_load_sp0(struct tss_struct *tss, | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1029 | 			    struct thread_struct *thread) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1030 | { | 
| Matias Zabaljauregui | 4cd8b5e | 2009-03-14 13:37:52 -0200 | [diff] [blame] | 1031 | 	lazy_hcall3(LHCALL_SET_STACK, __KERNEL_DS | 0x1, thread->sp0, | 
 | 1032 | 		   THREAD_SIZE / PAGE_SIZE); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1033 | } | 
 | 1034 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1035 | /* Let's just say, I wouldn't do debugging under a Guest. */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1036 | static void lguest_set_debugreg(int regno, unsigned long value) | 
 | 1037 | { | 
 | 1038 | 	/* FIXME: Implement */ | 
 | 1039 | } | 
 | 1040 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1041 | /* | 
 | 1042 |  * There are times when the kernel wants to make sure that no memory writes are | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1043 |  * caught in the cache (that they've all reached real hardware devices).  This | 
 | 1044 |  * doesn't matter for the Guest which has virtual hardware. | 
 | 1045 |  * | 
 | 1046 |  * On the Pentium 4 and above, cpuid() indicates that the Cache Line Flush | 
 | 1047 |  * (clflush) instruction is available and the kernel uses that.  Otherwise, it | 
 | 1048 |  * uses the older "Write Back and Invalidate Cache" (wbinvd) instruction. | 
 | 1049 |  * Unlike clflush, wbinvd can only be run at privilege level 0.  So we can | 
 | 1050 |  * ignore clflush, but replace wbinvd. | 
 | 1051 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1052 | static void lguest_wbinvd(void) | 
 | 1053 | { | 
 | 1054 | } | 
 | 1055 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1056 | /* | 
 | 1057 |  * If the Guest expects to have an Advanced Programmable Interrupt Controller, | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1058 |  * we play dumb by ignoring writes and returning 0 for reads.  So it's no | 
 | 1059 |  * longer Programmable nor Controlling anything, and I don't think 8 lines of | 
 | 1060 |  * code qualifies for Advanced.  It will also never interrupt anything.  It | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1061 |  * does, however, allow us to get through the Linux boot code. | 
 | 1062 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1063 | #ifdef CONFIG_X86_LOCAL_APIC | 
| Suresh Siddha | ad66dd3 | 2008-07-11 13:11:56 -0700 | [diff] [blame] | 1064 | static void lguest_apic_write(u32 reg, u32 v) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1065 | { | 
 | 1066 | } | 
 | 1067 |  | 
| Suresh Siddha | ad66dd3 | 2008-07-11 13:11:56 -0700 | [diff] [blame] | 1068 | static u32 lguest_apic_read(u32 reg) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1069 | { | 
 | 1070 | 	return 0; | 
 | 1071 | } | 
| Suresh Siddha | 511d9d3 | 2008-07-14 09:49:14 -0700 | [diff] [blame] | 1072 |  | 
 | 1073 | static u64 lguest_apic_icr_read(void) | 
 | 1074 | { | 
 | 1075 | 	return 0; | 
 | 1076 | } | 
 | 1077 |  | 
 | 1078 | static void lguest_apic_icr_write(u32 low, u32 id) | 
 | 1079 | { | 
 | 1080 | 	/* Warn to see if there are any stray references */ | 
 | 1081 | 	WARN_ON(1); | 
 | 1082 | } | 
 | 1083 |  | 
 | 1084 | static void lguest_apic_wait_icr_idle(void) | 
 | 1085 | { | 
 | 1086 | 	return; | 
 | 1087 | } | 
 | 1088 |  | 
 | 1089 | static u32 lguest_apic_safe_wait_icr_idle(void) | 
 | 1090 | { | 
 | 1091 | 	return 0; | 
 | 1092 | } | 
 | 1093 |  | 
| Yinghai Lu | c1eeb2d | 2009-02-16 23:02:14 -0800 | [diff] [blame] | 1094 | static void set_lguest_basic_apic_ops(void) | 
 | 1095 | { | 
 | 1096 | 	apic->read = lguest_apic_read; | 
 | 1097 | 	apic->write = lguest_apic_write; | 
 | 1098 | 	apic->icr_read = lguest_apic_icr_read; | 
 | 1099 | 	apic->icr_write = lguest_apic_icr_write; | 
 | 1100 | 	apic->wait_icr_idle = lguest_apic_wait_icr_idle; | 
 | 1101 | 	apic->safe_wait_icr_idle = lguest_apic_safe_wait_icr_idle; | 
| Suresh Siddha | 511d9d3 | 2008-07-14 09:49:14 -0700 | [diff] [blame] | 1102 | }; | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1103 | #endif | 
 | 1104 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1105 | /* STOP!  Until an interrupt comes in. */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1106 | static void lguest_safe_halt(void) | 
 | 1107 | { | 
| Rusty Russell | 091ebf0 | 2010-04-14 21:43:54 -0600 | [diff] [blame] | 1108 | 	hcall(LHCALL_HALT, 0, 0, 0, 0); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1109 | } | 
 | 1110 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1111 | /* | 
 | 1112 |  * The SHUTDOWN hypercall takes a string to describe what's happening, and | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1113 |  * an argument which says whether to restart (reboot) the Guest or not. | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1114 |  * | 
 | 1115 |  * Note that the Host always prefers that the Guest speak in physical addresses | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1116 |  * rather than virtual addresses, so we use __pa() here. | 
 | 1117 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1118 | static void lguest_power_off(void) | 
 | 1119 | { | 
| Rusty Russell | 091ebf0 | 2010-04-14 21:43:54 -0600 | [diff] [blame] | 1120 | 	hcall(LHCALL_SHUTDOWN, __pa("Power down"), | 
 | 1121 | 	      LGUEST_SHUTDOWN_POWEROFF, 0, 0); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1122 | } | 
 | 1123 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1124 | /* | 
 | 1125 |  * Panicking. | 
 | 1126 |  * | 
 | 1127 |  * Don't.  But if you did, this is what happens. | 
 | 1128 |  */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1129 | static int lguest_panic(struct notifier_block *nb, unsigned long l, void *p) | 
 | 1130 | { | 
| Rusty Russell | 091ebf0 | 2010-04-14 21:43:54 -0600 | [diff] [blame] | 1131 | 	hcall(LHCALL_SHUTDOWN, __pa(p), LGUEST_SHUTDOWN_POWEROFF, 0, 0); | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1132 | 	/* The hcall won't return, but to keep gcc happy, we're "done". */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1133 | 	return NOTIFY_DONE; | 
 | 1134 | } | 
 | 1135 |  | 
 | 1136 | static struct notifier_block paniced = { | 
 | 1137 | 	.notifier_call = lguest_panic | 
 | 1138 | }; | 
 | 1139 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1140 | /* Setting up memory is fairly easy. */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1141 | static __init char *lguest_memory_setup(void) | 
 | 1142 | { | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1143 | 	/* | 
 | 1144 | 	 * The Linux bootloader header contains an "e820" memory map: the | 
 | 1145 | 	 * Launcher populated the first entry with our memory limit. | 
 | 1146 | 	 */ | 
| Yinghai Lu | d0be6bd | 2008-06-15 18:58:51 -0700 | [diff] [blame] | 1147 | 	e820_add_region(boot_params.e820_map[0].addr, | 
| H. Peter Anvin | 30c8264 | 2007-10-15 17:13:22 -0700 | [diff] [blame] | 1148 | 			  boot_params.e820_map[0].size, | 
 | 1149 | 			  boot_params.e820_map[0].type); | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1150 |  | 
 | 1151 | 	/* This string is for the boot messages. */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1152 | 	return "LGUEST"; | 
 | 1153 | } | 
 | 1154 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1155 | /* | 
 | 1156 |  * We will eventually use the virtio console device to produce console output, | 
| Rusty Russell | e1e7296 | 2007-10-25 15:02:50 +1000 | [diff] [blame] | 1157 |  * but before that is set up we use LHCALL_NOTIFY on normal memory for early | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1158 |  * output. | 
 | 1159 |  */ | 
| Rusty Russell | 19f1537 | 2007-10-22 11:24:21 +1000 | [diff] [blame] | 1160 | static __init int early_put_chars(u32 vtermno, const char *buf, int count) | 
 | 1161 | { | 
 | 1162 | 	char scratch[17]; | 
 | 1163 | 	unsigned int len = count; | 
 | 1164 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1165 | 	/* We use a nul-terminated string, so we make a copy.  Icky, huh? */ | 
| Rusty Russell | 19f1537 | 2007-10-22 11:24:21 +1000 | [diff] [blame] | 1166 | 	if (len > sizeof(scratch) - 1) | 
 | 1167 | 		len = sizeof(scratch) - 1; | 
 | 1168 | 	scratch[len] = '\0'; | 
 | 1169 | 	memcpy(scratch, buf, len); | 
| Rusty Russell | 091ebf0 | 2010-04-14 21:43:54 -0600 | [diff] [blame] | 1170 | 	hcall(LHCALL_NOTIFY, __pa(scratch), 0, 0, 0); | 
| Rusty Russell | 19f1537 | 2007-10-22 11:24:21 +1000 | [diff] [blame] | 1171 |  | 
 | 1172 | 	/* This routine returns the number of bytes actually written. */ | 
 | 1173 | 	return len; | 
 | 1174 | } | 
 | 1175 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1176 | /* | 
 | 1177 |  * Rebooting also tells the Host we're finished, but the RESTART flag tells the | 
 | 1178 |  * Launcher to reboot us. | 
 | 1179 |  */ | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1180 | static void lguest_restart(char *reason) | 
 | 1181 | { | 
| Rusty Russell | 091ebf0 | 2010-04-14 21:43:54 -0600 | [diff] [blame] | 1182 | 	hcall(LHCALL_SHUTDOWN, __pa(reason), LGUEST_SHUTDOWN_RESTART, 0, 0); | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1183 | } | 
 | 1184 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1185 | /*G:050 | 
 | 1186 |  * Patching (Powerfully Placating Performance Pedants) | 
 | 1187 |  * | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1188 |  * We have already seen that pv_ops structures let us replace simple native | 
 | 1189 |  * instructions with calls to the appropriate back end all throughout the | 
 | 1190 |  * kernel.  This allows the same kernel to run as a Guest and as a native | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1191 |  * kernel, but it's slow because of all the indirect branches. | 
 | 1192 |  * | 
 | 1193 |  * Remember that David Wheeler quote about "Any problem in computer science can | 
 | 1194 |  * be solved with another layer of indirection"?  The rest of that quote is | 
 | 1195 |  * "... But that usually will create another problem."  This is the first of | 
 | 1196 |  * those problems. | 
 | 1197 |  * | 
 | 1198 |  * Our current solution is to allow the paravirt back end to optionally patch | 
 | 1199 |  * over the indirect calls to replace them with something more efficient.  We | 
| Rusty Russell | a32a881 | 2009-06-12 22:27:02 -0600 | [diff] [blame] | 1200 |  * patch two of the simplest of the most commonly called functions: disable | 
 | 1201 |  * interrupts and save the interrupt flag.  We usually have 6 or 10 bytes to patch | 
 | 1202 |  * into: the Guest versions of these operations are small enough that we can | 
 | 1203 |  * fit comfortably. | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1204 |  * | 
 | 1205 |  * First we need assembly templates of each of the patchable Guest operations, | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1206 |  * and these are in i386_head.S. | 
 | 1207 |  */ | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1208 |  | 
 | 1209 | /*G:060 We construct a table from the assembler templates: */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1210 | static const struct lguest_insns | 
 | 1211 | { | 
 | 1212 | 	const char *start, *end; | 
 | 1213 | } lguest_insns[] = { | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1214 | 	[PARAVIRT_PATCH(pv_irq_ops.irq_disable)] = { lgstart_cli, lgend_cli }, | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1215 | 	[PARAVIRT_PATCH(pv_irq_ops.save_fl)] = { lgstart_pushf, lgend_pushf }, | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1216 | }; | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1217 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1218 | /* | 
 | 1219 |  * Now our patch routine is fairly simple (based on the native one in | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1220 |  * paravirt.c).  If we have a replacement, we copy it in and return how much of | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1221 |  * the available space we used. | 
 | 1222 |  */ | 
| Andi Kleen | ab144f5 | 2007-08-10 22:31:03 +0200 | [diff] [blame] | 1223 | static unsigned lguest_patch(u8 type, u16 clobber, void *ibuf, | 
 | 1224 | 			     unsigned long addr, unsigned len) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1225 | { | 
 | 1226 | 	unsigned int insn_len; | 
 | 1227 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1228 | 	/* Don't do anything special if we don't have a replacement */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1229 | 	if (type >= ARRAY_SIZE(lguest_insns) || !lguest_insns[type].start) | 
| Andi Kleen | ab144f5 | 2007-08-10 22:31:03 +0200 | [diff] [blame] | 1230 | 		return paravirt_patch_default(type, clobber, ibuf, addr, len); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1231 |  | 
 | 1232 | 	insn_len = lguest_insns[type].end - lguest_insns[type].start; | 
 | 1233 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1234 | 	/* Similarly if it can't fit (doesn't happen, but let's be thorough). */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1235 | 	if (len < insn_len) | 
| Andi Kleen | ab144f5 | 2007-08-10 22:31:03 +0200 | [diff] [blame] | 1236 | 		return paravirt_patch_default(type, clobber, ibuf, addr, len); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1237 |  | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1238 | 	/* Copy in our instructions. */ | 
| Andi Kleen | ab144f5 | 2007-08-10 22:31:03 +0200 | [diff] [blame] | 1239 | 	memcpy(ibuf, lguest_insns[type].start, insn_len); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1240 | 	return insn_len; | 
 | 1241 | } | 
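A rough worked example of how this gets used (this describes the generic paravirt patching machinery rather than code in this file): when apply_paravirt() walks the .parainstructions section and hands us, say, an irq_disable call site, the lgstart_cli..lgend_cli template is copied straight in and the caller pads the rest of the site with nops; a site too small for the template falls back to paravirt_patch_default(), which normally just patches in a direct call to our C version.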
 | 1242 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1243 | /*G:029 | 
 | 1244 |  * Once we get to lguest_init(), we know we're a Guest.  The various | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1245 |  * pv_ops structures in the kernel provide points for (almost) every routine we | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1246 |  * have to override to avoid privileged instructions. | 
 | 1247 |  */ | 
| Rusty Russell | 814a0e5 | 2007-10-22 11:29:44 +1000 | [diff] [blame] | 1248 | __init void lguest_init(void) | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1249 | { | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1250 | 	/* We're under lguest. */ | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1251 | 	pv_info.name = "lguest"; | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1252 | 	/* Paravirt is enabled. */ | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1253 | 	pv_info.paravirt_enabled = 1; | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1254 | 	/* We're running at privilege level 1, not 0 as normal. */ | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1255 | 	pv_info.kernel_rpl = 1; | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1256 | 	/* Everyone except Xen runs with this set. */ | 
| Matias Zabaljauregui | acdd0b6 | 2009-06-12 22:27:07 -0600 | [diff] [blame] | 1257 | 	pv_info.shared_kernel_pmd = 1; | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1258 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1259 | 	/* | 
 | 1260 | 	 * We set up all the lguest overrides for sensitive operations.  These | 
 | 1261 | 	 * are detailed with the operations themselves. | 
 | 1262 | 	 */ | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1263 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1264 | 	/* Interrupt-related operations */ | 
| Jeremy Fitzhardinge | ecb93d1 | 2009-01-28 14:35:05 -0800 | [diff] [blame] | 1265 | 	pv_irq_ops.save_fl = PV_CALLEE_SAVE(save_fl); | 
| Rusty Russell | 61f4bc8 | 2009-06-12 22:27:03 -0600 | [diff] [blame] | 1266 | 	pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(lg_restore_fl); | 
| Jeremy Fitzhardinge | ecb93d1 | 2009-01-28 14:35:05 -0800 | [diff] [blame] | 1267 | 	pv_irq_ops.irq_disable = PV_CALLEE_SAVE(irq_disable); | 
| Rusty Russell | 61f4bc8 | 2009-06-12 22:27:03 -0600 | [diff] [blame] | 1268 | 	pv_irq_ops.irq_enable = __PV_IS_CALLEE_SAVE(lg_irq_enable); | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1269 | 	pv_irq_ops.safe_halt = lguest_safe_halt; | 
 | 1270 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1271 | 	/* Setup operations */ | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1272 | 	pv_init_ops.patch = lguest_patch; | 
 | 1273 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1274 | 	/* Intercepts of various CPU instructions */ | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1275 | 	pv_cpu_ops.load_gdt = lguest_load_gdt; | 
 | 1276 | 	pv_cpu_ops.cpuid = lguest_cpuid; | 
 | 1277 | 	pv_cpu_ops.load_idt = lguest_load_idt; | 
 | 1278 | 	pv_cpu_ops.iret = lguest_iret; | 
| H. Peter Anvin | faca622 | 2008-01-30 13:31:02 +0100 | [diff] [blame] | 1279 | 	pv_cpu_ops.load_sp0 = lguest_load_sp0; | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1280 | 	pv_cpu_ops.load_tr_desc = lguest_load_tr_desc; | 
 | 1281 | 	pv_cpu_ops.set_ldt = lguest_set_ldt; | 
 | 1282 | 	pv_cpu_ops.load_tls = lguest_load_tls; | 
 | 1283 | 	pv_cpu_ops.set_debugreg = lguest_set_debugreg; | 
 | 1284 | 	pv_cpu_ops.clts = lguest_clts; | 
 | 1285 | 	pv_cpu_ops.read_cr0 = lguest_read_cr0; | 
 | 1286 | 	pv_cpu_ops.write_cr0 = lguest_write_cr0; | 
 | 1287 | 	pv_cpu_ops.read_cr4 = lguest_read_cr4; | 
 | 1288 | 	pv_cpu_ops.write_cr4 = lguest_write_cr4; | 
 | 1289 | 	pv_cpu_ops.write_gdt_entry = lguest_write_gdt_entry; | 
 | 1290 | 	pv_cpu_ops.write_idt_entry = lguest_write_idt_entry; | 
 | 1291 | 	pv_cpu_ops.wbinvd = lguest_wbinvd; | 
| Jeremy Fitzhardinge | 224101e | 2009-02-18 11:18:57 -0800 | [diff] [blame] | 1292 | 	pv_cpu_ops.start_context_switch = paravirt_start_context_switch; | 
 | 1293 | 	pv_cpu_ops.end_context_switch = lguest_end_context_switch; | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1294 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1295 | 	/* Pagetable management */ | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1296 | 	pv_mmu_ops.write_cr3 = lguest_write_cr3; | 
 | 1297 | 	pv_mmu_ops.flush_tlb_user = lguest_flush_tlb_user; | 
 | 1298 | 	pv_mmu_ops.flush_tlb_single = lguest_flush_tlb_single; | 
 | 1299 | 	pv_mmu_ops.flush_tlb_kernel = lguest_flush_tlb_kernel; | 
 | 1300 | 	pv_mmu_ops.set_pte = lguest_set_pte; | 
 | 1301 | 	pv_mmu_ops.set_pte_at = lguest_set_pte_at; | 
 | 1302 | 	pv_mmu_ops.set_pmd = lguest_set_pmd; | 
| Matias Zabaljauregui | acdd0b6 | 2009-06-12 22:27:07 -0600 | [diff] [blame] | 1303 | #ifdef CONFIG_X86_PAE | 
 | 1304 | 	pv_mmu_ops.set_pte_atomic = lguest_set_pte_atomic; | 
 | 1305 | 	pv_mmu_ops.pte_clear = lguest_pte_clear; | 
 | 1306 | 	pv_mmu_ops.pmd_clear = lguest_pmd_clear; | 
 | 1307 | 	pv_mmu_ops.set_pud = lguest_set_pud; | 
 | 1308 | #endif | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1309 | 	pv_mmu_ops.read_cr2 = lguest_read_cr2; | 
 | 1310 | 	pv_mmu_ops.read_cr3 = lguest_read_cr3; | 
| Jeremy Fitzhardinge | 8965c1c | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1311 | 	pv_mmu_ops.lazy_mode.enter = paravirt_enter_lazy_mmu; | 
| Jeremy Fitzhardinge | b407fc5 | 2009-02-17 23:46:21 -0800 | [diff] [blame] | 1312 | 	pv_mmu_ops.lazy_mode.leave = lguest_leave_lazy_mmu_mode; | 
| Rusty Russell | b7ff99e | 2009-03-30 21:55:23 -0600 | [diff] [blame] | 1313 | 	pv_mmu_ops.pte_update = lguest_pte_update; | 
 | 1314 | 	pv_mmu_ops.pte_update_defer = lguest_pte_update; | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1315 |  | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1316 | #ifdef CONFIG_X86_LOCAL_APIC | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1317 | 	/* APIC read/write intercepts */ | 
| Yinghai Lu | c1eeb2d | 2009-02-16 23:02:14 -0800 | [diff] [blame] | 1318 | 	set_lguest_basic_apic_ops(); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1319 | #endif | 
| Jeremy Fitzhardinge | 93b1eab | 2007-10-16 11:51:29 -0700 | [diff] [blame] | 1320 |  | 
| Thomas Gleixner | 6b18ae3 | 2009-08-20 10:19:54 +0200 | [diff] [blame] | 1321 | 	x86_init.resources.memory_setup = lguest_memory_setup; | 
| Thomas Gleixner | 66bcaf0 | 2009-08-20 09:59:09 +0200 | [diff] [blame] | 1322 | 	x86_init.irqs.intr_init = lguest_init_IRQ; | 
| Thomas Gleixner | 845b394 | 2009-08-19 15:37:03 +0200 | [diff] [blame] | 1323 | 	x86_init.timers.timer_init = lguest_time_init; | 
| Thomas Gleixner | 2d82640 | 2009-08-20 17:06:25 +0200 | [diff] [blame] | 1324 | 	x86_platform.calibrate_tsc = lguest_tsc_khz; | 
| Feng Tang | 7bd867d | 2009-09-10 10:48:56 +0800 | [diff] [blame] | 1325 | 	x86_platform.get_wallclock =  lguest_get_wallclock; | 
| Thomas Gleixner | 6b18ae3 | 2009-08-20 10:19:54 +0200 | [diff] [blame] | 1326 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1327 | 	/* | 
 | 1328 | 	 * Now is a good time to look at the implementations of these functions | 
 | 1329 | 	 * before returning to the rest of lguest_init(). | 
 | 1330 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1331 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1332 | 	/*G:070 | 
 | 1333 | 	 * Now we've seen all the paravirt_ops, we return to | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1334 | 	 * lguest_init() where the rest of the fairly chaotic boot setup | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1335 | 	 * occurs. | 
 | 1336 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1337 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1338 | 	/* | 
 | 1339 | 	 * The stack protector is a weird thing where gcc places a canary | 
| Rusty Russell | 2cb7878 | 2009-06-03 14:52:24 +0930 | [diff] [blame] | 1340 | 	 * value on the stack and then checks it on return.  This file is | 
 | 1341 | 	 * compiled with -fno-stack-protector, so we got this far without | 
 | 1342 | 	 * problems.  The value of the canary is kept at offset 20 from the | 
 | 1343 | 	 * %gs register, so we need to set that up before calling C functions | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1344 | 	 * in other files. | 
 | 1345 | 	 */ | 
| Rusty Russell | 2cb7878 | 2009-06-03 14:52:24 +0930 | [diff] [blame] | 1346 | 	setup_stack_canary_segment(0); | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1347 |  | 
 | 1348 | 	/* | 
 | 1349 | 	 * We could just call load_stack_canary_segment(), but we might as well | 
 | 1350 | 	 * call switch_to_new_gdt() which loads the whole table and sets up the | 
 | 1351 | 	 * per-cpu segment descriptor register %fs as well. | 
 | 1352 | 	 */ | 
| Rusty Russell | 2cb7878 | 2009-06-03 14:52:24 +0930 | [diff] [blame] | 1353 | 	switch_to_new_gdt(0); | 
 | 1354 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1355 | 	/* | 
 | 1356 | 	 * The Host<->Guest Switcher lives at the top of our address space, and | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1357 | 	 * the Host told us how big it is when we made LGUEST_INIT hypercall: | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1358 | 	 * it put the answer in lguest_data.reserve_mem | 
 | 1359 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1360 | 	reserve_top_address(lguest_data.reserve_mem); | 
 | 1361 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1362 | 	/* | 
 | 1363 | 	 * If we don't initialize the lock dependency checker now, it crashes in | 
| Rusty Russell | cdae0ad | 2009-09-23 22:26:42 -0600 | [diff] [blame] | 1364 | 	 * atomic_notifier_chain_register, then in paravirt_disable_iospace. | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1365 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1366 | 	lockdep_init(); | 
 | 1367 |  | 
| Rusty Russell | cdae0ad | 2009-09-23 22:26:42 -0600 | [diff] [blame] | 1368 | 	/* Hook in our special panic hypercall code. */ | 
 | 1369 | 	atomic_notifier_chain_register(&panic_notifier_list, &paniced); | 
 | 1370 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1371 | 	/* | 
 | 1372 | 	 * The IDE code spends about 3 seconds probing for disks: if we reserve | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1373 | 	 * all the I/O ports up front it can't get them and so doesn't probe. | 
 | 1374 | 	 * Other device drivers are similar (but less severe).  This cuts the | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1375 | 	 * kernel boot time on my machine from 4.1 seconds to 0.45 seconds. | 
 | 1376 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1377 | 	paravirt_disable_iospace(); | 
 | 1378 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1379 | 	/* | 
 | 1380 | 	 * This is messy CPU setup stuff which the native boot code does before | 
 | 1381 | 	 * start_kernel, so we have to do, too: | 
 | 1382 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1383 | 	cpu_detect(&new_cpu_data); | 
 | 1384 | 	/* head.S usually sets up the first capability word, so do it here. */ | 
 | 1385 | 	new_cpu_data.x86_capability[0] = cpuid_edx(1); | 
 | 1386 |  | 
 | 1387 | 	/* Math is always hard! */ | 
 | 1388 | 	new_cpu_data.hard_math = 1; | 
 | 1389 |  | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1390 | 	/* We don't have features.  We have puppies!  Puppies! */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1391 | #ifdef CONFIG_X86_MCE | 
 | 1392 | 	mce_disabled = 1; | 
 | 1393 | #endif | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1394 | #ifdef CONFIG_ACPI | 
 | 1395 | 	acpi_disabled = 1; | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1396 | #endif | 
 | 1397 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1398 | 	/* | 
 | 1399 | 	 * We set the preferred console to "hvc".  This is the "hypervisor | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1400 | 	 * virtual console" driver written by the PowerPC people, which we also | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1401 | 	 * adapted for lguest's use. | 
 | 1402 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1403 | 	add_preferred_console("hvc", 0, NULL); | 
 | 1404 |  | 
| Rusty Russell | 19f1537 | 2007-10-22 11:24:21 +1000 | [diff] [blame] | 1405 | 	/* Register our very early console. */ | 
 | 1406 | 	virtio_cons_early_init(early_put_chars); | 
 | 1407 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1408 | 	/* | 
 | 1409 | 	 * Last of all, we set the power management poweroff hook to point to | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1410 | 	 * the Guest routine to power off, and the reboot hook to our restart | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1411 | 	 * routine. | 
 | 1412 | 	 */ | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1413 | 	pm_power_off = lguest_power_off; | 
| Balaji Rao | ec04b13 | 2007-12-28 14:26:24 +0530 | [diff] [blame] | 1414 | 	machine_ops.restart = lguest_restart; | 
| Rusty Russell | a6bd8e1 | 2008-03-28 11:05:53 -0500 | [diff] [blame] | 1415 |  | 
| Rusty Russell | 2e04ef7 | 2009-07-30 16:03:45 -0600 | [diff] [blame] | 1416 | 	/* | 
 | 1417 | 	 * Now we're set up, we call i386_start_kernel() in head32.c and proceed | 
 | 1418 | 	 * to boot as normal.  It never returns. | 
 | 1419 | 	 */ | 
| Yinghai Lu | f0d4310 | 2008-05-29 12:56:36 -0700 | [diff] [blame] | 1420 | 	i386_start_kernel(); | 
| Rusty Russell | 07ad157 | 2007-07-19 01:49:22 -0700 | [diff] [blame] | 1421 | } | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1422 | /* | 
 | 1423 |  * This marks the end of stage II of our journey, The Guest. | 
 | 1424 |  * | 
| Rusty Russell | e1e7296 | 2007-10-25 15:02:50 +1000 | [diff] [blame] | 1425 |  * It is now time for us to explore the layer of virtual drivers and complete | 
 | 1426 |  * our understanding of the Guest in "make Drivers". | 
| Rusty Russell | b2b47c2 | 2007-07-26 10:41:02 -0700 | [diff] [blame] | 1427 |  */ |