alpha: fix RTC on marvel
Unlike other alphas, marvel doesn't have real PC-style CMOS clock
hardware - RTC accesses are emulated via PAL calls. Unfortunately, for
an unknown reason these calls work only on CPU #0, so the current
implementation for an arbitrary CPU forwards each CMOS_READ/WRITE to
CPU #0 via an IPI. However, for obvious reasons this doesn't work with
the standard get/set_rtc_time() functions, where a bunch of CMOS
accesses is done with interrupts disabled.
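
For context, the generic helper does roughly the following (a
simplified sketch of include/asm-generic/rtc.h, with the UIP polling
and BCD conversion omitted), which is why bouncing every single CMOS
access through a synchronous IPI can't work:

static inline unsigned int get_rtc_time(struct rtc_time *time)
{
	unsigned long flags;

	spin_lock_irqsave(&rtc_lock, flags);
	/*
	 * Interrupts are off from here on - a synchronous IPI per
	 * CMOS access would hang waiting for CPU #0.
	 */
	time->tm_sec  = CMOS_READ(RTC_SECONDS);
	time->tm_min  = CMOS_READ(RTC_MINUTES);
	time->tm_hour = CMOS_READ(RTC_HOURS);
	time->tm_mday = CMOS_READ(RTC_DAY_OF_MONTH);
	time->tm_mon  = CMOS_READ(RTC_MONTH);
	time->tm_year = CMOS_READ(RTC_YEAR);
	spin_unlock_irqrestore(&rtc_lock, flags);

	return RTC_24H;
}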
Solved by issuing the IPI for the entire get/set_rtc_time() functions
rather than for individual CMOS accesses, which is also a lot more
efficient performance-wise.
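
A minimal sketch of the resulting shape (illustrative, not the literal
patch: the marvel_rtc_time wrapper struct and __marvel_get_rtc_time()
name are made up here; __get_rtc_time() stands for the generic helper
and boot_cpuid is alpha's boot processor id):

/* illustrative sketch - names are not necessarily those in the patch */
struct marvel_rtc_time {
	struct rtc_time *time;
	int retval;
};

static void __marvel_get_rtc_time(void *data)
{
	struct marvel_rtc_time *mrt = data;

	/* runs on CPU #0: all CMOS accesses stay local, no nested IPIs */
	mrt->retval = __get_rtc_time(mrt->time);
}

static unsigned int marvel_get_rtc_time(struct rtc_time *time)
{
	struct marvel_rtc_time mrt;

	if (smp_processor_id() == boot_cpuid)
		return __get_rtc_time(time);

	/* one IPI for the whole operation instead of one per CMOS byte */
	mrt.time = time;
	smp_call_function_single(boot_cpuid, __marvel_get_rtc_time, &mrt, 1);
	return mrt.retval;
}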
The patch is largely based on the code from Jay Estabrook.
My changes:
- tweak asm-generic/rtc.h by adding a couple of #defines to
  avoid massive code duplication in arch/alpha/include/asm/rtc.h
  (see the sketch after this list);
- sys_marvel.c: fix get/set_rtc_time() return values (Jay's FIXMEs).
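
The asm-generic/rtc.h tweak amounts to something like this (a sketch
of the idea, not the literal hunk): the generic implementations keep
working under double-underscore names, and the public names are only
defined if the architecture hasn't already overridden them:

/* sketch of the idea in include/asm-generic/rtc.h */
#ifndef get_rtc_time
#define get_rtc_time	__get_rtc_time
#endif
#ifndef set_rtc_time
#define set_rtc_time	__set_rtc_time
#endif

so arch/alpha/include/asm/rtc.h can point these at the marvel-aware
wrappers without copying the generic code.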
NOTE: this fixes *only* LIB_RTC drivers. The legacy (CONFIG_RTC)
driver won't work on marvel. Actually, I think we should just disable
CONFIG_RTC on alpha (maybe in 2.6.30?), like most other arches - AFAIK,
all modern distributions use LIB_RTC anyway.
Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/arch/alpha/kernel/machvec_impl.h b/arch/alpha/kernel/machvec_impl.h
index 466c9df..512685f 100644
--- a/arch/alpha/kernel/machvec_impl.h
+++ b/arch/alpha/kernel/machvec_impl.h
@@ -40,7 +40,10 @@
#define CAT1(x,y) x##y
#define CAT(x,y) CAT1(x,y)
-#define DO_DEFAULT_RTC .rtc_port = 0x70
+#define DO_DEFAULT_RTC \
+ .rtc_port = 0x70, \
+ .rtc_get_time = common_get_rtc_time, \
+ .rtc_set_time = common_set_rtc_time
#define DO_EV4_MMU \
.max_asn = EV4_MAX_ASN, \