author     David Mosberger-Tang <davidm@hpl.hp.com>    2005-04-25 13:20:38 -0700
committer  Tony Luck <tony.luck@intel.com>             2005-04-25 13:20:38 -0700
commit     a37d98f6a98254c05315e0bbf45c4602942d14b1 (patch)
tree       8d6a3b98118866319a76d719efa9d7fbe5914854 /arch
parent     93a07d0a0e7b013ee73fb39d4edb07b47288912e (diff)
[IA64] fix syscall-optimization goof
Sadly, I goofed in this syscall-tuning patch:

    ChangeSet 1.1966.1.40 2005/01/22 13:31:05 davidm@hpl.hp.com

	[IA64] Improve ia64_leave_syscall() for McKinley-type cores.

	Optimize ia64_leave_syscall() a bit better for McKinley-type
	cores.  The patch looks big, but that's mostly due to renaming
	r16/r17 to r2/r3.  Good for a 13 cycle improvement.

The problem is that the size of the physical stacked registers was
loaded into the wrong register (r3 instead of r17).  Since r17 by
coincidence always had the value 1, this had the effect of turning
rse_clear_invalid into a no-op.  That poses the risk of leaking kernel
state back to user-land and is hence not acceptable.

The fix below is simple, but unfortunately it costs us about 28 cycles
in syscall overhead. ;-(  Unfortunately, there isn't much we can do
about that since those registers have to be cleared one way or another.

	--david

Signed-off-by: Tony Luck <tony.luck@intel.com>
Diffstat (limited to 'arch')
-rw-r--r--  arch/ia64/kernel/entry.S | 2 +-
1 files changed, 1 insertions, 1 deletions
diff --git a/arch/ia64/kernel/entry.S b/arch/ia64/kernel/entry.S
index 73e23dafe8e9..bd86fea49a0c 100644
--- a/arch/ia64/kernel/entry.S
+++ b/arch/ia64/kernel/entry.S
@@ -759,7 +759,7 @@ ENTRY(ia64_leave_syscall)
(pUStk) st1 [r14]=r17
addl r3=THIS_CPU(ia64_phys_stacked_size_p8),r0
;;
-(pUStk) ld4 r3=[r3] // r3 = cpu_data->phys_stacked_size_p8
+(pUStk) ld4 r17=[r3] // r17 = cpu_data->phys_stacked_size_p8
mov.m ar.csd=r0 // M2 clear ar.csd
mov b6=r18 // I0 restore b6
;;
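
For readers who don't follow ia64 assembly, here is a minimal, self-contained C sketch of the idea behind the one-line fix above. It is purely illustrative, not the kernel's code: the array, sizes, and names (stacked_regs, rse_clear_invalid_sketch, NUM_PHYS_STACKED, user_frame) are hypothetical stand-ins for the CPU's physical stacked registers and the count held in ia64_phys_stacked_size_p8; in the real kernel the clearing is done by the RSE-related assembly in ia64_leave_syscall().

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the CPU's physical stacked register file. */
#define NUM_PHYS_STACKED 96

static uint64_t stacked_regs[NUM_PHYS_STACKED];

/* Clear every physical stacked register beyond the user's frame so that
 * stale kernel values cannot become visible to user code.  The second
 * argument plays the role of the count the patch now loads into r17. */
static void rse_clear_invalid_sketch(int user_frame, int stacked_size)
{
	for (int i = user_frame; i < stacked_size; i++)
		stacked_regs[i] = 0;
}

int main(void)
{
	/* Pretend the kernel left data in the upper stacked registers. */
	for (int i = 0; i < NUM_PHYS_STACKED; i++)
		stacked_regs[i] = 0xdeadbeefULL + i;

	int user_frame = 8;

	/* Buggy behaviour: the size was loaded into the wrong register,
	 * which happened to hold 1, so effectively nothing was cleared. */
	rse_clear_invalid_sketch(user_frame, /* bogus size */ 1);
	printf("after buggy clear:   reg[9] = %#llx (kernel data leaks)\n",
	       (unsigned long long)stacked_regs[9]);

	/* Fixed behaviour: use the real physical stacked-register count. */
	rse_clear_invalid_sketch(user_frame, NUM_PHYS_STACKED);
	printf("after correct clear: reg[9] = %#llx\n",
	       (unsigned long long)stacked_regs[9]);

	return 0;
}
```

The sketch also shows why the extra ~28 cycles mentioned in the commit message are unavoidable in spirit: once the correct count is used, every register beyond the user's frame really does have to be written, one way or another, before returning to user-land.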