x86/asm/entry/64: Enable interrupts *after* we fetch PER_CPU_VAR(old_rsp)
author		Denys Vlasenko <dvlasenk@redhat.com>
		Tue, 17 Mar 2015 13:52:24 +0000 (14:52 +0100)
committer	Ingo Molnar <mingo@kernel.org>
		Tue, 17 Mar 2015 15:01:40 +0000 (16:01 +0100)
We want to use PER_CPU_VAR(old_rsp) as a simple temporary location to
shuffle the user-space RSP into (and out of) while we set up the
system call stack frame. At that point we cannot shuffle the value
through a general-purpose register, because we have not saved the
registers yet.

Since this 'temporary register' is really a per-CPU memory location,
the store and the subsequent load must not be separated by preemption:
another task entering a system call on the same CPU would overwrite it
with its own temporary contents. So enable interrupts only after
PER_CPU_VAR(old_rsp) has been fetched back.

Tested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Acked-by: Borislav Petkov <bp@alien8.de>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1426600344-8254-1-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/kernel/entry_64.S

index d86788c3257b84fcd8c792213b7e784cbf1ad2cc..aed3f11c373b0713d040640e3a3aa6f6cf3d7262 100644
@@ -241,16 +241,16 @@ GLOBAL(system_call_after_swapgs)
        movq    %rsp,PER_CPU_VAR(old_rsp)
        /* kernel_stack is set so that 5 slots (iret frame) are preallocated */
        movq    PER_CPU_VAR(kernel_stack),%rsp
-       /*
-        * No need to follow this irqs off/on section - it's straight
-        * and short:
-        */
-       ENABLE_INTERRUPTS(CLBR_NONE)
        ALLOC_PT_GPREGS_ON_STACK 8              /* +8: space for orig_ax */
        movq    %rcx,RIP(%rsp)
        movq    PER_CPU_VAR(old_rsp),%rcx
        movq    %r11,EFLAGS(%rsp)
        movq    %rcx,RSP(%rsp)
+       /*
+        * No need to follow this irqs off/on section - it's straight
+        * and short:
+        */
+       ENABLE_INTERRUPTS(CLBR_NONE)
        movq_cfi rax,ORIG_RAX
        SAVE_C_REGS_EXCEPT_RAX_RCX_R11
        movq    $-ENOSYS,RAX(%rsp)