x86/nmi/64: Fix a paravirt stack-clobbering bug in the NMI code
author Andy Lutomirski <luto@kernel.org>
Sun, 20 Sep 2015 23:32:05 +0000 (16:32 -0700)
committer Thomas Gleixner <tglx@linutronix.de>
Tue, 22 Sep 2015 20:40:36 +0000 (22:40 +0200)
The NMI entry code that switches to the normal kernel stack needs to
be very careful not to clobber any extra stack slots on the NMI
stack.  The code is fine under the assumption that SWAPGS is just a
normal instruction, but that assumption isn't really true.  Use
SWAPGS_UNSAFE_STACK instead.
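
For reference (not part of the patch), this is roughly what the two
macros boil down to on a CONFIG_PARAVIRT=y kernel of this vintage; the
real definitions live in arch/x86/include/asm/paravirt.h and
arch/x86/include/asm/irqflags.h and are simplified here:

	# Without CONFIG_PARAVIRT, both macros are just the bare
	# instruction and never touch the stack:
	swapgs

	# With CONFIG_PARAVIRT=y, SWAPGS is a patchable site whose
	# unpatched body is an indirect call through the paravirt ops
	# table -- and a CALL pushes its return address onto whatever
	# stack is current:
	call	*pv_cpu_ops+PV_CPU_swapgs(%rip)	# writes the slot just below %rsp

	# SWAPGS_UNSAFE_STACK is also a patchable site, but its body is
	# the bare instruction, so nothing is written to the stack:
	swapgs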

This is part of a fix for some random crashes that Sasha saw.

Fixes: 9b6e6a8334d5 ("x86/nmi/64: Switch stacks on userspace NMI entry")
Reported-and-tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/974bc40edffdb5c2950a5c4977f821a446b76178.1442791737.git.luto@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
arch/x86/entry/entry_64.S

index 404ca97c471565e6b66b212b5f0fe5d2c702e363..055a01de7c8da6e052cfdebe0be8447b14933c87 100644
@@ -1190,9 +1190,12 @@ ENTRY(nmi)
         * we don't want to enable interrupts, because then we'll end
         * up in an awkward situation in which IRQs are on but NMIs
         * are off.
+        *
+        * We also must not push anything to the stack before switching
+        * stacks lest we corrupt the "NMI executing" variable.
         */
 
-       SWAPGS
+       SWAPGS_UNSAFE_STACK
        cld
        movq    %rsp, %rdx
        movq    PER_CPU_VAR(cpu_current_top_of_stack), %rsp
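
For reference, here is the patched from-user entry sequence again with
extra annotations; the annotations are added here and are not in
entry_64.S:

	SWAPGS_UNSAFE_STACK			# the stack-safe form of SWAPGS:
						# no paravirt CALL, so nothing is
						# pushed onto the NMI IST stack;
						# GS now selects the kernel
						# per-cpu area
	cld
	movq	%rsp, %rdx			# remember where the hardware
						# frame sits on the NMI stack so
						# it can be copied across
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
						# from here on we run on the
						# normal task stack and pushing
						# is safe again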