x86/numa: Improve internode cache alignment
authorAlex Shi <alex.shi@intel.com>
Sat, 3 Mar 2012 11:27:27 +0000 (19:27 +0800)
committerIngo Molnar <mingo@elte.hu>
Mon, 5 Mar 2012 08:19:20 +0000 (09:19 +0100)
Cache alignment among nodes in the kernel is currently still 128
bytes on x86 NUMA machines - the X86_INTERNODE_CACHE_SHIFT default
of 7 was inherited from old Pentium 4 processors.

But most modern x86 CPUs now use the same line size, 64 bytes, from
the L1 all the way to the last-level L3 cache. So let's remove the
outdated setting and use the L1 cache size directly for SMP cache
line alignment.
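
To illustrate the space effect outside the kernel, here is a minimal
userspace sketch (hypothetical struct names, not kernel code) showing
how sizeof() is padded up to the alignment, so dropping from a
128-byte (shift 7) to a 64-byte (shift 6) boundary halves the
footprint of each such object:

  #include <stdio.h>

  /* Hypothetical examples; in the kernel the alignment comes from the
   * X86_INTERNODE_CACHE_SHIFT Kconfig value rather than literals. */
  struct old_aligned { int counter; } __attribute__((aligned(1 << 7))); /* old NUMA default */
  struct new_aligned { int counter; } __attribute__((aligned(1 << 6))); /* L1 cache line size */

  int main(void)
  {
          printf("128-byte aligned: %zu bytes\n", sizeof(struct old_aligned));
          printf(" 64-byte aligned: %zu bytes\n", sizeof(struct new_aligned));
          return 0;
  }

On a typical build this prints 128 and 64 bytes respectively.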

This patch saves some memory space in kernel data, and it also
improves the cache locality of kernel data.

The System.map is quite different with/without this change:

  before patch                     after patch
  ...
  000000000000b000 d tlb_vector_|  000000000000b000 d tlb_vector
  000000000000b080 d cpu_loops_p|  000000000000b040 d cpu_loops_
  ...

Signed-off-by: Alex Shi <alex.shi@intel.com>
Cc: asit.k.mallick@intel.com
Link: http://lkml.kernel.org/r/1330774047-18597-1-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
arch/x86/Kconfig.cpu

index 3c57033e22118f2ce7771d10fa8305193c94af20..6443c6f038e8f4b23ab55ec1402aedf2996d1ea5 100644
@@ -303,7 +303,6 @@ config X86_GENERIC
 config X86_INTERNODE_CACHE_SHIFT
        int
        default "12" if X86_VSMP
-       default "7" if NUMA
        default X86_L1_CACHE_SHIFT
 
 config X86_CMPXCHG
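
For reference, here is a sketch of how the Kconfig value ends up as an
alignment attribute (paraphrased from arch/x86/include/asm/cache.h and
include/linux/cache.h - check the tree for the exact definitions; the
CONFIG_* value is hard-coded here only so the snippet stands on its own):

  /* Normally set by Kconfig; 6 (64 bytes) once the NUMA default is gone. */
  #define CONFIG_X86_INTERNODE_CACHE_SHIFT 6

  #define INTERNODE_CACHE_SHIFT  CONFIG_X86_INTERNODE_CACHE_SHIFT
  #define INTERNODE_CACHE_BYTES  (1 << INTERNODE_CACHE_SHIFT)

  /* On SMP, align data shared across nodes to this boundary. */
  #define ____cacheline_internodealigned_in_smp \
          __attribute__((__aligned__(INTERNODE_CACHE_BYTES)))

  /* Hypothetical example variable: now 64-byte instead of 128-byte aligned. */
  static int example_counter ____cacheline_internodealigned_in_smp;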