author	James Morse <james.morse@arm.com>
	Sat, 10 Sep 2016 10:34:14 +0000 (20:34 +1000)
committer	Stephen Rothwell <sfr@canb.auug.org.au>
	Sat, 10 Sep 2016 10:34:14 +0000 (20:34 +1000)
commit	13f5e25729d972e4aeacb7a626605d419fc2fed5
tree	bdbb3cc7ed6c921c0a63c064510932211f3f12c8
parent	e03102b32d41028b7b63f35b7b2b2c6cee96b4c0
fs/proc/task_mmu.c: make the task_mmu walk_page_range() limit in clear_refs_write() obvious

Trying to walk all of virtual memory requires architecture specific
knowledge.  On x86_64, addresses must be sign extended from bit 48,
whereas on arm64 the top VA_BITS of address space have their own set of
page tables.

clear_refs_write() calls walk_page_range() on the range 0 to ~0UL and
provides a test_walk() callback that only expects to be walking over VMAs.
Currently walk_pmd_range() will skip memory regions that don't have a
VMA, reporting them as a hole.

As this call only expects to walk user address space, make it walk 0 to
'highest_vm_end'.

Link: http://lkml.kernel.org/r/1472655792-22439-1-git-send-email-james.morse@arm.com
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
fs/proc/task_mmu.c