KVM: Fix __set_bit() race in mark_page_dirty() during dirty logging
author Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Wed, 4 Jan 2012 06:06:43 +0000 (15:06 +0900)
committer Avi Kivity <avi@redhat.com>
Wed, 1 Feb 2012 09:42:32 +0000 (11:42 +0200)
It is possible for the __set_bit() in mark_page_dirty() to be called
simultaneously on the same region of memory, because some callers do
not take mmu_lock before mark_page_dirty(); when that happens, only one
of the bits may end up set.
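
For illustration only, here is a rough sketch of why the non-atomic
helper can lose a bit.  This is not the kernel's actual arch-specific
implementation; the function name sketch_nonatomic_test_and_set() is
made up for this example and little-endian bit numbering is ignored for
brevity.

static inline int sketch_nonatomic_test_and_set(unsigned long nr,
						unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	unsigned long *word = addr + nr / BITS_PER_LONG;
	unsigned long old = *word;	/* two CPUs can both load the same
					 * old value here ...              */

	*word = old | mask;		/* ... and the later plain store
					 * overwrites the other CPU's bit:
					 * one dirty bit is lost.          */
	return (old & mask) != 0;
}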

This problem is hard to reproduce because when we reach
mark_page_dirty() from, e.g., tdp_page_fault(), mmu_lock is held during
__direct_map(); for this reason, making kvm-unit-tests' dirty log api
test write to two pages concurrently was not enough to trigger the race.

Instead, we confirmed that the race can actually happen by using
spin_is_locked() to check whether some callers reach mark_page_dirty()
without holding mmu_lock: they probably came from
kvm_write_guest_page().

To fix this race, this patch changes the bit operation to the atomic
version.  Note that nr_dirty_pages also suffers from the race, but we
do not need an exactly correct count for now.
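
Again for illustration only, a rough sketch of the atomic variant using
a compiler builtin; the kernel's test_and_set_bit_le() is really
implemented with arch-specific locked instructions, and the name
sketch_atomic_test_and_set() is made up for this example.

static inline int sketch_atomic_test_and_set(unsigned long nr,
					     unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	unsigned long *word = addr + nr / BITS_PER_LONG;
	/* One indivisible read-modify-write: concurrent callers cannot
	 * lose each other's updates, so both dirty bits survive.       */
	unsigned long old = __atomic_fetch_or(word, mask, __ATOMIC_RELAXED);

	return (old & mask) != 0;
}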

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
virt/kvm/kvm_main.c

index 7287bf5d1c9edc1fa84681aea4f989c9c750fa9c..a91f980077d843ca319dfe5d7b3b95e252e02bc3 100644 (file)
@@ -1543,7 +1543,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
        if (memslot && memslot->dirty_bitmap) {
                unsigned long rel_gfn = gfn - memslot->base_gfn;
 
-               if (!__test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
+               if (!test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
                        memslot->nr_dirty_pages++;
        }
 }