
KVM: MMU: Fix dirty page setting for pages removed from rmap

Right now rmap_remove won't set the page as dirty if the shadow pte that
pointed to this page had write access and then became read-only.
This patch fixes that by setting the page as dirty for spte changes from
writable to read-only access.

Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Izik Eidus committed 17 years ago · commit 75e68e6078
1 file changed, 6 insertions(+), 2 deletions(-)

arch/x86/kvm/mmu.c (+6 -2)

@@ -890,6 +890,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 {
 	u64 spte;
 	int was_rmapped = is_rmap_pte(*shadow_pte);
+	int was_writeble = is_writeble_pte(*shadow_pte);
 
 	pgprintk("%s: spte %llx access %x write_fault %d"
 		 " user_fault %d gfn %lx\n",
@@ -956,9 +957,12 @@ unshadowed:
 		rmap_add(vcpu, shadow_pte, gfn);
 		if (!is_rmap_pte(*shadow_pte))
 			kvm_release_page_clean(page);
+	} else {
+		if (was_writeble)
+			kvm_release_page_dirty(page);
+		else
+			kvm_release_page_clean(page);
 	}
-	else
-		kvm_release_page_clean(page);
 	if (!ptwrite || !*ptwrite)
 		vcpu->arch.last_pte_updated = shadow_pte;
 }
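
For context, here is a minimal user-space sketch (not kernel code) of the release decision this patch introduces: when a shadow pte stops mapping a page, the backing page should be released as dirty if the old spte was writable, because the guest may have written through that mapping. The struct, bit mask, and helper names below are hypothetical stand-ins for the KVM ones (is_writeble_pte, kvm_release_page_dirty, kvm_release_page_clean), chosen only to illustrate the pattern.

	/* Sketch only: models the dirty-vs-clean release choice, not KVM itself. */
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PT_WRITABLE_MASK (1ULL << 1)	/* assumed writable bit for this sketch */

	struct page { bool dirty; };

	static bool spte_was_writable(uint64_t spte)
	{
		return spte & PT_WRITABLE_MASK;
	}

	static void release_page_dirty(struct page *page)
	{
		page->dirty = true;		/* page content may have changed; mark it */
		printf("released page as dirty\n");
	}

	static void release_page_clean(struct page *page)
	{
		printf("released page as clean\n");
	}

	/* Drop an spte: the old mapping's write permission decides clean vs. dirty. */
	static void drop_spte(uint64_t old_spte, struct page *page)
	{
		if (spte_was_writable(old_spte))
			release_page_dirty(page);
		else
			release_page_clean(page);
	}

	int main(void)
	{
		struct page pg = { .dirty = false };

		drop_spte(PT_WRITABLE_MASK, &pg);	/* was writable -> dirty */
		drop_spte(0, &pg);			/* was read-only -> clean */
		return 0;
	}

Without the writable check, a page that was written through a formerly writable spte would be released as clean and its modifications could be lost to dirty tracking, which is exactly the case the patch guards against.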