mremap: don't do mm_populate(new_addr) on failure

move_vma() sets *locked even if move_page_tables() or ->mremap() fails;
teach the "ret & ~PAGE_MASK" error check in sys_mremap() to also clear
"locked", so that mm_populate(new_addr) is not called after a failed move.
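
For context, growing an mlock()ed mapping is one path that reaches this
code: if the mapping cannot be grown in place, sys_mremap() calls
move_vma() with &locked, and VM_LOCKED makes move_vma() set *locked.  A
minimal userspace sketch of that trigger follows; it cannot force the
kernel-internal failure (e.g. move_page_tables() running out of memory),
it only exercises the syscall path in question:

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t old_len = 2 * page, new_len = 16 * page;

	void *old = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (old == MAP_FAILED)
		return 1;

	/* VM_LOCKED is what makes move_vma() set *locked */
	if (mlock(old, old_len) != 0)
		return 1;

	/* growing past what fits in place forces the move_vma() path */
	void *new = mremap(old, old_len, new_len, MREMAP_MAYMOVE);
	if (new == MAP_FAILED) {
		/* before this patch, a failure here could still lead to
		   mm_populate(new_addr) inside the kernel */
		perror("mremap");
		return 1;
	}
	return 0;
}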

I think we should simply remove the VM_LOCKED code in move_vma(); that is
why this patch doesn't change move_vma() itself.  But this needs more
cleanups.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Oleg Nesterov, 10 years ago
commit d456fb9e52
1 changed file with 3 additions and 1 deletion

mm/mremap.c

@@ -578,8 +578,10 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 		ret = move_vma(vma, addr, old_len, new_len, new_addr, &locked);
 	}
 out:
-	if (ret & ~PAGE_MASK)
+	if (ret & ~PAGE_MASK) {
 		vm_unacct_memory(charged);
+		locked = 0;
+	}
 	up_write(&current->mm->mmap_sem);
 	if (locked && new_len > old_len)
 		mm_populate(new_addr + old_len, new_len - old_len);
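
Pieced together from the hunk above, the error path of sys_mremap() now
reads as follows (comments added here, not present in the kernel source).
A successful mremap() returns a page-aligned address, so non-zero low bits
in "ret" mean it holds a negative errno; that is what "ret & ~PAGE_MASK"
tests:

out:
	if (ret & ~PAGE_MASK) {			/* low bits set: ret is -errno */
		vm_unacct_memory(charged);	/* undo the memory accounting */
		locked = 0;			/* failed move: skip mm_populate() */
	}
	up_write(&current->mm->mmap_sem);
	if (locked && new_len > old_len)	/* only reached on success now */
		mm_populate(new_addr + old_len, new_len - old_len);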