
mm/hmm: move hmm_pfns_clear() closer to where it is used

Move hmm_pfns_clear() closer to where it is used to make it clear it is
not used by page table walkers.

Link: http://lkml.kernel.org/r/20180323005527.758-13-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Evgeny Baskakov <ebaskakov@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jérôme Glisse
commit 33cd47dcbb
1 changed file with 8 additions and 8 deletions
  1. mm/hmm.c (+8 −8)

+ 8 - 8
mm/hmm.c

@@ -340,14 +340,6 @@ static int hmm_pfns_bad(unsigned long addr,
 	return 0;
 }
 
-static void hmm_pfns_clear(uint64_t *pfns,
-			   unsigned long addr,
-			   unsigned long end)
-{
-	for (; addr < end; addr += PAGE_SIZE, pfns++)
-		*pfns = 0;
-}
-
 /*
  * hmm_vma_walk_hole() - handle a range lacking valid pmd or pte(s)
  * @start: range virtual start address (inclusive)
@@ -506,6 +498,14 @@ fault:
 	return 0;
 }
 
+static void hmm_pfns_clear(uint64_t *pfns,
+			   unsigned long addr,
+			   unsigned long end)
+{
+	for (; addr < end; addr += PAGE_SIZE, pfns++)
+		*pfns = 0;
+}
+
 static void hmm_pfns_special(struct hmm_range *range)
 {
 	unsigned long addr = range->start, i = 0;