
selftests: vm: Try harder to allocate huge pages

If we need to increase the number of huge pages, drop caches first
to reduce fragmentation and then check that we actually allocated
as many as we wanted.  Retry once if that doesn't work.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Ben Hutchings authored 9 years ago
commit ee00479d67
1 file changed, 14 insertions(+), 1 deletion(-)

tools/testing/selftests/vm/run_vmtests (+14, -1)

@@ -20,13 +20,26 @@ done < /proc/meminfo
 if [ -n "$freepgs" ] && [ -n "$pgsize" ]; then
 	nr_hugepgs=`cat /proc/sys/vm/nr_hugepages`
 	needpgs=`expr $needmem / $pgsize`
-	if [ $freepgs -lt $needpgs ]; then
+	tries=2
+	while [ $tries -gt 0 ] && [ $freepgs -lt $needpgs ]; do
 		lackpgs=$(( $needpgs - $freepgs ))
+		echo 3 > /proc/sys/vm/drop_caches
 		echo $(( $lackpgs + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
 		if [ $? -ne 0 ]; then
 			echo "Please run this test as root"
 			exit 1
 		fi
+		while read name size unit; do
+			if [ "$name" = "HugePages_Free:" ]; then
+				freepgs=$size
+			fi
+		done < /proc/meminfo
+		tries=$((tries - 1))
+	done
+	if [ $freepgs -lt $needpgs ]; then
+		printf "Not enough huge pages available (%d < %d)\n" \
+		       $freepgs $needpgs
+		exit 1
 	fi
 else
 	echo "no hugetlbfs support in kernel?"