
arm64: head.S: use memset to clear BSS

Currently we use an open-coded memzero to clear the BSS. As it is a
trivial implementation, it is sub-optimal.

Our optimised memset doesn't use the stack, is position-independent, and
for the memzero case can use DC ZVA to clear large blocks
efficiently. In __mmap_switched the MMU is on and there are no live
caller-saved registers, so we can safely call an uninstrumented memset.

This patch changes __mmap_switched to use memset when clearing the BSS.
We use the __pi_memset alias so as to avoid any instrumentation in all
kernel configurations.
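
For illustration, a minimal C sketch of what the removed loop and the new call each amount to. The `bss` buffer and both helper functions here are hypothetical stand-ins for this example, not the kernel's linker-provided `__bss_start`/`__bss_stop` region or its assembly:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in buffer for the BSS region. */
static uint64_t bss[32];

/* Equivalent of the removed loop: str xzr, [x6], #8 zeroes
 * one 64-bit word per iteration until start reaches stop. */
static void clear_bss_open_coded(uint64_t *start, uint64_t *stop)
{
	while (start < stop)
		*start++ = 0;
}

/* Equivalent of the new sequence: x0 = base, x1 = 0,
 * x2 = byte count (stop - start), then bl __pi_memset. */
static void clear_bss_memset(uint64_t *start, uint64_t *stop)
{
	memset(start, 0, (char *)stop - (char *)start);
}
```

Both produce the same result; the difference is that the optimised memset can zero a cache line at a time (via DC ZVA) rather than eight bytes per loop iteration.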

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland 9 years ago
parent
commit
2a803c4db6
1 changed file with 7 additions and 8 deletions

+ 7 - 8
arch/arm64/kernel/head.S

@@ -415,14 +415,13 @@ ENDPROC(__create_page_tables)
  */
 	.set	initial_sp, init_thread_union + THREAD_START_SP
 __mmap_switched:
-	adr_l	x6, __bss_start
-	adr_l	x7, __bss_stop
-
-1:	cmp	x6, x7
-	b.hs	2f
-	str	xzr, [x6], #8			// Clear BSS
-	b	1b
-2:
+	// Clear BSS
+	adr_l	x0, __bss_start
+	mov	x1, xzr
+	adr_l	x2, __bss_stop
+	sub	x2, x2, x0
+	bl	__pi_memset
+
 	adr_l	sp, initial_sp, x4
 	mov	x4, sp
 	and	x4, x4, #~(THREAD_SIZE - 1)