
arm64: smp: prepare for smp_processor_id() rework

Subsequent patches will make smp_processor_id() use a percpu variable.
This will make smp_processor_id() dependent on the percpu offset, and
thus we cannot use smp_processor_id() to figure out what to initialise
the offset to.

Prepare for this by initialising the percpu offset based on
current::cpu, which will work regardless of how smp_processor_id() is
implemented. Also, make this relationship obvious by placing this code
together at the start of secondary_start_kernel().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland, 8 years ago
parent commit 580efaa7cc
1 changed file with 4 additions and 3 deletions

arch/arm64/kernel/smp.c (+4, -3)

@@ -208,7 +208,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 asmlinkage void secondary_start_kernel(void)
 {
 	struct mm_struct *mm = &init_mm;
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu;
+
+	cpu = task_cpu(current);
+	set_my_cpu_offset(per_cpu_offset(cpu));
 
 	/*
 	 * All kernel threads share the same mm context; grab a
@@ -217,8 +220,6 @@ asmlinkage void secondary_start_kernel(void)
 	atomic_inc(&mm->mm_count);
 	current->active_mm = mm;
 
-	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
-
 	/*
 	 * TTBR0 is only used for the identity mapping at this stage. Make it
 	 * point to zero page to avoid speculatively fetching new entries.