@@ -1310,7 +1310,7 @@ bool is_kernel_percpu_address(unsigned long addr)
  * and, from the second one, the backing allocator (currently either vm or
  * km) provides translation.
  *
- * The addr can be tranlated simply without checking if it falls into the
+ * The addr can be translated simply without checking if it falls into the
  * first chunk. But the current code reflects better how percpu allocator
  * actually works, and the verification can discover both bugs in percpu
  * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current
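
For context, a minimal usage sketch of the interface this comment documents (not code from the patch): the percpu variable pcpu_demo and helper pcpu_demo_phys below are hypothetical, while per_cpu_ptr() and per_cpu_ptr_to_phys() are the real percpu accessors.

#include <linux/percpu.h>
#include <linux/types.h>

struct pcpu_demo {
	u64 counter;
};

static DEFINE_PER_CPU(struct pcpu_demo, pcpu_demo);

/* Translate one CPU's instance of a percpu variable to a physical address. */
static phys_addr_t pcpu_demo_phys(unsigned int cpu)
{
	void *addr = per_cpu_ptr(&pcpu_demo, cpu);

	/* Handles both first-chunk addresses and later vm/km-backed chunks. */
	return per_cpu_ptr_to_phys(addr);
}
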
@@ -1762,7 +1762,7 @@ early_param("percpu_alloc", percpu_alloc_setup);
  * and other parameters considering needed percpu size, allocation
  * atom size and distances between CPUs.
  *
- * Groups are always mutliples of atom size and CPUs which are of
+ * Groups are always multiples of atom size and CPUs which are of
  * LOCAL_DISTANCE both ways are grouped together and share space for
  * units in the same group. The returned configuration is guaranteed
  * to have CPUs on different nodes on different groups and >=75% usage
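
As an illustration of the grouping rule described in this comment (again, not code from the patch), the "LOCAL_DISTANCE both ways" test can be expressed as a predicate over the optional cpu_distance_fn callback; the helper name pcpu_cpus_local is hypothetical.

#include <linux/percpu.h>
#include <linux/topology.h>
#include <linux/types.h>

/*
 * Two CPUs may share a percpu group only when the distance callback
 * reports them as local in both directions; with no callback, every
 * CPU is treated as local and ends up in a single group.
 */
static bool pcpu_cpus_local(pcpu_fc_cpu_distance_fn_t cpu_distance_fn,
			    unsigned int a, unsigned int b)
{
	if (!cpu_distance_fn)
		return true;

	return cpu_distance_fn(a, b) <= LOCAL_DISTANCE &&
	       cpu_distance_fn(b, a) <= LOCAL_DISTANCE;
}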