
hexagon: Fix misspellings in comments.

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Adam Buchbinder 9 years ago
commit 238034e3fe

+ 1 - 1
arch/hexagon/include/asm/hexagon_vm.h

@@ -237,7 +237,7 @@ static inline long __vmintop_clear(long i)
 
 /*
  * The initial program gets to find a system environment descriptor
- * on its stack when it begins exection. The first word is a version
+ * on its stack when it begins execution. The first word is a version
  * code to indicate what is there.  Zero means nothing more.
  */
 

+ 1 - 1
arch/hexagon/include/asm/vm_mmu.h

@@ -78,7 +78,7 @@
 #define	__HEXAGON_C_WB_L2	0x7	/* Write-back, with L2 */
 
 /*
- * This can be overriden, but we're defaulting to the most aggressive
+ * This can be overridden, but we're defaulting to the most aggressive
  * cache policy, the better to find bugs sooner.
  */
 

+ 2 - 2
arch/hexagon/kernel/kgdb.c

@@ -236,9 +236,9 @@ static struct notifier_block kgdb_notifier = {
 };
 
 /**
- * kgdb_arch_init - Perform any architecture specific initalization.
+ * kgdb_arch_init - Perform any architecture specific initialization.
  *
- * This function will handle the initalization of any architecture
+ * This function will handle the initialization of any architecture
  * specific callbacks.
  */
 int kgdb_arch_init(void)

+ 1 - 1
arch/hexagon/kernel/vm_ops.S

@@ -26,7 +26,7 @@
  * could be, and perhaps some day will be, handled as in-line
  * macros, but for tracing/debugging it's handy to have
  * a single point of invocation for each of them.
- * Conveniently, they take paramters and return values
+ * Conveniently, they take parameters and return values
  * consistent with the ABI calling convention.
  */
 

+ 1 - 1
arch/hexagon/lib/memcpy.S

@@ -39,7 +39,7 @@
  *   DJH 10/14/09 Version 1.3 added special loop for aligned case, was
  *                            overreading bloated codesize back up to 892
  *   DJH  4/20/10 Version 1.4 fixed Ldword_loop_epilog loop to prevent loads
- *                            occuring if only 1 left outstanding, fixes bug
+ *                            occurring if only 1 left outstanding, fixes bug
  *                            # 3888, corrected for all alignments. Peeled off
  *                            1 32byte chunk from kernel loop and extended 8byte
  *                            loop at end to solve all combinations and prevent