
Merge tag 'csky-for-linus-4.20' of https://github.com/c-sky/csky-linux

Pull C-SKY architecture port from Guo Ren:
 "This contains the Linux port for C-SKY(csky) based on linux-4.19
  Release, which has been through 10 rounds of review on mailing list.

  More information:

    http://en.c-sky.com

  The development repo:

    https://github.com/c-sky/csky-linux

  ABI Documentation:

    https://github.com/c-sky/csky-doc

  Here is the pre-built cross-compiler from our CI, for quick testing:

    https://gitlab.com/c-sky/buildroot/-/jobs/101608095/artifacts/file/output/images/csky_toolchain_qemu_csky_ck807f_4.18_glibc_defconfig_482b221e52908be1c9b2ccb444255e1562bb7025.tar.xz

  We use buildroot as our CI-test environment. "LTP, Lmbench ..." will
  be tested for every commit. See here for more details:

    https://gitlab.com/c-sky/buildroot/pipelines

  We'll continuously improve the csky subsystem in the future"

Arnd acks, and adds the following notes:
 "I did a thorough review of the ABI, which as usual mainly consists of
  spotting any files that don't use the asm-generic ABI itself, and
  having it changed to it matches exactly what we do on other new
  architectures.

  I also looked at every other patch and commented on maybe half of them
  where I saw something that did not quite seem right. Others have
  reviewed specific patches in greater depth. I'm sure that one could
  find more of the minor details, but as long as they are not
  ABI-relevant, they can be fixed later.

  The only patch that is part of the ABI and that nobody reviewed is the
  signal handling. This is one of the areas I never worked on in much
  detail. I did not see anything wrong with it, but I also don't know
  what the problems with the other architectures are here, and we seem
  to be hitting issues occasionally, and we never managed to generalize
  this enough for new architectures to have a trivial implementation.

  I was originally hoping that we could have the 64-bit time_t
  interfaces ready in time to completely drop the 32-bit ones, but that
  did not happen. We might still remove them in the next merge window
  depending on whether the libc upstream people prefer to keep them or
  not.

  One more general comment: I think this may well be the last new CPU
  architecture we ever add to the kernel. Both nds32 and c-sky are made
  by companies that also work on risc-v, and generally speaking risc-v
  seems to be killing off any of the minor licensable instruction set
  projects, just like ARM has mostly killed off the custom
  vendor-specific instruction sets already.

  If we add another architecture in the future, it may instead be
  something like the LLVM bitcode or WebAssembly, who knows?"

To which Geert Uytterhoeven pipes in about another architecture still in
the pipeline: Kalray MPPA.

* tag 'csky-for-linus-4.20' of https://github.com/c-sky/csky-linux: (24 commits)
  dt-bindings: interrupt-controller: C-SKY APB intc
  irqchip: add C-SKY APB bus interrupt controller
  dt-bindings: interrupt-controller: C-SKY SMP intc
  irqchip: add C-SKY SMP interrupt controller
  MAINTAINERS: Add csky
  dt-bindings: Add vendor prefix for csky
  dt-bindings: csky CPU Bindings
  csky: Misc headers
  csky: SMP support
  csky: Debug and Ptrace GDB
  csky: User access
  csky: Library functions
  csky: ELF and module probe
  csky: Atomic operations
  csky: IRQ handling
  csky: VDSO and rt_sigreturn
  csky: Process management and Signal
  csky: MMU and page table management
  csky: Cache and TLB routines
  csky: System Call
  ...
Linus Torvalds, 6 years ago
Parent
Commit
ac43507589
100 changed files with 6708 additions and 0 deletions
  1. 73 0
      Documentation/devicetree/bindings/csky/cpus.txt
  2. 62 0
      Documentation/devicetree/bindings/interrupt-controller/csky,apb-intc.txt
  3. 40 0
      Documentation/devicetree/bindings/interrupt-controller/csky,mpintc.txt
  4. 1 0
      Documentation/devicetree/bindings/vendor-prefixes.txt
  5. 9 0
      MAINTAINERS
  6. 205 0
      arch/csky/Kconfig
  7. 9 0
      arch/csky/Kconfig.debug
  8. 93 0
      arch/csky/Makefile
  9. 8 0
      arch/csky/abiv1/Makefile
  10. 326 0
      arch/csky/abiv1/alignment.c
  11. 12 0
      arch/csky/abiv1/bswapdi.c
  12. 12 0
      arch/csky/abiv1/bswapsi.c
  13. 52 0
      arch/csky/abiv1/cacheflush.c
  14. 49 0
      arch/csky/abiv1/inc/abi/cacheflush.h
  15. 75 0
      arch/csky/abiv1/inc/abi/ckmmu.h
  16. 26 0
      arch/csky/abiv1/inc/abi/elf.h
  17. 160 0
      arch/csky/abiv1/inc/abi/entry.h
  18. 27 0
      arch/csky/abiv1/inc/abi/page.h
  19. 37 0
      arch/csky/abiv1/inc/abi/pgtable-bits.h
  20. 27 0
      arch/csky/abiv1/inc/abi/reg_ops.h
  21. 26 0
      arch/csky/abiv1/inc/abi/regdef.h
  22. 13 0
      arch/csky/abiv1/inc/abi/string.h
  23. 17 0
      arch/csky/abiv1/inc/abi/vdso.h
  24. 347 0
      arch/csky/abiv1/memcpy.S
  25. 37 0
      arch/csky/abiv1/memset.c
  26. 66 0
      arch/csky/abiv1/mmap.c
  27. 7 0
      arch/csky/abiv1/strksyms.c
  28. 10 0
      arch/csky/abiv2/Makefile
  29. 60 0
      arch/csky/abiv2/cacheflush.c
  30. 275 0
      arch/csky/abiv2/fpu.c
  31. 46 0
      arch/csky/abiv2/inc/abi/cacheflush.h
  32. 87 0
      arch/csky/abiv2/inc/abi/ckmmu.h
  33. 43 0
      arch/csky/abiv2/inc/abi/elf.h
  34. 156 0
      arch/csky/abiv2/inc/abi/entry.h
  35. 66 0
      arch/csky/abiv2/inc/abi/fpu.h
  36. 14 0
      arch/csky/abiv2/inc/abi/page.h
  37. 37 0
      arch/csky/abiv2/inc/abi/pgtable-bits.h
  38. 17 0
      arch/csky/abiv2/inc/abi/reg_ops.h
  39. 26 0
      arch/csky/abiv2/inc/abi/regdef.h
  40. 27 0
      arch/csky/abiv2/inc/abi/string.h
  41. 23 0
      arch/csky/abiv2/inc/abi/vdso.h
  42. 152 0
      arch/csky/abiv2/memcmp.S
  43. 110 0
      arch/csky/abiv2/memcpy.S
  44. 108 0
      arch/csky/abiv2/memmove.S
  45. 83 0
      arch/csky/abiv2/memset.S
  46. 168 0
      arch/csky/abiv2/strcmp.S
  47. 123 0
      arch/csky/abiv2/strcpy.S
  48. 12 0
      arch/csky/abiv2/strksyms.c
  49. 97 0
      arch/csky/abiv2/strlen.S
  50. 30 0
      arch/csky/abiv2/sysdep.h
  51. 24 0
      arch/csky/boot/Makefile
  52. 13 0
      arch/csky/boot/dts/Makefile
  53. 1 0
      arch/csky/boot/dts/include/dt-bindings
  54. 61 0
      arch/csky/configs/defconfig
  55. 49 0
      arch/csky/include/asm/Kbuild
  56. 10 0
      arch/csky/include/asm/addrspace.h
  57. 212 0
      arch/csky/include/asm/atomic.h
  58. 49 0
      arch/csky/include/asm/barrier.h
  59. 82 0
      arch/csky/include/asm/bitops.h
  60. 26 0
      arch/csky/include/asm/bug.h
  61. 30 0
      arch/csky/include/asm/cache.h
  62. 9 0
      arch/csky/include/asm/cacheflush.h
  63. 50 0
      arch/csky/include/asm/checksum.h
  64. 73 0
      arch/csky/include/asm/cmpxchg.h
  65. 85 0
      arch/csky/include/asm/elf.h
  66. 27 0
      arch/csky/include/asm/fixmap.h
  67. 51 0
      arch/csky/include/asm/highmem.h
  68. 24 0
      arch/csky/include/asm/io.h
  69. 49 0
      arch/csky/include/asm/irqflags.h
  70. 12 0
      arch/csky/include/asm/mmu.h
  71. 150 0
      arch/csky/include/asm/mmu_context.h
  72. 104 0
      arch/csky/include/asm/page.h
  73. 115 0
      arch/csky/include/asm/pgalloc.h
  74. 306 0
      arch/csky/include/asm/pgtable.h
  75. 121 0
      arch/csky/include/asm/processor.h
  76. 26 0
      arch/csky/include/asm/reg_ops.h
  77. 19 0
      arch/csky/include/asm/segment.h
  78. 11 0
      arch/csky/include/asm/shmparam.h
  79. 26 0
      arch/csky/include/asm/smp.h
  80. 256 0
      arch/csky/include/asm/spinlock.h
  81. 37 0
      arch/csky/include/asm/spinlock_types.h
  82. 13 0
      arch/csky/include/asm/string.h
  83. 36 0
      arch/csky/include/asm/switch_to.h
  84. 71 0
      arch/csky/include/asm/syscall.h
  85. 15 0
      arch/csky/include/asm/syscalls.h
  86. 75 0
      arch/csky/include/asm/thread_info.h
  87. 25 0
      arch/csky/include/asm/tlb.h
  88. 25 0
      arch/csky/include/asm/tlbflush.h
  89. 44 0
      arch/csky/include/asm/traps.h
  90. 416 0
      arch/csky/include/asm/uaccess.h
  91. 4 0
      arch/csky/include/asm/unistd.h
  92. 12 0
      arch/csky/include/asm/vdso.h
  93. 32 0
      arch/csky/include/uapi/asm/Kbuild
  94. 9 0
      arch/csky/include/uapi/asm/byteorder.h
  95. 13 0
      arch/csky/include/uapi/asm/cachectl.h
  96. 104 0
      arch/csky/include/uapi/asm/ptrace.h
  97. 14 0
      arch/csky/include/uapi/asm/sigcontext.h
  98. 10 0
      arch/csky/include/uapi/asm/unistd.h
  99. 8 0
      arch/csky/kernel/Makefile
  100. 88 0
      arch/csky/kernel/asm-offsets.c

+ 73 - 0
Documentation/devicetree/bindings/csky/cpus.txt

@@ -0,0 +1,73 @@
+==================
+C-SKY CPU Bindings
+==================
+
+The device tree allows describing the layout of CPUs in a system through
+the "cpus" node, which in turn contains a number of subnodes (i.e. "cpu")
+defining properties for every cpu.
+
+Only SMP systems need to care about the cpus node; a single-processor
+system need not define a cpus node at all.
+
+=====================================
+cpus and cpu node bindings definition
+=====================================
+
+- cpus node
+
+	Description: Container of cpu nodes
+
+	The node name must be "cpus".
+
+	A cpus node must define the following properties:
+
+	- #address-cells
+		Usage: required
+		Value type: <u32>
+		Definition: must be set to 1
+	- #size-cells
+		Usage: required
+		Value type: <u32>
+		Definition: must be set to 0
+
+- cpu node
+
+	Description: Describes one of the SMP cores
+
+	PROPERTIES
+
+	- device_type
+		Usage: required
+		Value type: <string>
+		Definition: must be "cpu"
+	- reg
+		Usage: required
+		Value type: <u32>
+		Definition: CPU index
+	- compatible:
+		Usage: required
+		Value type: <string>
+		Definition: must contain "csky", eg:
+			"csky,610"
+			"csky,807"
+			"csky,810"
+			"csky,860"
+
+Example:
+--------
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+		cpu@0 {
+			device_type = "cpu";
+			reg = <0>;
+			status = "ok";
+		};
+
+		cpu@1 {
+			device_type = "cpu";
+			reg = <1>;
+			status = "ok";
+		};
+	};

+ 62 - 0
Documentation/devicetree/bindings/interrupt-controller/csky,apb-intc.txt

@@ -0,0 +1,62 @@
+==============================
+C-SKY APB Interrupt Controller
+==============================
+
+The C-SKY APB Interrupt Controller is a simple SoC interrupt controller
+on the APB bus, and we only use it as the root irq controller.
+
+ - csky,apb-intc is used in many csky FPGAs and SoCs; it supports 64 irqs.
+ - csky,dual-apb-intc consists of two apb-intc blocks and supports 128 irqs.
+ - csky,gx6605s-intc is the gx6605s SoC's internal interrupt controller, with 64 irqs.
+
+=============================
+intc node bindings definition
+=============================
+
+	Description: Describes APB interrupt controller
+
+	PROPERTIES
+
+	- compatible
+		Usage: required
+		Value type: <string>
+		Definition: must be "csky,apb-intc"
+				    "csky,dual-apb-intc"
+				    "csky,gx6605s-intc"
+	- #interrupt-cells
+		Usage: required
+		Value type: <u32>
+		Definition: must be <1>
+	- reg
+		Usage: required
+		Value type: <u32 u32>
+		Definition: <phys-addr size> in the SoC, from the CPU's point of view
+	- interrupt-controller:
+		Usage: required
+	- csky,support-pulse-signal:
+		Usage: optional
+		Description: flag indicating that pulse signals are supported
+
+Examples:
+---------
+
+	intc: interrupt-controller@500000 {
+		compatible = "csky,apb-intc";
+		#interrupt-cells = <1>;
+		reg = <0x00500000 0x400>;
+		interrupt-controller;
+	};
+
+	intc: interrupt-controller@500000 {
+		compatible = "csky,dual-apb-intc";
+		#interrupt-cells = <1>;
+		reg = <0x00500000 0x400>;
+		interrupt-controller;
+	};
+
+	intc: interrupt-controller@500000 {
+		compatible = "csky,gx6605s-intc";
+		#interrupt-cells = <1>;
+		reg = <0x00500000 0x400>;
+		interrupt-controller;
+	};

+ 40 - 0
Documentation/devicetree/bindings/interrupt-controller/csky,mpintc.txt

@@ -0,0 +1,40 @@
+===========================================
+C-SKY Multi-processors Interrupt Controller
+===========================================
+
+The C-SKY Multi-processor Interrupt Controller is designed for ck807/ck810/ck860
+SMP SoCs, and it can also be used in non-SMP systems.
+
+Interrupt number definition:
+
+  0-15  : software irq, and we use 15 as our IPI_IRQ.
+ 16-31  : private irq, and we use 16 as the co-processor timer.
+ 32-1024: common irq for SoC IP blocks.
+
+=============================
+intc node bindings definition
+=============================
+
+	Description: Describes SMP interrupt controller
+
+	PROPERTIES
+
+	- compatible
+		Usage: required
+		Value type: <string>
+		Definition: must be "csky,mpintc"
+	- #interrupt-cells
+		Usage: required
+		Value type: <u32>
+		Definition: must be <1>
+	- interrupt-controller:
+		Usage: required
+
+Examples:
+---------
+
+	intc: interrupt-controller {
+		compatible = "csky,mpintc";
+		#interrupt-cells = <1>;
+		interrupt-controller;
+	};

+ 1 - 0
Documentation/devicetree/bindings/vendor-prefixes.txt

@@ -84,6 +84,7 @@ cosmic	Cosmic Circuits
 crane	Crane Connectivity Solutions
 creative	Creative Technology Ltd
 crystalfontz	Crystalfontz America, Inc.
+csky	Hangzhou C-SKY Microsystems Co., Ltd
 cubietech	Cubietech, Ltd.
 cypress	Cypress Semiconductor Corporation
 cznic	CZ.NIC, z.s.p.o.

+ 9 - 0
MAINTAINERS

@@ -3229,6 +3229,15 @@ T:	git git://git.alsa-project.org/alsa-kernel.git
 S:	Maintained
 F:	sound/pci/oxygen/
 
+C-SKY ARCHITECTURE
+M:	Guo Ren <ren_guo@c-sky.com>
+T:	git https://github.com/c-sky/csky-linux.git
+S:	Supported
+F:	arch/csky/
+F:	Documentation/devicetree/bindings/csky/
+K:	csky
+N:	csky
+
 C6X ARCHITECTURE
 M:	Mark Salter <msalter@redhat.com>
 M:	Aurelien Jacquiot <jacquiot.aurelien@gmail.com>

+ 205 - 0
arch/csky/Kconfig

@@ -0,0 +1,205 @@
+config CSKY
+	def_bool y
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+	select ARCH_USE_BUILTIN_BSWAP
+	select ARCH_USE_QUEUED_RWLOCKS if NR_CPUS>2
+	select COMMON_CLK
+	select CLKSRC_MMIO
+	select CLKSRC_OF
+	select DMA_DIRECT_OPS
+	select DMA_NONCOHERENT_OPS
+	select IRQ_DOMAIN
+	select HANDLE_DOMAIN_IRQ
+	select DW_APB_TIMER_OF
+	select GENERIC_LIB_ASHLDI3
+	select GENERIC_LIB_ASHRDI3
+	select GENERIC_LIB_LSHRDI3
+	select GENERIC_LIB_MULDI3
+	select GENERIC_LIB_CMPDI2
+	select GENERIC_LIB_UCMPDI2
+	select GENERIC_ALLOCATOR
+	select GENERIC_ATOMIC64
+	select GENERIC_CLOCKEVENTS
+	select GENERIC_CPU_DEVICES
+	select GENERIC_IRQ_CHIP
+	select GENERIC_IRQ_PROBE
+	select GENERIC_IRQ_SHOW
+	select GENERIC_IRQ_MULTI_HANDLER
+	select GENERIC_SCHED_CLOCK
+	select GENERIC_SMP_IDLE_THREAD
+	select HAVE_ARCH_TRACEHOOK
+	select HAVE_GENERIC_DMA_COHERENT
+	select HAVE_KERNEL_GZIP
+	select HAVE_KERNEL_LZO
+	select HAVE_KERNEL_LZMA
+	select HAVE_C_RECORDMCOUNT
+	select HAVE_DMA_API_DEBUG
+	select HAVE_DMA_CONTIGUOUS
+	select HAVE_MEMBLOCK
+	select MAY_HAVE_SPARSE_IRQ
+	select MODULES_USE_ELF_RELA if MODULES
+	select NO_BOOTMEM
+	select OF
+	select OF_EARLY_FLATTREE
+	select OF_RESERVED_MEM
+	select PERF_USE_VMALLOC
+	select RTC_LIB
+	select TIMER_OF
+	select USB_ARCH_HAS_EHCI
+	select USB_ARCH_HAS_OHCI
+
+config CPU_HAS_CACHEV2
+	bool
+
+config CPU_HAS_FPUV2
+	bool
+
+config CPU_HAS_HILO
+	bool
+
+config CPU_HAS_TLBI
+	bool
+
+config CPU_HAS_LDSTEX
+	bool
+	help
+	  For SMP, the CPU needs "ldex&stex" instructions for atomic operations.
+
+config CPU_NEED_TLBSYNC
+	bool
+
+config CPU_NEED_SOFTALIGN
+	bool
+
+config CPU_NO_USER_BKPT
+	bool
+	help
+	  For abiv2 we cannot use "trap 1" as the user-space breakpoint in gdbserver,
+	  because abiv2 is a 16/32-bit instruction set and "trap 1" is 32-bit.
+	  So we need a 16-bit instruction as the user-space breakpoint, one that
+	  causes an illegal instruction exception.
+	  In the kernel we parse *regs->pc to determine whether to send SIGTRAP or not.
+
+config GENERIC_CALIBRATE_DELAY
+	def_bool y
+
+config GENERIC_CSUM
+	def_bool y
+
+config GENERIC_HWEIGHT
+	def_bool y
+
+config MMU
+	def_bool y
+
+config RWSEM_GENERIC_SPINLOCK
+	def_bool y
+
+config TIME_LOW_RES
+	def_bool y
+
+config TRACE_IRQFLAGS_SUPPORT
+	def_bool y
+
+config CPU_TLB_SIZE
+	int
+	default "128"	if (CPU_CK610 || CPU_CK807 || CPU_CK810)
+	default "1024"	if (CPU_CK860)
+
+config CPU_ASID_BITS
+	int
+	default "8"	if (CPU_CK610 || CPU_CK807 || CPU_CK810)
+	default "12"	if (CPU_CK860)
+
+config L1_CACHE_SHIFT
+	int
+	default "4"	if (CPU_CK610)
+	default "5"	if (CPU_CK807 || CPU_CK810)
+	default "6"	if (CPU_CK860)
+
+menu "Processor type and features"
+
+choice
+	prompt "CPU MODEL"
+	default CPU_CK807
+
+config CPU_CK610
+	bool "CSKY CPU ck610"
+	select CPU_NEED_TLBSYNC
+	select CPU_NEED_SOFTALIGN
+	select CPU_NO_USER_BKPT
+
+config CPU_CK810
+	bool "CSKY CPU ck810"
+	select CPU_HAS_HILO
+	select CPU_NEED_TLBSYNC
+
+config CPU_CK807
+	bool "CSKY CPU ck807"
+	select CPU_HAS_HILO
+
+config CPU_CK860
+	bool "CSKY CPU ck860"
+	select CPU_HAS_TLBI
+	select CPU_HAS_CACHEV2
+	select CPU_HAS_LDSTEX
+	select CPU_HAS_FPUV2
+endchoice
+
+choice
+	prompt "Power Manager Instruction (wait/doze/stop)"
+	default CPU_PM_NONE
+
+config CPU_PM_NONE
+	bool "None"
+
+config CPU_PM_WAIT
+	bool "wait"
+
+config CPU_PM_DOZE
+	bool "doze"
+
+config CPU_PM_STOP
+	bool "stop"
+endchoice
+
+config CPU_HAS_VDSP
+	bool "CPU has VDSP coprocessor"
+	depends on CPU_HAS_FPU && CPU_HAS_FPUV2
+
+config CPU_HAS_FPU
+	bool "CPU has FPU coprocessor"
+	depends on CPU_CK807 || CPU_CK810 || CPU_CK860
+
+config CPU_HAS_TEE
+	bool "CPU has Trusted Execution Environment"
+	depends on CPU_CK810
+
+config SMP
+	bool "Symmetric Multi-Processing (SMP) support for C-SKY"
+	depends on CPU_CK860
+	default n
+
+config NR_CPUS
+	int "Maximum number of CPUs (2-32)"
+	range 2 32
+	depends on SMP
+	default "2"
+
+config HIGHMEM
+	bool "High Memory Support"
+	depends on !CPU_CK610
+	default y
+
+config FORCE_MAX_ZONEORDER
+	int "Maximum zone order"
+	default "11"
+
+config RAM_BASE
+	hex "DRAM start addr (the same with memory-section in dts)"
+	default 0x0
+
+endmenu
+
+source "kernel/Kconfig.hz"

+ 9 - 0
arch/csky/Kconfig.debug

@@ -0,0 +1,9 @@
+menu "C-SKY Debug Options"
+config CSKY_BUILTIN_DTB
+	string "Use kernel builtin dtb"
+	help
+	  The user can define a dtb to use instead of the one passed in from the
+	  bootloader.
+	  Sometimes, for debugging, we want to use a built-in dtb so that we need
+	  not modify the bootloader at all.
+endmenu

+ 93 - 0
arch/csky/Makefile

@@ -0,0 +1,93 @@
+OBJCOPYFLAGS		:=-O binary
+GZFLAGS			:=-9
+KBUILD_DEFCONFIG	:= defconfig
+
+ifdef CONFIG_CPU_HAS_FPU
+FPUEXT = f
+endif
+
+ifdef CONFIG_CPU_HAS_VDSP
+VDSPEXT = v
+endif
+
+ifdef CONFIG_CPU_HAS_TEE
+TEEEXT = t
+endif
+
+ifdef CONFIG_CPU_CK610
+CPUTYPE	= ck610
+CSKYABI	= abiv1
+endif
+
+ifdef CONFIG_CPU_CK810
+CPUTYPE = ck810
+CSKYABI	= abiv2
+endif
+
+ifdef CONFIG_CPU_CK807
+CPUTYPE = ck807
+CSKYABI	= abiv2
+endif
+
+ifdef CONFIG_CPU_CK860
+CPUTYPE = ck860
+CSKYABI	= abiv2
+endif
+
+ifneq ($(CSKYABI),)
+MCPU_STR = $(CPUTYPE)$(FPUEXT)$(VDSPEXT)$(TEEEXT)
+KBUILD_CFLAGS += -mcpu=$(MCPU_STR)
+KBUILD_CFLAGS += -DCSKYCPU_DEF_NAME=\"$(MCPU_STR)\"
+KBUILD_CFLAGS += -msoft-float -mdiv
+KBUILD_CFLAGS += -fno-tree-vectorize
+endif
+
+KBUILD_CFLAGS += -pipe
+ifeq ($(CSKYABI),abiv2)
+KBUILD_CFLAGS += -mno-stack-size
+endif
+
+abidirs := $(patsubst %,arch/csky/%/,$(CSKYABI))
+KBUILD_CFLAGS += $(patsubst %,-I$(srctree)/%inc,$(abidirs))
+
+KBUILD_CPPFLAGS += -mlittle-endian
+LDFLAGS += -EL
+
+KBUILD_AFLAGS += $(KBUILD_CFLAGS)
+
+head-y := arch/csky/kernel/head.o
+
+core-y += arch/csky/kernel/
+core-y += arch/csky/mm/
+core-y += arch/csky/$(CSKYABI)/
+
+libs-y += arch/csky/lib/ \
+	$(shell $(CC) $(KBUILD_CFLAGS) $(KCFLAGS) -print-libgcc-file-name)
+
+boot := arch/csky/boot
+ifneq '$(CONFIG_CSKY_BUILTIN_DTB)' '""'
+core-y += $(boot)/dts/
+endif
+
+all: zImage
+
+
+dtbs: scripts
+	$(Q)$(MAKE) $(build)=$(boot)/dts
+
+%.dtb %.dtb.S %.dtb.o: scripts
+	$(Q)$(MAKE) $(build)=$(boot)/dts $(boot)/dts/$@
+
+zImage Image uImage: vmlinux dtbs
+	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
+
+archclean:
+	$(Q)$(MAKE) $(clean)=$(boot)
+	$(Q)$(MAKE) $(clean)=$(boot)/dts
+	rm -rf arch/csky/include/generated
+
+define archhelp
+  echo  '* zImage       - Compressed kernel image (arch/$(ARCH)/boot/zImage)'
+  echo  '  Image        - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
+  echo  '  uImage       - U-Boot wrapped zImage'
+endef

+ 8 - 0
arch/csky/abiv1/Makefile

@@ -0,0 +1,8 @@
+obj-$(CONFIG_CPU_NEED_SOFTALIGN)	+= alignment.o
+obj-y					+= bswapdi.o
+obj-y					+= bswapsi.o
+obj-y					+= cacheflush.o
+obj-y					+= mmap.o
+obj-y					+= memcpy.o
+obj-y					+= memset.o
+obj-y					+= strksyms.o

+ 326 - 0
arch/csky/abiv1/alignment.c

@@ -0,0 +1,326 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/kernel.h>
+#include <linux/uaccess.h>
+#include <linux/ptrace.h>
+
+static int align_enable = 1;
+static int align_count;
+
+static inline uint32_t get_ptreg(struct pt_regs *regs, uint32_t rx)
+{
+	return rx == 15 ? regs->lr : *((uint32_t *)&(regs->a0) - 2 + rx);
+}
+
+static inline void put_ptreg(struct pt_regs *regs, uint32_t rx, uint32_t val)
+{
+	if (rx == 15)
+		regs->lr = val;
+	else
+		*((uint32_t *)&(regs->a0) - 2 + rx) = val;
+}
+
+/*
+ * Get byte-value from addr and set it to *valp.
+ *
+ * Success: return 0
+ * Failure: return 1
+ */
+static int ldb_asm(uint32_t addr, uint32_t *valp)
+{
+	uint32_t val;
+	int err;
+
+	if (!access_ok(VERIFY_READ, (void *)addr, 1))
+		return 1;
+
+	asm volatile (
+		"movi	%0, 0\n"
+		"1:\n"
+		"ldb	%1, (%2)\n"
+		"br	3f\n"
+		"2:\n"
+		"movi	%0, 1\n"
+		"br	3f\n"
+		".section __ex_table,\"a\"\n"
+		".align 2\n"
+		".long	1b, 2b\n"
+		".previous\n"
+		"3:\n"
+		: "=&r"(err), "=r"(val)
+		: "r" (addr)
+	);
+
+	*valp = val;
+
+	return err;
+}
+
+/*
+ * Put byte-value to addr.
+ *
+ * Success: return 0
+ * Failure: return 1
+ */
+static int stb_asm(uint32_t addr, uint32_t val)
+{
+	int err;
+
+	if (!access_ok(VERIFY_WRITE, (void *)addr, 1))
+		return 1;
+
+	asm volatile (
+		"movi	%0, 0\n"
+		"1:\n"
+		"stb	%1, (%2)\n"
+		"br	3f\n"
+		"2:\n"
+		"movi	%0, 1\n"
+		"br	3f\n"
+		".section __ex_table,\"a\"\n"
+		".align 2\n"
+		".long	1b, 2b\n"
+		".previous\n"
+		"3:\n"
+		: "=&r"(err)
+		: "r"(val), "r" (addr)
+	);
+
+	return err;
+}
+
+/*
+ * Get half-word from [rx + imm]
+ *
+ * Success: return 0
+ * Failure: return 1
+ */
+static int ldh_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
+{
+	uint32_t byte0, byte1;
+
+	if (ldb_asm(addr, &byte0))
+		return 1;
+	addr += 1;
+	if (ldb_asm(addr, &byte1))
+		return 1;
+
+	byte0 |= byte1 << 8;
+	put_ptreg(regs, rz, byte0);
+
+	return 0;
+}
+
+/*
+ * Store half-word to [rx + imm]
+ *
+ * Success: return 0
+ * Failure: return 1
+ */
+static int sth_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
+{
+	uint32_t byte0, byte1;
+
+	byte0 = byte1 = get_ptreg(regs, rz);
+
+	byte0 &= 0xff;
+
+	if (stb_asm(addr, byte0))
+		return 1;
+
+	addr += 1;
+	byte1 = (byte1 >> 8) & 0xff;
+	if (stb_asm(addr, byte1))
+		return 1;
+
+	return 0;
+}
+
+/*
+ * Get word from [rx + imm]
+ *
+ * Success: return 0
+ * Failure: return 1
+ */
+static int ldw_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
+{
+	uint32_t byte0, byte1, byte2, byte3;
+
+	if (ldb_asm(addr, &byte0))
+		return 1;
+
+	addr += 1;
+	if (ldb_asm(addr, &byte1))
+		return 1;
+
+	addr += 1;
+	if (ldb_asm(addr, &byte2))
+		return 1;
+
+	addr += 1;
+	if (ldb_asm(addr, &byte3))
+		return 1;
+
+	byte0 |= byte1 << 8;
+	byte0 |= byte2 << 16;
+	byte0 |= byte3 << 24;
+
+	put_ptreg(regs, rz, byte0);
+
+	return 0;
+}
+
+/*
+ * Store word to [rx + imm]
+ *
+ * Success: return 0
+ * Failure: return 1
+ */
+static int stw_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
+{
+	uint32_t byte0, byte1, byte2, byte3;
+
+	byte0 = byte1 = byte2 = byte3 = get_ptreg(regs, rz);
+
+	byte0 &= 0xff;
+
+	if (stb_asm(addr, byte0))
+		return 1;
+
+	addr += 1;
+	byte1 = (byte1 >> 8) & 0xff;
+	if (stb_asm(addr, byte1))
+		return 1;
+
+	addr += 1;
+	byte2 = (byte2 >> 16) & 0xff;
+	if (stb_asm(addr, byte2))
+		return 1;
+
+	addr += 1;
+	byte3 = (byte3 >> 24) & 0xff;
+	if (stb_asm(addr, byte3))
+		return 1;
+
+	align_count++;
+
+	return 0;
+}
+
+extern int fixup_exception(struct pt_regs *regs);
+
+#define OP_LDH 0xc000
+#define OP_STH 0xd000
+#define OP_LDW 0x8000
+#define OP_STW 0x9000
+
+void csky_alignment(struct pt_regs *regs)
+{
+	int ret;
+	uint16_t tmp;
+	uint32_t opcode = 0;
+	uint32_t rx     = 0;
+	uint32_t rz     = 0;
+	uint32_t imm    = 0;
+	uint32_t addr   = 0;
+
+	if (!user_mode(regs))
+		goto bad_area;
+
+	ret = get_user(tmp, (uint16_t *)instruction_pointer(regs));
+	if (ret) {
+		pr_err("%s get_user failed.\n", __func__);
+		goto bad_area;
+	}
+
+	opcode = (uint32_t)tmp;
+
+	rx  = opcode & 0xf;
+	imm = (opcode >> 4) & 0xf;
+	rz  = (opcode >> 8) & 0xf;
+	opcode &= 0xf000;
+
+	if (rx == 0 || rx == 1 || rz == 0 || rz == 1)
+		goto bad_area;
+
+	switch (opcode) {
+	case OP_LDH:
+		addr = get_ptreg(regs, rx) + (imm << 1);
+		ret = ldh_c(regs, rz, addr);
+		break;
+	case OP_LDW:
+		addr = get_ptreg(regs, rx) + (imm << 2);
+		ret = ldw_c(regs, rz, addr);
+		break;
+	case OP_STH:
+		addr = get_ptreg(regs, rx) + (imm << 1);
+		ret = sth_c(regs, rz, addr);
+		break;
+	case OP_STW:
+		addr = get_ptreg(regs, rx) + (imm << 2);
+		ret = stw_c(regs, rz, addr);
+		break;
+	}
+
+	if (ret)
+		goto bad_area;
+
+	regs->pc += 2;
+
+	return;
+
+bad_area:
+	if (!user_mode(regs)) {
+		if (fixup_exception(regs))
+			return;
+
+		bust_spinlocks(1);
+		pr_alert("%s opcode: %x, rz: %d, rx: %d, imm: %d, addr: %x.\n",
+				__func__, opcode, rz, rx, imm, addr);
+		show_regs(regs);
+		bust_spinlocks(0);
+		do_exit(SIGKILL);
+	}
+
+	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)addr, current);
+}
+
+static struct ctl_table alignment_tbl[4] = {
+	{
+		.procname = "enable",
+		.data = &align_enable,
+		.maxlen = sizeof(align_enable),
+		.mode = 0666,
+		.proc_handler = &proc_dointvec
+	},
+	{
+		.procname = "count",
+		.data = &align_count,
+		.maxlen = sizeof(align_count),
+		.mode = 0666,
+		.proc_handler = &proc_dointvec
+	},
+	{}
+};
+
+static struct ctl_table sysctl_table[2] = {
+	{
+	 .procname = "csky_alignment",
+	 .mode = 0555,
+	 .child = alignment_tbl},
+	{}
+};
+
+static struct ctl_path sysctl_path[2] = {
+	{.procname = "csky"},
+	{}
+};
+
+static int __init csky_alignment_init(void)
+{
+	register_sysctl_paths(sysctl_path, sysctl_table);
+	return 0;
+}
+
+arch_initcall(csky_alignment_init);

+ 12 - 0
arch/csky/abiv1/bswapdi.c

@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/export.h>
+#include <linux/compiler.h>
+#include <uapi/linux/swab.h>
+
+unsigned long long notrace __bswapdi2(unsigned long long u)
+{
+	return ___constant_swab64(u);
+}
+EXPORT_SYMBOL(__bswapdi2);

+ 12 - 0
arch/csky/abiv1/bswapsi.c

@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/export.h>
+#include <linux/compiler.h>
+#include <uapi/linux/swab.h>
+
+unsigned int notrace __bswapsi2(unsigned int u)
+{
+	return ___constant_swab32(u);
+}
+EXPORT_SYMBOL(__bswapsi2);
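
__bswapsi2()/__bswapdi2() are the libgcc-style fallbacks that the compiler
may call for __builtin_bswap32/64 when it cannot inline a byte swap (the
Kconfig above selects ARCH_USE_BUILTIN_BSWAP). A host-runnable sketch of
the expected semantics:

	#include <assert.h>

	int main(void)
	{
		/* Same semantics as __bswapsi2()/__bswapdi2() above. */
		assert(__builtin_bswap32(0x12345678u) == 0x78563412u);
		assert(__builtin_bswap64(0x1122334455667788ull) ==
		       0x8877665544332211ull);
		return 0;
	}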

+ 52 - 0
arch/csky/abiv1/cacheflush.c

@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/syscalls.h>
+#include <linux/spinlock.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/cacheflush.h>
+#include <asm/cachectl.h>
+
+void flush_dcache_page(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+	unsigned long addr;
+
+	if (mapping && !mapping_mapped(mapping)) {
+		set_bit(PG_arch_1, &(page)->flags);
+		return;
+	}
+
+	/*
+	 * We could delay the flush for the !page_mapping case too.  But that
+	 * case is for exec env/arg pages and those are 99% certain to get
+	 * faulted into the tlb (and thus flushed) anyway.
+	 */
+	addr = (unsigned long) page_address(page);
+	dcache_wb_range(addr, addr + PAGE_SIZE);
+}
+
+void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
+		      pte_t *pte)
+{
+	unsigned long addr;
+	struct page *page;
+	unsigned long pfn;
+
+	pfn = pte_pfn(*pte);
+	if (unlikely(!pfn_valid(pfn)))
+		return;
+
+	page = pfn_to_page(pfn);
+	addr = (unsigned long) page_address(page);
+
+	if (vma->vm_flags & VM_EXEC ||
+	    pages_do_alias(addr, address & PAGE_MASK))
+		cache_wbinv_all();
+
+	clear_bit(PG_arch_1, &(page)->flags);
+}

+ 49 - 0
arch/csky/abiv1/inc/abi/cacheflush.h

@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ABI_CSKY_CACHEFLUSH_H
+#define __ABI_CSKY_CACHEFLUSH_H
+
+#include <linux/compiler.h>
+#include <asm/string.h>
+#include <asm/cache.h>
+
+#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+extern void flush_dcache_page(struct page *);
+
+#define flush_cache_mm(mm)			cache_wbinv_all()
+#define flush_cache_page(vma, page, pfn)	cache_wbinv_all()
+#define flush_cache_dup_mm(mm)			cache_wbinv_all()
+
+/*
+ * If current_mm != vma->mm, cache_wbinv_range(start, end) would be broken.
+ * Use cache_wbinv_all() here; this needs to be improved in the future.
+ */
+#define flush_cache_range(vma, start, end)	cache_wbinv_all()
+#define flush_cache_vmap(start, end)		cache_wbinv_range(start, end)
+#define flush_cache_vunmap(start, end)		cache_wbinv_range(start, end)
+
+#define flush_icache_page(vma, page)		cache_wbinv_all()
+#define flush_icache_range(start, end)		cache_wbinv_range(start, end)
+
+#define flush_icache_user_range(vma, pg, adr, len) \
+				cache_wbinv_range(adr, adr + len)
+
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+do { \
+	cache_wbinv_all(); \
+	memcpy(dst, src, len); \
+	cache_wbinv_all(); \
+} while (0)
+
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { \
+	cache_wbinv_all(); \
+	memcpy(dst, src, len); \
+	cache_wbinv_all(); \
+} while (0)
+
+#define flush_dcache_mmap_lock(mapping)		do {} while (0)
+#define flush_dcache_mmap_unlock(mapping)	do {} while (0)
+
+#endif /* __ABI_CSKY_CACHEFLUSH_H */

+ 75 - 0
arch/csky/abiv1/inc/abi/ckmmu.h

@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_CKMMUV1_H
+#define __ASM_CSKY_CKMMUV1_H
+#include <abi/reg_ops.h>
+
+static inline int read_mmu_index(void)
+{
+	return cprcr("cpcr0");
+}
+
+static inline void write_mmu_index(int value)
+{
+	cpwcr("cpcr0", value);
+}
+
+static inline int read_mmu_entrylo0(void)
+{
+	return cprcr("cpcr2") << 6;
+}
+
+static inline int read_mmu_entrylo1(void)
+{
+	return cprcr("cpcr3") << 6;
+}
+
+static inline void write_mmu_pagemask(int value)
+{
+	cpwcr("cpcr6", value);
+}
+
+static inline int read_mmu_entryhi(void)
+{
+	return cprcr("cpcr4");
+}
+
+static inline void write_mmu_entryhi(int value)
+{
+	cpwcr("cpcr4", value);
+}
+
+/*
+ * TLB operations.
+ */
+static inline void tlb_probe(void)
+{
+	cpwcr("cpcr8", 0x80000000);
+}
+
+static inline void tlb_read(void)
+{
+	cpwcr("cpcr8", 0x40000000);
+}
+
+static inline void tlb_invalid_all(void)
+{
+	cpwcr("cpcr8", 0x04000000);
+}
+
+static inline void tlb_invalid_indexed(void)
+{
+	cpwcr("cpcr8", 0x02000000);
+}
+
+static inline void setup_pgd(unsigned long pgd, bool kernel)
+{
+	cpwcr("cpcr29", pgd);
+}
+
+static inline unsigned long get_pgd(void)
+{
+	return cprcr("cpcr29");
+}
+#endif /* __ASM_CSKY_CKMMUV1_H */

+ 26 - 0
arch/csky/abiv1/inc/abi/elf.h

@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ABI_CSKY_ELF_H
+#define __ABI_CSKY_ELF_H
+
+#define ELF_CORE_COPY_REGS(pr_reg, regs) do {	\
+	pr_reg[0] = regs->pc;			\
+	pr_reg[1] = regs->regs[9];		\
+	pr_reg[2] = regs->usp;			\
+	pr_reg[3] = regs->sr;			\
+	pr_reg[4] = regs->a0;			\
+	pr_reg[5] = regs->a1;			\
+	pr_reg[6] = regs->a2;			\
+	pr_reg[7] = regs->a3;			\
+	pr_reg[8] = regs->regs[0];		\
+	pr_reg[9] = regs->regs[1];		\
+	pr_reg[10] = regs->regs[2];		\
+	pr_reg[11] = regs->regs[3];		\
+	pr_reg[12] = regs->regs[4];		\
+	pr_reg[13] = regs->regs[5];		\
+	pr_reg[14] = regs->regs[6];		\
+	pr_reg[15] = regs->regs[7];		\
+	pr_reg[16] = regs->regs[8];		\
+	pr_reg[17] = regs->lr;			\
+} while (0);
+#endif /* __ABI_CSKY_ELF_H */

+ 160 - 0
arch/csky/abiv1/inc/abi/entry.h

@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_ENTRY_H
+#define __ASM_CSKY_ENTRY_H
+
+#include <asm/setup.h>
+#include <abi/regdef.h>
+
+#define LSAVE_PC	8
+#define LSAVE_PSR	12
+#define LSAVE_A0	24
+#define LSAVE_A1	28
+#define LSAVE_A2	32
+#define LSAVE_A3	36
+#define LSAVE_A4	40
+#define LSAVE_A5	44
+
+#define EPC_INCREASE	2
+#define EPC_KEEP	0
+
+.macro USPTOKSP
+	mtcr	sp, ss1
+	mfcr	sp, ss0
+.endm
+
+.macro KSPTOUSP
+	mtcr	sp, ss0
+	mfcr	sp, ss1
+.endm
+
+.macro INCTRAP	rx
+	addi	\rx, EPC_INCREASE
+.endm
+
+.macro	SAVE_ALL epc_inc
+	mtcr    r13, ss2
+	mfcr    r13, epsr
+	btsti   r13, 31
+	bt      1f
+	USPTOKSP
+1:
+	subi    sp, 32
+	subi    sp, 32
+	subi    sp, 16
+	stw     r13, (sp, 12)
+
+	stw     lr, (sp, 4)
+
+	mfcr	lr, epc
+	movi	r13, \epc_inc
+	add	lr, r13
+	stw     lr, (sp, 8)
+
+	mfcr	lr, ss1
+	stw     lr, (sp, 16)
+
+	stw     a0, (sp, 20)
+	stw     a0, (sp, 24)
+	stw     a1, (sp, 28)
+	stw     a2, (sp, 32)
+	stw     a3, (sp, 36)
+
+	addi	sp, 32
+	addi	sp, 8
+	mfcr    r13, ss2
+	stw	r6, (sp)
+	stw	r7, (sp, 4)
+	stw	r8, (sp, 8)
+	stw	r9, (sp, 12)
+	stw	r10, (sp, 16)
+	stw	r11, (sp, 20)
+	stw	r12, (sp, 24)
+	stw	r13, (sp, 28)
+	stw	r14, (sp, 32)
+	stw	r1, (sp, 36)
+	subi	sp, 32
+	subi	sp, 8
+.endm
+
+.macro	RESTORE_ALL
+	psrclr  ie
+	ldw	lr, (sp, 4)
+	ldw     a0, (sp, 8)
+	mtcr    a0, epc
+	ldw     a0, (sp, 12)
+	mtcr    a0, epsr
+	btsti   a0, 31
+	ldw     a0, (sp, 16)
+	mtcr	a0, ss1
+
+	ldw     a0, (sp, 24)
+	ldw     a1, (sp, 28)
+	ldw     a2, (sp, 32)
+	ldw     a3, (sp, 36)
+
+	addi	sp, 32
+	addi	sp, 8
+	ldw	r6, (sp)
+	ldw	r7, (sp, 4)
+	ldw	r8, (sp, 8)
+	ldw	r9, (sp, 12)
+	ldw	r10, (sp, 16)
+	ldw	r11, (sp, 20)
+	ldw	r12, (sp, 24)
+	ldw	r13, (sp, 28)
+	ldw	r14, (sp, 32)
+	ldw	r1, (sp, 36)
+	addi	sp, 32
+	addi	sp, 8
+
+	bt      1f
+	KSPTOUSP
+1:
+	rte
+.endm
+
+.macro SAVE_SWITCH_STACK
+	subi    sp, 32
+	stm     r8-r15, (sp)
+.endm
+
+.macro RESTORE_SWITCH_STACK
+	ldm     r8-r15, (sp)
+	addi    sp, 32
+.endm
+
+/* MMU register operations. */
+.macro RD_MIR	rx
+	cprcr   \rx, cpcr0
+.endm
+
+.macro RD_MEH	rx
+	cprcr   \rx, cpcr4
+.endm
+
+.macro RD_MCIR	rx
+	cprcr   \rx, cpcr8
+.endm
+
+.macro RD_PGDR  rx
+	cprcr   \rx, cpcr29
+.endm
+
+.macro WR_MEH	rx
+	cpwcr   \rx, cpcr4
+.endm
+
+.macro WR_MCIR	rx
+	cpwcr   \rx, cpcr8
+.endm
+
+.macro SETUP_MMU rx
+	lrw	\rx, PHYS_OFFSET | 0xe
+	cpwcr	\rx, cpcr30
+	lrw	\rx, (PHYS_OFFSET + 0x20000000) | 0xe
+	cpwcr	\rx, cpcr31
+.endm
+
+#endif /* __ASM_CSKY_ENTRY_H */

+ 27 - 0
arch/csky/abiv1/inc/abi/page.h

@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+extern unsigned long shm_align_mask;
+extern void flush_dcache_page(struct page *page);
+
+static inline unsigned long pages_do_alias(unsigned long addr1,
+					   unsigned long addr2)
+{
+	return (addr1 ^ addr2) & shm_align_mask;
+}
+
+static inline void clear_user_page(void *addr, unsigned long vaddr,
+				   struct page *page)
+{
+	clear_page(addr);
+	if (pages_do_alias((unsigned long) addr, vaddr & PAGE_MASK))
+		flush_dcache_page(page);
+}
+
+static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
+				  struct page *page)
+{
+	copy_page(to, from);
+	if (pages_do_alias((unsigned long) to, vaddr & PAGE_MASK))
+		flush_dcache_page(page);
+}
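
pages_do_alias() returns non-zero when two virtual addresses fall into
different cache colours. With shm_align_mask = 0x1fff, as set in
arch/csky/abiv1/mmap.c below (an 8 KiB aliasing window), a quick
host-side check of the arithmetic:

	#include <assert.h>

	#define SHM_ALIGN_MASK	0x1fffUL  /* (0x4000 >> 1) - 1, from abiv1/mmap.c */

	static unsigned long pages_do_alias(unsigned long a, unsigned long b)
	{
		return (a ^ b) & SHM_ALIGN_MASK;
	}

	int main(void)
	{
		/* Same colour: the low 13 bits match, no aliasing problem. */
		assert(pages_do_alias(0x10000000, 0x20000000) == 0);
		/* Different colour: 4 KiB apart within the 8 KiB window. */
		assert(pages_do_alias(0x10000000, 0x10001000) != 0);
		return 0;
	}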

+ 37 - 0
arch/csky/abiv1/inc/abi/pgtable-bits.h

@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_PGTABLE_BITS_H
+#define __ASM_CSKY_PGTABLE_BITS_H
+
+/* implemented in software */
+#define _PAGE_ACCESSED		(1<<3)
+#define PAGE_ACCESSED_BIT	(3)
+
+#define _PAGE_READ		(1<<1)
+#define _PAGE_WRITE		(1<<2)
+#define _PAGE_PRESENT		(1<<0)
+
+#define _PAGE_MODIFIED		(1<<4)
+#define PAGE_MODIFIED_BIT	(4)
+
+/* implemented in hardware */
+#define _PAGE_GLOBAL		(1<<6)
+
+#define _PAGE_VALID		(1<<7)
+#define PAGE_VALID_BIT		(7)
+
+#define _PAGE_DIRTY		(1<<8)
+#define PAGE_DIRTY_BIT		(8)
+
+#define _PAGE_CACHE		(3<<9)
+#define _PAGE_UNCACHE		(2<<9)
+
+#define _CACHE_MASK		(7<<9)
+
+#define _CACHE_CACHED		(_PAGE_VALID | _PAGE_CACHE)
+#define _CACHE_UNCACHED		(_PAGE_VALID | _PAGE_UNCACHE)
+
+#define HAVE_ARCH_UNMAPPED_AREA
+
+#endif /* __ASM_CSKY_PGTABLE_BITS_H */

+ 27 - 0
arch/csky/abiv1/inc/abi/reg_ops.h

@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ABI_REG_OPS_H
+#define __ABI_REG_OPS_H
+#include <asm/reg_ops.h>
+
+#define cprcr(reg)					\
+({							\
+	unsigned int tmp;				\
+	asm volatile("cprcr %0, "reg"\n":"=b"(tmp));	\
+	tmp;						\
+})
+
+#define cpwcr(reg, val)					\
+({							\
+	asm volatile("cpwcr %0, "reg"\n"::"b"(val));	\
+})
+
+static inline unsigned int mfcr_hint(void)
+{
+	return mfcr("cr30");
+}
+
+static inline unsigned int mfcr_ccr2(void) { return 0; }
+
+#endif /* __ABI_REG_OPS_H */

+ 26 - 0
arch/csky/abiv1/inc/abi/regdef.h

@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_REGDEF_H
+#define __ASM_CSKY_REGDEF_H
+
+#define syscallid	r1
+#define r11_sig		r11
+
+#define regs_syscallid(regs) regs->regs[9]
+
+/*
+ * PSR format:
+ * | 31 | 30-24 | 23-16 | 15 14 | 13-0 |
+ *   S     CPID     VEC     TM
+ *
+ *    S: Super Mode
+ * CPID: Coprocessor id, only 15 for MMU
+ *  VEC: Exception Number
+ *   TM: Trace Mode
+ */
+#define DEFAULT_PSR_VALUE	0x8f000000
+
+#define SYSTRACE_SAVENUM	2
+
+#endif /* __ASM_CSKY_REGDEF_H */

+ 13 - 0
arch/csky/abiv1/inc/abi/string.h

@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ABI_CSKY_STRING_H
+#define __ABI_CSKY_STRING_H
+
+#define __HAVE_ARCH_MEMCPY
+extern void *memcpy(void *, const void *, __kernel_size_t);
+
+#define __HAVE_ARCH_MEMSET
+extern void *memset(void *, int, __kernel_size_t);
+
+#endif /* __ABI_CSKY_STRING_H */

+ 17 - 0
arch/csky/abiv1/inc/abi/vdso.h

@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/uaccess.h>
+
+static inline int setup_vdso_page(unsigned short *ptr)
+{
+	int err = 0;
+
+	/* movi r1, 127 */
+	err |= __put_user(0x67f1, ptr + 0);
+	/* addi r1, (139 - 127) */
+	err |= __put_user(0x20b1, ptr + 1);
+	/* trap 0 */
+	err |= __put_user(0x0008, ptr + 2);
+
+	return err;
+}
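
The three half-words above form the abiv1 sigreturn trampoline: load 127
into r1 (the syscall-id register, per regdef.h above), add 12 to reach
139 -- which is __NR_rt_sigreturn in the asm-generic syscall table -- and
trap into the kernel. Laid out as data, purely for illustration:

	/* Illustration only: the abiv1 trampoline emitted by setup_vdso_page(). */
	static const unsigned short vdso_trampoline[3] = {
		0x67f1,	/* movi r1, 127 */
		0x20b1,	/* addi r1, 12  -> r1 = 139 (__NR_rt_sigreturn) */
		0x0008,	/* trap 0       -> syscall, id taken from r1 */
	};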

+ 347 - 0
arch/csky/abiv1/memcpy.S

@@ -0,0 +1,347 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+
+.macro	GET_FRONT_BITS rx y
+#ifdef	__cskyLE__
+	lsri	\rx, \y
+#else
+	lsli	\rx, \y
+#endif
+.endm
+
+.macro	GET_AFTER_BITS rx y
+#ifdef	__cskyLE__
+	lsli	\rx, \y
+#else
+	lsri	\rx, \y
+#endif
+.endm
+
+/* void *memcpy(void *dest, const void *src, size_t n); */
+ENTRY(memcpy)
+	mov	r7, r2
+	cmplti	r4, 4
+	bt	.L_copy_by_byte
+	mov	r6, r2
+	andi	r6, 3
+	cmpnei	r6, 0
+	jbt	.L_dest_not_aligned
+	mov	r6, r3
+	andi	r6, 3
+	cmpnei	r6, 0
+	jbt	.L_dest_aligned_but_src_not_aligned
+.L0:
+	cmplti	r4, 16
+	jbt	.L_aligned_and_len_less_16bytes
+	subi	sp, 8
+	stw	r8, (sp, 0)
+.L_aligned_and_len_larger_16bytes:
+	ldw	r1, (r3, 0)
+	ldw	r5, (r3, 4)
+	ldw	r8, (r3, 8)
+	stw	r1, (r7, 0)
+	ldw	r1, (r3, 12)
+	stw	r5, (r7, 4)
+	stw	r8, (r7, 8)
+	stw	r1, (r7, 12)
+	subi	r4, 16
+	addi	r3, 16
+	addi	r7, 16
+	cmplti	r4, 16
+	jbf	.L_aligned_and_len_larger_16bytes
+	ldw	r8, (sp, 0)
+	addi	sp, 8
+	cmpnei	r4, 0
+	jbf	.L_return
+
+.L_aligned_and_len_less_16bytes:
+	cmplti	r4, 4
+	bt	.L_copy_by_byte
+.L1:
+	ldw	r1, (r3, 0)
+	stw	r1, (r7, 0)
+	subi	r4, 4
+	addi	r3, 4
+	addi	r7, 4
+	cmplti	r4, 4
+	jbf	.L1
+	br	.L_copy_by_byte
+
+.L_return:
+	rts
+
+.L_copy_by_byte:                      /* len less than 4 bytes */
+	cmpnei	r4, 0
+	jbf	.L_return
+.L4:
+	ldb	r1, (r3, 0)
+	stb	r1, (r7, 0)
+	addi	r3, 1
+	addi	r7, 1
+	decne	r4
+	jbt	.L4
+	rts
+
+/*
+ * If dest is not aligned, copy a few bytes first to align it.
+ * After that, check whether the src is aligned.
+ */
+.L_dest_not_aligned:
+	mov	r5, r3
+	rsub	r5, r5, r7
+	abs	r5, r5
+	cmplt	r5, r4
+	bt	.L_copy_by_byte
+	mov	r5, r7
+	sub	r5, r3
+	cmphs	r5, r4
+	bf	.L_copy_by_byte
+	mov	r5, r6
+.L5:
+	ldb	r1, (r3, 0)              /* makes the dest align. */
+	stb	r1, (r7, 0)
+	addi	r5, 1
+	subi	r4, 1
+	addi	r3, 1
+	addi	r7, 1
+	cmpnei	r5, 4
+	jbt	.L5
+	cmplti	r4, 4
+	jbt	.L_copy_by_byte
+	mov	r6, r3                   /* judge whether the src is aligned. */
+	andi	r6, 3
+	cmpnei	r6, 0
+	jbf	.L0
+
+/* Determine the misalignment: 1, 2 or 3 bytes? */
+.L_dest_aligned_but_src_not_aligned:
+	mov	r5, r3
+	rsub	r5, r5, r7
+	abs	r5, r5
+	cmplt	r5, r4
+	bt	.L_copy_by_byte
+	bclri	r3, 0
+	bclri	r3, 1
+	ldw	r1, (r3, 0)
+	addi	r3, 4
+	cmpnei	r6, 2
+	bf	.L_dest_aligned_but_src_not_aligned_2bytes
+	cmpnei	r6, 3
+	bf	.L_dest_aligned_but_src_not_aligned_3bytes
+
+.L_dest_aligned_but_src_not_aligned_1byte:
+	mov	r5, r7
+	sub	r5, r3
+	cmphs	r5, r4
+	bf	.L_copy_by_byte
+	cmplti	r4, 16
+	bf	.L11
+.L10:                                     /* If the len is less than 16 bytes */
+	GET_FRONT_BITS r1 8
+	mov	r5, r1
+	ldw	r6, (r3, 0)
+	mov	r1, r6
+	GET_AFTER_BITS r6 24
+	or	r5, r6
+	stw	r5, (r7, 0)
+	subi	r4, 4
+	addi	r3, 4
+	addi	r7, 4
+	cmplti	r4, 4
+	bf	.L10
+	subi	r3, 3
+	br	.L_copy_by_byte
+.L11:
+	subi	sp, 16
+	stw	r8, (sp, 0)
+	stw	r9, (sp, 4)
+	stw	r10, (sp, 8)
+	stw	r11, (sp, 12)
+.L12:
+	ldw	r5, (r3, 0)
+	ldw	r11, (r3, 4)
+	ldw	r8, (r3, 8)
+	ldw	r9, (r3, 12)
+
+	GET_FRONT_BITS r1 8               /* little or big endian? */
+	mov	r10, r5
+	GET_AFTER_BITS r5 24
+	or	r5, r1
+
+	GET_FRONT_BITS r10 8
+	mov	r1, r11
+	GET_AFTER_BITS r11 24
+	or	r11, r10
+
+	GET_FRONT_BITS r1 8
+	mov	r10, r8
+	GET_AFTER_BITS r8 24
+	or	r8, r1
+
+	GET_FRONT_BITS r10 8
+	mov	r1, r9
+	GET_AFTER_BITS r9 24
+	or	r9, r10
+
+	stw	r5, (r7, 0)
+	stw	r11, (r7, 4)
+	stw	r8, (r7, 8)
+	stw	r9, (r7, 12)
+	subi	r4, 16
+	addi	r3, 16
+	addi	r7, 16
+	cmplti	r4, 16
+	jbf	.L12
+	ldw	r8, (sp, 0)
+	ldw	r9, (sp, 4)
+	ldw	r10, (sp, 8)
+	ldw	r11, (sp, 12)
+	addi	sp , 16
+	cmplti	r4, 4
+	bf	.L10
+	subi	r3, 3
+	br	.L_copy_by_byte
+
+.L_dest_aligned_but_src_not_aligned_2bytes:
+	cmplti	r4, 16
+	bf	.L21
+.L20:
+	GET_FRONT_BITS r1 16
+	mov	r5, r1
+	ldw	r6, (r3, 0)
+	mov	r1, r6
+	GET_AFTER_BITS r6 16
+	or	r5, r6
+	stw	r5, (r7, 0)
+	subi	r4, 4
+	addi	r3, 4
+	addi	r7, 4
+	cmplti	r4, 4
+	bf	.L20
+	subi	r3, 2
+	br	.L_copy_by_byte
+	rts
+
+.L21:	/* n > 16 */
+	subi 	sp, 16
+	stw	r8, (sp, 0)
+	stw	r9, (sp, 4)
+	stw	r10, (sp, 8)
+	stw	r11, (sp, 12)
+
+.L22:
+	ldw	r5, (r3, 0)
+	ldw	r11, (r3, 4)
+	ldw	r8, (r3, 8)
+	ldw	r9, (r3, 12)
+
+	GET_FRONT_BITS r1 16
+	mov	r10, r5
+	GET_AFTER_BITS r5 16
+	or	r5, r1
+
+	GET_FRONT_BITS r10 16
+	mov	r1, r11
+	GET_AFTER_BITS r11 16
+	or	r11, r10
+
+	GET_FRONT_BITS r1 16
+	mov	r10, r8
+	GET_AFTER_BITS r8 16
+	or	r8, r1
+
+	GET_FRONT_BITS r10 16
+	mov	r1, r9
+	GET_AFTER_BITS r9 16
+	or	r9, r10
+
+	stw	r5, (r7, 0)
+	stw	r11, (r7, 4)
+	stw	r8, (r7, 8)
+	stw	r9, (r7, 12)
+	subi	r4, 16
+	addi	r3, 16
+	addi	r7, 16
+	cmplti	r4, 16
+	jbf	.L22
+	ldw	r8, (sp, 0)
+	ldw	r9, (sp, 4)
+	ldw	r10, (sp, 8)
+	ldw	r11, (sp, 12)
+	addi	sp, 16
+	cmplti	r4, 4
+	bf	.L20
+	subi	r3, 2
+	br	.L_copy_by_byte
+
+
+.L_dest_aligned_but_src_not_aligned_3bytes:
+	cmplti	r4, 16
+	bf	.L31
+.L30:
+	GET_FRONT_BITS r1 24
+	mov	r5, r1
+	ldw	r6, (r3, 0)
+	mov	r1, r6
+	GET_AFTER_BITS r6 8
+	or	r5, r6
+	stw	r5, (r7, 0)
+	subi	r4, 4
+	addi	r3, 4
+	addi	r7, 4
+	cmplti	r4, 4
+	bf	.L30
+	subi	r3, 1
+	br	.L_copy_by_byte
+.L31:
+	subi	sp, 16
+	stw	r8, (sp, 0)
+	stw	r9, (sp, 4)
+	stw	r10, (sp, 8)
+	stw	r11, (sp, 12)
+.L32:
+	ldw	r5, (r3, 0)
+	ldw	r11, (r3, 4)
+	ldw	r8, (r3, 8)
+	ldw	r9, (r3, 12)
+
+	GET_FRONT_BITS r1 24
+	mov	r10, r5
+	GET_AFTER_BITS r5 8
+	or	r5, r1
+
+	GET_FRONT_BITS r10 24
+	mov	r1, r11
+	GET_AFTER_BITS r11 8
+	or	r11, r10
+
+	GET_FRONT_BITS r1 24
+	mov	r10, r8
+	GET_AFTER_BITS r8 8
+	or	r8, r1
+
+	GET_FRONT_BITS r10 24
+	mov	r1, r9
+	GET_AFTER_BITS r9 8
+	or	r9, r10
+
+	stw	r5, (r7, 0)
+	stw	r11, (r7, 4)
+	stw	r8, (r7, 8)
+	stw	r9, (r7, 12)
+	subi	r4, 16
+	addi	r3, 16
+	addi	r7, 16
+	cmplti	r4, 16
+	jbf	.L32
+	ldw	r8, (sp, 0)
+	ldw	r9, (sp, 4)
+	ldw	r10, (sp, 8)
+	ldw	r11, (sp, 12)
+	addi	sp, 16
+	cmplti	r4, 4
+	bf	.L30
+	subi	r3, 1
+	br	.L_copy_by_byte

+ 37 - 0
arch/csky/abiv1/memset.c

@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/types.h>
+
+void *memset(void *dest, int c, size_t l)
+{
+	char *d = dest;
+	int ch = c & 0xff;
+	int tmp = (ch | ch << 8 | ch << 16 | ch << 24);
+
+	while (((uintptr_t)d & 0x3) && l--)
+		*d++ = ch;
+
+	while (l >= 16) {
+		*(((u32 *)d))   = tmp;
+		*(((u32 *)d)+1) = tmp;
+		*(((u32 *)d)+2) = tmp;
+		*(((u32 *)d)+3) = tmp;
+		l -= 16;
+		d += 16;
+	}
+
+	while (l > 3) {
+		*(((u32 *)d)) = tmp;
+		l -= 4;
+		d += 4;
+	}
+
+	while (l) {
+		*d = ch;
+		l--;
+		d++;
+	}
+
+	return dest;
+}

+ 66 - 0
arch/csky/abiv1/mmap.c

@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/shm.h>
+#include <linux/sched.h>
+#include <linux/random.h>
+#include <linux/io.h>
+
+unsigned long shm_align_mask = (0x4000 >> 1) - 1;   /* Sane caches */
+
+#define COLOUR_ALIGN(addr, pgoff) \
+	((((addr) + shm_align_mask) & ~shm_align_mask) + \
+	 (((pgoff) << PAGE_SHIFT) & shm_align_mask))
+
+unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	struct vm_area_struct *vmm;
+	int do_color_align;
+
+	if (flags & MAP_FIXED) {
+		/*
+		 * We do not accept a shared mapping if it would violate
+		 * cache aliasing constraints.
+		 */
+		if ((flags & MAP_SHARED) &&
+			((addr - (pgoff << PAGE_SHIFT)) & shm_align_mask))
+			return -EINVAL;
+		return addr;
+	}
+
+	if (len > TASK_SIZE)
+		return -ENOMEM;
+	do_color_align = 0;
+	if (filp || (flags & MAP_SHARED))
+		do_color_align = 1;
+	if (addr) {
+		if (do_color_align)
+			addr = COLOUR_ALIGN(addr, pgoff);
+		else
+			addr = PAGE_ALIGN(addr);
+		vmm = find_vma(current->mm, addr);
+		if (TASK_SIZE - len >= addr &&
+				(!vmm || addr + len <= vmm->vm_start))
+			return addr;
+	}
+	addr = TASK_UNMAPPED_BASE;
+	if (do_color_align)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	for (vmm = find_vma(current->mm, addr); ; vmm = vmm->vm_next) {
+		/* At this point: (!vmm || addr < vmm->vm_end). */
+		if (TASK_SIZE - len < addr)
+			return -ENOMEM;
+		if (!vmm || addr + len <= vmm->vm_start)
+			return addr;
+		addr = vmm->vm_end;
+		if (do_color_align)
+			addr = COLOUR_ALIGN(addr, pgoff);
+	}
+}
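
COLOUR_ALIGN() rounds addr up to the aliasing boundary, then adds the
cache colour implied by the file offset, so that shared mappings of the
same page always land on the same colour. A worked host-side sketch
(PAGE_SHIFT = 12 is an assumption here):

	#include <assert.h>

	#define SHM_ALIGN_MASK	0x1fffUL	/* (0x4000 >> 1) - 1 */
	#define PAGE_SHIFT	12		/* assumed 4 KiB pages */

	#define COLOUR_ALIGN(addr, pgoff) \
		((((addr) + SHM_ALIGN_MASK) & ~SHM_ALIGN_MASK) + \
		 (((pgoff) << PAGE_SHIFT) & SHM_ALIGN_MASK))

	int main(void)
	{
		/* 0x12345 rounds up to 0x14000; pgoff 1 adds colour 0x1000. */
		assert(COLOUR_ALIGN(0x12345UL, 0UL) == 0x14000UL);
		assert(COLOUR_ALIGN(0x12345UL, 1UL) == 0x15000UL);
		/* Even pgoff maps back to colour 0 within the 8 KiB window. */
		assert((COLOUR_ALIGN(0UL, 2UL) & SHM_ALIGN_MASK) == 0);
		return 0;
	}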

+ 7 - 0
arch/csky/abiv1/strksyms.c

@@ -0,0 +1,7 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/module.h>
+
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(memset);

+ 10 - 0
arch/csky/abiv2/Makefile

@@ -0,0 +1,10 @@
+obj-y				+= cacheflush.o
+obj-$(CONFIG_CPU_HAS_FPU)	+= fpu.o
+obj-y				+= memcmp.o
+obj-y				+= memcpy.o
+obj-y				+= memmove.o
+obj-y				+= memset.o
+obj-y				+= strcmp.o
+obj-y				+= strcpy.o
+obj-y				+= strlen.o
+obj-y				+= strksyms.o

+ 60 - 0
arch/csky/abiv2/cacheflush.c

@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/cache.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <asm/cache.h>
+
+void flush_icache_page(struct vm_area_struct *vma, struct page *page)
+{
+	unsigned long start;
+
+	start = (unsigned long) kmap_atomic(page);
+
+	cache_wbinv_range(start, start + PAGE_SIZE);
+
+	kunmap_atomic((void *)start);
+}
+
+void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
+			     unsigned long vaddr, int len)
+{
+	unsigned long kaddr;
+
+	kaddr = (unsigned long) kmap_atomic(page) + (vaddr & ~PAGE_MASK);
+
+	cache_wbinv_range(kaddr, kaddr + len);
+
+	kunmap_atomic((void *)kaddr);
+}
+
+void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
+		      pte_t *pte)
+{
+	unsigned long addr, pfn;
+	struct page *page;
+	void *va;
+
+	if (!(vma->vm_flags & VM_EXEC))
+		return;
+
+	pfn = pte_pfn(*pte);
+	if (unlikely(!pfn_valid(pfn)))
+		return;
+
+	page = pfn_to_page(pfn);
+	if (page == ZERO_PAGE(0))
+		return;
+
+	va = page_address(page);
+	addr = (unsigned long) va;
+
+	if (va == NULL && PageHighMem(page))
+		addr = (unsigned long) kmap_atomic(page);
+
+	cache_wbinv_range(addr, addr + PAGE_SIZE);
+
+	if (va == NULL && PageHighMem(page))
+		kunmap_atomic((void *) addr);
+}

+ 275 - 0
arch/csky/abiv2/fpu.c

@@ -0,0 +1,275 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/ptrace.h>
+#include <linux/uaccess.h>
+#include <abi/reg_ops.h>
+
+#define MTCR_MASK	0xFC00FFE0
+#define MFCR_MASK	0xFC00FFE0
+#define MTCR_DIST	0xC0006420
+#define MFCR_DIST	0xC0006020
+
+void __init init_fpu(void)
+{
+	mtcr("cr<1, 2>", 0);
+}
+
+/*
+ * fpu_libc_helper() helps libc execute:
+ *  - mfcr %a, cr<1, 2>
+ *  - mfcr %a, cr<2, 2>
+ *  - mtcr %a, cr<1, 2>
+ *  - mtcr %a, cr<2, 2>
+ */
+int fpu_libc_helper(struct pt_regs *regs)
+{
+	int fault;
+	unsigned long instrptr, regx = 0;
+	unsigned long index = 0, tmp = 0;
+	unsigned long tinstr = 0;
+	u16 instr_hi, instr_low;
+
+	instrptr = instruction_pointer(regs);
+	if (instrptr & 1)
+		return 0;
+
+	fault = __get_user(instr_low, (u16 *)instrptr);
+	if (fault)
+		return 0;
+
+	fault = __get_user(instr_hi, (u16 *)(instrptr + 2));
+	if (fault)
+		return 0;
+
+	tinstr = instr_hi | ((unsigned long)instr_low << 16);
+
+	if (((tinstr >> 21) & 0x1F) != 2)
+		return 0;
+
+	if ((tinstr & MTCR_MASK) == MTCR_DIST) {
+		index = (tinstr >> 16) & 0x1F;
+		if (index > 13)
+			return 0;
+
+		tmp = tinstr & 0x1F;
+		if (tmp > 2)
+			return 0;
+
+		regx =  *(&regs->a0 + index);
+
+		if (tmp == 1)
+			mtcr("cr<1, 2>", regx);
+		else if (tmp == 2)
+			mtcr("cr<2, 2>", regx);
+		else
+			return 0;
+
+		regs->pc += 4;
+		return 1;
+	}
+
+	if ((tinstr & MFCR_MASK) == MFCR_DIST) {
+		index = tinstr & 0x1F;
+		if (index > 13)
+			return 0;
+
+		tmp = ((tinstr >> 16) & 0x1F);
+		if (tmp > 2)
+			return 0;
+
+		if (tmp == 1)
+			regx = mfcr("cr<1, 2>");
+		else if (tmp == 2)
+			regx = mfcr("cr<2, 2>");
+		else
+			return 0;
+
+		*(&regs->a0 + index) = regx;
+
+		regs->pc += 4;
+		return 1;
+	}
+
+	return 0;
+}
+
+void fpu_fpe(struct pt_regs *regs)
+{
+	int sig, code;
+	unsigned int fesr;
+
+	fesr = mfcr("cr<2, 2>");
+
+	sig = SIGFPE;
+	code = FPE_FLTUNK;
+
+	if (fesr & FPE_ILLE) {
+		sig = SIGILL;
+		code = ILL_ILLOPC;
+	} else if (fesr & FPE_IDC) {
+		sig = SIGILL;
+		code = ILL_ILLOPN;
+	} else if (fesr & FPE_FEC) {
+		sig = SIGFPE;
+		if (fesr & FPE_IOC)
+			code = FPE_FLTINV;
+		else if (fesr & FPE_DZC)
+			code = FPE_FLTDIV;
+		else if (fesr & FPE_UFC)
+			code = FPE_FLTUND;
+		else if (fesr & FPE_OFC)
+			code = FPE_FLTOVF;
+		else if (fesr & FPE_IXC)
+			code = FPE_FLTRES;
+	}
+
+	force_sig_fault(sig, code, (void __user *)regs->pc, current);
+}
+
+#define FMFVR_FPU_REGS(vrx, vry)	\
+	"fmfvrl %0, "#vrx"\n"		\
+	"fmfvrh %1, "#vrx"\n"		\
+	"fmfvrl %2, "#vry"\n"		\
+	"fmfvrh %3, "#vry"\n"
+
+#define FMTVR_FPU_REGS(vrx, vry)	\
+	"fmtvrl "#vrx", %0\n"		\
+	"fmtvrh "#vrx", %1\n"		\
+	"fmtvrl "#vry", %2\n"		\
+	"fmtvrh "#vry", %3\n"
+
+#define STW_FPU_REGS(a, b, c, d)	\
+	"stw    %0, (%4, "#a")\n"	\
+	"stw    %1, (%4, "#b")\n"	\
+	"stw    %2, (%4, "#c")\n"	\
+	"stw    %3, (%4, "#d")\n"
+
+#define LDW_FPU_REGS(a, b, c, d)	\
+	"ldw    %0, (%4, "#a")\n"	\
+	"ldw    %1, (%4, "#b")\n"	\
+	"ldw    %2, (%4, "#c")\n"	\
+	"ldw    %3, (%4, "#d")\n"
+
+void save_to_user_fp(struct user_fp *user_fp)
+{
+	unsigned long flg;
+	unsigned long tmp1, tmp2;
+	unsigned long *fpregs;
+
+	local_irq_save(flg);
+
+	tmp1 = mfcr("cr<1, 2>");
+	tmp2 = mfcr("cr<2, 2>");
+
+	user_fp->fcr = tmp1;
+	user_fp->fesr = tmp2;
+
+	fpregs = &user_fp->vr[0];
+#ifdef CONFIG_CPU_HAS_FPUV2
+#ifdef CONFIG_CPU_HAS_VDSP
+	asm volatile(
+		"vstmu.32    vr0-vr3,   (%0)\n"
+		"vstmu.32    vr4-vr7,   (%0)\n"
+		"vstmu.32    vr8-vr11,  (%0)\n"
+		"vstmu.32    vr12-vr15, (%0)\n"
+		"fstmu.64    vr16-vr31, (%0)\n"
+		: "+a"(fpregs)
+		::"memory");
+#else
+	asm volatile(
+		"fstmu.64    vr0-vr31,  (%0)\n"
+		: "+a"(fpregs)
+		::"memory");
+#endif
+#else
+	{
+	unsigned long tmp3, tmp4;
+
+	asm volatile(
+		FMFVR_FPU_REGS(vr0, vr1)
+		STW_FPU_REGS(0, 4, 16, 20)
+		FMFVR_FPU_REGS(vr2, vr3)
+		STW_FPU_REGS(32, 36, 48, 52)
+		FMFVR_FPU_REGS(vr4, vr5)
+		STW_FPU_REGS(64, 68, 80, 84)
+		FMFVR_FPU_REGS(vr6, vr7)
+		STW_FPU_REGS(96, 100, 112, 116)
+		"addi	%4, 128\n"
+		FMFVR_FPU_REGS(vr8, vr9)
+		STW_FPU_REGS(0, 4, 16, 20)
+		FMFVR_FPU_REGS(vr10, vr11)
+		STW_FPU_REGS(32, 36, 48, 52)
+		FMFVR_FPU_REGS(vr12, vr13)
+		STW_FPU_REGS(64, 68, 80, 84)
+		FMFVR_FPU_REGS(vr14, vr15)
+		STW_FPU_REGS(96, 100, 112, 116)
+		: "=a"(tmp1), "=a"(tmp2), "=a"(tmp3),
+		  "=a"(tmp4), "+a"(fpregs)
+		::"memory");
+	}
+#endif
+
+	local_irq_restore(flg);
+}
+
+void restore_from_user_fp(struct user_fp *user_fp)
+{
+	unsigned long flg;
+	unsigned long tmp1, tmp2;
+	unsigned long *fpregs;
+
+	local_irq_save(flg);
+
+	tmp1 = user_fp->fcr;
+	tmp2 = user_fp->fesr;
+
+	mtcr("cr<1, 2>", tmp1);
+	mtcr("cr<2, 2>", tmp2);
+
+	fpregs = &user_fp->vr[0];
+#ifdef CONFIG_CPU_HAS_FPUV2
+#ifdef CONFIG_CPU_HAS_VDSP
+	asm volatile(
+		"vldmu.32    vr0-vr3,   (%0)\n"
+		"vldmu.32    vr4-vr7,   (%0)\n"
+		"vldmu.32    vr8-vr11,  (%0)\n"
+		"vldmu.32    vr12-vr15, (%0)\n"
+		"fldmu.64    vr16-vr31, (%0)\n"
+		: "+a"(fpregs)
+		::"memory");
+#else
+	asm volatile(
+		"fldmu.64    vr0-vr31,  (%0)\n"
+		: "+a"(fpregs)
+		::"memory");
+#endif
+#else
+	{
+	unsigned long tmp3, tmp4;
+
+	asm volatile(
+		LDW_FPU_REGS(0, 4, 16, 20)
+		FMTVR_FPU_REGS(vr0, vr1)
+		LDW_FPU_REGS(32, 36, 48, 52)
+		FMTVR_FPU_REGS(vr2, vr3)
+		LDW_FPU_REGS(64, 68, 80, 84)
+		FMTVR_FPU_REGS(vr4, vr5)
+		LDW_FPU_REGS(96, 100, 112, 116)
+		FMTVR_FPU_REGS(vr6, vr7)
+		"addi	%4, 128\n"
+		LDW_FPU_REGS(0, 4, 16, 20)
+		FMTVR_FPU_REGS(vr8, vr9)
+		LDW_FPU_REGS(32, 36, 48, 52)
+		FMTVR_FPU_REGS(vr10, vr11)
+		LDW_FPU_REGS(64, 68, 80, 84)
+		FMTVR_FPU_REGS(vr12, vr13)
+		LDW_FPU_REGS(96, 100, 112, 116)
+		FMTVR_FPU_REGS(vr14, vr15)
+		: "=a"(tmp1), "=a"(tmp2), "=a"(tmp3),
+		  "=a"(tmp4), "+a"(fpregs)
+		::"memory");
+	}
+#endif
+	local_irq_restore(flg);
+}
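
The matching in fpu_libc_helper() boils down to: mask the 32-bit
instruction word with MTCR_MASK/MFCR_MASK, compare against the DIST
pattern, then pull the GPR index and the control-register selector out of
the remaining fields. A stand-alone sketch of the mtcr decode (field
layout copied from the function above; reading bits 21-25 as the
coprocessor bank is my interpretation of that check):

	#include <stdbool.h>
	#include <stdint.h>

	#define MTCR_MASK	0xFC00FFE0u
	#define MTCR_DIST	0xC0006420u

	/* Does insn move a GPR into cr<1, 2> or cr<2, 2> (fcr/fesr)?
	 * On success *gpr holds the source register index, *sel holds 1 or 2.
	 */
	static bool decode_mtcr_fpu(uint32_t insn, unsigned *gpr, unsigned *sel)
	{
		if ((insn & MTCR_MASK) != MTCR_DIST)
			return false;
		if (((insn >> 21) & 0x1F) != 2)		/* coprocessor bank 2 */
			return false;
		*gpr = (insn >> 16) & 0x1F;
		*sel = insn & 0x1F;
		return *gpr <= 13 && (*sel == 1 || *sel == 2);
	}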

+ 46 - 0
arch/csky/abiv2/inc/abi/cacheflush.h

@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ABI_CSKY_CACHEFLUSH_H
+#define __ABI_CSKY_CACHEFLUSH_H
+
+/* Keep includes the same across arches.  */
+#include <linux/mm.h>
+
+/*
+ * The cache doesn't need to be flushed when TLB entries change, because
+ * the cache is mapped to physical memory, not virtual memory.
+ */
+#define flush_cache_all()			do { } while (0)
+#define flush_cache_mm(mm)			do { } while (0)
+#define flush_cache_dup_mm(mm)			do { } while (0)
+
+#define flush_cache_range(vma, start, end) \
+	do { \
+		if (vma->vm_flags & VM_EXEC) \
+			icache_inv_all(); \
+	} while (0)
+
+#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
+#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
+#define flush_dcache_page(page)			do { } while (0)
+#define flush_dcache_mmap_lock(mapping)		do { } while (0)
+#define flush_dcache_mmap_unlock(mapping)	do { } while (0)
+
+#define flush_icache_range(start, end)		cache_wbinv_range(start, end)
+
+void flush_icache_page(struct vm_area_struct *vma, struct page *page);
+void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
+			     unsigned long vaddr, int len);
+
+#define flush_cache_vmap(start, end)		do { } while (0)
+#define flush_cache_vunmap(start, end)		do { } while (0)
+
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { \
+	memcpy(dst, src, len); \
+	cache_wbinv_range((unsigned long)dst, (unsigned long)dst + len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+
+#endif /* __ABI_CSKY_CACHEFLUSH_H */

+ 87 - 0
arch/csky/abiv2/inc/abi/ckmmu.h

@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_CKMMUV2_H
+#define __ASM_CSKY_CKMMUV2_H
+
+#include <abi/reg_ops.h>
+#include <asm/barrier.h>
+
+static inline int read_mmu_index(void)
+{
+	return mfcr("cr<0, 15>");
+}
+
+static inline void write_mmu_index(int value)
+{
+	mtcr("cr<0, 15>", value);
+}
+
+static inline int read_mmu_entrylo0(void)
+{
+	return mfcr("cr<2, 15>");
+}
+
+static inline int read_mmu_entrylo1(void)
+{
+	return mfcr("cr<3, 15>");
+}
+
+static inline void write_mmu_pagemask(int value)
+{
+	mtcr("cr<6, 15>", value);
+}
+
+static inline int read_mmu_entryhi(void)
+{
+	return mfcr("cr<4, 15>");
+}
+
+static inline void write_mmu_entryhi(int value)
+{
+	mtcr("cr<4, 15>", value);
+}
+
+/*
+ * TLB operations.
+ */
+static inline void tlb_probe(void)
+{
+	mtcr("cr<8, 15>", 0x80000000);
+}
+
+static inline void tlb_read(void)
+{
+	mtcr("cr<8, 15>", 0x40000000);
+}
+
+static inline void tlb_invalid_all(void)
+{
+#ifdef CONFIG_CPU_HAS_TLBI
+	asm volatile("tlbi.alls\n":::"memory");
+	sync_is();
+#else
+	mtcr("cr<8, 15>", 0x04000000);
+#endif
+}
+
+static inline void tlb_invalid_indexed(void)
+{
+	mtcr("cr<8, 15>", 0x02000000);
+}
+
+/* Set up the pgd used by hardware TLB refill. */
+static inline unsigned long get_pgd(void)
+{
+	return mfcr("cr<29, 15>");
+}
+
+static inline void setup_pgd(unsigned long pgd, bool kernel)
+{
+	if (kernel)
+		mtcr("cr<28, 15>", pgd);
+	else
+		mtcr("cr<29, 15>", pgd);
+}
+
+#endif /* __ASM_CSKY_CKMMUV2_H */

+ 43 - 0
arch/csky/abiv2/inc/abi/elf.h

@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ABI_CSKY_ELF_H
+#define __ABI_CSKY_ELF_H
+
+/* The ordering of members in pr_reg[] is defined by GDB. */
+#define ELF_CORE_COPY_REGS(pr_reg, regs) do {	\
+	pr_reg[0] = regs->pc;			\
+	pr_reg[1] = regs->a1;			\
+	pr_reg[2] = regs->a0;			\
+	pr_reg[3] = regs->sr;			\
+	pr_reg[4] = regs->a2;			\
+	pr_reg[5] = regs->a3;			\
+	pr_reg[6] = regs->regs[0];		\
+	pr_reg[7] = regs->regs[1];		\
+	pr_reg[8] = regs->regs[2];		\
+	pr_reg[9] = regs->regs[3];		\
+	pr_reg[10] = regs->regs[4];		\
+	pr_reg[11] = regs->regs[5];		\
+	pr_reg[12] = regs->regs[6];		\
+	pr_reg[13] = regs->regs[7];		\
+	pr_reg[14] = regs->regs[8];		\
+	pr_reg[15] = regs->regs[9];		\
+	pr_reg[16] = regs->usp;			\
+	pr_reg[17] = regs->lr;			\
+	pr_reg[18] = regs->exregs[0];		\
+	pr_reg[19] = regs->exregs[1];		\
+	pr_reg[20] = regs->exregs[2];		\
+	pr_reg[21] = regs->exregs[3];		\
+	pr_reg[22] = regs->exregs[4];		\
+	pr_reg[23] = regs->exregs[5];		\
+	pr_reg[24] = regs->exregs[6];		\
+	pr_reg[25] = regs->exregs[7];		\
+	pr_reg[26] = regs->exregs[8];		\
+	pr_reg[27] = regs->exregs[9];		\
+	pr_reg[28] = regs->exregs[10];		\
+	pr_reg[29] = regs->exregs[11];		\
+	pr_reg[30] = regs->exregs[12];		\
+	pr_reg[31] = regs->exregs[13];		\
+	pr_reg[32] = regs->exregs[14];		\
+	pr_reg[33] = regs->tls;			\
+} while (0)
+#endif /* __ABI_CSKY_ELF_H */

+ 156 - 0
arch/csky/abiv2/inc/abi/entry.h

@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_ENTRY_H
+#define __ASM_CSKY_ENTRY_H
+
+#include <asm/setup.h>
+#include <abi/regdef.h>
+
+#define LSAVE_PC	8
+#define LSAVE_PSR	12
+#define LSAVE_A0	24
+#define LSAVE_A1	28
+#define LSAVE_A2	32
+#define LSAVE_A3	36
+
+#define EPC_INCREASE	4
+#define EPC_KEEP	0
+
+#define KSPTOUSP
+#define USPTOKSP
+
+#define usp cr<14, 1>
+
+.macro INCTRAP	rx
+	addi	\rx, EPC_INCREASE
+.endm
+
+.macro SAVE_ALL epc_inc
+	subi    sp, 152
+	stw	tls, (sp, 0)
+	stw	lr, (sp, 4)
+
+	mfcr	lr, epc
+	movi	tls, \epc_inc
+	add	lr, tls
+	stw	lr, (sp, 8)
+
+	mfcr	lr, epsr
+	stw	lr, (sp, 12)
+	mfcr	lr, usp
+	stw	lr, (sp, 16)
+
+	stw     a0, (sp, 20)
+	stw     a0, (sp, 24)
+	stw     a1, (sp, 28)
+	stw     a2, (sp, 32)
+	stw     a3, (sp, 36)
+
+	addi	sp, 40
+	stm	r4-r13, (sp)
+
+	addi    sp, 40
+	stm     r16-r30, (sp)
+#ifdef CONFIG_CPU_HAS_HILO
+	mfhi	lr
+	stw	lr, (sp, 60)
+	mflo	lr
+	stw	lr, (sp, 64)
+#endif
+	subi	sp, 80
+.endm
+
+.macro	RESTORE_ALL
+	psrclr  ie
+	ldw	tls, (sp, 0)
+	ldw	lr, (sp, 4)
+	ldw	a0, (sp, 8)
+	mtcr	a0, epc
+	ldw	a0, (sp, 12)
+	mtcr	a0, epsr
+	ldw	a0, (sp, 16)
+	mtcr	a0, usp
+
+#ifdef CONFIG_CPU_HAS_HILO
+	ldw	a0, (sp, 140)
+	mthi	a0
+	ldw	a0, (sp, 144)
+	mtlo	a0
+#endif
+
+	ldw     a0, (sp, 24)
+	ldw     a1, (sp, 28)
+	ldw     a2, (sp, 32)
+	ldw     a3, (sp, 36)
+
+	addi	sp, 40
+	ldm	r4-r13, (sp)
+	addi    sp, 40
+	ldm     r16-r30, (sp)
+	addi    sp, 72
+	rte
+.endm
+
+.macro SAVE_SWITCH_STACK
+	subi	sp, 64
+	stm	r4-r11, (sp)
+	stw	r15, (sp, 32)
+	stw	r16, (sp, 36)
+	stw	r17, (sp, 40)
+	stw	r26, (sp, 44)
+	stw	r27, (sp, 48)
+	stw	r28, (sp, 52)
+	stw	r29, (sp, 56)
+	stw	r30, (sp, 60)
+.endm
+
+.macro RESTORE_SWITCH_STACK
+	ldm	r4-r11, (sp)
+	ldw	r15, (sp, 32)
+	ldw	r16, (sp, 36)
+	ldw	r17, (sp, 40)
+	ldw	r26, (sp, 44)
+	ldw	r27, (sp, 48)
+	ldw	r28, (sp, 52)
+	ldw	r29, (sp, 56)
+	ldw	r30, (sp, 60)
+	addi	sp, 64
+.endm
+
+/* MMU registers operators. */
+.macro RD_MIR rx
+	mfcr	\rx, cr<0, 15>
+.endm
+
+.macro RD_MEH rx
+	mfcr	\rx, cr<4, 15>
+.endm
+
+.macro RD_MCIR rx
+	mfcr	\rx, cr<8, 15>
+.endm
+
+.macro RD_PGDR rx
+	mfcr	\rx, cr<29, 15>
+.endm
+
+.macro RD_PGDR_K rx
+	mfcr	\rx, cr<28, 15>
+.endm
+
+.macro WR_MEH rx
+	mtcr	\rx, cr<4, 15>
+.endm
+
+.macro WR_MCIR rx
+	mtcr	\rx, cr<8, 15>
+.endm
+
+.macro SETUP_MMU rx
+	lrw	\rx, PHYS_OFFSET | 0xe
+	mtcr	\rx, cr<30, 15>
+	lrw	\rx, (PHYS_OFFSET + 0x20000000) | 0xe
+	mtcr	\rx, cr<31, 15>
+.endm
+#endif /* __ASM_CSKY_ENTRY_H */
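
For orientation, the 152-byte exception frame that SAVE_ALL builds (and
RESTORE_ALL unwinds) works out to the offsets below, derived from the
stores above. The duplicate store of a0 at offset 20 is presumably the
orig_a0 slot needed for syscall restart, which would be why RESTORE_ALL
reloads a0 only from offset 24:

	  0: tls       20: orig_a0 (copy of a0)
	  4: lr        24: a0
	  8: pc        28: a1
	 12: psr       32: a2
	 16: usp       36: a3
	 40..76:  r4-r13
	 80..136: r16-r30
	140: hi  144: lo	(only with CONFIG_CPU_HAS_HILO)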

+ 66 - 0
arch/csky/abiv2/inc/abi/fpu.h

@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_FPU_H
+#define __ASM_CSKY_FPU_H
+
+#include <asm/sigcontext.h>
+#include <asm/ptrace.h>
+
+int fpu_libc_helper(struct pt_regs *regs);
+void fpu_fpe(struct pt_regs *regs);
+void __init init_fpu(void);
+
+void save_to_user_fp(struct user_fp *user_fp);
+void restore_from_user_fp(struct user_fp *user_fp);
+
+/*
+ * Define the fesr bit for fpe handle.
+ */
+#define  FPE_ILLE  (1 << 16)    /* Illegal instruction  */
+#define  FPE_FEC   (1 << 7)     /* Input float-point arithmetic exception */
+#define  FPE_IDC   (1 << 5)     /* Input denormalized exception */
+#define  FPE_IXC   (1 << 4)     /* Inexact exception */
+#define  FPE_UFC   (1 << 3)     /* Underflow exception */
+#define  FPE_OFC   (1 << 2)     /* Overflow exception */
+#define  FPE_DZC   (1 << 1)     /* Divide by zero exception */
+#define  FPE_IOC   (1 << 0)     /* Invalid operation exception */
+#define  FPE_REGULAR_EXCEPTION (FPE_IXC | FPE_UFC | FPE_OFC | FPE_DZC | FPE_IOC)
+
+#ifdef CONFIG_OPEN_FPU_IDE
+#define IDE_STAT   (1 << 5)
+#else
+#define IDE_STAT   0
+#endif
+
+#ifdef CONFIG_OPEN_FPU_IXE
+#define IXE_STAT   (1 << 4)
+#else
+#define IXE_STAT   0
+#endif
+
+#ifdef CONFIG_OPEN_FPU_UFE
+#define UFE_STAT   (1 << 3)
+#else
+#define UFE_STAT   0
+#endif
+
+#ifdef CONFIG_OPEN_FPU_OFE
+#define OFE_STAT   (1 << 2)
+#else
+#define OFE_STAT   0
+#endif
+
+#ifdef CONFIG_OPEN_FPU_DZE
+#define DZE_STAT   (1 << 1)
+#else
+#define DZE_STAT   0
+#endif
+
+#ifdef CONFIG_OPEN_FPU_IOE
+#define IOE_STAT   (1 << 0)
+#else
+#define IOE_STAT   0
+#endif
+
+#endif /* __ASM_CSKY_FPU_H */

+ 14 - 0
arch/csky/abiv2/inc/abi/page.h

@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+static inline void clear_user_page(void *addr, unsigned long vaddr,
+				   struct page *page)
+{
+	clear_page(addr);
+}
+
+static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
+				  struct page *page)
+{
+	copy_page(to, from);
+}

+ 37 - 0
arch/csky/abiv2/inc/abi/pgtable-bits.h

@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_PGTABLE_BITS_H
+#define __ASM_CSKY_PGTABLE_BITS_H
+
+/* implemented in software */
+#define _PAGE_ACCESSED		(1<<7)
+#define PAGE_ACCESSED_BIT	(7)
+
+#define _PAGE_READ		(1<<8)
+#define _PAGE_WRITE		(1<<9)
+#define _PAGE_PRESENT		(1<<10)
+
+#define _PAGE_MODIFIED		(1<<11)
+#define PAGE_MODIFIED_BIT	(11)
+
+/* implemented in hardware */
+#define _PAGE_GLOBAL		(1<<0)
+
+#define _PAGE_VALID		(1<<1)
+#define PAGE_VALID_BIT		(1)
+
+#define _PAGE_DIRTY		(1<<2)
+#define PAGE_DIRTY_BIT		(2)
+
+#define _PAGE_SO		(1<<5)
+#define _PAGE_BUF		(1<<6)
+
+#define _PAGE_CACHE		(1<<3)
+
+#define _CACHE_MASK		_PAGE_CACHE
+
+#define _CACHE_CACHED		(_PAGE_VALID | _PAGE_CACHE | _PAGE_BUF)
+#define _CACHE_UNCACHED		(_PAGE_VALID | _PAGE_SO)
+
+#endif /* __ASM_CSKY_PGTABLE_BITS_H */

+ 17 - 0
arch/csky/abiv2/inc/abi/reg_ops.h

@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ABI_REG_OPS_H
+#define __ABI_REG_OPS_H
+#include <asm/reg_ops.h>
+
+static inline unsigned int mfcr_hint(void)
+{
+	return mfcr("cr31");
+}
+
+static inline unsigned int mfcr_ccr2(void)
+{
+	return mfcr("cr23");
+}
+#endif /* __ABI_REG_OPS_H */

+ 26 - 0
arch/csky/abiv2/inc/abi/regdef.h

@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_REGDEF_H
+#define __ASM_CSKY_REGDEF_H
+
+#define syscallid	r7
+#define r11_sig		r11
+
+#define regs_syscallid(regs) ((regs)->regs[3])
+
+/*
+ * PSR format:
+ * | 31 | 30-24 | 23-16 | 15 14 | 13-10 | 9 | 8-0 |
+ *   S              VEC     TM            MM
+ *
+ *   S: Super Mode
+ * VEC: Exception Number
+ *  TM: Trace Mode
+ *  MM: Memory unaligned addr access
+ */
+#define DEFAULT_PSR_VALUE	0x80000200
+
+#define SYSTRACE_SAVENUM	5
+
+#endif /* __ASM_CSKY_REGDEF_H */
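
Reading DEFAULT_PSR_VALUE against the format comment above, 0x80000200
sets only the S and MM fields. A minimal C check of that decomposition
(field positions taken from the comment, not from a hardware manual):

	#include <stdio.h>

	int main(void)
	{
		unsigned long psr = 0x80000200; /* DEFAULT_PSR_VALUE */

		printf("S   (bit 31): %lu\n", (psr >> 31) & 1);    /* 1: super mode */
		printf("VEC (23-16) : %lu\n", (psr >> 16) & 0xff); /* 0 */
		printf("TM  (15-14) : %lu\n", (psr >> 14) & 0x3);  /* 0 */
		printf("MM  (bit 9) : %lu\n", (psr >> 9) & 1);     /* 1: unaligned OK */
		return 0;
	}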

+ 27 - 0
arch/csky/abiv2/inc/abi/string.h

@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ABI_CSKY_STRING_H
+#define __ABI_CSKY_STRING_H
+
+#define __HAVE_ARCH_MEMCMP
+extern int memcmp(const void *, const void *, __kernel_size_t);
+
+#define __HAVE_ARCH_MEMCPY
+extern void *memcpy(void *, const void *, __kernel_size_t);
+
+#define __HAVE_ARCH_MEMMOVE
+extern void *memmove(void *, const void *, __kernel_size_t);
+
+#define __HAVE_ARCH_MEMSET
+extern void *memset(void *, int,  __kernel_size_t);
+
+#define __HAVE_ARCH_STRCMP
+extern int strcmp(const char *, const char *);
+
+#define __HAVE_ARCH_STRCPY
+extern char *strcpy(char *, const char *);
+
+#define __HAVE_ARCH_STRLEN
+extern __kernel_size_t strlen(const char *);
+
+#endif /* __ABI_CSKY_STRING_H */

+ 23 - 0
arch/csky/abiv2/inc/abi/vdso.h

@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ABI_CSKY_VDSO_H
+#define __ABI_CSKY_VDSO_H
+
+#include <linux/uaccess.h>
+
+static inline int setup_vdso_page(unsigned short *ptr)
+{
+	int err = 0;
+
+	/* movi r7, 139 (0x8b: __NR_rt_sigreturn) */
+	err |= __put_user(0xea07, ptr);
+	err |= __put_user(0x008b, ptr + 1);
+
+	/* trap 0 */
+	err |= __put_user(0xc000, ptr + 2);
+	err |= __put_user(0x2020, ptr + 3);
+
+	return err;
+}
+
+#endif /* __ABI_CSKY_VDSO_H */
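
The two halfword pairs written above assemble into 0xea07008b and
0xc0002020, i.e. a signal-return trampoline: the movi immediate is
0x8b = 139, which is __NR_rt_sigreturn in the asm-generic syscall table
this port uses. A quick check of that arithmetic:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint16_t insn[4] = { 0xea07, 0x008b, 0xc000, 0x2020 };
		uint32_t movi = ((uint32_t)insn[0] << 16) | insn[1];

		/* expect: immediate 139 (__NR_rt_sigreturn, asm-generic) */
		printf("movi word 0x%08x, immediate %u\n", movi, movi & 0xffffu);
		return 0;
	}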

+ 152 - 0
arch/csky/abiv2/memcmp.S

@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+#include "sysdep.h"
+
+ENTRY(memcmp)
+	/* Test if len less than 4 bytes.  */
+	mov	r3, r0
+	movi	r0, 0
+	mov	r12, r4
+	cmplti	r2, 4
+	bt	.L_compare_by_byte
+
+	andi	r13, r3, 3	/* s1 was moved to r3 above; r0 is already 0 */
+	movi	r19, 4
+
+	/* Test if s1 is not 4 bytes aligned.  */
+	bnez	r13, .L_s1_not_aligned
+
+	LABLE_ALIGN
+.L_s1_aligned:
+	/* If s1 is aligned, compare a word at a time.  */
+	zext	r18, r2, 31, 4
+	/* Test if len less than 16 bytes.  */
+	bez	r18, .L_compare_by_word
+
+.L_compare_by_4word:
+	/* If aligned, load word each time.  */
+	ldw	r20, (r3, 0)
+	ldw	r21, (r1, 0)
+	/* If s1[i] != s2[i], goto .L_byte_check.  */
+	cmpne	r20, r21
+	bt	.L_byte_check
+
+	ldw	r20, (r3, 4)
+	ldw	r21, (r1, 4)
+	cmpne	r20, r21
+	bt	.L_byte_check
+
+	ldw	r20, (r3, 8)
+	ldw	r21, (r1, 8)
+	cmpne	r20, r21
+	bt	.L_byte_check
+
+	ldw	r20, (r3, 12)
+	ldw	r21, (r1, 12)
+	cmpne	r20, r21
+	bt	.L_byte_check
+
+	PRE_BNEZAD (r18)
+	addi	r3, 16
+	addi	r1, 16
+
+	BNEZAD (r18, .L_compare_by_4word)
+
+.L_compare_by_word:
+	zext	r18, r2, 3, 2
+	bez	r18, .L_compare_by_byte
+.L_compare_by_word_loop:
+	ldw	r20, (r3, 0)
+	ldw	r21, (r1, 0)
+	addi	r3, 4
+	PRE_BNEZAD (r18)
+	cmpne	r20, r21
+	addi    r1, 4
+	bt	.L_byte_check
+	BNEZAD (r18, .L_compare_by_word_loop)
+
+.L_compare_by_byte:
+	zext	r18, r2, 1, 0
+	bez	r18, .L_return
+.L_compare_by_byte_loop:
+	ldb	r0, (r3, 0)
+	ldb	r4, (r1, 0)
+	addi	r3, 1
+	subu	r0, r4
+	PRE_BNEZAD (r18)
+	addi	r1, 1
+	bnez	r0, .L_return
+	BNEZAD (r18, .L_compare_by_byte_loop)
+
+.L_return:
+	mov	r4, r12
+	rts
+
+# ifdef __CSKYBE__
+/* s1[i] != s2[i] in word, so we check byte 0.  */
+.L_byte_check:
+	xtrb0	r0, r20
+	xtrb0	r2, r21
+	subu	r0, r2
+	bnez	r0, .L_return
+
+	/* check byte 1 */
+	xtrb1	r0, r20
+	xtrb1	r2, r21
+	subu	r0, r2
+	bnez	r0, .L_return
+
+	/* check byte 2 */
+	xtrb2	r0, r20
+	xtrb2	r2, r21
+	subu	r0, r2
+	bnez	r0, .L_return
+
+	/* check byte 3 */
+	xtrb3	r0, r20
+	xtrb3	r2, r21
+	subu	r0, r2
+	br	.L_return
+# else
+/* s1[i] != s2[i] in word, so we check byte 3.  */
+.L_byte_check:
+	xtrb3	r0, r20
+	xtrb3	r2, r21
+	subu	r0, r2
+	bnez	r0, .L_return
+
+	/* check byte 2 */
+	xtrb2	r0, r20
+	xtrb2	r2, r21
+	subu	r0, r2
+	bnez	r0, .L_return
+
+	/* check byte 1 */
+	xtrb1	r0, r20
+	xtrb1	r2, r21
+	subu	r0, r2
+	bnez	r0, .L_return
+
+	/* check byte 0 */
+	xtrb0	r0, r20
+	xtrb0	r2, r21
+	subu	r0, r2
+	br	.L_return
+# endif /* !__CSKYBE__ */
+
+/* Compare when s1 is not aligned.  */
+.L_s1_not_aligned:
+	sub	r13, r19, r13
+	sub	r2, r13
+.L_s1_not_aligned_loop:
+	ldb	r0, (r3, 0)
+	ldb	r4, (r1, 0)
+	addi	r3, 1
+	subu	r0, r4
+	PRE_BNEZAD (r13)
+	addi	r1, 1
+	bnez	r0, .L_return
+	BNEZAD (r13, .L_s1_not_aligned_loop)
+	br	.L_s1_aligned
+ENDPROC(memcmp)

+ 110 - 0
arch/csky/abiv2/memcpy.S

@@ -0,0 +1,110 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+#include "sysdep.h"
+
+ENTRY(__memcpy)
+ENTRY(memcpy)
+	/* Test if len less than 4 bytes.  */
+	mov	r12, r0
+	cmplti	r2, 4
+	bt	.L_copy_by_byte
+
+	andi	r13, r0, 3
+	movi	r19, 4
+	/* Test if dest is not 4 bytes aligned.  */
+	bnez	r13, .L_dest_not_aligned
+
+/* Hardware can handle unaligned access directly.  */
+.L_dest_aligned:
+	/* If dest is aligned, then copy.  */
+	zext	r18, r2, 31, 4
+
+	/* Test if len less than 16 bytes.  */
+	bez	r18, .L_len_less_16bytes
+	movi	r19, 0
+
+	LABLE_ALIGN
+.L_len_larger_16bytes:
+#if defined(__CSKY_VDSPV2__)
+	vldx.8	vr0, (r1), r19
+	PRE_BNEZAD (r18)
+	addi	r1, 16
+	vstx.8	vr0, (r0), r19
+	addi	r0, 16
+#elif defined(__CK860__)
+	ldw	r3, (r1, 0)
+	stw	r3, (r0, 0)
+	ldw	r3, (r1, 4)
+	stw	r3, (r0, 4)
+	ldw	r3, (r1, 8)
+	stw	r3, (r0, 8)
+	ldw	r3, (r1, 12)
+	addi	r1, 16
+	stw	r3, (r0, 12)
+	addi	r0, 16
+#else
+	ldw	r20, (r1, 0)
+	ldw	r21, (r1, 4)
+	ldw	r22, (r1, 8)
+	ldw	r23, (r1, 12)
+	stw	r20, (r0, 0)
+	stw	r21, (r0, 4)
+	stw	r22, (r0, 8)
+	stw	r23, (r0, 12)
+	PRE_BNEZAD (r18)
+	addi	r1, 16
+	addi	r0, 16
+#endif
+	BNEZAD (r18, .L_len_larger_16bytes)
+
+.L_len_less_16bytes:
+	zext	r18, r2, 3, 2
+	bez	r18, .L_copy_by_byte
+.L_len_less_16bytes_loop:
+	ldw	r3, (r1, 0)
+	PRE_BNEZAD (r18)
+	addi	r1, 4
+	stw	r3, (r0, 0)
+	addi	r0, 4
+	BNEZAD (r18, .L_len_less_16bytes_loop)
+
+/* Test if len less than 4 bytes.  */
+.L_copy_by_byte:
+	zext	r18, r2, 1, 0
+	bez	r18, .L_return
+.L_copy_by_byte_loop:
+	ldb	r3, (r1, 0)
+	PRE_BNEZAD (r18)
+	addi	r1, 1
+	stb	r3, (r0, 0)
+	addi	r0, 1
+	BNEZAD (r18, .L_copy_by_byte_loop)
+
+.L_return:
+	mov	r0, r12
+	rts
+
+/*
+ * If dest is not aligned, copy a few bytes first to make it
+ * aligned.
+ */
+.L_dest_not_aligned:
+	sub	r13, r19, r13
+	sub	r2, r13
+
+/* Copy bytes until dest is aligned.  */
+.L_dest_not_aligned_loop:
+	ldb	r3, (r1, 0)
+	PRE_BNEZAD (r13)
+	addi	r1, 1
+	stb	r3, (r0, 0)
+	addi	r0, 1
+	BNEZAD (r13, .L_dest_not_aligned_loop)
+	cmplti	r2, 4
+	bt	.L_copy_by_byte
+
+	/* Dest is aligned again; rejoin the word-copy path.  */
+	jbr	.L_dest_aligned
+ENDPROC(__memcpy)
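
The control flow above is a conventional three-phase memcpy: copy bytes
until dest is word aligned, stream 16-byte blocks, then mop up the
remaining words and bytes. A C rendering of the same shape (an
illustrative sketch, not the kernel's implementation; like the assembly,
it relies on the hardware tolerating unaligned word loads from src):

	#include <stddef.h>
	#include <stdint.h>

	void *memcpy_sketch(void *dst, const void *src, size_t n)
	{
		uint8_t *d = dst;
		const uint8_t *s = src;

		/* phase 1: byte copies until dst is 4-byte aligned */
		while (n >= 4 && ((uintptr_t)d & 3)) {
			*d++ = *s++;
			n--;
		}
		/* phase 2: 16-byte blocks (the unrolled loop above) */
		for (; n >= 16; n -= 16, d += 16, s += 16)
			for (int i = 0; i < 16; i += 4)
				*(uint32_t *)(d + i) = *(const uint32_t *)(s + i);
		/* phase 3: remaining words, then bytes */
		for (; n >= 4; n -= 4, d += 4, s += 4)
			*(uint32_t *)d = *(const uint32_t *)s;
		while (n--)
			*d++ = *s++;
		return dst;
	}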

+ 108 - 0
arch/csky/abiv2/memmove.S

@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+#include "sysdep.h"
+
+	.weak memmove
+ENTRY(__memmove)
+ENTRY(memmove)
+	subu	r3, r0, r1
+	cmphs	r3, r2
+	bt	memcpy
+
+	mov	r12, r0
+	addu	r0, r0, r2
+	addu	r1, r1, r2
+
+	/* Test if len less than 4 bytes.  */
+	cmplti	r2, 4
+	bt	.L_copy_by_byte
+
+	andi	r13, r0, 3
+	/* Test if dest is not 4 bytes aligned.  */
+	bnez	r13, .L_dest_not_aligned
+	/* Hardware can handle unaligned access directly.  */
+.L_dest_aligned:
+	/* If dest is aligned, then copy.  */
+	zext	r18, r2, 31, 4
+	/* Test if len less than 16 bytes.  */
+	bez	r18, .L_len_less_16bytes
+	movi	r19, 0
+
+	/* len > 16 bytes */
+	LABLE_ALIGN
+.L_len_larger_16bytes:
+	subi	r1, 16
+	subi	r0, 16
+#if defined(__CSKY_VDSPV2__)
+	vldx.8	vr0, (r1), r19
+	PRE_BNEZAD (r18)
+	vstx.8	vr0, (r0), r19
+#elif defined(__CK860__)
+	ldw	r3, (r1, 12)
+	stw	r3, (r0, 12)
+	ldw	r3, (r1, 8)
+	stw	r3, (r0, 8)
+	ldw	r3, (r1, 4)
+	stw	r3, (r0, 4)
+	ldw	r3, (r1, 0)
+	stw	r3, (r0, 0)
+#else
+	ldw	r20, (r1, 0)
+	ldw	r21, (r1, 4)
+	ldw	r22, (r1, 8)
+	ldw	r23, (r1, 12)
+	stw	r20, (r0, 0)
+	stw	r21, (r0, 4)
+	stw	r22, (r0, 8)
+	stw	r23, (r0, 12)
+	PRE_BNEZAD (r18)
+#endif
+	BNEZAD (r18, .L_len_larger_16bytes)
+
+.L_len_less_16bytes:
+	zext	r18, r2, 3, 2
+	bez	r18, .L_copy_by_byte
+.L_len_less_16bytes_loop:
+	subi	r1, 4
+	subi	r0, 4
+	ldw	r3, (r1, 0)
+	PRE_BNEZAD (r18)
+	stw	r3, (r0, 0)
+	BNEZAD (r18, .L_len_less_16bytes_loop)
+
+	/* Test if len less than 4 bytes.  */
+.L_copy_by_byte:
+	zext	r18, r2, 1, 0
+	bez	r18, .L_return
+.L_copy_by_byte_loop:
+	subi	r1, 1
+	subi	r0, 1
+	ldb	r3, (r1, 0)
+	PRE_BNEZAD (r18)
+	stb	r3, (r0, 0)
+	BNEZAD (r18, .L_copy_by_byte_loop)
+
+.L_return:
+	mov	r0, r12
+	rts
+
+	/* If dest is not aligned, copy a few bytes first to make it
+	   aligned.  */
+.L_dest_not_aligned:
+	sub	r2, r13
+.L_dest_not_aligned_loop:
+	subi	r1, 1
+	subi	r0, 1
+	/* Copy bytes until dest is aligned.  */
+	ldb	r3, (r1, 0)
+	PRE_BNEZAD (r13)
+	stb	r3, (r0, 0)
+	BNEZAD (r13, .L_dest_not_aligned_loop)
+	cmplti	r2, 4
+	bt	.L_copy_by_byte
+	/* Dest is aligned again; rejoin the word-copy path.  */
+	jbr	.L_dest_aligned
+ENDPROC(memmove)
+ENDPROC(__memmove)

+ 83 - 0
arch/csky/abiv2/memset.S

@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+#include "sysdep.h"
+
+	.weak memset
+ENTRY(__memset)
+ENTRY(memset)
+	/* Test if len is less than 8 bytes.  */
+	mov	r12, r0
+	cmplti	r2, 8
+	bt	.L_set_by_byte
+
+	andi	r13, r0, 3
+	movi	r19, 4
+	/* Test if dest is not 4 bytes aligned.  */
+	bnez	r13, .L_dest_not_aligned
+	/* Hardware can handle unaligned access directly.  */
+.L_dest_aligned:
+	/* Replicate the fill byte into all four bytes of a word.  */
+	zextb	r3, r1
+	lsli	r1, 8
+	or	r1, r3
+	lsli	r3, r1, 16
+	or	r3, r1
+
+	/* If dest is aligned, fill word by word.  */
+	zext	r18, r2, 31, 4
+	/* Test if len less than 16 bytes.  */
+	bez	r18, .L_len_less_16bytes
+
+	LABLE_ALIGN
+.L_len_larger_16bytes:
+	stw	r3, (r0, 0)
+	stw	r3, (r0, 4)
+	stw	r3, (r0, 8)
+	stw	r3, (r0, 12)
+	PRE_BNEZAD (r18)
+	addi	r0, 16
+	BNEZAD (r18, .L_len_larger_16bytes)
+
+.L_len_less_16bytes:
+	zext	r18, r2, 3, 2
+	andi	r2, 3
+	bez	r18, .L_set_by_byte
+.L_len_less_16bytes_loop:
+	stw	r3, (r0, 0)
+	PRE_BNEZAD (r18)
+	addi	r0, 4
+	BNEZAD (r18, .L_len_less_16bytes_loop)
+
+	/* Set the remaining bytes (len < 8) one at a time.  */
+.L_set_by_byte:
+	zext	r18, r2, 2, 0
+	bez	r18, .L_return
+.L_set_by_byte_loop:
+	stb	r1, (r0, 0)
+	PRE_BNEZAD (r18)
+	addi	r0, 1
+	BNEZAD (r18, .L_set_by_byte_loop)
+
+.L_return:
+	mov	r0, r12
+	rts
+
+	/* If dest is not aligned, set a few bytes first to make it
+	   aligned.  */
+
+.L_dest_not_aligned:
+	sub	r13, r19, r13
+	sub	r2, r13
+.L_dest_not_aligned_loop:
+	/* Set bytes until dest is aligned.  */
+	stb	r1, (r0, 0)
+	PRE_BNEZAD (r13)
+	addi	r0, 1
+	BNEZAD (r13, .L_dest_not_aligned_loop)
+	cmplti	r2, 8
+	bt	.L_set_by_byte
+	/* Dest is aligned again; rejoin the word-fill path.  */
+	jbr	.L_dest_aligned
+ENDPROC(memset)
+ENDPROC(__memset)
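
The zextb/lsli/or sequence at .L_dest_aligned is plain byte replication;
the same word can be built in C with one multiply:

	#include <stdint.h>

	static inline uint32_t replicate_byte(int c)
	{
		/* 0xAB * 0x01010101 == 0xABABABAB */
		return (uint32_t)(uint8_t)c * 0x01010101u;
	}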

+ 168 - 0
arch/csky/abiv2/strcmp.S

@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+#include "sysdep.h"
+
+ENTRY(strcmp)
+	mov	a3, a0
+	/* Check if s1 and s2 share the same word alignment.  */
+	xor	a2, a3, a1
+	andi	a2, 0x3
+	bnez	a2, 7f
+	/* Check if the s1 addr is word aligned.  */
+	andi	t1, a0, 0x3
+	bnez	t1, 5f
+
+1:
+	/* If aligned, load word each time.  */
+	ldw	t0, (a3, 0)
+	ldw	t1, (a1, 0)
+	/* If s1[i] != s2[i], goto 2f.  */
+	cmpne   t0, t1
+	bt      2f
+	/* If s1[i] == s2[i], check if s1 or s2 is at the end.  */
+	tstnbz	t0
+	/* If at the end, goto 3f (finish comparing).  */
+	bf	3f
+
+	ldw	t0, (a3, 4)
+	ldw	t1, (a1, 4)
+	cmpne	t0, t1
+	bt	2f
+	tstnbz	t0
+	bf	3f
+
+	ldw	t0, (a3, 8)
+	ldw	t1, (a1, 8)
+	cmpne	t0, t1
+	bt	2f
+	tstnbz	t0
+	bf	3f
+
+	ldw	t0, (a3, 12)
+	ldw	t1, (a1, 12)
+	cmpne	t0, t1
+	bt	2f
+	tstnbz	t0
+	bf	3f
+
+	ldw	t0, (a3, 16)
+	ldw	t1, (a1, 16)
+	cmpne	t0, t1
+	bt	2f
+	tstnbz	t0
+	bf	3f
+
+	ldw	t0, (a3, 20)
+	ldw	t1, (a1, 20)
+	cmpne	t0, t1
+	bt	2f
+	tstnbz	t0
+	bf	3f
+
+	ldw	t0, (a3, 24)
+	ldw	t1, (a1, 24)
+	cmpne	t0, t1
+	bt	2f
+	tstnbz	t0
+	bf	3f
+
+	ldw	t0, (a3, 28)
+	ldw	t1, (a1, 28)
+	cmpne	t0, t1
+	bt	2f
+	tstnbz	t0
+	bf	3f
+
+	addi	a3, 32
+	addi	a1, 32
+
+	br	1b
+
+# ifdef __CSKYBE__
+	/* s1[i] != s2[i] in word, so we check byte 0.  */
+2:
+	xtrb0   a0, t0
+	xtrb0   a2, t1
+	subu    a0, a2
+	bez     a2, 4f
+	bnez    a0, 4f
+
+	/* check byte 1 */
+	xtrb1   a0, t0
+	xtrb1   a2, t1
+	subu    a0, a2
+	bez     a2, 4f
+	bnez    a0, 4f
+
+	/* check byte 2 */
+	xtrb2   a0, t0
+	xtrb2   a2, t1
+	subu    a0, a2
+	bez     a2, 4f
+	bnez    a0, 4f
+
+	/* check byte 3 */
+	xtrb3   a0, t0
+	xtrb3   a2, t1
+	subu    a0, a2
+# else
+	/* s1[i] != s2[i] in word, so we check byte 3.  */
+2:
+	xtrb3	a0, t0
+	xtrb3	a2, t1
+	subu    a0, a2
+	bez     a2, 4f
+	bnez    a0, 4f
+
+	/* check byte 2 */
+	xtrb2	a0, t0
+	xtrb2	a2, t1
+	subu    a0, a2
+	bez     a2, 4f
+	bnez    a0, 4f
+
+	/* check byte 1 */
+	xtrb1	a0, t0
+	xtrb1	a2, t1
+	subu	a0, a2
+	bez	a2, 4f
+	bnez    a0, 4f
+
+	/* check byte 0 */
+	xtrb0	a0, t0
+	xtrb0	a2, t1
+	subu	a0, a2
+
+# endif /* !__CSKYBE__ */
+	jmp     lr
+3:
+	movi	a0, 0
+4:
+	jmp     lr
+
+	/* Compare when s1 or s2 is not aligned.  */
+5:
+	subi    t1, 4
+6:
+	ldb	a0, (a3, 0)
+	ldb	a2, (a1, 0)
+	subu	a0, a2
+	bez	a2, 4b
+	bnez	a0, 4b
+	addi    t1, 1
+	addi	a1, 1
+	addi	a3, 1
+	bnez	t1, 6b
+	br	1b
+
+7:
+	ldb	a0, (a3, 0)
+	addi	a3, 1
+	ldb	a2, (a1, 0)
+	addi	a1, 1
+	subu    a0, a2
+	bnez    a0, 4b
+	bnez	a2, 7b
+	jmp	lr
+ENDPROC(strcmp)

+ 123 - 0
arch/csky/abiv2/strcpy.S

@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+#include "sysdep.h"
+
+ENTRY(strcpy)
+	mov	a3, a0
+	/* Check if the src addr is aligned.  */
+	andi	t0, a1, 3
+	bnez	t0, 11f
+1:
+	/* Check if all the bytes in the word are not zero.  */
+	ldw	a2, (a1)
+	tstnbz	a2
+	bf	9f
+	stw	a2, (a3)
+
+	ldw	a2, (a1, 4)
+	tstnbz	a2
+	bf	2f
+	stw	a2, (a3, 4)
+
+	ldw	a2, (a1, 8)
+	tstnbz	a2
+	bf	3f
+	stw	a2, (a3, 8)
+
+	ldw	a2, (a1, 12)
+	tstnbz	a2
+	bf	4f
+	stw	a2, (a3, 12)
+
+	ldw	a2, (a1, 16)
+	tstnbz	a2
+	bf	5f
+	stw	a2, (a3, 16)
+
+	ldw	a2, (a1, 20)
+	tstnbz	a2
+	bf	6f
+	stw	a2, (a3, 20)
+
+	ldw	a2, (a1, 24)
+	tstnbz	a2
+	bf	7f
+	stw	a2, (a3, 24)
+
+	ldw	a2, (a1, 28)
+	tstnbz	a2
+	bf	8f
+	stw	a2, (a3, 28)
+
+	addi	a3, 32
+	addi	a1, 32
+	br	1b
+
+
+2:
+	addi	a3, 4
+	br	9f
+
+3:
+	addi	a3, 8
+	br	9f
+
+4:
+	addi	a3, 12
+	br	9f
+
+5:
+	addi	a3, 16
+	br	9f
+
+6:
+	addi	a3, 20
+	br	9f
+
+7:
+	addi	a3, 24
+	br	9f
+
+8:
+	addi	a3, 28
+9:
+# ifdef __CSKYBE__
+	xtrb0	t0, a2
+	st.b	t0, (a3)
+	bez	t0, 10f
+	xtrb1	t0, a2
+	st.b	t0, (a3, 1)
+	bez	t0, 10f
+	xtrb2	t0, a2
+	st.b	t0, (a3, 2)
+	bez	t0, 10f
+	stw	a2, (a3)
+# else
+	xtrb3	t0, a2
+	st.b	t0, (a3)
+	bez	t0, 10f
+	xtrb2	t0, a2
+	st.b	t0, (a3, 1)
+	bez	t0, 10f
+	xtrb1	t0, a2
+	st.b	t0, (a3, 2)
+	bez	t0, 10f
+	stw	a2, (a3)
+# endif	/* !__CSKYBE__ */
+10:
+	jmp	lr
+
+11:
+	subi	t0, 4
+12:
+	ld.b	a2, (a1)
+	st.b	a2, (a3)
+	bez	a2, 10b
+	addi	t0, 1
+	addi	a1, a1, 1
+	addi	a3, a3, 1
+	bnez	t0, 12b
+	jbr	1b
+ENDPROC(strcpy)

+ 12 - 0
arch/csky/abiv2/strksyms.c

@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/module.h>
+
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL(memcmp);
+EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strcpy);
+EXPORT_SYMBOL(strlen);

+ 97 - 0
arch/csky/abiv2/strlen.S

@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/linkage.h>
+#include "sysdep.h"
+
+ENTRY(strlen)
+	/* Check if the start addr is aligned.  */
+	mov	r3, r0
+	andi	r1, r0, 3
+	movi	r2, 4
+	movi	r0, 0
+	bnez	r1, .L_start_not_aligned
+
+	LABLE_ALIGN
+.L_start_addr_aligned:
+	/* If no byte in the word is zero, keep scanning.  */
+	ldw	r1, (r3)
+	tstnbz	r1
+	bf	.L_string_tail
+
+	ldw	r1, (r3, 4)
+	addi	r0, 4
+	tstnbz	r1
+	bf	.L_string_tail
+
+	ldw	r1, (r3, 8)
+	addi	r0, 4
+	tstnbz	r1
+	bf	.L_string_tail
+
+	ldw	r1, (r3, 12)
+	addi	r0, 4
+	tstnbz	r1
+	bf	.L_string_tail
+
+	ldw	r1, (r3, 16)
+	addi	r0, 4
+	tstnbz	r1
+	bf	.L_string_tail
+
+	ldw	r1, (r3, 20)
+	addi	r0, 4
+	tstnbz	r1
+	bf	.L_string_tail
+
+	ldw	r1, (r3, 24)
+	addi	r0, 4
+	tstnbz	r1
+	bf	.L_string_tail
+
+	ldw	r1, (r3, 28)
+	addi	r0, 4
+	tstnbz	r1
+	bf	.L_string_tail
+
+	addi	r0, 4
+	addi	r3, 32
+	br	.L_start_addr_aligned
+
+.L_string_tail:
+# ifdef __CSKYBE__
+	xtrb0	r3, r1
+	bez	r3, .L_return
+	addi	r0, 1
+	xtrb1	r3, r1
+	bez	r3, .L_return
+	addi	r0, 1
+	xtrb2	r3, r1
+	bez	r3, .L_return
+	addi	r0, 1
+# else
+	xtrb3	r3, r1
+	bez	r3, .L_return
+	addi	r0, 1
+	xtrb2	r3, r1
+	bez	r3, .L_return
+	addi	r0, 1
+	xtrb1	r3, r1
+	bez	r3, .L_return
+	addi	r0, 1
+# endif	/* !__CSKYBE__ */
+
+.L_return:
+	rts
+
+.L_start_not_aligned:
+	sub	r2, r2, r1
+.L_start_not_aligned_loop:
+	ldb	r1, (r3)
+	PRE_BNEZAD (r2)
+	addi	r3, 1
+	bez	r1, .L_return
+	addi	r0, 1
+	BNEZAD (r2, .L_start_not_aligned_loop)
+	br	.L_start_addr_aligned
+ENDPROC(strlen)
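
tstnbz lets the loop test a whole word for a NUL byte in one
instruction. Portable C needs the classic has-zero-byte bit trick
instead; a sketch of the same word-at-a-time scan:

	#include <stddef.h>
	#include <stdint.h>

	/* true if any byte of w is zero */
	#define HAS_ZERO(w) (((w) - 0x01010101u) & ~(w) & 0x80808080u)

	size_t strlen_sketch(const char *s)
	{
		const char *p = s;

		/* byte loop until p is word aligned */
		while ((uintptr_t)p & 3) {
			if (!*p)
				return (size_t)(p - s);
			p++;
		}
		/* word loop: stop at the first word containing a NUL */
		const uint32_t *w = (const uint32_t *)p;
		while (!HAS_ZERO(*w))
			w++;
		/* byte loop inside the final word */
		p = (const char *)w;
		while (*p)
			p++;
		return (size_t)(p - s);
	}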

+ 30 - 0
arch/csky/abiv2/sysdep.h

@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __SYSDEP_H
+#define __SYSDEP_H
+
+#ifdef __ASSEMBLER__
+
+#if defined(__CK860__)
+#define LABLE_ALIGN	\
+	.balignw 16, 0x6c03
+
+#define PRE_BNEZAD(R)
+
+#define BNEZAD(R, L)	\
+	bnezad	R, L
+#else
+#define LABLE_ALIGN	\
+	.balignw 8, 0x6c03
+
+#define PRE_BNEZAD(R)	\
+	subi	R, 1
+
+#define BNEZAD(R, L)	\
+	bnez	R, L
+#endif
+
+#endif
+
+#endif

+ 24 - 0
arch/csky/boot/Makefile

@@ -0,0 +1,24 @@
+targets := Image zImage uImage
+targets += $(dtb-y)
+
+$(obj)/Image: vmlinux FORCE
+	$(call if_changed,objcopy)
+	@echo '  Kernel: $@ is ready'
+
+compress-$(CONFIG_KERNEL_GZIP) = gzip
+compress-$(CONFIG_KERNEL_LZO)  = lzo
+compress-$(CONFIG_KERNEL_LZMA) = lzma
+compress-$(CONFIG_KERNEL_XZ)   = xzkern
+compress-$(CONFIG_KERNEL_LZ4)  = lz4
+
+$(obj)/zImage:  $(obj)/Image FORCE
+	$(call if_changed,$(compress-y))
+	@echo '  Kernel: $@ is ready'
+
+UIMAGE_ARCH		= sandbox
+UIMAGE_COMPRESSION	= $(compress-y)
+UIMAGE_LOADADDR		= $(shell $(NM) vmlinux | awk '$$NF == "_start" {print $$1}')
+
+$(obj)/uImage: $(obj)/zImage
+	$(call if_changed,uimage)
+	@echo '  Kernel: $@ is ready'

+ 13 - 0
arch/csky/boot/dts/Makefile

@@ -0,0 +1,13 @@
+dtstree	:= $(srctree)/$(src)
+
+ifneq '$(CONFIG_CSKY_BUILTIN_DTB)' '""'
+builtindtb-y := $(patsubst "%",%,$(CONFIG_CSKY_BUILTIN_DTB))
+dtb-y += $(builtindtb-y).dtb
+obj-y += $(builtindtb-y).dtb.o
+.SECONDARY: $(obj)/$(builtindtb-y).dtb.S
+else
+dtb-y := $(patsubst $(dtstree)/%.dts,%.dtb, $(wildcard $(dtstree)/*.dts))
+endif
+
+always += $(dtb-y)
+clean-files += *.dtb *.dtb.S

+ 1 - 0
arch/csky/boot/dts/include/dt-bindings

@@ -0,0 +1 @@
+../../../../../include/dt-bindings

+ 61 - 0
arch/csky/configs/defconfig

@@ -0,0 +1,61 @@
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_DEFAULT_HOSTNAME="csky"
+# CONFIG_SWAP is not set
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_AUDIT=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_BSD_PROCESS_ACCT_V3=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_DEFAULT_DEADLINE=y
+CONFIG_CPU_CK807=y
+CONFIG_CPU_HAS_FPU=y
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=65536
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_SERIAL_NONSTANDARD=y
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_TTY_PRINTK=y
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_CSKY_MPTIMER=y
+CONFIG_GX6605S_TIMER=y
+CONFIG_PM_DEVFREQ=y
+CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
+CONFIG_DEVFREQ_GOV_PERFORMANCE=y
+CONFIG_DEVFREQ_GOV_POWERSAVE=y
+CONFIG_DEVFREQ_GOV_USERSPACE=y
+CONFIG_GENERIC_PHY=y
+CONFIG_EXT4_FS=y
+CONFIG_FANOTIFY=y
+CONFIG_QUOTA=y
+CONFIG_FSCACHE=m
+CONFIG_FSCACHE_STATS=y
+CONFIG_CACHEFILES=m
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_FAT_DEFAULT_UTF8=y
+CONFIG_NTFS_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_PROC_CHILDREN=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_CRAMFS=y
+CONFIG_ROMFS_FS=y
+CONFIG_NFS_FS=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_FS=y
+CONFIG_MAGIC_SYSRQ=y

+ 49 - 0
arch/csky/include/asm/Kbuild

@@ -0,0 +1,49 @@
+generic-y += asm-offsets.h
+generic-y += bugs.h
+generic-y += clkdev.h
+generic-y += compat.h
+generic-y += current.h
+generic-y += delay.h
+generic-y += device.h
+generic-y += div64.h
+generic-y += dma.h
+generic-y += dma-contiguous.h
+generic-y += dma-mapping.h
+generic-y += emergency-restart.h
+generic-y += exec.h
+generic-y += fb.h
+generic-y += ftrace.h
+generic-y += futex.h
+generic-y += gpio.h
+generic-y += hardirq.h
+generic-y += hw_irq.h
+generic-y += irq.h
+generic-y += irq_regs.h
+generic-y += irq_work.h
+generic-y += kdebug.h
+generic-y += kmap_types.h
+generic-y += kprobes.h
+generic-y += kvm_para.h
+generic-y += linkage.h
+generic-y += local.h
+generic-y += local64.h
+generic-y += mm-arch-hooks.h
+generic-y += module.h
+generic-y += mutex.h
+generic-y += pci.h
+generic-y += percpu.h
+generic-y += preempt.h
+generic-y += qrwlock.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += serial.h
+generic-y += shm.h
+generic-y += timex.h
+generic-y += topology.h
+generic-y += trace_clock.h
+generic-y += unaligned.h
+generic-y += user.h
+generic-y += vga.h
+generic-y += vmlinux.lds.h
+generic-y += word-at-a-time.h
+generic-y += xor.h

+ 10 - 0
arch/csky/include/asm/addrspace.h

@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_ADDRSPACE_H
+#define __ASM_CSKY_ADDRSPACE_H
+
+#define KSEG0		0x80000000ul
+#define KSEG0ADDR(a)	(((unsigned long)(a) & 0x1fffffff) | KSEG0)
+
+#endif /* __ASM_CSKY_ADDRSPACE_H */

+ 212 - 0
arch/csky/include/asm/atomic.h

@@ -0,0 +1,212 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_ATOMIC_H
+#define __ASM_CSKY_ATOMIC_H
+
+#include <linux/version.h>
+#include <asm/cmpxchg.h>
+#include <asm/barrier.h>
+
+#ifdef CONFIG_CPU_HAS_LDSTEX
+
+#define __atomic_add_unless __atomic_add_unless
+static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+	unsigned long tmp, ret;
+
+	smp_mb();
+
+	asm volatile (
+	"1:	ldex.w		%0, (%3) \n"
+	"	mov		%1, %0   \n"
+	"	cmpne		%0, %4   \n"
+	"	bf		2f	 \n"
+	"	add		%0, %2   \n"
+	"	stex.w		%0, (%3) \n"
+	"	bez		%0, 1b   \n"
+	"2:				 \n"
+		: "=&r" (tmp), "=&r" (ret)
+		: "r" (a), "r"(&v->counter), "r"(u)
+		: "memory");
+
+	if (ret != u)
+		smp_mb();
+
+	return ret;
+}
+
+#define ATOMIC_OP(op, c_op)						\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp;						\
+									\
+	asm volatile (							\
+	"1:	ldex.w		%0, (%2) \n"				\
+	"	" #op "		%0, %1   \n"				\
+	"	stex.w		%0, (%2) \n"				\
+	"	bez		%0, 1b   \n"				\
+		: "=&r" (tmp)						\
+		: "r" (i), "r"(&v->counter)				\
+		: "memory");						\
+}
+
+#define ATOMIC_OP_RETURN(op, c_op)					\
+static inline int atomic_##op##_return(int i, atomic_t *v)		\
+{									\
+	unsigned long tmp, ret;						\
+									\
+	smp_mb();							\
+	asm volatile (							\
+	"1:	ldex.w		%0, (%3) \n"				\
+	"	" #op "		%0, %2   \n"				\
+	"	mov		%1, %0   \n"				\
+	"	stex.w		%0, (%3) \n"				\
+	"	bez		%0, 1b   \n"				\
+		: "=&r" (tmp), "=&r" (ret)				\
+		: "r" (i), "r"(&v->counter)				\
+		: "memory");						\
+	smp_mb();							\
+									\
+	return ret;							\
+}
+
+#define ATOMIC_FETCH_OP(op, c_op)					\
+static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp, ret;						\
+									\
+	smp_mb();							\
+	asm volatile (							\
+	"1:	ldex.w		%0, (%3) \n"				\
+	"	mov		%1, %0   \n"				\
+	"	" #op "		%0, %2   \n"				\
+	"	stex.w		%0, (%3) \n"				\
+	"	bez		%0, 1b   \n"				\
+		: "=&r" (tmp), "=&r" (ret)				\
+		: "r" (i), "r"(&v->counter)				\
+		: "memory");						\
+	smp_mb();							\
+									\
+	return ret;							\
+}
+
+#else /* CONFIG_CPU_HAS_LDSTEX */
+
+#include <linux/irqflags.h>
+
+#define __atomic_add_unless __atomic_add_unless
+static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+	unsigned long tmp, ret, flags;
+
+	raw_local_irq_save(flags);
+
+	asm volatile (
+	"	ldw		%0, (%3) \n"
+	"	mov		%1, %0   \n"
+	"	cmpne		%0, %4   \n"
+	"	bf		2f	 \n"
+	"	add		%0, %2   \n"
+	"	stw		%0, (%3) \n"
+	"2:				 \n"
+		: "=&r" (tmp), "=&r" (ret)
+		: "r" (a), "r"(&v->counter), "r"(u)
+		: "memory");
+
+	raw_local_irq_restore(flags);
+
+	return ret;
+}
+
+#define ATOMIC_OP(op, c_op)						\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp, flags;					\
+									\
+	raw_local_irq_save(flags);					\
+									\
+	asm volatile (							\
+	"	ldw		%0, (%2) \n"				\
+	"	" #op "		%0, %1   \n"				\
+	"	stw		%0, (%2) \n"				\
+		: "=&r" (tmp)						\
+		: "r" (i), "r"(&v->counter)				\
+		: "memory");						\
+									\
+	raw_local_irq_restore(flags);					\
+}
+
+#define ATOMIC_OP_RETURN(op, c_op)					\
+static inline int atomic_##op##_return(int i, atomic_t *v)		\
+{									\
+	unsigned long tmp, ret, flags;					\
+									\
+	raw_local_irq_save(flags);					\
+									\
+	asm volatile (							\
+	"	ldw		%0, (%3) \n"				\
+	"	" #op "		%0, %2   \n"				\
+	"	stw		%0, (%3) \n"				\
+	"	mov		%1, %0   \n"				\
+		: "=&r" (tmp), "=&r" (ret)				\
+		: "r" (i), "r"(&v->counter)				\
+		: "memory");						\
+									\
+	raw_local_irq_restore(flags);					\
+									\
+	return ret;							\
+}
+
+#define ATOMIC_FETCH_OP(op, c_op)					\
+static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+{									\
+	unsigned long tmp, ret, flags;					\
+									\
+	raw_local_irq_save(flags);					\
+									\
+	asm volatile (							\
+	"	ldw		%0, (%3) \n"				\
+	"	mov		%1, %0   \n"				\
+	"	" #op "		%0, %2   \n"				\
+	"	stw		%0, (%3) \n"				\
+		: "=&r" (tmp), "=&r" (ret)				\
+		: "r" (i), "r"(&v->counter)				\
+		: "memory");						\
+									\
+	raw_local_irq_restore(flags);					\
+									\
+	return ret;							\
+}
+
+#endif /* CONFIG_CPU_HAS_LDSTEX */
+
+#define atomic_add_return atomic_add_return
+ATOMIC_OP_RETURN(add, +)
+#define atomic_sub_return atomic_sub_return
+ATOMIC_OP_RETURN(sub, -)
+
+#define atomic_fetch_add atomic_fetch_add
+ATOMIC_FETCH_OP(add, +)
+#define atomic_fetch_sub atomic_fetch_sub
+ATOMIC_FETCH_OP(sub, -)
+#define atomic_fetch_and atomic_fetch_and
+ATOMIC_FETCH_OP(and, &)
+#define atomic_fetch_or atomic_fetch_or
+ATOMIC_FETCH_OP(or, |)
+#define atomic_fetch_xor atomic_fetch_xor
+ATOMIC_FETCH_OP(xor, ^)
+
+#define atomic_and atomic_and
+ATOMIC_OP(and, &)
+#define atomic_or atomic_or
+ATOMIC_OP(or, |)
+#define atomic_xor atomic_xor
+ATOMIC_OP(xor, ^)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#include <asm-generic/atomic.h>
+
+#endif /* __ASM_CSKY_ATOMIC_H */
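
On CONFIG_CPU_HAS_LDSTEX parts, ldex.w/stex.w form a load-linked/
store-conditional pair: stex.w only writes back if nothing else touched
the location since the ldex.w, and leaves a success flag that the bez
retries on. The same retry shape in portable C11 (a sketch of the
pattern, not the kernel code):

	#include <stdatomic.h>

	int atomic_add_return_sketch(int i, _Atomic int *v)
	{
		int old, new;

		do {
			old = atomic_load_explicit(v, memory_order_relaxed);
			new = old + i;
			/* a failed exchange here mirrors a failed stex.w */
		} while (!atomic_compare_exchange_weak_explicit(v, &old, new,
				memory_order_seq_cst, memory_order_relaxed));
		return new;
	}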

+ 49 - 0
arch/csky/include/asm/barrier.h

@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_BARRIER_H
+#define __ASM_CSKY_BARRIER_H
+
+#ifndef __ASSEMBLY__
+
+#define nop()	asm volatile ("nop\n":::"memory")
+
+/*
+ * sync:        completion barrier
+ * sync.s:      completion barrier and shareable to other cores
+ * sync.i:      completion barrier with flush cpu pipeline
+ * sync.is:     completion barrier with flush cpu pipeline and shareable to
+ *		other cores
+ *
+ * bar.brwarw:  ordering barrier for all load/store instructions before it
+ * bar.brwarws: ordering barrier for all load/store instructions before it
+ *						and shareable to other cores
+ * bar.brar:    ordering barrier for all load       instructions before it
+ * bar.brars:   ordering barrier for all load       instructions before it
+ *						and shareable to other cores
+ * bar.bwaw:    ordering barrier for all store      instructions before it
+ * bar.bwaws:   ordering barrier for all store      instructions before it
+ *						and shareable to other cores
+ */
+
+#ifdef CONFIG_CPU_HAS_CACHEV2
+#define mb()		asm volatile ("bar.brwarw\n":::"memory")
+#define rmb()		asm volatile ("bar.brar\n":::"memory")
+#define wmb()		asm volatile ("bar.bwaw\n":::"memory")
+
+#ifdef CONFIG_SMP
+#define __smp_mb()	asm volatile ("bar.brwarws\n":::"memory")
+#define __smp_rmb()	asm volatile ("bar.brars\n":::"memory")
+#define __smp_wmb()	asm volatile ("bar.bwaws\n":::"memory")
+#endif /* CONFIG_SMP */
+
+#define sync_is()	asm volatile ("sync.is\n":::"memory")
+
+#else /* !CONFIG_CPU_HAS_CACHEV2 */
+#define mb()		asm volatile ("sync\n":::"memory")
+#endif
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_CSKY_BARRIER_H */

+ 82 - 0
arch/csky/include/asm/bitops.h

@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_BITOPS_H
+#define __ASM_CSKY_BITOPS_H
+
+#include <linux/compiler.h>
+#include <asm/barrier.h>
+
+/*
+ * asm-generic/bitops/ffs.h
+ */
+static inline int ffs(int x)
+{
+	if (!x)
+		return 0;
+
+	asm volatile (
+		"brev %0\n"
+		"ff1  %0\n"
+		"addi %0, 1\n"
+		: "=&r"(x)
+		: "0"(x));
+	return x;
+}
+
+/*
+ * asm-generic/bitops/__ffs.h
+ */
+static __always_inline unsigned long __ffs(unsigned long x)
+{
+	asm volatile (
+		"brev %0\n"
+		"ff1  %0\n"
+		: "=&r"(x)
+		: "0"(x));
+	return x;
+}
+
+/*
+ * asm-generic/bitops/fls.h
+ */
+static __always_inline int fls(int x)
+{
+	asm volatile(
+		"ff1 %0\n"
+		: "=&r"(x)
+		: "0"(x));
+
+	return (32 - x);
+}
+
+/*
+ * asm-generic/bitops/__fls.h
+ */
+static __always_inline unsigned long __fls(unsigned long x)
+{
+	return fls(x) - 1;
+}
+
+#include <asm-generic/bitops/ffz.h>
+#include <asm-generic/bitops/fls64.h>
+#include <asm-generic/bitops/find.h>
+
+#ifndef _LINUX_BITOPS_H
+#error only <linux/bitops.h> can be included directly
+#endif
+
+#include <asm-generic/bitops/sched.h>
+#include <asm-generic/bitops/hweight.h>
+#include <asm-generic/bitops/lock.h>
+#include <asm-generic/bitops/atomic.h>
+
+/*
+ * FIXME: only the atomic version can be used safely here, so the
+ * non-atomic __clear_bit below is aliased to the atomic clear_bit.
+ */
+#include <asm-generic/bitops/non-atomic.h>
+#define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)
+
+#include <asm-generic/bitops/le.h>
+#include <asm-generic/bitops/ext2-atomic.h>
+#endif /* __ASM_CSKY_BITOPS_H */
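
brev reverses the bit order and ff1 counts leading zeros (32 for a zero
input), so __ffs(x) == ctz(x), ffs(x) == ctz(x) + 1 and
fls(x) == 32 - clz(x). A portable C cross-check using compiler builtins:

	#include <stdio.h>

	static int ffs_sketch(unsigned int x)
	{
		return x ? __builtin_ctz(x) + 1 : 0;
	}

	static int fls_sketch(unsigned int x)
	{
		return x ? 32 - __builtin_clz(x) : 0;
	}

	int main(void)
	{
		/* expect: ffs(0x8)=4 fls(0x8)=4 */
		printf("ffs(0x8)=%d fls(0x8)=%d\n", ffs_sketch(0x8), fls_sketch(0x8));
		return 0;
	}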

+ 26 - 0
arch/csky/include/asm/bug.h

@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_BUG_H
+#define __ASM_CSKY_BUG_H
+
+#include <linux/compiler.h>
+#include <linux/const.h>
+#include <linux/types.h>
+
+#define BUG()				\
+do {					\
+	asm volatile ("bkpt\n");	\
+	unreachable();			\
+} while (0)
+
+#define HAVE_ARCH_BUG
+
+#include <asm-generic/bug.h>
+
+struct pt_regs;
+
+void die_if_kernel(char *str, struct pt_regs *regs, int nr);
+void show_regs(struct pt_regs *regs);
+
+#endif /* __ASM_CSKY_BUG_H */

+ 30 - 0
arch/csky/include/asm/cache.h

@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_CACHE_H
+#define __ASM_CSKY_CACHE_H
+
+/* bytes per L1 cache line */
+#define L1_CACHE_SHIFT	CONFIG_L1_CACHE_SHIFT
+
+#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
+
+#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
+
+#ifndef __ASSEMBLY__
+
+void dcache_wb_line(unsigned long start);
+
+void icache_inv_range(unsigned long start, unsigned long end);
+void icache_inv_all(void);
+
+void dcache_wb_range(unsigned long start, unsigned long end);
+void dcache_wbinv_all(void);
+
+void cache_wbinv_range(unsigned long start, unsigned long end);
+void cache_wbinv_all(void);
+
+void dma_wbinv_range(unsigned long start, unsigned long end);
+void dma_wb_range(unsigned long start, unsigned long end);
+
+#endif
+#endif  /* __ASM_CSKY_CACHE_H */

+ 9 - 0
arch/csky/include/asm/cacheflush.h

@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_CACHEFLUSH_H
+#define __ASM_CSKY_CACHEFLUSH_H
+
+#include <abi/cacheflush.h>
+
+#endif /* __ASM_CSKY_CACHEFLUSH_H */

+ 50 - 0
arch/csky/include/asm/checksum.h

@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_CHECKSUM_H
+#define __ASM_CSKY_CHECKSUM_H
+
+#include <linux/in6.h>
+#include <asm/byteorder.h>
+
+static inline __sum16 csum_fold(__wsum csum)
+{
+	u32 tmp;
+
+	asm volatile(
+	"mov	%1, %0\n"
+	"rori	%0, 16\n"
+	"addu	%0, %1\n"
+	"lsri	%0, 16\n"
+	: "=r"(csum), "=r"(tmp)
+	: "0"(csum));
+
+	return (__force __sum16) ~csum;
+}
+#define csum_fold csum_fold
+
+static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
+		unsigned short len, unsigned short proto, __wsum sum)
+{
+	asm volatile(
+	"clrc\n"
+	"addc    %0, %1\n"
+	"addc    %0, %2\n"
+	"addc    %0, %3\n"
+	"inct    %0\n"
+	: "=r"(sum)
+	: "r"((__force u32)saddr), "r"((__force u32)daddr),
+#ifdef __BIG_ENDIAN
+	"r"(proto + len),
+#else
+	"r"((proto + len) << 8),
+#endif
+	"0" ((__force unsigned long)sum)
+	: "cc");
+	return sum;
+}
+#define csum_tcpudp_nofold csum_tcpudp_nofold
+
+#include <asm-generic/checksum.h>
+
+#endif /* __ASM_CSKY_CHECKSUM_H */
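
csum_fold reduces a 32-bit one's-complement sum to 16 bits: rotate by
16, add (both halves plus any carry land in the top half), shift down
and invert. The same computation in plain C:

	#include <stdint.h>
	#include <stdio.h>

	static uint16_t csum_fold_sketch(uint32_t csum)
	{
		uint32_t tmp = csum;

		csum = (csum << 16) | (csum >> 16);	/* rori 16 */
		csum += tmp;				/* fold into the high half */
		return (uint16_t)~(csum >> 16);		/* lsri 16, then invert */
	}

	int main(void)
	{
		/* halves 0x0001 + 0xffff fold to 0x0001, complement 0xfffe */
		printf("0x%04x\n", csum_fold_sketch(0x0001ffffu));
		return 0;
	}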

+ 73 - 0
arch/csky/include/asm/cmpxchg.h

@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_CMPXCHG_H
+#define __ASM_CSKY_CMPXCHG_H
+
+#ifdef CONFIG_CPU_HAS_LDSTEX
+#include <asm/barrier.h>
+
+extern void __bad_xchg(void);
+
+#define __xchg(new, ptr, size)					\
+({								\
+	__typeof__(ptr) __ptr = (ptr);				\
+	__typeof__(new) __new = (new);				\
+	__typeof__(*(ptr)) __ret;				\
+	unsigned long tmp;					\
+	switch (size) {						\
+	case 4:							\
+		smp_mb();					\
+		asm volatile (					\
+		"1:	ldex.w		%0, (%3) \n"		\
+		"	mov		%1, %2   \n"		\
+		"	stex.w		%1, (%3) \n"		\
+		"	bez		%1, 1b   \n"		\
+			: "=&r" (__ret), "=&r" (tmp)		\
+			: "r" (__new), "r"(__ptr)		\
+			:);					\
+		smp_mb();					\
+		break;						\
+	default:						\
+		__bad_xchg();					\
+	}							\
+	__ret;							\
+})
+
+#define xchg(ptr, x)	(__xchg((x), (ptr), sizeof(*(ptr))))
+
+#define __cmpxchg(ptr, old, new, size)				\
+({								\
+	__typeof__(ptr) __ptr = (ptr);				\
+	__typeof__(new) __new = (new);				\
+	__typeof__(new) __tmp;					\
+	__typeof__(old) __old = (old);				\
+	__typeof__(*(ptr)) __ret;				\
+	switch (size) {						\
+	case 4:							\
+		smp_mb();					\
+		asm volatile (					\
+		"1:	ldex.w		%0, (%3) \n"		\
+		"	cmpne		%0, %4   \n"		\
+		"	bt		2f       \n"		\
+		"	mov		%1, %2   \n"		\
+		"	stex.w		%1, (%3) \n"		\
+		"	bez		%1, 1b   \n"		\
+		"2:				 \n"		\
+			: "=&r" (__ret), "=&r" (__tmp)		\
+			: "r" (__new), "r"(__ptr), "r"(__old)	\
+			:);					\
+		smp_mb();					\
+		break;						\
+	default:						\
+		__bad_xchg();					\
+	}							\
+	__ret;							\
+})
+
+#define cmpxchg(ptr, o, n) \
+	(__cmpxchg((ptr), (o), (n), sizeof(*(ptr))))
+#else
+#include <asm-generic/cmpxchg.h>
+#endif
+
+#endif /* __ASM_CSKY_CMPXCHG_H */
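
Both macros return the value that was in memory before the operation, so
callers test success by comparing that against the expected old value. A
sketch of the calling pattern in C11 terms (cmpxchg_sketch is a
hypothetical stand-in with the same semantics):

	#include <stdatomic.h>
	#include <stdio.h>

	/* returns the previous value; stores new only if *p == old */
	static int cmpxchg_sketch(_Atomic int *p, int old, int new)
	{
		int prev = old;

		atomic_compare_exchange_strong(p, &prev, new);
		return prev;
	}

	int main(void)
	{
		_Atomic int lock = 0;

		if (cmpxchg_sketch(&lock, 0, 1) == 0)
			printf("acquired\n");	/* saw 0, so we won the race */
		return 0;
	}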

+ 85 - 0
arch/csky/include/asm/elf.h

@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_ELF_H
+#define __ASM_CSKY_ELF_H
+
+#include <asm/ptrace.h>
+#include <abi/regdef.h>
+
+#define ELF_ARCH 252
+
+/* CSKY Relocations */
+#define R_CSKY_NONE               0
+#define R_CSKY_32                 1
+#define R_CSKY_PCIMM8BY4          2
+#define R_CSKY_PCIMM11BY2         3
+#define R_CSKY_PCIMM4BY2          4
+#define R_CSKY_PC32               5
+#define R_CSKY_PCRELJSR_IMM11BY2  6
+#define R_CSKY_GNU_VTINHERIT      7
+#define R_CSKY_GNU_VTENTRY        8
+#define R_CSKY_RELATIVE           9
+#define R_CSKY_COPY               10
+#define R_CSKY_GLOB_DAT           11
+#define R_CSKY_JUMP_SLOT          12
+#define R_CSKY_ADDR_HI16          24
+#define R_CSKY_ADDR_LO16          25
+#define R_CSKY_PCRELJSR_IMM26BY2  40
+
+typedef unsigned long elf_greg_t;
+
+typedef struct user_fp elf_fpregset_t;
+
+#define ELF_NGREG (sizeof(struct pt_regs) / sizeof(elf_greg_t))
+
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ((x)->e_machine == ELF_ARCH)
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define USE_ELF_CORE_DUMP
+#define ELF_EXEC_PAGESIZE		4096
+#define ELF_CLASS			ELFCLASS32
+#define ELF_PLAT_INIT(_r, load_addr)	{ _r->a0 = 0; }
+
+#ifdef __cskyBE__
+#define ELF_DATA	ELFDATA2MSB
+#else
+#define ELF_DATA	ELFDATA2LSB
+#endif
+
+/*
+ * This is the location that an ET_DYN program is loaded if exec'ed. Typical
+ * use of this is to invoke "./ld.so someprog" to test out a new version of
+ * the loader.  We need to make sure that it is out of the way of the program
+ * that it will "exec", and that there is sufficient room for the brk.
+ */
+#define ELF_ET_DYN_BASE	0x0UL
+#include <abi/elf.h>
+
+/* Similar, but for a thread other than current. */
+struct task_struct;
+extern int dump_task_regs(struct task_struct *tsk, elf_gregset_t *elf_regs);
+#define ELF_CORE_COPY_TASK_REGS(tsk, elf_regs) dump_task_regs(tsk, elf_regs)
+
+#define ELF_HWCAP	(0)
+
+/*
+ * This yields a string that ld.so will use to load implementation specific
+ * libraries for optimization. This is more specific in intent than poking
+ * at uname or /proc/cpuinfo.
+ */
+#define ELF_PLATFORM		(NULL)
+#define SET_PERSONALITY(ex)	set_personality(PER_LINUX)
+
+#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
+struct linux_binprm;
+extern int arch_setup_additional_pages(struct linux_binprm *bprm,
+				       int uses_interp);
+#endif /* __ASM_CSKY_ELF_H */

+ 27 - 0
arch/csky/include/asm/fixmap.h

@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_FIXMAP_H
+#define __ASM_CSKY_FIXMAP_H
+
+#include <asm/page.h>
+#ifdef CONFIG_HIGHMEM
+#include <linux/threads.h>
+#include <asm/kmap_types.h>
+#endif
+
+enum fixed_addresses {
+#ifdef CONFIG_HIGHMEM
+	FIX_KMAP_BEGIN,
+	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
+#endif
+	__end_of_fixed_addresses
+};
+
+#define FIXADDR_TOP	0xffffc000
+#define FIXADDR_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_START	(FIXADDR_TOP - FIXADDR_SIZE)
+
+#include <asm-generic/fixmap.h>
+
+#endif /* __ASM_CSKY_FIXMAP_H */

+ 51 - 0
arch/csky/include/asm/highmem.h

@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_HIGHMEM_H
+#define __ASM_CSKY_HIGHMEM_H
+
+#ifdef __KERNEL__
+
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/uaccess.h>
+#include <asm/kmap_types.h>
+#include <asm/cache.h>
+
+/* undef for production */
+#define HIGHMEM_DEBUG 1
+
+/* declarations for highmem.c */
+extern unsigned long highstart_pfn, highend_pfn;
+
+extern pte_t *pkmap_page_table;
+
+/*
+ * Right now we initialize only a single pte table. It can be extended
+ * easily, subsequent pte tables have to be allocated in one physical
+ * chunk of RAM.
+ */
+#define LAST_PKMAP 1024
+#define LAST_PKMAP_MASK (LAST_PKMAP-1)
+#define PKMAP_NR(virt)  (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
+#define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
+
+extern void *kmap_high(struct page *page);
+extern void kunmap_high(struct page *page);
+
+extern void *kmap(struct page *page);
+extern void kunmap(struct page *page);
+extern void *kmap_atomic(struct page *page);
+extern void __kunmap_atomic(void *kvaddr);
+extern void *kmap_atomic_pfn(unsigned long pfn);
+extern struct page *kmap_atomic_to_page(void *ptr);
+
+#define flush_cache_kmaps() do {} while (0)
+
+extern void kmap_init(void);
+
+#define kmap_prot PAGE_KERNEL
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_CSKY_HIGHMEM_H */

+ 24 - 0
arch/csky/include/asm/io.h

@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_IO_H
+#define __ASM_CSKY_IO_H
+
+#include <abi/pgtable-bits.h>
+#include <linux/types.h>
+#include <linux/version.h>
+
+extern void __iomem *ioremap(phys_addr_t offset, size_t size);
+
+extern void iounmap(void *addr);
+
+extern int remap_area_pages(unsigned long address, phys_addr_t phys_addr,
+		size_t size, unsigned long flags);
+
+#define ioremap_nocache(phy, sz)	ioremap(phy, sz)
+#define ioremap_wc ioremap_nocache
+#define ioremap_wt ioremap_nocache
+
+#include <asm-generic/io.h>
+
+#endif /* __ASM_CSKY_IO_H */

+ 49 - 0
arch/csky/include/asm/irqflags.h

@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_IRQFLAGS_H
+#define __ASM_CSKY_IRQFLAGS_H
+#include <abi/reg_ops.h>
+
+static inline unsigned long arch_local_irq_save(void)
+{
+	unsigned long flags;
+
+	flags = mfcr("psr");
+	asm volatile("psrclr ie\n":::"memory");
+	return flags;
+}
+#define arch_local_irq_save arch_local_irq_save
+
+static inline void arch_local_irq_enable(void)
+{
+	asm volatile("psrset ee, ie\n":::"memory");
+}
+#define arch_local_irq_enable arch_local_irq_enable
+
+static inline void arch_local_irq_disable(void)
+{
+	asm volatile("psrclr ie\n":::"memory");
+}
+#define arch_local_irq_disable arch_local_irq_disable
+
+static inline unsigned long arch_local_save_flags(void)
+{
+	return mfcr("psr");
+}
+#define arch_local_save_flags arch_local_save_flags
+
+static inline void arch_local_irq_restore(unsigned long flags)
+{
+	mtcr("psr", flags);
+}
+#define arch_local_irq_restore arch_local_irq_restore
+
+static inline int arch_irqs_disabled_flags(unsigned long flags)
+{
+	return !(flags & (1<<6));
+}
+#define arch_irqs_disabled_flags arch_irqs_disabled_flags
+
+#include <asm-generic/irqflags.h>
+
+#endif /* __ASM_CSKY_IRQFLAGS_H */

+ 12 - 0
arch/csky/include/asm/mmu.h

@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_MMU_H
+#define __ASM_CSKY_MMU_H
+
+typedef struct {
+	unsigned long asid[NR_CPUS];
+	void *vdso;
+} mm_context_t;
+
+#endif /* __ASM_CSKY_MMU_H */

+ 150 - 0
arch/csky/include/asm/mmu_context.h

@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_MMU_CONTEXT_H
+#define __ASM_CSKY_MMU_CONTEXT_H
+
+#include <asm-generic/mm_hooks.h>
+#include <asm/setup.h>
+#include <asm/page.h>
+#include <asm/cacheflush.h>
+#include <asm/tlbflush.h>
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <abi/ckmmu.h>
+
+static inline void tlbmiss_handler_setup_pgd(unsigned long pgd, bool kernel)
+{
+	pgd &= ~(1<<31);
+	pgd += PHYS_OFFSET;
+	pgd |= 1;
+	setup_pgd(pgd, kernel);
+}
+
+#define TLBMISS_HANDLER_SETUP_PGD(pgd) \
+	tlbmiss_handler_setup_pgd((unsigned long)pgd, 0)
+#define TLBMISS_HANDLER_SETUP_PGD_KERNEL(pgd) \
+	tlbmiss_handler_setup_pgd((unsigned long)pgd, 1)
+
+static inline unsigned long tlb_get_pgd(void)
+{
+	return ((get_pgd()|(1<<31)) - PHYS_OFFSET) & ~1;
+}
+
+#define cpu_context(cpu, mm)	((mm)->context.asid[cpu])
+#define cpu_asid(cpu, mm)	(cpu_context((cpu), (mm)) & ASID_MASK)
+#define asid_cache(cpu)		(cpu_data[cpu].asid_cache)
+
+#define ASID_FIRST_VERSION	(1 << CONFIG_CPU_ASID_BITS)
+#define ASID_INC		0x1
+#define ASID_MASK		(ASID_FIRST_VERSION - 1)
+#define ASID_VERSION_MASK	~ASID_MASK
+
+#define destroy_context(mm)		do {} while (0)
+#define enter_lazy_tlb(mm, tsk)		do {} while (0)
+#define deactivate_mm(tsk, mm)		do {} while (0)
+
+/*
+ *  All unused by hardware upper bits will be considered
+ *  as a software asid extension.
+ */
+static inline void
+get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
+{
+	unsigned long asid = asid_cache(cpu);
+
+	asid += ASID_INC;
+	if (!(asid & ASID_MASK)) {
+		flush_tlb_all();	/* start new asid cycle */
+		if (!asid)		/* fix version if needed */
+			asid = ASID_FIRST_VERSION;
+	}
+	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
+}
+
+/*
+ * Initialize the context related info for a new mm_struct
+ * instance.
+ */
+static inline int
+init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+{
+	int i;
+
+	for_each_online_cpu(i)
+		cpu_context(i, mm) = 0;
+	return 0;
+}
+
+static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+			struct task_struct *tsk)
+{
+	unsigned int cpu = smp_processor_id();
+	unsigned long flags;
+
+	local_irq_save(flags);
+	/* Check if our ASID is of an older version and thus invalid */
+	if ((cpu_context(cpu, next) ^ asid_cache(cpu)) & ASID_VERSION_MASK)
+		get_new_mmu_context(next, cpu);
+	write_mmu_entryhi(cpu_asid(cpu, next));
+	TLBMISS_HANDLER_SETUP_PGD(next->pgd);
+
+	/*
+	 * Mark current->active_mm as not "active" anymore.
+	 * We don't want to mislead possible IPI tlb flush routines.
+	 */
+	cpumask_clear_cpu(cpu, mm_cpumask(prev));
+	cpumask_set_cpu(cpu, mm_cpumask(next));
+
+	local_irq_restore(flags);
+}
+
+/*
+ * After we have set current->mm to a new value, this activates
+ * the context for the new mm so we see the new mappings.
+ */
+static inline void
+activate_mm(struct mm_struct *prev, struct mm_struct *next)
+{
+	unsigned long flags;
+	int cpu = smp_processor_id();
+
+	local_irq_save(flags);
+
+	/* Unconditionally get a new ASID.  */
+	get_new_mmu_context(next, cpu);
+
+	write_mmu_entryhi(cpu_asid(cpu, next));
+	TLBMISS_HANDLER_SETUP_PGD(next->pgd);
+
+	/* mark mmu ownership change */
+	cpumask_clear_cpu(cpu, mm_cpumask(prev));
+	cpumask_set_cpu(cpu, mm_cpumask(next));
+
+	local_irq_restore(flags);
+}
+
+/*
+ * If mm is currently active_mm, we can't really drop it. Instead,
+ * we will get a new one for it.
+ */
+static inline void
+drop_mmu_context(struct mm_struct *mm, unsigned int cpu)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	if (cpumask_test_cpu(cpu, mm_cpumask(mm)))  {
+		get_new_mmu_context(mm, cpu);
+		write_mmu_entryhi(cpu_asid(cpu, mm));
+	} else {
+		/* will get a new context next time */
+		cpu_context(cpu, mm) = 0;
+	}
+
+	local_irq_restore(flags);
+}
+
+#endif /* __ASM_CSKY_MMU_CONTEXT_H */
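
asid_cache is a single counter whose low CONFIG_CPU_ASID_BITS bits go to
the hardware and whose upper bits act as a generation number: when the
hardware bits wrap, the TLB is flushed and every mm carrying an older
generation takes the get_new_mmu_context() path above. A small C model
of that allocator (assuming 8 ASID bits for the example):

	#include <stdio.h>

	#define ASID_BITS		8
	#define ASID_FIRST_VERSION	(1ul << ASID_BITS)
	#define ASID_MASK		(ASID_FIRST_VERSION - 1)

	static unsigned long asid_cache = ASID_FIRST_VERSION;

	static unsigned long new_context(void)
	{
		unsigned long asid = asid_cache + 1;

		if (!(asid & ASID_MASK)) {	/* hardware ASIDs exhausted */
			/* flush_tlb_all() would run here */
			if (!asid)		/* the counter itself wrapped */
				asid = ASID_FIRST_VERSION;
		}
		return asid_cache = asid;
	}

	int main(void)
	{
		for (int i = 0; i < 3; i++) {
			unsigned long a = new_context();
			printf("asid=%#lx hw=%#lx gen=%#lx\n",
			       a, a & ASID_MASK, a >> ASID_BITS);
		}
		return 0;
	}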

+ 104 - 0
arch/csky/include/asm/page.h

@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_PAGE_H
+#define __ASM_CSKY_PAGE_H
+
+#include <asm/setup.h>
+#include <asm/cache.h>
+#include <linux/const.h>
+
+/*
+ * PAGE_SHIFT determines the page size
+ */
+#define PAGE_SHIFT	12
+#define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)
+#define PAGE_MASK	(~(PAGE_SIZE - 1))
+#define THREAD_SIZE	(PAGE_SIZE * 2)
+#define THREAD_MASK	(~(THREAD_SIZE - 1))
+#define THREAD_SHIFT	(PAGE_SHIFT + 1)
+
+/*
+ * NOTE: virtual isn't really correct, actually it should be the offset into the
+ * memory node, but we have no highmem, so that works for now.
+ * TODO: implement (fast) pfn<->pgdat_idx conversion functions, this makes lots
+ * of the shifts unnecessary.
+ */
+
+#ifndef __ASSEMBLY__
+
+#include <linux/pfn.h>
+
+#define virt_to_pfn(kaddr)      (__pa(kaddr) >> PAGE_SHIFT)
+#define pfn_to_virt(pfn)        __va((pfn) << PAGE_SHIFT)
+
+#define virt_addr_valid(kaddr)  ((void *)(kaddr) >= (void *)PAGE_OFFSET && \
+			(void *)(kaddr) < high_memory)
+#define pfn_valid(pfn)		((pfn) >= ARCH_PFN_OFFSET && ((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
+
+extern void *memset(void *dest, int c, size_t l);
+extern void *memcpy(void *to, const void *from, size_t l);
+
+#define clear_page(page)	memset((page), 0, PAGE_SIZE)
+#define copy_page(to, from)	memcpy((to), (from), PAGE_SIZE)
+
+#define page_to_phys(page)	(page_to_pfn(page) << PAGE_SHIFT)
+#define phys_to_page(paddr)	(pfn_to_page(PFN_DOWN(paddr)))
+
+struct page;
+
+#include <abi/page.h>
+
+struct vm_area_struct;
+
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { unsigned long pte_low; } pte_t;
+#define pte_val(x)	((x).pte_low)
+
+typedef struct { unsigned long pgd; } pgd_t;
+typedef struct { unsigned long pgprot; } pgprot_t;
+typedef struct page *pgtable_t;
+
+#define pgd_val(x)	((x).pgd)
+#define pgprot_val(x)	((x).pgprot)
+
+#define ptep_buddy(x)	((pte_t *)((unsigned long)(x) ^ sizeof(pte_t)))
+
+#define __pte(x)	((pte_t) { (x) })
+#define __pgd(x)	((pgd_t) { (x) })
+#define __pgprot(x)	((pgprot_t) { (x) })
+
+#endif /* !__ASSEMBLY__ */
+
+#define PHYS_OFFSET		(CONFIG_RAM_BASE & ~(LOWMEM_LIMIT - 1))
+#define PHYS_OFFSET_OFFSET	(CONFIG_RAM_BASE & (LOWMEM_LIMIT - 1))
+#define ARCH_PFN_OFFSET		PFN_DOWN(CONFIG_RAM_BASE)
+
+#define	PAGE_OFFSET	0x80000000
+#define LOWMEM_LIMIT	0x40000000
+
+#define __pa(x)		((unsigned long)(x) - PAGE_OFFSET + PHYS_OFFSET)
+#define __va(x)		((void *)((unsigned long)(x) + PAGE_OFFSET - \
+				  PHYS_OFFSET))
+#define __pa_symbol(x)	__pa(RELOC_HIDE((unsigned long)(x), 0))
+
+#define MAP_NR(x)	PFN_DOWN((unsigned long)(x) - PAGE_OFFSET - \
+				 PHYS_OFFSET_OFFSET)
+#define virt_to_page(x)	(mem_map + MAP_NR(x))
+
+#define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | VM_EXEC | \
+				VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+/*
+ * main RAM and kernel working space are coincident at 0x80000000, but to make
+ * life more interesting, there's also an uncached virtual shadow at 0xb0000000
+ * - these mappings are fixed in the MMU
+ */
+
+#define pfn_to_kaddr(x)	__va(PFN_PHYS(x))
+
+#include <asm-generic/memory_model.h>
+#include <asm-generic/getorder.h>
+
+#endif /* __ASM_CSKY_PAGE_H */
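
The __pa()/__va() macros above are plain offset arithmetic between the fixed
kernel window at PAGE_OFFSET (0x80000000) and the physical RAM base. A small
stand-alone sketch, assuming CONFIG_RAM_BASE is 0 (the real value is
board-specific):

#include <stdio.h>

#define LOWMEM_LIMIT	0x40000000UL
#define PAGE_OFFSET	0x80000000UL
#define CONFIG_RAM_BASE	0x0UL	/* assumption for this sketch */
#define PHYS_OFFSET	(CONFIG_RAM_BASE & ~(LOWMEM_LIMIT - 1))

#define __pa(x)	((unsigned long)(x) - PAGE_OFFSET + PHYS_OFFSET)
#define __va(x)	((void *)((unsigned long)(x) + PAGE_OFFSET - PHYS_OFFSET))

int main(void)
{
	unsigned long kaddr = 0x80001000UL;	/* a lowmem kernel address */
	unsigned long paddr = __pa(kaddr);

	printf("virt %#lx -> phys %#lx -> virt %p\n",
	       kaddr, paddr, __va(paddr));
	return 0;
}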

+ 115 - 0
arch/csky/include/asm/pgalloc.h

@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_PGALLOC_H
+#define __ASM_CSKY_PGALLOC_H
+
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
+					pte_t *pte)
+{
+	set_pmd(pmd, __pmd(__pa(pte)));
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
+					pgtable_t pte)
+{
+	set_pmd(pmd, __pmd(__pa(page_address(pte))));
+}
+
+#define pmd_pgtable(pmd) pmd_page(pmd)
+
+extern void pgd_init(unsigned long *p);
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+					unsigned long address)
+{
+	pte_t *pte;
+	unsigned long *kaddr, i;
+
+	pte = (pte_t *) __get_free_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL,
+					 PTE_ORDER);
+	if (!pte)
+		return NULL;
+
+	kaddr = (unsigned long *)pte;
+	if (address & 0x80000000)
+		for (i = 0; i < (PAGE_SIZE/4); i++)
+			*(kaddr + i) = 0x1;
+	else
+		clear_page(kaddr);
+
+	return pte;
+}
+
+static inline struct page *pte_alloc_one(struct mm_struct *mm,
+						unsigned long address)
+{
+	struct page *pte;
+	unsigned long *kaddr, i;
+
+	pte = alloc_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL, PTE_ORDER);
+	if (pte) {
+		kaddr = kmap_atomic(pte);
+		if (address & 0x80000000) {
+			for (i = 0; i < (PAGE_SIZE/4); i++)
+				*(kaddr + i) = 0x1;
+		} else
+			clear_page(kaddr);
+		kunmap_atomic(kaddr);
+		pgtable_page_ctor(pte);
+	}
+	return pte;
+}
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_pages((unsigned long)pte, PTE_ORDER);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
+{
+	pgtable_page_dtor(pte);
+	__free_pages(pte, PTE_ORDER);
+}
+
+static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+	free_pages((unsigned long)pgd, PGD_ORDER);
+}
+
+static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+	pgd_t *ret;
+	pgd_t *init;
+
+	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
+	if (ret) {
+		init = pgd_offset(&init_mm, 0UL);
+		pgd_init((unsigned long *)ret);
+		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
+			(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+		/* prevent out-of-order execution */
+		smp_mb();
+#ifdef CONFIG_CPU_NEED_TLBSYNC
+		dcache_wb_range((unsigned int)ret,
+				(unsigned int)(ret + PTRS_PER_PGD));
+#endif
+	}
+
+	return ret;
+}
+
+#define __pte_free_tlb(tlb, pte, address)		\
+do {							\
+	pgtable_page_dtor(pte);				\
+	tlb_remove_page(tlb, pte);			\
+} while (0)
+
+#define check_pgt_cache()	do {} while (0)
+
+extern void pagetable_init(void);
+extern void pre_mmu_init(void);
+extern void pre_trap_init(void);
+
+#endif /* __ASM_CSKY_PGALLOC_H */
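
Worth noting: pte_alloc_one() and pte_alloc_one_kernel() above fill tables
for kernel-half addresses with 1 rather than 0. That matches pte_clear() and
pte_none() in pgtable.h below, where bit 0 is masked off so both 0 and 1
encode an empty PTE. A tiny model of that convention:

/* Model of the "empty PTE" encoding visible in this series. */
#include <stdio.h>

typedef struct { unsigned long pte_low; } pte_t;
#define pte_val(x)	((x).pte_low)
#define pte_none(pte)	(!(pte_val(pte) & 0xfffffffe))

int main(void)
{
	pte_t user_empty   = { 0 };	/* table for an address below 2GB */
	pte_t kernel_empty = { 1 };	/* table for a kernel-half address */

	printf("user empty: %d, kernel empty: %d\n",
	       pte_none(user_empty), pte_none(kernel_empty));
	return 0;
}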

+ 306 - 0
arch/csky/include/asm/pgtable.h

@@ -0,0 +1,306 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_PGTABLE_H
+#define __ASM_CSKY_PGTABLE_H
+
+#include <asm/fixmap.h>
+#include <asm/addrspace.h>
+#include <abi/pgtable-bits.h>
+#include <asm-generic/pgtable-nopmd.h>
+
+#define PGDIR_SHIFT		22
+#define PGDIR_SIZE		(1UL << PGDIR_SHIFT)
+#define PGDIR_MASK		(~(PGDIR_SIZE-1))
+
+#define USER_PTRS_PER_PGD	(0x80000000UL/PGDIR_SIZE)
+#define FIRST_USER_ADDRESS	0UL
+
+#define PKMAP_BASE		(0xff800000)
+
+#define VMALLOC_START		(0xc0008000)
+#define VMALLOC_END		(PKMAP_BASE - 2*PAGE_SIZE)
+
+/*
+ * C-SKY uses a two-level paging structure:
+ */
+#define PGD_ORDER	0
+#define PTE_ORDER	0
+
+#define PTRS_PER_PGD	((PAGE_SIZE << PGD_ORDER) / sizeof(pgd_t))
+#define PTRS_PER_PMD	1
+#define PTRS_PER_PTE	((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))
+
+#define pte_ERROR(e) \
+	pr_err("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, (e).pte_low)
+#define pgd_ERROR(e) \
+	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+
+/* Find an entry in the third-level page table. */
+#define __pte_offset_t(address) \
+	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+#define pte_offset_kernel(dir, address) \
+	(pmd_page_vaddr(*(dir)) + __pte_offset_t(address))
+#define pte_offset_map(dir, address) \
+	((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset_t(address))
+#define pmd_page(pmd)	(pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
+#define pte_clear(mm, addr, ptep)	set_pte((ptep), \
+			(((unsigned int)addr&0x80000000)?__pte(1):__pte(0)))
+#define pte_none(pte)	(!(pte_val(pte)&0xfffffffe))
+#define pte_present(pte)	(pte_val(pte) & _PAGE_PRESENT)
+#define pte_pfn(x)	((unsigned long)((x).pte_low >> PAGE_SHIFT))
+#define pfn_pte(pfn, prot) __pte(((unsigned long long)(pfn) << PAGE_SHIFT) \
+				| pgprot_val(prot))
+
+#define __READABLE	(_PAGE_READ | _PAGE_VALID | _PAGE_ACCESSED)
+#define __WRITEABLE	(_PAGE_WRITE | _PAGE_DIRTY | _PAGE_MODIFIED)
+
+#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_MODIFIED | \
+			 _CACHE_MASK)
+
+#define pte_unmap(pte)	((void)(pte))
+
+#define __swp_type(x)			(((x).val >> 4) & 0xff)
+#define __swp_offset(x)			((x).val >> 12)
+#define __swp_entry(type, offset)	((swp_entry_t) {((type) << 4) | \
+					((offset) << 12) })
+#define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x)		((pte_t) { (x).val })
+
+#define pte_page(x)			pfn_to_page(pte_pfn(x))
+#define __mk_pte(page_nr, pgprot)	__pte(((page_nr) << PAGE_SHIFT) | \
+					pgprot_val(pgprot))
+
+/*
+ * C-SKY can't do page protection for execute, so it treats execute the same
+ * as read. Also, write permissions imply read permissions. This is the
+ * closest we can get by reasonable means.
+ */
+#define PAGE_NONE	__pgprot(_PAGE_PRESENT | _CACHE_CACHED)
+#define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+				_CACHE_CACHED)
+#define PAGE_COPY	__pgprot(_PAGE_PRESENT | _PAGE_READ | _CACHE_CACHED)
+#define PAGE_READONLY	__pgprot(_PAGE_PRESENT | _PAGE_READ | _CACHE_CACHED)
+#define PAGE_KERNEL	__pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \
+				_PAGE_GLOBAL | _CACHE_CACHED)
+#define PAGE_USERIO	__pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+				_CACHE_CACHED)
+
+#define __P000	PAGE_NONE
+#define __P001	PAGE_READONLY
+#define __P010	PAGE_COPY
+#define __P011	PAGE_COPY
+#define __P100	PAGE_READONLY
+#define __P101	PAGE_READONLY
+#define __P110	PAGE_COPY
+#define __P111	PAGE_COPY
+
+#define __S000	PAGE_NONE
+#define __S001	PAGE_READONLY
+#define __S010	PAGE_SHARED
+#define __S011	PAGE_SHARED
+#define __S100	PAGE_READONLY
+#define __S101	PAGE_READONLY
+#define __S110	PAGE_SHARED
+#define __S111	PAGE_SHARED
+
+extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+#define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))
+
+extern void load_pgd(unsigned long pg_dir);
+extern pte_t invalid_pte_table[PTRS_PER_PTE];
+
+static inline int pte_special(pte_t pte) { return 0; }
+static inline pte_t pte_mkspecial(pte_t pte) { return pte; }
+
+static inline void set_pte(pte_t *p, pte_t pte)
+{
+	*p = pte;
+#if defined(CONFIG_CPU_NEED_TLBSYNC)
+	dcache_wb_line((u32)p);
+#endif
+	/* prevent out-of-order execution */
+	smp_mb();
+}
+#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+
+static inline pte_t *pmd_page_vaddr(pmd_t pmd)
+{
+	unsigned long ptr;
+
+	ptr = pmd_val(pmd);
+
+	return __va(ptr);
+}
+
+#define pmd_phys(pmd) pmd_val(pmd)
+
+static inline void set_pmd(pmd_t *p, pmd_t pmd)
+{
+	*p = pmd;
+#if defined(CONFIG_CPU_NEED_TLBSYNC)
+	dcache_wb_line((u32)p);
+#endif
+	/* prevent speculative execution */
+	smp_mb();
+}
+
+static inline int pmd_none(pmd_t pmd)
+{
+	return pmd_val(pmd) == __pa(invalid_pte_table);
+}
+
+#define pmd_bad(pmd)	(pmd_val(pmd) & ~PAGE_MASK)
+
+static inline int pmd_present(pmd_t pmd)
+{
+	return (pmd_val(pmd) != __pa(invalid_pte_table));
+}
+
+static inline void pmd_clear(pmd_t *p)
+{
+	pmd_val(*p) = (__pa(invalid_pte_table));
+#if defined(CONFIG_CPU_NEED_TLBSYNC)
+	dcache_wb_line((u32)p);
+#endif
+}
+
+/*
+ * The following only work if pte_present() is true.
+ * Undefined behaviour if not..
+ */
+static inline int pte_read(pte_t pte)
+{
+	return pte.pte_low & _PAGE_READ;
+}
+
+static inline int pte_write(pte_t pte)
+{
+	return (pte).pte_low & _PAGE_WRITE;
+}
+
+static inline int pte_dirty(pte_t pte)
+{
+	return (pte).pte_low & _PAGE_MODIFIED;
+}
+
+static inline int pte_young(pte_t pte)
+{
+	return (pte).pte_low & _PAGE_ACCESSED;
+}
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	pte_val(pte) &= ~(_PAGE_WRITE | _PAGE_DIRTY);
+	return pte;
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+	pte_val(pte) &= ~(_PAGE_MODIFIED|_PAGE_DIRTY);
+	return pte;
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+	pte_val(pte) &= ~(_PAGE_ACCESSED|_PAGE_VALID);
+	return pte;
+}
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_WRITE;
+	if (pte_val(pte) & _PAGE_MODIFIED)
+		pte_val(pte) |= _PAGE_DIRTY;
+	return pte;
+}
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_MODIFIED;
+	if (pte_val(pte) & _PAGE_WRITE)
+		pte_val(pte) |= _PAGE_DIRTY;
+	return pte;
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_ACCESSED;
+	if (pte_val(pte) & _PAGE_READ)
+		pte_val(pte) |= _PAGE_VALID;
+	return pte;
+}
+
+#define __pgd_offset(address)	pgd_index(address)
+#define __pud_offset(address)	(((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
+#define __pmd_offset(address)	(((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
+
+/* to find an entry in a kernel page-table-directory */
+#define pgd_offset_k(address)	pgd_offset(&init_mm, address)
+
+#define pgd_index(address)	((address) >> PGDIR_SHIFT)
+
+/*
+ * Macro to mark a page protection value as "uncacheable".  Note
+ * that "protection" is really a misnomer here as the protection value
+ * contains the memory attribute bits, dirty bits, and various other
+ * bits as well.
+ */
+#define pgprot_noncached pgprot_noncached
+
+static inline pgprot_t pgprot_noncached(pgprot_t _prot)
+{
+	unsigned long prot = pgprot_val(_prot);
+
+	prot = (prot & ~_CACHE_MASK) | _CACHE_UNCACHED;
+
+	return __pgprot(prot);
+}
+
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ */
+#define mk_pte(page, pgprot)    pfn_pte(page_to_pfn(page), (pgprot))
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	return __pte((pte_val(pte) & _PAGE_CHG_MASK) |
+		     (pgprot_val(newprot)));
+}
+
+/* to find an entry in a page-table-directory */
+static inline pgd_t *pgd_offset(struct mm_struct *mm, unsigned long address)
+{
+	return mm->pgd + pgd_index(address);
+}
+
+/* Find an entry in the third-level page table. */
+static inline pte_t *pte_offset(pmd_t *dir, unsigned long address)
+{
+	return (pte_t *) (pmd_page_vaddr(*dir)) +
+		((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1));
+}
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+extern void paging_init(void);
+
+extern void show_jtlb_table(void);
+
+void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
+		      pte_t *pte);
+
+/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
+#define kern_addr_valid(addr)	(1)
+
+/*
+ * No page table caches to initialise
+ */
+#define pgtable_cache_init()	do {} while (0)
+
+#define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
+	remap_pfn_range(vma, vaddr, pfn, size, prot)
+
+#include <asm-generic/pgtable.h>
+
+#endif /* __ASM_CSKY_PGTABLE_H */
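
The pte_mk*() helpers above keep software dirty/accessed state
(_PAGE_MODIFIED/_PAGE_ACCESSED) separate from the hardware bits
(_PAGE_DIRTY/_PAGE_VALID): the hardware bit is only set once its
prerequisite permission bit is present. A sketch with assumed bit
positions (the real ones live in abi/pgtable-bits.h, not in this hunk):

#include <stdio.h>

#define _PAGE_WRITE	(1 << 1)	/* assumed position */
#define _PAGE_MODIFIED	(1 << 2)	/* assumed: software dirty */
#define _PAGE_DIRTY	(1 << 3)	/* assumed: hardware dirty */

static unsigned long pte_mkdirty(unsigned long pte)
{
	pte |= _PAGE_MODIFIED;		/* always record it in software */
	if (pte & _PAGE_WRITE)		/* hardware bit only if writable */
		pte |= _PAGE_DIRTY;
	return pte;
}

int main(void)
{
	printf("read-only pte: %#lx\n", pte_mkdirty(0));
	printf("writable  pte: %#lx\n", pte_mkdirty(_PAGE_WRITE));
	return 0;
}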

+ 121 - 0
arch/csky/include/asm/processor.h

@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_PROCESSOR_H
+#define __ASM_CSKY_PROCESSOR_H
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr() ({ __label__ _l; _l: &&_l; })
+
+#include <linux/bitops.h>
+#include <asm/segment.h>
+#include <asm/ptrace.h>
+#include <asm/current.h>
+#include <asm/cache.h>
+#include <abi/reg_ops.h>
+#include <abi/regdef.h>
+#ifdef CONFIG_CPU_HAS_FPU
+#include <abi/fpu.h>
+#endif
+
+struct cpuinfo_csky {
+	unsigned long udelay_val;
+	unsigned long asid_cache;
+	/*
+	 * Capability and feature descriptor structure for CSKY CPU
+	 */
+	unsigned long options;
+	unsigned int processor_id[4];
+	unsigned int fpu_id;
+} __aligned(SMP_CACHE_BYTES);
+
+extern struct cpuinfo_csky cpu_data[];
+
+/*
+ * User space process size: 2GB. This is hardcoded into a few places,
+ * so don't change it unless you know what you are doing.
+ */
+#define TASK_SIZE       0x7fff8000UL
+
+#ifdef __KERNEL__
+#define STACK_TOP       TASK_SIZE
+#define STACK_TOP_MAX   STACK_TOP
+#endif
+
+/* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE      (TASK_SIZE / 3)
+
+struct thread_struct {
+	unsigned long  ksp;       /* kernel stack pointer */
+	unsigned long  sr;        /* saved status register */
+	unsigned long  esp0;      /* points to SR of stack frame */
+	unsigned long  hi;
+	unsigned long  lo;
+
+	/* Other stuff associated with the thread. */
+	unsigned long address;      /* Last user fault */
+	unsigned long error_code;
+
+	/* FPU regs */
+	struct user_fp __aligned(16) user_fp;
+};
+
+#define INIT_THREAD  { \
+	.ksp = (unsigned long) init_thread_union.stack + THREAD_SIZE, \
+	.sr = DEFAULT_PSR_VALUE, \
+}
+
+/*
+ * Do necessary setup to start up a newly executed thread.
+ *
+ * Pass the data segment into user programs if it exists;
+ * it can't hurt anything as far as I can tell.
+ */
+#define start_thread(_regs, _pc, _usp)					\
+do {									\
+	set_fs(USER_DS); /* reads from user space */			\
+	(_regs)->pc = (_pc);						\
+	(_regs)->regs[1] = 0; /* ABIV1 is R7, uClibc_main rtdl arg */	\
+	(_regs)->regs[2] = 0;						\
+	(_regs)->regs[3] = 0; /* ABIV2 is R7, use it? */		\
+	(_regs)->sr &= ~PS_S;						\
+	(_regs)->usp = (_usp);						\
+} while (0)
+
+/* Forward declaration, a strange C thing */
+struct task_struct;
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+/* Prepare to copy thread state - unlazy all lazy status */
+#define prepare_to_copy(tsk)    do { } while (0)
+
+extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
+
+#define copy_segments(tsk, mm)		do { } while (0)
+#define release_segments(mm)		do { } while (0)
+#define forget_segments()		do { } while (0)
+
+extern unsigned long thread_saved_pc(struct task_struct *tsk);
+
+unsigned long get_wchan(struct task_struct *p);
+
+#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
+#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->usp)
+
+#define task_pt_regs(p) \
+	((struct pt_regs *)(THREAD_SIZE + p->stack) - 1)
+
+#define cpu_relax() barrier()
+
+#endif /* __ASM_CSKY_PROCESSOR_H */

+ 26 - 0
arch/csky/include/asm/reg_ops.h

@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_REGS_OPS_H
+#define __ASM_REGS_OPS_H
+
+#define mfcr(reg)		\
+({				\
+	unsigned int tmp;	\
+	asm volatile(		\
+	"mfcr %0, "reg"\n"	\
+	: "=r"(tmp)		\
+	:			\
+	: "memory");		\
+	tmp;			\
+})
+
+#define mtcr(reg, val)		\
+({				\
+	asm volatile(		\
+	"mtcr %0, "reg"\n"	\
+	:			\
+	: "r"(val)		\
+	: "memory");		\
+})
+
+#endif /* __ASM_REGS_OPS_H */
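
Usage is straightforward: the control-register name is spliced into the
mnemonic as a string literal, so it must be a compile-time constant. A
hedged kernel-context fragment ("psr" is the C-SKY processor status
register; the bit manipulated here is purely illustrative):

unsigned int psr = mfcr("psr");		/* read the status register */
mtcr("psr", psr | 0x40);		/* write back with one bit set */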

+ 19 - 0
arch/csky/include/asm/segment.h

@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_SEGMENT_H
+#define __ASM_CSKY_SEGMENT_H
+
+typedef struct {
+	unsigned long seg;
+} mm_segment_t;
+
+#define KERNEL_DS		((mm_segment_t) { 0xFFFFFFFF })
+#define get_ds()		KERNEL_DS
+
+#define USER_DS			((mm_segment_t) { 0x80000000UL })
+#define get_fs()		(current_thread_info()->addr_limit)
+#define set_fs(x)		(current_thread_info()->addr_limit = (x))
+#define segment_eq(a, b)	((a).seg == (b).seg)
+
+#endif /* __ASM_CSKY_SEGMENT_H */

+ 11 - 0
arch/csky/include/asm/shmparam.h

@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_SHMPARAM_H
+#define __ASM_CSKY_SHMPARAM_H
+
+#define SHMLBA	(4 * PAGE_SIZE)
+
+#define __ARCH_FORCE_SHMLBA
+
+#endif /* __ASM_CSKY_SHMPARAM_H */

+ 26 - 0
arch/csky/include/asm/smp.h

@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_SMP_H
+#define __ASM_CSKY_SMP_H
+
+#include <linux/cpumask.h>
+#include <linux/irqreturn.h>
+#include <linux/threads.h>
+
+#ifdef CONFIG_SMP
+
+void __init setup_smp(void);
+
+void __init setup_smp_ipi(void);
+
+void arch_send_call_function_ipi_mask(struct cpumask *mask);
+
+void arch_send_call_function_single_ipi(int cpu);
+
+void __init set_send_ipi(void (*func)(const struct cpumask *mask), int irq);
+
+#define raw_smp_processor_id()	(current_thread_info()->cpu)
+
+#endif /* CONFIG_SMP */
+
+#endif /* __ASM_CSKY_SMP_H */

+ 256 - 0
arch/csky/include/asm/spinlock.h

@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_SPINLOCK_H
+#define __ASM_CSKY_SPINLOCK_H
+
+#include <linux/spinlock_types.h>
+#include <asm/barrier.h>
+
+#ifdef CONFIG_QUEUED_RWLOCKS
+
+/*
+ * Ticket-based spin-locking.
+ */
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	arch_spinlock_t lockval;
+	u32 ticket_next = 1 << TICKET_NEXT;
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	asm volatile (
+		"1:	ldex.w		%0, (%2) \n"
+		"	mov		%1, %0	 \n"
+		"	add		%0, %3	 \n"
+		"	stex.w		%0, (%2) \n"
+		"	bez		%0, 1b   \n"
+		: "=&r" (tmp), "=&r" (lockval)
+		: "r"(p), "r"(ticket_next)
+		: "cc");
+
+	while (lockval.tickets.next != lockval.tickets.owner)
+		lockval.tickets.owner = READ_ONCE(lock->tickets.owner);
+
+	smp_mb();
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	u32 tmp, contended, res;
+	u32 ticket_next = 1 << TICKET_NEXT;
+	u32 *p = &lock->lock;
+
+	do {
+		asm volatile (
+		"	ldex.w		%0, (%3)   \n"
+		"	movi		%2, 1	   \n"
+		"	rotli		%1, %0, 16 \n"
+		"	cmpne		%1, %0     \n"
+		"	bt		1f         \n"
+		"	movi		%2, 0	   \n"
+		"	add		%0, %0, %4 \n"
+		"	stex.w		%0, (%3)   \n"
+		"1:				   \n"
+		: "=&r" (res), "=&r" (tmp), "=&r" (contended)
+		: "r"(p), "r"(ticket_next)
+		: "cc");
+	} while (!res);
+
+	if (!contended)
+		smp_mb();
+
+	return !contended;
+}
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	smp_mb();
+	WRITE_ONCE(lock->tickets.owner, lock->tickets.owner + 1);
+}
+
+static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	return lock.tickets.owner == lock.tickets.next;
+}
+
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
+{
+	return !arch_spin_value_unlocked(READ_ONCE(*lock));
+}
+
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
+{
+	struct __raw_tickets tickets = READ_ONCE(lock->tickets);
+
+	return (tickets.next - tickets.owner) > 1;
+}
+#define arch_spin_is_contended	arch_spin_is_contended
+
+#include <asm/qrwlock.h>
+
+/* See include/linux/spinlock.h */
+#define smp_mb__after_spinlock()	smp_mb()
+
+#else /* CONFIG_QUEUED_RWLOCKS */
+
+/*
+ * Test-and-set spin-locking.
+ */
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	asm volatile (
+		"1:	ldex.w		%0, (%1) \n"
+		"	bnez		%0, 1b   \n"
+		"	movi		%0, 1    \n"
+		"	stex.w		%0, (%1) \n"
+		"	bez		%0, 1b   \n"
+		: "=&r" (tmp)
+		: "r"(p)
+		: "cc");
+	smp_mb();
+}
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	smp_mb();
+	WRITE_ONCE(lock->lock, 0);
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	asm volatile (
+		"1:	ldex.w		%0, (%1) \n"
+		"	bnez		%0, 2f   \n"
+		"	movi		%0, 1    \n"
+		"	stex.w		%0, (%1) \n"
+		"	bez		%0, 1b   \n"
+		"	movi		%0, 0    \n"
+		"2:				 \n"
+		: "=&r" (tmp)
+		: "r"(p)
+		: "cc");
+
+	if (!tmp)
+		smp_mb();
+
+	return !tmp;
+}
+
+#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
+
+/*
+ * read lock/unlock/trylock
+ */
+static inline void arch_read_lock(arch_rwlock_t *lock)
+{
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	asm volatile (
+		"1:	ldex.w		%0, (%1) \n"
+		"	blz		%0, 1b   \n"
+		"	addi		%0, 1    \n"
+		"	stex.w		%0, (%1) \n"
+		"	bez		%0, 1b   \n"
+		: "=&r" (tmp)
+		: "r"(p)
+		: "cc");
+	smp_mb();
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *lock)
+{
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	smp_mb();
+	asm volatile (
+		"1:	ldex.w		%0, (%1) \n"
+		"	subi		%0, 1    \n"
+		"	stex.w		%0, (%1) \n"
+		"	bez		%0, 1b   \n"
+		: "=&r" (tmp)
+		: "r"(p)
+		: "cc");
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *lock)
+{
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	asm volatile (
+		"1:	ldex.w		%0, (%1) \n"
+		"	blz		%0, 2f   \n"
+		"	addi		%0, 1    \n"
+		"	stex.w		%0, (%1) \n"
+		"	bez		%0, 1b   \n"
+		"	movi		%0, 0    \n"
+		"2:				 \n"
+		: "=&r" (tmp)
+		: "r"(p)
+		: "cc");
+
+	if (!tmp)
+		smp_mb();
+
+	return !tmp;
+}
+
+/*
+ * write lock/unlock/trylock
+ */
+static inline void arch_write_lock(arch_rwlock_t *lock)
+{
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	asm volatile (
+		"1:	ldex.w		%0, (%1) \n"
+		"	bnez		%0, 1b   \n"
+		"	subi		%0, 1    \n"
+		"	stex.w		%0, (%1) \n"
+		"	bez		%0, 1b   \n"
+		: "=&r" (tmp)
+		: "r"(p)
+		: "cc");
+	smp_mb();
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *lock)
+{
+	smp_mb();
+	WRITE_ONCE(lock->lock, 0);
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *lock)
+{
+	u32 *p = &lock->lock;
+	u32 tmp;
+
+	asm volatile (
+		"1:	ldex.w		%0, (%1) \n"
+		"	bnez		%0, 2f   \n"
+		"	subi		%0, 1    \n"
+		"	stex.w		%0, (%1) \n"
+		"	bez		%0, 1b   \n"
+		"	movi		%0, 0    \n"
+		"2:				 \n"
+		: "=&r" (tmp)
+		: "r"(p)
+		: "cc");
+
+	if (!tmp)
+		smp_mb();
+
+	return !tmp;
+}
+
+#endif /* CONFIG_QUEUED_RWLOCKS */
+#endif /* __ASM_CSKY_SPINLOCK_H */
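
The ticket lock above packs two 16-bit fields into one word: "next" (the
ticket counter, bumped by 1 << TICKET_NEXT on lock) and "owner" (the ticket
currently being served, bumped on unlock). A single-threaded model of the
arithmetic, without the ldex/stex atomicity:

#include <stdio.h>
#include <stdint.h>

#define TICKET_NEXT 16

typedef union {
	uint32_t lock;
	struct { uint16_t owner, next; } tickets;	/* little endian */
} arch_spinlock_t;

int main(void)
{
	arch_spinlock_t l = { .lock = 0 };

	uint16_t mine = l.tickets.next;	/* take a ticket ... */
	l.lock += 1u << TICKET_NEXT;	/* ... by bumping "next" */

	while (l.tickets.owner != mine)	/* spin until it is our turn */
		;			/* (does not loop in this model) */

	printf("locked with ticket %u\n", mine);
	l.tickets.owner++;		/* unlock: admit the next waiter */
	return 0;
}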

+ 37 - 0
arch/csky/include/asm/spinlock_types.h

@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_SPINLOCK_TYPES_H
+#define __ASM_CSKY_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+#define TICKET_NEXT	16
+
+typedef struct {
+	union {
+		u32 lock;
+		struct __raw_tickets {
+			/* little endian */
+			u16 owner;
+			u16 next;
+		} tickets;
+	};
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
+
+#ifdef CONFIG_QUEUED_RWLOCKS
+#include <asm-generic/qrwlock_types.h>
+
+#else /* !CONFIG_QUEUED_RWLOCKS */
+
+typedef struct {
+	u32 lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif /* CONFIG_QUEUED_RWLOCKS */
+#endif /* __ASM_CSKY_SPINLOCK_TYPES_H */

+ 13 - 0
arch/csky/include/asm/string.h

@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef _CSKY_STRING_MM_H_
+#define _CSKY_STRING_MM_H_
+
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+#include <linux/compiler.h>
+#include <abi/string.h>
+#endif
+
+#endif /* _CSKY_STRING_MM_H_ */

+ 36 - 0
arch/csky/include/asm/switch_to.h

@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_SWITCH_TO_H
+#define __ASM_CSKY_SWITCH_TO_H
+
+#include <linux/thread_info.h>
+#ifdef CONFIG_CPU_HAS_FPU
+#include <abi/fpu.h>
+static inline void __switch_to_fpu(struct task_struct *prev,
+				   struct task_struct *next)
+{
+	save_to_user_fp(&prev->thread.user_fp);
+	restore_from_user_fp(&next->thread.user_fp);
+}
+#else
+static inline void __switch_to_fpu(struct task_struct *prev,
+				   struct task_struct *next)
+{}
+#endif
+
+/*
+ * Context switching is now performed out-of-line in switch_to.S
+ */
+extern struct task_struct *__switch_to(struct task_struct *,
+				       struct task_struct *);
+
+#define switch_to(prev, next, last)					\
+	do {								\
+		struct task_struct *__prev = (prev);			\
+		struct task_struct *__next = (next);			\
+		__switch_to_fpu(__prev, __next);			\
+		((last) = __switch_to((prev), (next)));			\
+	} while (0)
+
+#endif /* __ASM_CSKY_SWITCH_TO_H */

+ 71 - 0
arch/csky/include/asm/syscall.h

@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_SYSCALL_H
+#define __ASM_SYSCALL_H
+
+#include <linux/sched.h>
+#include <linux/err.h>
+#include <abi/regdef.h>
+
+static inline int
+syscall_get_nr(struct task_struct *task, struct pt_regs *regs)
+{
+	return regs_syscallid(regs);
+}
+
+static inline void
+syscall_rollback(struct task_struct *task, struct pt_regs *regs)
+{
+	regs->a0 = regs->orig_a0;
+}
+
+static inline long
+syscall_get_error(struct task_struct *task, struct pt_regs *regs)
+{
+	unsigned long error = regs->a0;
+
+	return IS_ERR_VALUE(error) ? error : 0;
+}
+
+static inline long
+syscall_get_return_value(struct task_struct *task, struct pt_regs *regs)
+{
+	return regs->a0;
+}
+
+static inline void
+syscall_set_return_value(struct task_struct *task, struct pt_regs *regs,
+		int error, long val)
+{
+	regs->a0 = (long) error ?: val;
+}
+
+static inline void
+syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
+		      unsigned int i, unsigned int n, unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	if (i == 0) {
+		args[0] = regs->orig_a0;
+		args++;
+		n--;
+	} else {
+		i--;
+	}
+	/* arguments 1..5 live contiguously from regs->a1 onwards */
+	memcpy(args, &regs->a1 + i, n * sizeof(args[0]));
+}
+
+static inline void
+syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
+		      unsigned int i, unsigned int n, const unsigned long *args)
+{
+	BUG_ON(i + n > 6);
+	if (i == 0) {
+		regs->orig_a0 = args[0];
+		args++;
+		n--;
+	} else {
+		i--;
+	}
+	memcpy(&regs->a1 + i, args, n * sizeof(args[0]));
+}
+
+#endif	/* __ASM_SYSCALL_H */
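
A typical caller of these accessors is the tracing/audit core. A hedged
fragment showing the calling convention (task and regs are assumed to be in
scope, as they would be in a tracehook):

unsigned long args[6];

syscall_get_arguments(task, regs, 0, 6, args);	/* fetch all six arguments */
pr_debug("syscall %d(%lx, %lx, ...)\n",
	 syscall_get_nr(task, regs), args[0], args[1]);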

+ 15 - 0
arch/csky/include/asm/syscalls.h

@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_SYSCALLS_H
+#define __ASM_CSKY_SYSCALLS_H
+
+#include <asm-generic/syscalls.h>
+
+long sys_cacheflush(void __user *, unsigned long, int);
+
+long sys_set_thread_area(unsigned long addr);
+
+long sys_csky_fadvise64_64(int fd, int advice, loff_t offset, loff_t len);
+
+#endif /* __ASM_CSKY_SYSCALLS_H */

+ 75 - 0
arch/csky/include/asm/thread_info.h

@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef _ASM_CSKY_THREAD_INFO_H
+#define _ASM_CSKY_THREAD_INFO_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/version.h>
+#include <asm/types.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
+struct thread_info {
+	struct task_struct	*task;
+	void			*dump_exec_domain;
+	unsigned long		flags;
+	int			preempt_count;
+	unsigned long		tp_value;
+	mm_segment_t		addr_limit;
+	struct restart_block	restart_block;
+	struct pt_regs		*regs;
+	unsigned int		cpu;
+};
+
+#define INIT_THREAD_INFO(tsk)			\
+{						\
+	.task		= &tsk,			\
+	.preempt_count  = INIT_PREEMPT_COUNT,	\
+	.addr_limit     = KERNEL_DS,		\
+	.cpu		= 0,			\
+	.restart_block = {			\
+		.fn = do_no_restart_syscall,	\
+	},					\
+}
+
+#define THREAD_SIZE_ORDER (THREAD_SHIFT - PAGE_SHIFT)
+
+static inline struct thread_info *current_thread_info(void)
+{
+	unsigned long sp;
+
+	asm volatile("mov %0, sp\n":"=r"(sp));
+
+	return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
+}
+
+#endif /* !__ASSEMBLY__ */
+
+/* entry.S relies on these definitions!
+ * bits 0-5 are tested at every exception exit
+ */
+#define TIF_SIGPENDING		0	/* signal pending */
+#define TIF_NOTIFY_RESUME	1       /* callback before returning to user */
+#define TIF_NEED_RESCHED	2	/* rescheduling necessary */
+#define TIF_SYSCALL_TRACE	5	/* syscall trace active */
+#define TIF_DELAYED_TRACE	14	/* single step a syscall */
+#define TIF_POLLING_NRFLAG	16	/* poll_idle() is TIF_NEED_RESCHED */
+#define TIF_MEMDIE		18      /* is terminating due to OOM killer */
+#define TIF_FREEZE		19	/* thread is freezing for suspend */
+#define TIF_RESTORE_SIGMASK	20	/* restore signal mask in do_signal() */
+#define TIF_SECCOMP		21	/* secure computing */
+
+#define _TIF_SIGPENDING         (1 << TIF_SIGPENDING)
+#define _TIF_NOTIFY_RESUME      (1 << TIF_NOTIFY_RESUME)
+#define _TIF_NEED_RESCHED       (1 << TIF_NEED_RESCHED)
+#define _TIF_SYSCALL_TRACE      (1 << TIF_SYSCALL_TRACE)
+#define _TIF_DELAYED_TRACE	(1 << TIF_DELAYED_TRACE)
+#define _TIF_POLLING_NRFLAG     (1 << TIF_POLLING_NRFLAG)
+#define _TIF_MEMDIE		(1 << TIF_MEMDIE)
+#define _TIF_FREEZE             (1 << TIF_FREEZE)
+#define _TIF_RESTORE_SIGMASK    (1 << TIF_RESTORE_SIGMASK)
+#define _TIF_SECCOMP            (1 << TIF_SECCOMP)
+
+#endif	/* _ASM_CSKY_THREAD_INFO_H */
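
current_thread_info() above relies on the kernel stack being THREAD_SIZE
aligned: masking the low bits off the stack pointer lands on the thread_info
at the base of the stack. A quick model of that arithmetic:

#include <stdio.h>

#define PAGE_SHIFT	12
#define THREAD_SIZE	((1UL << PAGE_SHIFT) * 2)	/* 8 KiB, as in page.h */

int main(void)
{
	unsigned long sp = 0x87ff9e40UL;		/* assumed stack pointer */
	unsigned long ti = sp & ~(THREAD_SIZE - 1);	/* stack base */

	printf("sp %#lx -> thread_info at %#lx\n", sp, ti);
	return 0;
}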

+ 25 - 0
arch/csky/include/asm/tlb.h

@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_TLB_H
+#define __ASM_CSKY_TLB_H
+
+#include <asm/cacheflush.h>
+
+#define tlb_start_vma(tlb, vma) \
+	do { \
+		if (!tlb->fullmm) \
+			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
+	}  while (0)
+
+#define tlb_end_vma(tlb, vma) \
+	do { \
+		if (!tlb->fullmm) \
+			flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
+	}  while (0)
+
+#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
+
+#include <asm-generic/tlb.h>
+
+#endif /* __ASM_CSKY_TLB_H */

+ 25 - 0
arch/csky/include/asm/tlbflush.h

@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_TLBFLUSH_H
+#define __ASM_TLBFLUSH_H
+
+/*
+ * TLB flushing:
+ *
+ *  - flush_tlb_all() flushes all processes' TLB entries
+ *  - flush_tlb_mm(mm) flushes the specified mm context TLB entries
+ *  - flush_tlb_page(vma, vmaddr) flushes one page
+ *  - flush_tlb_range(vma, start, end) flushes a range of pages
+ *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
+ */
+extern void flush_tlb_all(void);
+extern void flush_tlb_mm(struct mm_struct *mm);
+extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+			    unsigned long end);
+extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+
+extern void flush_tlb_one(unsigned long vaddr);
+
+#endif

+ 44 - 0
arch/csky/include/asm/traps.h

@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_TRAPS_H
+#define __ASM_CSKY_TRAPS_H
+
+#define VEC_RESET	0
+#define VEC_ALIGN	1
+#define VEC_ACCESS	2
+#define VEC_ZERODIV	3
+#define VEC_ILLEGAL	4
+#define VEC_PRIV	5
+#define VEC_TRACE	6
+#define VEC_BREAKPOINT	7
+#define VEC_UNRECOVER	8
+#define VEC_SOFTRESET	9
+#define VEC_AUTOVEC	10
+#define VEC_FAUTOVEC	11
+#define VEC_HWACCEL	12
+
+#define	VEC_TLBMISS	14
+#define	VEC_TLBMODIFIED	15
+
+#define VEC_TRAP0	16
+#define VEC_TRAP1	17
+#define VEC_TRAP2	18
+#define VEC_TRAP3	19
+
+#define	VEC_TLBINVALIDL	20
+#define	VEC_TLBINVALIDS	21
+
+#define VEC_PRFL	29
+#define VEC_FPE		30
+
+extern void *vec_base[];
+
+#define VEC_INIT(i, func) \
+do { \
+	vec_base[i] = (void *)func; \
+} while (0)
+
+void csky_alignment(struct pt_regs *regs);
+
+#endif /* __ASM_CSKY_TRAPS_H */

+ 416 - 0
arch/csky/include/asm/uaccess.h

@@ -0,0 +1,416 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_UACCESS_H
+#define __ASM_CSKY_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/compiler.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+#include <linux/version.h>
+#include <asm/segment.h>
+
+#define VERIFY_READ	0
+#define VERIFY_WRITE	1
+
+static inline int access_ok(int type, const void *addr, unsigned long size)
+{
+	unsigned long limit = current_thread_info()->addr_limit.seg;
+
+	return (((unsigned long)addr < limit) &&
+		((unsigned long)(addr + size) < limit));
+}
+
+static inline int verify_area(int type, const void *addr, unsigned long size)
+{
+	return access_ok(type, addr, size) ? 0 : -EFAULT;
+}
+
+#define __addr_ok(addr) (access_ok(VERIFY_READ, addr, 0))
+
+extern int __put_user_bad(void);
+
+/*
+ * Tell gcc we read from memory instead of writing: this is because
+ * we do not write to any memory gcc knows about, so there are no
+ * aliasing issues.
+ */
+
+/*
+ * These are the main single-value transfer routines.  They automatically
+ * use the right size if we just have the right pointer type.
+ *
+ * This gets kind of ugly. We want to return _two_ values in "get_user()"
+ * and yet we don't want to do any pointers, because that is too much
+ * of a performance impact. Thus we have a few rather ugly macros here,
+ * and hide all the ugliness from the user.
+ *
+ * The "__xxx" versions of the user access functions are versions that
+ * do not verify the address space, that must have been done previously
+ * with a separate "access_ok()" call (this is used when we do multiple
+ * accesses to the same area of user memory).
+ *
+ * As we use the same address space for kernel and user data on
+ * C-SKY, we can just do these as direct assignments.  (Of course, the
+ * exception handling means that it's no longer "just"...)
+ */
+
+#define put_user(x, ptr) \
+	__put_user_check((x), (ptr), sizeof(*(ptr)))
+
+#define __put_user(x, ptr) \
+	__put_user_nocheck((x), (ptr), sizeof(*(ptr)))
+
+#define __ptr(x) ((unsigned long *)(x))
+
+#define get_user(x, ptr) \
+	__get_user_check((x), (ptr), sizeof(*(ptr)))
+
+#define __get_user(x, ptr) \
+	__get_user_nocheck((x), (ptr), sizeof(*(ptr)))
+
+#define __put_user_nocheck(x, ptr, size)				\
+({									\
+	long __pu_err = 0;						\
+	typeof(*(ptr)) *__pu_addr = (ptr);				\
+	typeof(*(ptr)) __pu_val = (typeof(*(ptr)))(x);			\
+	if (__pu_addr)							\
+		__put_user_size(__pu_val, (__pu_addr), (size),		\
+				__pu_err);				\
+	__pu_err;							\
+})
+
+#define __put_user_check(x, ptr, size)					\
+({									\
+	long __pu_err = -EFAULT;					\
+	typeof(*(ptr)) *__pu_addr = (ptr);				\
+	typeof(*(ptr)) __pu_val = (typeof(*(ptr)))(x);			\
+	if (access_ok(VERIFY_WRITE, __pu_addr, size) && __pu_addr)	\
+		__put_user_size(__pu_val, __pu_addr, (size), __pu_err);	\
+	__pu_err;							\
+})
+
+#define __put_user_size(x, ptr, size, retval)		\
+do {							\
+	retval = 0;					\
+	switch (size) {                                 \
+	case 1:						\
+		__put_user_asm_b(x, ptr, retval);	\
+		break;					\
+	case 2:						\
+		__put_user_asm_h(x, ptr, retval);	\
+		break;					\
+	case 4:						\
+		__put_user_asm_w(x, ptr, retval);	\
+		break;					\
+	case 8:						\
+		__put_user_asm_64(x, ptr, retval);	\
+		break;					\
+	default:					\
+		__put_user_bad();			\
+	}	                                        \
+} while (0)
+
+/*
+ * We don't tell gcc that we are accessing memory, but this is OK
+ * because we do not write to any memory gcc knows about, so there
+ * are no aliasing issues.
+ *
+ * Note that PC at a fault is the address *after* the faulting
+ * instruction.
+ */
+#define __put_user_asm_b(x, ptr, err)			\
+do {							\
+	int errcode;					\
+	asm volatile(					\
+	"1:     stb   %1, (%2,0)	\n"		\
+	"       br    3f		\n"		\
+	"2:     mov   %0, %3		\n"		\
+	"       br    3f		\n"		\
+	".section __ex_table, \"a\"	\n"		\
+	".align   2			\n"		\
+	".long    1b,2b			\n"		\
+	".previous			\n"		\
+	"3:				\n"		\
+	: "=r"(err), "=r"(x), "=r"(ptr), "=r"(errcode)	\
+	: "0"(err), "1"(x), "2"(ptr), "3"(-EFAULT)	\
+	: "memory");					\
+} while (0)
+
+#define __put_user_asm_h(x, ptr, err)			\
+do {							\
+	int errcode;					\
+	asm volatile(					\
+	"1:     sth   %1, (%2,0)	\n"		\
+	"       br    3f		\n"		\
+	"2:     mov   %0, %3		\n"		\
+	"       br    3f		\n"		\
+	".section __ex_table, \"a\"	\n"		\
+	".align   2			\n"		\
+	".long    1b,2b			\n"		\
+	".previous			\n"		\
+	"3:				\n"		\
+	: "=r"(err), "=r"(x), "=r"(ptr), "=r"(errcode)	\
+	: "0"(err), "1"(x), "2"(ptr), "3"(-EFAULT)	\
+	: "memory");					\
+} while (0)
+
+#define __put_user_asm_w(x, ptr, err)			\
+do {							\
+	int errcode;					\
+	asm volatile(					\
+	"1:     stw   %1, (%2,0)	\n"		\
+	"       br    3f		\n"		\
+	"2:     mov   %0, %3		\n"		\
+	"       br    3f		\n"		\
+	".section __ex_table,\"a\"	\n"		\
+	".align   2			\n"		\
+	".long    1b, 2b		\n"		\
+	".previous			\n"		\
+	"3:				\n"		\
+	: "=r"(err), "=r"(x), "=r"(ptr), "=r"(errcode)	\
+	: "0"(err), "1"(x), "2"(ptr), "3"(-EFAULT)	\
+	: "memory");					\
+} while (0)
+
+#define __put_user_asm_64(x, ptr, err)				\
+do {								\
+	int tmp;						\
+	int errcode;						\
+	typeof(*(ptr))src = (typeof(*(ptr)))x;			\
+	typeof(*(ptr))*psrc = &src;				\
+								\
+	asm volatile(						\
+	"     ldw     %3, (%1, 0)     \n"			\
+	"1:   stw     %3, (%2, 0)     \n"			\
+	"     ldw     %3, (%1, 4)     \n"			\
+	"2:   stw     %3, (%2, 4)     \n"			\
+	"     br      4f              \n"			\
+	"3:   mov     %0, %4          \n"			\
+	"     br      4f              \n"			\
+	".section __ex_table, \"a\"   \n"			\
+	".align   2                   \n"			\
+	".long    1b, 3b              \n"			\
+	".long    2b, 3b              \n"			\
+	".previous                    \n"			\
+	"4:                           \n"			\
+	: "=r"(err), "=r"(psrc), "=r"(ptr),			\
+	  "=r"(tmp), "=r"(errcode)				\
+	: "0"(err), "1"(psrc), "2"(ptr), "3"(0), "4"(-EFAULT)	\
+	: "memory");						\
+} while (0)
+
+#define __get_user_nocheck(x, ptr, size)			\
+({								\
+	long  __gu_err;						\
+	__get_user_size(x, (ptr), (size), __gu_err);		\
+	__gu_err;						\
+})
+
+#define __get_user_check(x, ptr, size)				\
+({								\
+	int __gu_err = -EFAULT;					\
+	const __typeof__(*(ptr)) __user *__gu_ptr = (ptr);	\
+	if (access_ok(VERIFY_READ, __gu_ptr, size) && __gu_ptr)	\
+		__get_user_size(x, __gu_ptr, size, __gu_err);	\
+	__gu_err;						\
+})
+
+#define __get_user_size(x, ptr, size, retval)			\
+do {								\
+	switch (size) {						\
+	case 1:							\
+		__get_user_asm_common((x), ptr, "ldb", retval);	\
+		break;						\
+	case 2:							\
+		__get_user_asm_common((x), ptr, "ldh", retval);	\
+		break;						\
+	case 4:							\
+		__get_user_asm_common((x), ptr, "ldw", retval);	\
+		break;						\
+	default:						\
+		x = 0;						\
+		(retval) = __get_user_bad();			\
+	}							\
+} while (0)
+
+#define __get_user_asm_common(x, ptr, ins, err)			\
+do {								\
+	int errcode;						\
+	asm volatile(						\
+	"1:   " ins " %1, (%4,0)	\n"			\
+	"       br    3f		\n"			\
+	/* fixup code */					\
+	"2:     mov   %0, %2		\n"			\
+	"       movi  %1, 0		\n"			\
+	"       br    3f		\n"			\
+	".section __ex_table,\"a\"      \n"			\
+	".align   2			\n"			\
+	".long    1b, 2b		\n"			\
+	".previous			\n"			\
+	"3:				\n" 			\
+	: "=r"(err), "=r"(x), "=r"(errcode)			\
+	: "0"(0), "r"(ptr), "2"(-EFAULT)			\
+	: "memory");						\
+} while (0)
+
+extern int __get_user_bad(void);
+
+#define __copy_user(to, from, n)			\
+do {							\
+	int w0, w1, w2, w3;				\
+	asm volatile(					\
+	"0:     cmpnei  %1, 0           \n"		\
+	"       bf      8f              \n"		\
+	"       mov     %3, %1          \n"		\
+	"       or      %3, %2          \n"		\
+	"       andi    %3, 3           \n"		\
+	"       cmpnei  %3, 0           \n"		\
+	"       bf      1f              \n"		\
+	"       br      5f              \n"		\
+	"1:     cmplti  %0, 16          \n" /* 4W */	\
+	"       bt      3f              \n"		\
+	"       ldw     %3, (%2, 0)     \n"		\
+	"       ldw     %4, (%2, 4)     \n"		\
+	"       ldw     %5, (%2, 8)     \n"		\
+	"       ldw     %6, (%2, 12)    \n"		\
+	"2:     stw     %3, (%1, 0)     \n"		\
+	"9:     stw     %4, (%1, 4)     \n"		\
+	"10:    stw     %5, (%1, 8)     \n"		\
+	"11:    stw     %6, (%1, 12)    \n"		\
+	"       addi    %2, 16          \n"		\
+	"       addi    %1, 16          \n"		\
+	"       subi    %0, 16          \n"		\
+	"       br      1b              \n"		\
+	"3:     cmplti  %0, 4           \n" /* 1W */	\
+	"       bt      5f              \n"		\
+	"       ldw     %3, (%2, 0)     \n"		\
+	"4:     stw     %3, (%1, 0)     \n"		\
+	"       addi    %2, 4           \n"		\
+	"       addi    %1, 4           \n"		\
+	"       subi    %0, 4           \n"		\
+	"       br      3b              \n"		\
+	"5:     cmpnei  %0, 0           \n"  /* 1B */   \
+	"       bf      8f              \n"		\
+	"       ldb     %3, (%2, 0)     \n"		\
+	"6:     stb     %3, (%1, 0)     \n"		\
+	"       addi    %2,  1          \n"		\
+	"       addi    %1,  1          \n"		\
+	"       subi    %0,  1          \n"		\
+	"       br      5b              \n"		\
+	"7:     br      8f              \n"		\
+	".section __ex_table, \"a\"     \n"		\
+	".align   2                     \n"		\
+	".long    2b, 7b                \n"		\
+	".long    9b, 7b                \n"		\
+	".long   10b, 7b                \n"		\
+	".long   11b, 7b                \n"		\
+	".long    4b, 7b                \n"		\
+	".long    6b, 7b                \n"		\
+	".previous                      \n"		\
+	"8:                             \n"		\
+	: "=r"(n), "=r"(to), "=r"(from), "=r"(w0),	\
+	  "=r"(w1), "=r"(w2), "=r"(w3)			\
+	: "0"(n), "1"(to), "2"(from)			\
+	: "memory");					\
+} while (0)
+
+#define __copy_user_zeroing(to, from, n)		\
+do {							\
+	int tmp;					\
+	int nsave;					\
+	asm volatile(					\
+	"0:     cmpnei  %1, 0           \n"		\
+	"       bf      7f              \n"		\
+	"       mov     %3, %1          \n"		\
+	"       or      %3, %2          \n"		\
+	"       andi    %3, 3           \n"		\
+	"       cmpnei  %3, 0           \n"		\
+	"       bf      1f              \n"		\
+	"       br      5f              \n"		\
+	"1:     cmplti  %0, 16          \n"		\
+	"       bt      3f              \n"		\
+	"2:     ldw     %3, (%2, 0)     \n"		\
+	"10:    ldw     %4, (%2, 4)     \n"		\
+	"       stw     %3, (%1, 0)     \n"		\
+	"       stw     %4, (%1, 4)     \n"		\
+	"11:    ldw     %3, (%2, 8)     \n"		\
+	"12:    ldw     %4, (%2, 12)    \n"		\
+	"       stw     %3, (%1, 8)     \n"		\
+	"       stw     %4, (%1, 12)    \n"		\
+	"       addi    %2, 16          \n"		\
+	"       addi    %1, 16          \n"		\
+	"       subi    %0, 16          \n"		\
+	"       br      1b              \n"		\
+	"3:     cmplti  %0, 4           \n"		\
+	"       bt      5f              \n"		\
+	"4:     ldw     %3, (%2, 0)     \n"		\
+	"       stw     %3, (%1, 0)     \n"		\
+	"       addi    %2, 4           \n"		\
+	"       addi    %1, 4           \n"		\
+	"       subi    %0, 4           \n"		\
+	"       br      3b              \n"		\
+	"5:     cmpnei  %0, 0           \n"		\
+	"       bf      7f              \n"		\
+	"6:     ldb     %3, (%2, 0)     \n"		\
+	"       stb     %3, (%1, 0)     \n"		\
+	"       addi    %2,  1          \n"		\
+	"       addi    %1,  1          \n"		\
+	"       subi    %0,  1          \n"		\
+	"       br      5b              \n"		\
+	"8:     mov     %3, %0          \n"		\
+	"       movi    %4, 0           \n"		\
+	"9:     stb     %4, (%1, 0)     \n"		\
+	"       addi    %1, 1           \n"		\
+	"       subi    %3, 1           \n"		\
+	"       cmpnei  %3, 0           \n"		\
+	"       bt      9b              \n"		\
+	"       br      7f              \n"		\
+	".section __ex_table, \"a\"     \n"		\
+	".align   2                     \n"		\
+	".long    2b, 8b                \n"		\
+	".long   10b, 8b                \n"		\
+	".long   11b, 8b                \n"		\
+	".long   12b, 8b                \n"		\
+	".long    4b, 8b                \n"		\
+	".long    6b, 8b                \n"		\
+	".previous                      \n"		\
+	"7:                             \n"		\
+	: "=r"(n), "=r"(to), "=r"(from), "=r"(nsave),	\
+	  "=r"(tmp)					\
+	: "0"(n), "1"(to), "2"(from)			\
+	: "memory");					\
+} while (0)
+
+unsigned long raw_copy_from_user(void *to, const void *from, unsigned long n);
+unsigned long raw_copy_to_user(void *to, const void *from, unsigned long n);
+
+unsigned long clear_user(void *to, unsigned long n);
+unsigned long __clear_user(void __user *to, unsigned long n);
+
+long strncpy_from_user(char *dst, const char *src, long count);
+long __strncpy_from_user(char *dst, const char *src, long count);
+
+/*
+ * Return the size of a string (including the ending 0)
+ *
+ * Return 0 on exception, a value greater than n if too long.
+ */
+long strnlen_user(const char *src, long n);
+
+#define strlen_user(str) strnlen_user(str, 32767)
+
+struct exception_table_entry {
+	unsigned long insn;
+	unsigned long nextinsn;
+};
+
+extern int fixup_exception(struct pt_regs *regs);
+
+#endif /* __ASM_CSKY_UACCESS_H */
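
The whole API hinges on the access_ok() range check at the top of this file:
with USER_DS, any address at or above 0x80000000 is rejected before the
fixup-protected load/store is attempted. A stand-alone model of that check:

#include <stdio.h>

typedef struct { unsigned long seg; } mm_segment_t;

static mm_segment_t addr_limit = { 0x80000000UL };	/* USER_DS */

static int model_access_ok(const void *addr, unsigned long size)
{
	unsigned long a = (unsigned long)addr;

	return a < addr_limit.seg && a + size < addr_limit.seg;
}

int main(void)
{
	printf("user buffer:    %d\n", model_access_ok((void *)0x1000, 64));
	printf("kernel address: %d\n",
	       model_access_ok((void *)0x80001000UL, 4));
	return 0;
}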

+ 4 - 0
arch/csky/include/asm/unistd.h

@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <uapi/asm/unistd.h>

+ 12 - 0
arch/csky/include/asm/vdso.h

@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_VDSO_H
+#define __ASM_CSKY_VDSO_H
+
+#include <abi/vdso.h>
+
+struct csky_vdso {
+	unsigned short rt_signal_retcode[4];
+};
+
+#endif /* __ASM_CSKY_VDSO_H */

+ 32 - 0
arch/csky/include/uapi/asm/Kbuild

@@ -0,0 +1,32 @@
+include include/uapi/asm-generic/Kbuild.asm
+
+header-y += cachectl.h
+
+generic-y += auxvec.h
+generic-y += param.h
+generic-y += bpf_perf_event.h
+generic-y += errno.h
+generic-y += fcntl.h
+generic-y += ioctl.h
+generic-y += ioctls.h
+generic-y += ipcbuf.h
+generic-y += shmbuf.h
+generic-y += bitsperlong.h
+generic-y += mman.h
+generic-y += msgbuf.h
+generic-y += poll.h
+generic-y += posix_types.h
+generic-y += resource.h
+generic-y += sembuf.h
+generic-y += siginfo.h
+generic-y += signal.h
+generic-y += socket.h
+generic-y += sockios.h
+generic-y += statfs.h
+generic-y += stat.h
+generic-y += setup.h
+generic-y += swab.h
+generic-y += termbits.h
+generic-y += termios.h
+generic-y += types.h
+generic-y += ucontext.h

+ 9 - 0
arch/csky/include/uapi/asm/byteorder.h

@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_BYTEORDER_H
+#define __ASM_CSKY_BYTEORDER_H
+
+#include <linux/byteorder/little_endian.h>
+
+#endif /* __ASM_CSKY_BYTEORDER_H */

+ 13 - 0
arch/csky/include/uapi/asm/cachectl.h

@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_CSKY_CACHECTL_H
+#define __ASM_CSKY_CACHECTL_H
+
+/*
+ * See "man cacheflush"
+ */
+#define ICACHE  (1<<0)
+#define DCACHE  (1<<1)
+#define BCACHE  (ICACHE|DCACHE)
+
+#endif /* __ASM_CSKY_CACHECTL_H */

+ 104 - 0
arch/csky/include/uapi/asm/ptrace.h

@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef _CSKY_PTRACE_H
+#define _CSKY_PTRACE_H
+
+#ifndef __ASSEMBLY__
+
+struct pt_regs {
+	unsigned long	tls;
+	unsigned long	lr;
+	unsigned long	pc;
+	unsigned long	sr;
+	unsigned long	usp;
+
+	/*
+	 * a0, a1, a2, a3:
+	 * abiv1: r2, r3, r4, r5
+	 * abiv2: r0, r1, r2, r3
+	 */
+	unsigned long	orig_a0;
+	unsigned long	a0;
+	unsigned long	a1;
+	unsigned long	a2;
+	unsigned long	a3;
+
+	/*
+	 * ABIV2: r4 ~ r13
+	 * ABIV1: r6 ~ r14, r1
+	 */
+	unsigned long	regs[10];
+
+#if defined(__CSKYABIV2__)
+	/* r16 ~ r30 */
+	unsigned long	exregs[15];
+
+	unsigned long	rhi;
+	unsigned long	rlo;
+	unsigned long	pad; /* reserved */
+#endif
+};
+
+struct user_fp {
+	unsigned long	vr[96];
+	unsigned long	fcr;
+	unsigned long	fesr;
+	unsigned long	fid;
+	unsigned long	reserved;
+};
+
+/*
+ * Switch stack for switch_to, after pt_regs has been pushed.
+ *
+ * ABI_CSKYV2: r4 ~ r11, r15 ~ r17, r26 ~ r30;
+ * ABI_CSKYV1: r8 ~ r14, r15;
+ */
+struct  switch_stack {
+#if defined(__CSKYABIV2__)
+	unsigned long   r4;
+	unsigned long   r5;
+	unsigned long   r6;
+	unsigned long   r7;
+	unsigned long   r8;
+	unsigned long   r9;
+	unsigned long   r10;
+	unsigned long   r11;
+#else
+	unsigned long   r8;
+	unsigned long   r9;
+	unsigned long   r10;
+	unsigned long   r11;
+	unsigned long   r12;
+	unsigned long   r13;
+	unsigned long   r14;
+#endif
+	unsigned long   r15;
+#if defined(__CSKYABIV2__)
+	unsigned long   r16;
+	unsigned long   r17;
+	unsigned long   r26;
+	unsigned long   r27;
+	unsigned long   r28;
+	unsigned long   r29;
+	unsigned long   r30;
+#endif
+};
+
+#ifdef __KERNEL__
+
+#define PS_S	0x80000000 /* Supervisor Mode */
+
+#define arch_has_single_step() (1)
+#define current_pt_regs() \
+({ (struct pt_regs *)((char *)current_thread_info() + THREAD_SIZE) - 1; })
+
+#define user_stack_pointer(regs) ((regs)->usp)
+
+#define user_mode(regs) (!((regs)->sr & PS_S))
+#define instruction_pointer(regs) ((regs)->pc)
+#define profile_pc(regs) instruction_pointer(regs)
+
+#endif /* __KERNEL__ */
+#endif /* __ASSEMBLY__ */
+#endif /* _CSKY_PTRACE_H */

+ 14 - 0
arch/csky/include/uapi/asm/sigcontext.h

@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#ifndef __ASM_CSKY_SIGCONTEXT_H
+#define __ASM_CSKY_SIGCONTEXT_H
+
+#include <asm/ptrace.h>
+
+struct sigcontext {
+	struct pt_regs	sc_pt_regs;
+	struct user_fp	sc_user_fp;
+};
+
+#endif /* __ASM_CSKY_SIGCONTEXT_H */

+ 10 - 0
arch/csky/include/uapi/asm/unistd.h

@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#define __ARCH_WANT_SYS_CLONE
+#include <asm-generic/unistd.h>
+
+#define __NR_set_thread_area	(__NR_arch_specific_syscall + 0)
+__SYSCALL(__NR_set_thread_area, sys_set_thread_area)
+#define __NR_cacheflush		(__NR_arch_specific_syscall + 1)
+__SYSCALL(__NR_cacheflush, sys_cacheflush)
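
From user space these two calls are reached through syscall(2) with numbers
relative to __NR_arch_specific_syscall (244 in the asm-generic table of this
era). A hedged sketch flushing both caches for a buffer, using the BCACHE
flag from cachectl.h above:

#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>

#ifndef __NR_arch_specific_syscall
#define __NR_arch_specific_syscall 244	/* asm-generic value, assumed here */
#endif
#define __NR_cacheflush	(__NR_arch_specific_syscall + 1)
#define BCACHE		3	/* ICACHE | DCACHE, from cachectl.h */

int main(void)
{
	static char code[64];	/* pretend freshly written code lives here */
	long ret = syscall(__NR_cacheflush, code, sizeof(code), BCACHE);

	printf("cacheflush returned %ld\n", ret);
	return 0;
}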

+ 8 - 0
arch/csky/kernel/Makefile

@@ -0,0 +1,8 @@
+extra-y := head.o vmlinux.lds
+
+obj-y += entry.o atomic.o signal.o traps.o irq.o time.o vdso.o
+obj-y += power.o syscall.o syscall_table.o setup.o
+obj-y += process.o cpu-probe.o ptrace.o dumpstack.o
+
+obj-$(CONFIG_MODULES)			+= module.o
+obj-$(CONFIG_SMP)			+= smp.o

+ 88 - 0
arch/csky/kernel/asm-offsets.c

@@ -0,0 +1,88 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/sched.h>
+#include <linux/kernel_stat.h>
+#include <linux/kbuild.h>
+#include <abi/regdef.h>
+
+int main(void)
+{
+	/* offsets into the task struct */
+	DEFINE(TASK_STATE,        offsetof(struct task_struct, state));
+	DEFINE(TASK_THREAD_INFO,  offsetof(struct task_struct, stack));
+	DEFINE(TASK_FLAGS,        offsetof(struct task_struct, flags));
+	DEFINE(TASK_PTRACE,       offsetof(struct task_struct, ptrace));
+	DEFINE(TASK_THREAD,       offsetof(struct task_struct, thread));
+	DEFINE(TASK_MM,           offsetof(struct task_struct, mm));
+	DEFINE(TASK_ACTIVE_MM,    offsetof(struct task_struct, active_mm));
+
+	/* offsets into the thread struct */
+	DEFINE(THREAD_KSP,        offsetof(struct thread_struct, ksp));
+	DEFINE(THREAD_SR,         offsetof(struct thread_struct, sr));
+	DEFINE(THREAD_ESP0,       offsetof(struct thread_struct, esp0));
+	DEFINE(THREAD_FESR,       offsetof(struct thread_struct, user_fp.fesr));
+	DEFINE(THREAD_FCR,        offsetof(struct thread_struct, user_fp.fcr));
+	DEFINE(THREAD_FPREG,      offsetof(struct thread_struct, user_fp.vr));
+	DEFINE(THREAD_DSPHI,      offsetof(struct thread_struct, hi));
+	DEFINE(THREAD_DSPLO,      offsetof(struct thread_struct, lo));
+
+	/* offsets into the thread_info struct */
+	DEFINE(TINFO_FLAGS,       offsetof(struct thread_info, flags));
+	DEFINE(TINFO_PREEMPT,     offsetof(struct thread_info, preempt_count));
+	DEFINE(TINFO_ADDR_LIMIT,  offsetof(struct thread_info, addr_limit));
+	DEFINE(TINFO_TP_VALUE,   offsetof(struct thread_info, tp_value));
+	DEFINE(TINFO_TASK,        offsetof(struct thread_info, task));
+
+	/* offsets into the pt_regs */
+	DEFINE(PT_PC,             offsetof(struct pt_regs, pc));
+	DEFINE(PT_ORIG_AO,        offsetof(struct pt_regs, orig_a0));
+	DEFINE(PT_SR,             offsetof(struct pt_regs, sr));
+
+	DEFINE(PT_A0,             offsetof(struct pt_regs, a0));
+	DEFINE(PT_A1,             offsetof(struct pt_regs, a1));
+	DEFINE(PT_A2,             offsetof(struct pt_regs, a2));
+	DEFINE(PT_A3,             offsetof(struct pt_regs, a3));
+	DEFINE(PT_REGS0,          offsetof(struct pt_regs, regs[0]));
+	DEFINE(PT_REGS1,          offsetof(struct pt_regs, regs[1]));
+	DEFINE(PT_REGS2,          offsetof(struct pt_regs, regs[2]));
+	DEFINE(PT_REGS3,          offsetof(struct pt_regs, regs[3]));
+	DEFINE(PT_REGS4,          offsetof(struct pt_regs, regs[4]));
+	DEFINE(PT_REGS5,          offsetof(struct pt_regs, regs[5]));
+	DEFINE(PT_REGS6,          offsetof(struct pt_regs, regs[6]));
+	DEFINE(PT_REGS7,          offsetof(struct pt_regs, regs[7]));
+	DEFINE(PT_REGS8,          offsetof(struct pt_regs, regs[8]));
+	DEFINE(PT_REGS9,          offsetof(struct pt_regs, regs[9]));
+	DEFINE(PT_R15,            offsetof(struct pt_regs, lr));
+#if defined(__CSKYABIV2__)
+	DEFINE(PT_R16,            offsetof(struct pt_regs, exregs[0]));
+	DEFINE(PT_R17,            offsetof(struct pt_regs, exregs[1]));
+	DEFINE(PT_R18,            offsetof(struct pt_regs, exregs[2]));
+	DEFINE(PT_R19,            offsetof(struct pt_regs, exregs[3]));
+	DEFINE(PT_R20,            offsetof(struct pt_regs, exregs[4]));
+	DEFINE(PT_R21,            offsetof(struct pt_regs, exregs[5]));
+	DEFINE(PT_R22,            offsetof(struct pt_regs, exregs[6]));
+	DEFINE(PT_R23,            offsetof(struct pt_regs, exregs[7]));
+	DEFINE(PT_R24,            offsetof(struct pt_regs, exregs[8]));
+	DEFINE(PT_R25,            offsetof(struct pt_regs, exregs[9]));
+	DEFINE(PT_R26,            offsetof(struct pt_regs, exregs[10]));
+	DEFINE(PT_R27,            offsetof(struct pt_regs, exregs[11]));
+	DEFINE(PT_R28,            offsetof(struct pt_regs, exregs[12]));
+	DEFINE(PT_R29,            offsetof(struct pt_regs, exregs[13]));
+	DEFINE(PT_R30,            offsetof(struct pt_regs, exregs[14]));
+	DEFINE(PT_R31,            offsetof(struct pt_regs, exregs[15]));
+	DEFINE(PT_RHI,            offsetof(struct pt_regs, rhi));
+	DEFINE(PT_RLO,            offsetof(struct pt_regs, rlo));
+#endif
+	DEFINE(PT_USP,            offsetof(struct pt_regs, usp));
+
+	/* offsets into the irq_cpustat_t struct */
+	DEFINE(CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t,
+						__softirq_pending));
+
+	/* signal defines */
+	DEFINE(SIGSEGV, SIGSEGV);
+	DEFINE(SIGTRAP, SIGTRAP);
+
+	return 0;
+}
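
For context: Kbuild compiles this file to assembly and extracts the DEFINE()
markers into a generated header, so entry.S can address C struct members by
name. The values all come from offsetof(); a tiny model:

#include <stdio.h>
#include <stddef.h>

struct pt_regs_model { unsigned long tls, lr, pc, sr, usp; };

int main(void)
{
	/* analogous to DEFINE(PT_PC, offsetof(struct pt_regs, pc)); */
	printf("#define PT_PC %zu\n", offsetof(struct pt_regs_model, pc));
	return 0;
}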

Some files were not shown because too many files changed in this diff.