
Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Arnd Bergmann:
 "The main addition this time around is the new ARM "SCMI" framework,
  which is the latest in a series of standards coming from ARM to do
  power management in a platform-independent way.

  This has been through many review cycles, and it relies on a rather
  interesting way of using the mailbox subsystem, but in the end I
  agreed that Sudeep's version was the best we could do after all.

  Other changes include:

   - the ARM CCN driver is moved out of drivers/bus into drivers/perf,
     which makes more sense. Similarly, the performance monitoring
     portion of the CCI driver is moved the same way and cleaned up a
     little more.

   - a series of updates to the SCPI framework

   - support for the Mediatek mt7623a SoC in drivers/soc

   - support for additional NVIDIA Tegra hardware in drivers/soc

   - a new reset driver for Socionext Uniphier

   - smaller bug fixes in drivers/soc, drivers/tee, drivers/memory,
     drivers/firmware, and drivers/reset across platforms"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (87 commits)
  reset: uniphier: add ethernet reset control support for PXs3
  reset: stm32mp1: Enable stm32mp1 reset driver
  dt-bindings: reset: add STM32MP1 resets
  reset: uniphier: add Pro4/Pro5/PXs2 audio systems reset control
  reset: imx7: add 'depends on HAS_IOMEM' to fix unmet dependency
  reset: modify the way reset lookup works for board files
  reset: add support for non-DT systems
  clk: scmi: use devm_of_clk_add_hw_provider() API and drop scmi_clocks_remove
  firmware: arm_scmi: prevent accessing rate_discrete uninitialized
  hwmon: (scmi) return -EINVAL when sensor information is unavailable
  amlogic: meson-gx-socinfo: Update soc ids
  soc/tegra: pmc: Use the new reset APIs to manage reset controllers
  soc: mediatek: update power domain data of MT2712
  dt-bindings: soc: update MT2712 power dt-bindings
  cpufreq: scmi: add thermal dependency
  soc: mediatek: fix the mistaken pointer accessed when subdomains are added
  soc: mediatek: add SCPSYS power domain driver for MediaTek MT7623A SoC
  soc: mediatek: avoid hardcoded value with bus_prot_mask
  dt-bindings: soc: add header files required for MT7623A SCPSYS dt-binding
  dt-bindings: soc: add SCPSYS binding for MT7623 and MT7623A SoC
  ...
Linus Torvalds 7 years ago
parent
commit
38c23685b2
79 changed files with 6863 additions and 2191 deletions
  1. 179 0
      Documentation/devicetree/bindings/arm/arm,scmi.txt
  2. 6 0
      Documentation/devicetree/bindings/arm/samsung/pmu.txt
  3. 28 0
      Documentation/devicetree/bindings/mailbox/mailbox.txt
  4. 21 0
      Documentation/devicetree/bindings/mfd/aspeed-lpc.txt
  5. 0 0
      Documentation/devicetree/bindings/perf/arm-ccn.txt
  6. 6 0
      Documentation/devicetree/bindings/reset/st,stm32mp1-rcc.txt
  7. 4 1
      Documentation/devicetree/bindings/soc/mediatek/scpsys.txt
  8. 0 0
      Documentation/perf/arm-ccn.txt
  9. 6 5
      MAINTAINERS
  10. 0 36
      drivers/bus/Kconfig
  11. 0 2
      drivers/bus/Makefile
  12. 14 1749
      drivers/bus/arm-cci.c
  13. 10 0
      drivers/clk/Kconfig
  14. 1 0
      drivers/clk/Makefile
  15. 194 0
      drivers/clk/clk-scmi.c
  16. 12 0
      drivers/cpufreq/Kconfig.arm
  17. 1 0
      drivers/cpufreq/Makefile
  18. 264 0
      drivers/cpufreq/scmi-cpufreq.c
  19. 34 0
      drivers/firmware/Kconfig
  20. 1 0
      drivers/firmware/Makefile
  21. 5 0
      drivers/firmware/arm_scmi/Makefile
  22. 253 0
      drivers/firmware/arm_scmi/base.c
  23. 221 0
      drivers/firmware/arm_scmi/bus.c
  24. 343 0
      drivers/firmware/arm_scmi/clock.c
  25. 105 0
      drivers/firmware/arm_scmi/common.h
  26. 871 0
      drivers/firmware/arm_scmi/driver.c
  27. 481 0
      drivers/firmware/arm_scmi/perf.c
  28. 221 0
      drivers/firmware/arm_scmi/power.c
  29. 129 0
      drivers/firmware/arm_scmi/scmi_pm_domain.c
  30. 291 0
      drivers/firmware/arm_scmi/sensors.c
  31. 89 122
      drivers/firmware/arm_scpi.c
  32. 12 13
      drivers/firmware/meson/meson_sm.c
  33. 65 79
      drivers/firmware/tegra/bpmp.c
  34. 12 0
      drivers/hwmon/Kconfig
  35. 1 0
      drivers/hwmon/Makefile
  36. 225 0
      drivers/hwmon/scmi-hwmon.c
  37. 1 1
      drivers/memory/emif.c
  38. 1 0
      drivers/memory/samsung/Kconfig
  39. 1 0
      drivers/memory/samsung/Makefile
  40. 7 11
      drivers/memory/samsung/exynos-srom.c
  41. 2 5
      drivers/memory/samsung/exynos-srom.h
  42. 0 1
      drivers/memory/ti-emif-pm.c
  43. 33 0
      drivers/perf/Kconfig
  44. 2 0
      drivers/perf/Makefile
  45. 1722 0
      drivers/perf/arm-cci.c
  46. 0 0
      drivers/perf/arm-ccn.c
  47. 14 3
      drivers/reset/Kconfig
  48. 1 0
      drivers/reset/Makefile
  49. 95 1
      drivers/reset/core.c
  50. 5 17
      drivers/reset/reset-meson.c
  51. 2 0
      drivers/reset/reset-simple.c
  52. 115 0
      drivers/reset/reset-stm32mp1.c
  53. 5 0
      drivers/reset/reset-uniphier.c
  54. 7 2
      drivers/soc/amlogic/meson-gx-pwrc-vpu.c
  55. 10 1
      drivers/soc/amlogic/meson-gx-socinfo.c
  56. 1 1
      drivers/soc/amlogic/meson-mx-socinfo.c
  57. 1 0
      drivers/soc/imx/gpc.c
  58. 99 5
      drivers/soc/mediatek/mtk-scpsys.c
  59. 1 0
      drivers/soc/qcom/Kconfig
  60. 34 0
      drivers/soc/qcom/rmtfs_mem.c
  61. 1 1
      drivers/soc/qcom/wcnss_ctrl.c
  62. 28 0
      drivers/soc/rockchip/grf.c
  63. 47 48
      drivers/soc/rockchip/pm_domains.c
  64. 7 0
      drivers/soc/samsung/exynos-pmu.c
  65. 10 0
      drivers/soc/tegra/Kconfig
  66. 28 70
      drivers/soc/tegra/pmc.c
  67. 23 0
      drivers/tee/optee/core.c
  68. 9 1
      drivers/tee/optee/optee_smc.h
  69. 9 5
      drivers/tee/tee_core.c
  70. 3 0
      include/dt-bindings/power/mt2712-power.h
  71. 10 0
      include/dt-bindings/power/mt7623a-power.h
  72. 108 0
      include/dt-bindings/reset/stm32mp1-resets.h
  73. 1 0
      include/linux/hwmon.h
  74. 30 0
      include/linux/reset-controller.h
  75. 277 0
      include/linux/scmi_protocol.h
  76. 4 0
      include/linux/soc/mediatek/infracfg.h
  77. 1 4
      include/linux/soc/samsung/exynos-pmu.h
  78. 1 5
      include/linux/soc/samsung/exynos-regs-pmu.h
  79. 2 2
      include/soc/tegra/bpmp.h

+ 179 - 0
Documentation/devicetree/bindings/arm/arm,scmi.txt

@@ -0,0 +1,179 @@
+System Control and Management Interface (SCMI) Message Protocol
+----------------------------------------------------------
+
+The SCMI is intended to allow agents such as OSPM to manage various functions
+that are provided by the hardware platform it is running on, including power
+and performance functions.
+
+This binding is intended to define the interface that firmware implementing
+SCMI, as described in ARM document number ARM DEN 0056A ("ARM System Control
+and Management Interface Platform Design Document")[0], provides for OSPM in
+the device tree.
+
+Required properties:
+
+The scmi node with the following properties shall be under the /firmware/ node.
+
+- compatible : shall be "arm,scmi"
+- mboxes: List of phandle and mailbox channel specifiers. It should contain
+	  exactly one or two mailboxes: one for transmitting messages ("tx")
+	  and an optional second one for receiving notifications ("rx"), if
+	  supported.
+- shmem : List of phandles pointing to the shared memory (SHM) area, as per
+	  the generic mailbox client binding.
+- #address-cells : should be '1' if the device has sub-nodes; the address
+	  maps to the protocol identifier for a given sub-node.
+- #size-cells : should be '0' as 'reg' property doesn't have any size
+	  associated with it.
+
+Optional properties:
+
+- mbox-names: shall be "tx" or "rx" depending on mboxes entries.
+
+See Documentation/devicetree/bindings/mailbox/mailbox.txt for more details
+about the generic mailbox controller and client driver bindings.
+
+The mailbox is the only permitted method of calling the SCMI firmware.
+A mailbox doorbell is used as the mechanism to signal the presence of
+a message and/or notification.
+
+Each supported protocol shall have a sub-node with the corresponding
+compatible, as described in the following sections. If the platform supports
+a dedicated communication channel for a particular protocol, the three
+properties mboxes, mbox-names and shmem shall be present in the sub-node
+corresponding to that protocol, as sketched below.
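+
+For illustration only, such a sub-node could look as follows; the mailbox
+index and the "cpu_scp_lpri2" shmem label are hypothetical, not part of
+this binding:
+
+	scmi_dvfs: protocol@13 {
+		reg = <0x13>;
+		#clock-cells = <1>;
+		mboxes = <&mailbox 2>;		/* dedicated "tx" channel */
+		mbox-names = "tx";
+		shmem = <&cpu_scp_lpri2>;	/* per-protocol SHM area */
+	};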
+
+Clock/Performance bindings for the clocks/OPPs based on SCMI Message Protocol
+------------------------------------------------------------
+
+This binding uses the common clock binding[1].
+
+Required properties:
+- #clock-cells : Should be 1. Contains the Clock ID value used by SCMI commands.
+
+Power domain bindings for the power domains based on SCMI Message Protocol
+------------------------------------------------------------
+
+This binding for the SCMI power domain providers uses the generic power
+domain binding[2].
+
+Required properties:
+ - #power-domain-cells : Should be 1. Contains the device or the power
+			 domain ID value used by SCMI commands.
+
+Sensor bindings for the sensors based on SCMI Message Protocol
+--------------------------------------------------------------
+SCMI provides an API to access the various sensors on the SoC.
+
+Required properties:
+- #thermal-sensor-cells: should be set to 1. This property follows the
+			 thermal device tree bindings[3].
+
+			 Valid cell values are raw identifiers (Sensor ID)
+			 as used by the firmware. Refer to your platform's
+			 documentation for the IDs to use.
+
+SRAM and Shared Memory for SCMI
+-------------------------------
+
+A small area of SRAM is reserved for SCMI communication between the
+application processors and the SCP.
+
+The properties should follow the generic mmio-sram description found in [4].
+
+Each sub-node represents the reserved area for SCMI.
+
+Required sub-node properties:
+- reg : The base offset and size of the reserved area within the SRAM
+- compatible : should be "arm,scmi-shmem" for Non-secure SRAM based
+	       shared memory
+
+[0] http://infocenter.arm.com/help/topic/com.arm.doc.den0056a/index.html
+[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
+[2] Documentation/devicetree/bindings/power/power_domain.txt
+[3] Documentation/devicetree/bindings/thermal/thermal.txt
+[4] Documentation/devicetree/bindings/sram/sram.txt
+
+Example:
+
+sram@50000000 {
+	compatible = "mmio-sram";
+	reg = <0x0 0x50000000 0x0 0x10000>;
+
+	#address-cells = <1>;
+	#size-cells = <1>;
+	ranges = <0 0x0 0x50000000 0x10000>;
+
+	cpu_scp_lpri: scp-shmem@0 {
+		compatible = "arm,scmi-shmem";
+		reg = <0x0 0x200>;
+	};
+
+	cpu_scp_hpri: scp-shmem@200 {
+		compatible = "arm,scmi-shmem";
+		reg = <0x200 0x200>;
+	};
+};
+
+mailbox@40000000 {
+	....
+	#mbox-cells = <1>;
+	reg = <0x0 0x40000000 0x0 0x10000>;
+};
+
+firmware {
+
+	...
+
+	scmi {
+		compatible = "arm,scmi";
+		mboxes = <&mailbox 0 &mailbox 1>;
+		mbox-names = "tx", "rx";
+		shmem = <&cpu_scp_lpri &cpu_scp_hpri>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		scmi_devpd: protocol@11 {
+			reg = <0x11>;
+			#power-domain-cells = <1>;
+		};
+
+		scmi_dvfs: protocol@13 {
+			reg = <0x13>;
+			#clock-cells = <1>;
+		};
+
+		scmi_clk: protocol@14 {
+			reg = <0x14>;
+			#clock-cells = <1>;
+		};
+
+		scmi_sensors0: protocol@15 {
+			reg = <0x15>;
+			#thermal-sensor-cells = <1>;
+		};
+	};
+};
+
+cpu@0 {
+	...
+	reg = <0 0>;
+	clocks = <&scmi_dvfs 0>;
+};
+
+hdlcd@7ff60000 {
+	...
+	reg = <0 0x7ff60000 0 0x1000>;
+	clocks = <&scmi_clk 4>;
+	power-domains = <&scmi_devpd 1>;
+};
+
+thermal-zones {
+	soc_thermal {
+		polling-delay-passive = <100>;
+		polling-delay = <1000>;
+					/* sensor ID */
+		thermal-sensors = <&scmi_sensors0 3>;
+		...
+	};
+};

+ 6 - 0
Documentation/devicetree/bindings/arm/samsung/pmu.txt

@@ -43,6 +43,12 @@ following properties:
 - interrupt-parent: a phandle indicating which interrupt controller
   this PMU signals interrupts to.
 
+
+Optional nodes:
+
+- nodes defining the restart and poweroff syscon children
+
+
 Example :
 pmu_system_controller: system-controller@10040000 {
 	compatible = "samsung,exynos5250-pmu", "syscon";

+ 28 - 0
Documentation/devicetree/bindings/mailbox/mailbox.txt

@@ -23,6 +23,11 @@ Required property:
 
 Optional property:
 - mbox-names: List of identifier strings for each mailbox channel.
+- shmem : List of phandles pointing to the shared memory (SHM) areas between
+	  the users of these mailboxes for IPC, one for each mailbox. This
+	  shared memory can be part of any memory reserved for the purpose of
+	  this communication between the mailbox client and the remote.
+
 
 Example:
 	pwr_cntrl: power {
@@ -30,3 +35,26 @@ Example:
 		mbox-names = "pwr-ctrl", "rpc";
 		mboxes = <&mailbox 0 &mailbox 1>;
 	};
+
+Example with shared memory(shmem):
+
+	sram: sram@50000000 {
+		compatible = "mmio-sram";
+		reg = <0x50000000 0x10000>;
+
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0 0x50000000 0x10000>;
+
+		cl_shmem: shmem@0 {
+			compatible = "client-shmem";
+			reg = <0x0 0x200>;
+		};
+	};
+
+	client@2e000000 {
+		...
+		mboxes = <&mailbox 0>;
+		shmem = <&cl_shmem>;
+		..
+	};
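+
+The mailbox controller referenced above is an ordinary mailbox controller
+node; a minimal sketch (the unit address here is hypothetical) would be:
+
+	mailbox: mailbox@2e001000 {
+		...
+		#mbox-cells = <1>;
+	};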

+ 21 - 0
Documentation/devicetree/bindings/mfd/aspeed-lpc.txt

@@ -176,3 +176,24 @@ lhc: lhc@20 {
 	compatible = "aspeed,ast2500-lhc";
 	reg = <0x20 0x24 0x48 0x8>;
 };
+
+LPC reset control
+-----------------
+
+The UARTs present in the ASPEED SoC can have their resets tied to the reset
+state of the LPC bus. Some systems may choose to modify this configuration.
+
+Required properties:
+
+ - compatible:		"aspeed,ast2500-lpc-reset" or
+			"aspeed,ast2400-lpc-reset"
+ - reg:			offset and length of the IP in the LHC memory region
+ - #reset-cells:	indicates the number of reset cells expected
+
+Example:
+
+lpc_reset: reset-controller@18 {
+        compatible = "aspeed,ast2500-lpc-reset";
+        reg = <0x18 0x4>;
+        #reset-cells = <1>;
+};
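+
+A consumer then references the controller through the generic reset binding;
+for example (the UART node and reset index below are illustrative only):
+
+uart2: serial@1e78d000 {
+        ...
+        resets = <&lpc_reset 4>;
+};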

+ 0 - 0
Documentation/devicetree/bindings/arm/ccn.txt → Documentation/devicetree/bindings/perf/arm-ccn.txt


+ 6 - 0
Documentation/devicetree/bindings/reset/st,stm32mp1-rcc.txt

@@ -0,0 +1,6 @@
+STMicroelectronics STM32MP1 Peripheral Reset Controller
+=======================================================
+
+The RCC IP is both a reset and a clock controller.
+
+Please see Documentation/devicetree/bindings/clock/st,stm32mp1-rcc.txt
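+
+For illustration, a peripheral consumes an RCC reset line through the
+generic reset binding; the node and the SPI6_R identifier below are
+hypothetical, see include/dt-bindings/reset/stm32mp1-resets.h for the
+actual identifiers:
+
+	spi6: spi@5c001000 {
+		...
+		resets = <&rcc SPI6_R>;
+	};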

+ 4 - 1
Documentation/devicetree/bindings/soc/mediatek/scpsys.txt

@@ -21,6 +21,8 @@ Required properties:
 	- "mediatek,mt2712-scpsys"
 	- "mediatek,mt6797-scpsys"
 	- "mediatek,mt7622-scpsys"
+	- "mediatek,mt7623-scpsys", "mediatek,mt2701-scpsys": For MT7623 SoC
+	- "mediatek,mt7623a-scpsys": For MT7623A SoC
 	- "mediatek,mt8173-scpsys"
 - #power-domain-cells: Must be 1
 - reg: Address range of the SCPSYS unit
@@ -28,10 +30,11 @@ Required properties:
 - clock, clock-names: clocks according to the common clock binding.
                       These are clocks which hardware needs to be
                       enabled before enabling certain power domains.
-	Required clocks for MT2701: "mm", "mfg", "ethif"
+	Required clocks for MT2701 or MT7623: "mm", "mfg", "ethif"
 	Required clocks for MT2712: "mm", "mfg", "venc", "jpgdec", "audio", "vdec"
 	Required clocks for MT6797: "mm", "mfg", "vdec"
 	Required clocks for MT7622: "hif_sel"
+	Required clocks for MT7623A: "ethif"
 	Required clocks for MT8173: "mm", "mfg", "venc", "venc_lt"
 
 Optional properties:

+ 0 - 0
Documentation/arm/CCN.txt → Documentation/perf/arm-ccn.txt


+ 6 - 5
MAINTAINERS

@@ -13491,15 +13491,16 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd.git
 S:	Supported
 F:	drivers/mfd/syscon.c
 
-SYSTEM CONTROL & POWER INTERFACE (SCPI) Message Protocol drivers
+SYSTEM CONTROL & POWER/MANAGEMENT INTERFACE (SCPI/SCMI) Message Protocol drivers
 M:	Sudeep Holla <sudeep.holla@arm.com>
 L:	linux-arm-kernel@lists.infradead.org
 S:	Maintained
-F:	Documentation/devicetree/bindings/arm/arm,scpi.txt
-F:	drivers/clk/clk-scpi.c
-F:	drivers/cpufreq/scpi-cpufreq.c
+F:	Documentation/devicetree/bindings/arm/arm,sc[mp]i.txt
+F:	drivers/clk/clk-sc[mp]i.c
+F:	drivers/cpufreq/sc[mp]i-cpufreq.c
 F:	drivers/firmware/arm_scpi.c
-F:	include/linux/scpi_protocol.h
+F:	drivers/firmware/arm_scmi/
+F:	include/linux/sc[mp]i_protocol.h
 
 SYSTEM RESET/SHUTDOWN DRIVERS
 M:	Sebastian Reichel <sre@kernel.org>

+ 0 - 36
drivers/bus/Kconfig

@@ -8,25 +8,10 @@ menu "Bus devices"
 config ARM_CCI
 	bool
 
-config ARM_CCI_PMU
-	bool
-	select ARM_CCI
-
 config ARM_CCI400_COMMON
 	bool
 	select ARM_CCI
 
-config ARM_CCI400_PMU
-	bool "ARM CCI400 PMU support"
-	depends on (ARM && CPU_V7) || ARM64
-	depends on PERF_EVENTS
-	select ARM_CCI400_COMMON
-	select ARM_CCI_PMU
-	help
-	  Support for PMU events monitoring on the ARM CCI-400 (cache coherent
-	  interconnect). CCI-400 supports counting events related to the
-	  connected slave/master interfaces.
-
 config ARM_CCI400_PORT_CTRL
 	bool
 	depends on ARM && OF && CPU_V7
@@ -35,27 +20,6 @@ config ARM_CCI400_PORT_CTRL
 	  Low level power management driver for CCI400 cache coherent
 	  interconnect for ARM platforms.
 
-config ARM_CCI5xx_PMU
-	bool "ARM CCI-500/CCI-550 PMU support"
-	depends on (ARM && CPU_V7) || ARM64
-	depends on PERF_EVENTS
-	select ARM_CCI_PMU
-	help
-	  Support for PMU events monitoring on the ARM CCI-500/CCI-550 cache
-	  coherent interconnects. Both of them provide 8 independent event counters,
-	  which can count events pertaining to the slave/master interfaces as well
-	  as the internal events to the CCI.
-
-	  If unsure, say Y
-
-config ARM_CCN
-	tristate "ARM CCN driver support"
-	depends on ARM || ARM64
-	depends on PERF_EVENTS
-	help
-	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
-	  interconnect.
-
 config BRCMSTB_GISB_ARB
 	bool "Broadcom STB GISB bus arbiter"
 	depends on ARM || ARM64 || MIPS

+ 0 - 2
drivers/bus/Makefile

@@ -5,8 +5,6 @@
 
 # Interconnect bus drivers for ARM platforms
 obj-$(CONFIG_ARM_CCI)		+= arm-cci.o
-obj-$(CONFIG_ARM_CCN)		+= arm-ccn.o
-
 obj-$(CONFIG_BRCMSTB_GISB_ARB)	+= brcmstb_gisb.o
 
 # DPAA2 fsl-mc bus

+ 14 - 1749
drivers/bus/arm-cci.c

@@ -16,21 +16,17 @@
 
 #include <linux/arm-cci.h>
 #include <linux/io.h>
-#include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
-#include <linux/of_irq.h>
 #include <linux/of_platform.h>
-#include <linux/perf_event.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
-#include <linux/spinlock.h>
 
 #include <asm/cacheflush.h>
 #include <asm/smp_plat.h>
 
-static void __iomem *cci_ctrl_base;
-static unsigned long cci_ctrl_phys;
+static void __iomem *cci_ctrl_base __ro_after_init;
+static unsigned long cci_ctrl_phys __ro_after_init;
 
 #ifdef CONFIG_ARM_CCI400_PORT_CTRL
 struct cci_nb_ports {
@@ -59,1733 +55,26 @@ static const struct of_device_id arm_cci_matches[] = {
 	{},
 };
 
-#ifdef CONFIG_ARM_CCI_PMU
-
-#define DRIVER_NAME		"ARM-CCI"
-#define DRIVER_NAME_PMU		DRIVER_NAME " PMU"
-
-#define CCI_PMCR		0x0100
-#define CCI_PID2		0x0fe8
-
-#define CCI_PMCR_CEN		0x00000001
-#define CCI_PMCR_NCNT_MASK	0x0000f800
-#define CCI_PMCR_NCNT_SHIFT	11
-
-#define CCI_PID2_REV_MASK	0xf0
-#define CCI_PID2_REV_SHIFT	4
-
-#define CCI_PMU_EVT_SEL		0x000
-#define CCI_PMU_CNTR		0x004
-#define CCI_PMU_CNTR_CTRL	0x008
-#define CCI_PMU_OVRFLW		0x00c
-
-#define CCI_PMU_OVRFLW_FLAG	1
-
-#define CCI_PMU_CNTR_SIZE(model)	((model)->cntr_size)
-#define CCI_PMU_CNTR_BASE(model, idx)	((idx) * CCI_PMU_CNTR_SIZE(model))
-#define CCI_PMU_CNTR_MASK		((1ULL << 32) -1)
-#define CCI_PMU_CNTR_LAST(cci_pmu)	(cci_pmu->num_cntrs - 1)
-
-#define CCI_PMU_MAX_HW_CNTRS(model) \
-	((model)->num_hw_cntrs + (model)->fixed_hw_cntrs)
-
-/* Types of interfaces that can generate events */
-enum {
-	CCI_IF_SLAVE,
-	CCI_IF_MASTER,
-#ifdef CONFIG_ARM_CCI5xx_PMU
-	CCI_IF_GLOBAL,
-#endif
-	CCI_IF_MAX,
-};
-
-struct event_range {
-	u32 min;
-	u32 max;
-};
-
-struct cci_pmu_hw_events {
-	struct perf_event **events;
-	unsigned long *used_mask;
-	raw_spinlock_t pmu_lock;
-};
-
-struct cci_pmu;
-/*
- * struct cci_pmu_model:
- * @fixed_hw_cntrs - Number of fixed event counters
- * @num_hw_cntrs - Maximum number of programmable event counters
- * @cntr_size - Size of an event counter mapping
- */
-struct cci_pmu_model {
-	char *name;
-	u32 fixed_hw_cntrs;
-	u32 num_hw_cntrs;
-	u32 cntr_size;
-	struct attribute **format_attrs;
-	struct attribute **event_attrs;
-	struct event_range event_ranges[CCI_IF_MAX];
-	int (*validate_hw_event)(struct cci_pmu *, unsigned long);
-	int (*get_event_idx)(struct cci_pmu *, struct cci_pmu_hw_events *, unsigned long);
-	void (*write_counters)(struct cci_pmu *, unsigned long *);
-};
-
-static struct cci_pmu_model cci_pmu_models[];
-
-struct cci_pmu {
-	void __iomem *base;
-	struct pmu pmu;
-	int nr_irqs;
-	int *irqs;
-	unsigned long active_irqs;
-	const struct cci_pmu_model *model;
-	struct cci_pmu_hw_events hw_events;
-	struct platform_device *plat_device;
-	int num_cntrs;
-	atomic_t active_events;
-	struct mutex reserve_mutex;
-	struct hlist_node node;
-	cpumask_t cpus;
-};
-
-#define to_cci_pmu(c)	(container_of(c, struct cci_pmu, pmu))
-
-enum cci_models {
-#ifdef CONFIG_ARM_CCI400_PMU
-	CCI400_R0,
-	CCI400_R1,
-#endif
-#ifdef CONFIG_ARM_CCI5xx_PMU
-	CCI500_R0,
-	CCI550_R0,
-#endif
-	CCI_MODEL_MAX
-};
-
-static void pmu_write_counters(struct cci_pmu *cci_pmu,
-				 unsigned long *mask);
-static ssize_t cci_pmu_format_show(struct device *dev,
-			struct device_attribute *attr, char *buf);
-static ssize_t cci_pmu_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf);
-
-#define CCI_EXT_ATTR_ENTRY(_name, _func, _config) 				\
-	&((struct dev_ext_attribute[]) {					\
-		{ __ATTR(_name, S_IRUGO, _func, NULL), (void *)_config }	\
-	})[0].attr.attr
-
-#define CCI_FORMAT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci_pmu_format_show, (char *)_config)
-#define CCI_EVENT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci_pmu_event_show, (unsigned long)_config)
-
-/* CCI400 PMU Specific definitions */
-
-#ifdef CONFIG_ARM_CCI400_PMU
-
-/* Port ids */
-#define CCI400_PORT_S0		0
-#define CCI400_PORT_S1		1
-#define CCI400_PORT_S2		2
-#define CCI400_PORT_S3		3
-#define CCI400_PORT_S4		4
-#define CCI400_PORT_M0		5
-#define CCI400_PORT_M1		6
-#define CCI400_PORT_M2		7
-
-#define CCI400_R1_PX		5
-
-/*
- * Instead of an event id to monitor CCI cycles, a dedicated counter is
- * provided. Use 0xff to represent CCI cycles and hope that no future revisions
- * make use of this event in hardware.
- */
-enum cci400_perf_events {
-	CCI400_PMU_CYCLES = 0xff
-};
-
-#define CCI400_PMU_CYCLE_CNTR_IDX	0
-#define CCI400_PMU_CNTR0_IDX		1
-
-/*
- * CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8
- * ports and bits 4:0 are event codes. There are different event codes
- * associated with each port type.
- *
- * Additionally, the range of events associated with the port types changed
- * between Rev0 and Rev1.
- *
- * The constants below define the range of valid codes for each port type for
- * the different revisions and are used to validate the event to be monitored.
- */
-
-#define CCI400_PMU_EVENT_MASK		0xffUL
-#define CCI400_PMU_EVENT_SOURCE_SHIFT	5
-#define CCI400_PMU_EVENT_SOURCE_MASK	0x7
-#define CCI400_PMU_EVENT_CODE_SHIFT	0
-#define CCI400_PMU_EVENT_CODE_MASK	0x1f
-#define CCI400_PMU_EVENT_SOURCE(event) \
-	((event >> CCI400_PMU_EVENT_SOURCE_SHIFT) & \
-			CCI400_PMU_EVENT_SOURCE_MASK)
-#define CCI400_PMU_EVENT_CODE(event) \
-	((event >> CCI400_PMU_EVENT_CODE_SHIFT) & CCI400_PMU_EVENT_CODE_MASK)
-
-#define CCI400_R0_SLAVE_PORT_MIN_EV	0x00
-#define CCI400_R0_SLAVE_PORT_MAX_EV	0x13
-#define CCI400_R0_MASTER_PORT_MIN_EV	0x14
-#define CCI400_R0_MASTER_PORT_MAX_EV	0x1a
-
-#define CCI400_R1_SLAVE_PORT_MIN_EV	0x00
-#define CCI400_R1_SLAVE_PORT_MAX_EV	0x14
-#define CCI400_R1_MASTER_PORT_MIN_EV	0x00
-#define CCI400_R1_MASTER_PORT_MAX_EV	0x11
-
-#define CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci400_pmu_cycle_event_show, \
-					(unsigned long)_config)
-
-static ssize_t cci400_pmu_cycle_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf);
-
-static struct attribute *cci400_pmu_format_attrs[] = {
-	CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"),
-	CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-7"),
-	NULL
-};
-
-static struct attribute *cci400_r0_pmu_event_attrs[] = {
-	/* Slave events */
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13),
-	/* Master events */
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x14),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_addr_hazard, 0x15),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_id_hazard, 0x16),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_tt_full, 0x17),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x18),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x19),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_tt_full, 0x1A),
-	/* Special event for cycles counter */
-	CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff),
-	NULL
-};
-
-static struct attribute *cci400_r1_pmu_event_attrs[] = {
-	/* Slave events */
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_slave_id_hazard, 0x14),
-	/* Master events */
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_stall_cycle_addr_hazard, 0x1),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_master_id_hazard, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_hi_prio_rtq_full, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_wtq_full, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_low_prio_rtq_full, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_mid_prio_rtq_full, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn0, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn1, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn2, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn3, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn0, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn1, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn2, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn3, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_unique_or_line_unique_addr_hazard, 0x11),
-	/* Special event for cycles counter */
-	CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff),
-	NULL
-};
-
-static ssize_t cci400_pmu_cycle_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-				struct dev_ext_attribute, attr);
-	return snprintf(buf, PAGE_SIZE, "config=0x%lx\n", (unsigned long)eattr->var);
-}
-
-static int cci400_get_event_idx(struct cci_pmu *cci_pmu,
-				struct cci_pmu_hw_events *hw,
-				unsigned long cci_event)
-{
-	int idx;
-
-	/* cycles event idx is fixed */
-	if (cci_event == CCI400_PMU_CYCLES) {
-		if (test_and_set_bit(CCI400_PMU_CYCLE_CNTR_IDX, hw->used_mask))
-			return -EAGAIN;
-
-		return CCI400_PMU_CYCLE_CNTR_IDX;
-	}
-
-	for (idx = CCI400_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx)
-		if (!test_and_set_bit(idx, hw->used_mask))
-			return idx;
-
-	/* No counters available */
-	return -EAGAIN;
-}
-
-static int cci400_validate_hw_event(struct cci_pmu *cci_pmu, unsigned long hw_event)
-{
-	u8 ev_source = CCI400_PMU_EVENT_SOURCE(hw_event);
-	u8 ev_code = CCI400_PMU_EVENT_CODE(hw_event);
-	int if_type;
-
-	if (hw_event & ~CCI400_PMU_EVENT_MASK)
-		return -ENOENT;
-
-	if (hw_event == CCI400_PMU_CYCLES)
-		return hw_event;
-
-	switch (ev_source) {
-	case CCI400_PORT_S0:
-	case CCI400_PORT_S1:
-	case CCI400_PORT_S2:
-	case CCI400_PORT_S3:
-	case CCI400_PORT_S4:
-		/* Slave Interface */
-		if_type = CCI_IF_SLAVE;
-		break;
-	case CCI400_PORT_M0:
-	case CCI400_PORT_M1:
-	case CCI400_PORT_M2:
-		/* Master Interface */
-		if_type = CCI_IF_MASTER;
-		break;
-	default:
-		return -ENOENT;
-	}
-
-	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
-		ev_code <= cci_pmu->model->event_ranges[if_type].max)
-		return hw_event;
-
-	return -ENOENT;
-}
-
-static int probe_cci400_revision(void)
-{
-	int rev;
-	rev = readl_relaxed(cci_ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK;
-	rev >>= CCI_PID2_REV_SHIFT;
-
-	if (rev < CCI400_R1_PX)
-		return CCI400_R0;
-	else
-		return CCI400_R1;
-}
-
-static const struct cci_pmu_model *probe_cci_model(struct platform_device *pdev)
-{
-	if (platform_has_secure_cci_access())
-		return &cci_pmu_models[probe_cci400_revision()];
-	return NULL;
-}
-#else	/* !CONFIG_ARM_CCI400_PMU */
-static inline struct cci_pmu_model *probe_cci_model(struct platform_device *pdev)
-{
-	return NULL;
-}
-#endif	/* CONFIG_ARM_CCI400_PMU */
-
-#ifdef CONFIG_ARM_CCI5xx_PMU
-
-/*
- * CCI5xx PMU event id is an 9-bit value made of two parts.
- *	 bits [8:5] - Source for the event
- *	 bits [4:0] - Event code (specific to type of interface)
- *
- *
- */
-
-/* Port ids */
-#define CCI5xx_PORT_S0			0x0
-#define CCI5xx_PORT_S1			0x1
-#define CCI5xx_PORT_S2			0x2
-#define CCI5xx_PORT_S3			0x3
-#define CCI5xx_PORT_S4			0x4
-#define CCI5xx_PORT_S5			0x5
-#define CCI5xx_PORT_S6			0x6
-
-#define CCI5xx_PORT_M0			0x8
-#define CCI5xx_PORT_M1			0x9
-#define CCI5xx_PORT_M2			0xa
-#define CCI5xx_PORT_M3			0xb
-#define CCI5xx_PORT_M4			0xc
-#define CCI5xx_PORT_M5			0xd
-#define CCI5xx_PORT_M6			0xe
-
-#define CCI5xx_PORT_GLOBAL		0xf
-
-#define CCI5xx_PMU_EVENT_MASK		0x1ffUL
-#define CCI5xx_PMU_EVENT_SOURCE_SHIFT	0x5
-#define CCI5xx_PMU_EVENT_SOURCE_MASK	0xf
-#define CCI5xx_PMU_EVENT_CODE_SHIFT	0x0
-#define CCI5xx_PMU_EVENT_CODE_MASK	0x1f
-
-#define CCI5xx_PMU_EVENT_SOURCE(event)	\
-	((event >> CCI5xx_PMU_EVENT_SOURCE_SHIFT) & CCI5xx_PMU_EVENT_SOURCE_MASK)
-#define CCI5xx_PMU_EVENT_CODE(event)	\
-	((event >> CCI5xx_PMU_EVENT_CODE_SHIFT) & CCI5xx_PMU_EVENT_CODE_MASK)
-
-#define CCI5xx_SLAVE_PORT_MIN_EV	0x00
-#define CCI5xx_SLAVE_PORT_MAX_EV	0x1f
-#define CCI5xx_MASTER_PORT_MIN_EV	0x00
-#define CCI5xx_MASTER_PORT_MAX_EV	0x06
-#define CCI5xx_GLOBAL_PORT_MIN_EV	0x00
-#define CCI5xx_GLOBAL_PORT_MAX_EV	0x0f
-
-
-#define CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(_name, _config) \
-	CCI_EXT_ATTR_ENTRY(_name, cci5xx_pmu_global_event_show, \
-					(unsigned long) _config)
-
-static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
-				struct device_attribute *attr, char *buf);
-
-static struct attribute *cci5xx_pmu_format_attrs[] = {
-	CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"),
-	CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-8"),
-	NULL,
-};
-
-static struct attribute *cci5xx_pmu_event_attrs[] = {
-	/* Slave events */
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_arvalid, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_dev, 0x1),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_nonshareable, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_non_alloc, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_alloc, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_invalidate, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maint, 0x6),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rval, 0x8),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rlast_snoop, 0x9),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_awalid, 0xA),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_dev, 0xB),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_non_shareable, 0xC),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wb, 0xD),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wlu, 0xE),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wunique, 0xF),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_evict, 0x10),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_wrevict, 0x11),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_beat, 0x12),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_acvalid, 0x13),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_read, 0x14),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_clean, 0x15),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_data_transfer_low, 0x16),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_arvalid, 0x17),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall, 0x18),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall, 0x19),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_stall, 0x1A),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_w_resp_stall, 0x1B),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_stall, 0x1C),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_s_data_stall, 0x1D),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_rq_stall_ot_limit, 0x1E),
-	CCI_EVENT_EXT_ATTR_ENTRY(si_r_stall_arbit, 0x1F),
-
-	/* Master events */
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_beat_any, 0x0),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_beat_any, 0x1),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall, 0x2),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_stall, 0x3),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall, 0x4),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_stall, 0x5),
-	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_resp_stall, 0x6),
-
-	/* Global events */
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_0_1, 0x0),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_2_3, 0x1),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_4_5, 0x2),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_6_7, 0x3),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_0_1, 0x4),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_2_3, 0x5),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_4_5, 0x6),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_6_7, 0x7),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_back_invalidation, 0x8),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_alloc_busy, 0x9),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_tt_full, 0xA),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE),
-	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF),
-	NULL
-};
-
-static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-					struct dev_ext_attribute, attr);
-	/* Global events have single fixed source code */
-	return snprintf(buf, PAGE_SIZE, "event=0x%lx,source=0x%x\n",
-				(unsigned long)eattr->var, CCI5xx_PORT_GLOBAL);
-}
-
-/*
- * CCI500 provides 8 independent event counters that can count
- * any of the events available.
- * CCI500 PMU event source ids
- *	0x0-0x6 - Slave interfaces
- *	0x8-0xD - Master interfaces
- *	0xf     - Global Events
- *	0x7,0xe - Reserved
- */
-static int cci500_validate_hw_event(struct cci_pmu *cci_pmu,
-					unsigned long hw_event)
-{
-	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
-	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
-	int if_type;
-
-	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
-		return -ENOENT;
-
-	switch (ev_source) {
-	case CCI5xx_PORT_S0:
-	case CCI5xx_PORT_S1:
-	case CCI5xx_PORT_S2:
-	case CCI5xx_PORT_S3:
-	case CCI5xx_PORT_S4:
-	case CCI5xx_PORT_S5:
-	case CCI5xx_PORT_S6:
-		if_type = CCI_IF_SLAVE;
-		break;
-	case CCI5xx_PORT_M0:
-	case CCI5xx_PORT_M1:
-	case CCI5xx_PORT_M2:
-	case CCI5xx_PORT_M3:
-	case CCI5xx_PORT_M4:
-	case CCI5xx_PORT_M5:
-		if_type = CCI_IF_MASTER;
-		break;
-	case CCI5xx_PORT_GLOBAL:
-		if_type = CCI_IF_GLOBAL;
-		break;
-	default:
-		return -ENOENT;
-	}
-
-	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
-		ev_code <= cci_pmu->model->event_ranges[if_type].max)
-		return hw_event;
-
-	return -ENOENT;
-}
-
-/*
- * CCI550 provides 8 independent event counters that can count
- * any of the events available.
- * CCI550 PMU event source ids
- *	0x0-0x6 - Slave interfaces
- *	0x8-0xe - Master interfaces
- *	0xf     - Global Events
- *	0x7	- Reserved
- */
-static int cci550_validate_hw_event(struct cci_pmu *cci_pmu,
-					unsigned long hw_event)
-{
-	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
-	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
-	int if_type;
-
-	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
-		return -ENOENT;
-
-	switch (ev_source) {
-	case CCI5xx_PORT_S0:
-	case CCI5xx_PORT_S1:
-	case CCI5xx_PORT_S2:
-	case CCI5xx_PORT_S3:
-	case CCI5xx_PORT_S4:
-	case CCI5xx_PORT_S5:
-	case CCI5xx_PORT_S6:
-		if_type = CCI_IF_SLAVE;
-		break;
-	case CCI5xx_PORT_M0:
-	case CCI5xx_PORT_M1:
-	case CCI5xx_PORT_M2:
-	case CCI5xx_PORT_M3:
-	case CCI5xx_PORT_M4:
-	case CCI5xx_PORT_M5:
-	case CCI5xx_PORT_M6:
-		if_type = CCI_IF_MASTER;
-		break;
-	case CCI5xx_PORT_GLOBAL:
-		if_type = CCI_IF_GLOBAL;
-		break;
-	default:
-		return -ENOENT;
-	}
-
-	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
-		ev_code <= cci_pmu->model->event_ranges[if_type].max)
-		return hw_event;
-
-	return -ENOENT;
-}
-
-#endif	/* CONFIG_ARM_CCI5xx_PMU */
-
-/*
- * Program the CCI PMU counters which have PERF_HES_ARCH set
- * with the event period and mark them ready before we enable
- * PMU.
- */
-static void cci_pmu_sync_counters(struct cci_pmu *cci_pmu)
-{
-	int i;
-	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;
-
-	DECLARE_BITMAP(mask, cci_pmu->num_cntrs);
-
-	bitmap_zero(mask, cci_pmu->num_cntrs);
-	for_each_set_bit(i, cci_pmu->hw_events.used_mask, cci_pmu->num_cntrs) {
-		struct perf_event *event = cci_hw->events[i];
-
-		if (WARN_ON(!event))
-			continue;
-
-		/* Leave the events which are not counting */
-		if (event->hw.state & PERF_HES_STOPPED)
-			continue;
-		if (event->hw.state & PERF_HES_ARCH) {
-			set_bit(i, mask);
-			event->hw.state &= ~PERF_HES_ARCH;
-		}
-	}
-
-	pmu_write_counters(cci_pmu, mask);
-}
-
-/* Should be called with cci_pmu->hw_events->pmu_lock held */
-static void __cci_pmu_enable_nosync(struct cci_pmu *cci_pmu)
-{
-	u32 val;
-
-	/* Enable all the PMU counters. */
-	val = readl_relaxed(cci_ctrl_base + CCI_PMCR) | CCI_PMCR_CEN;
-	writel(val, cci_ctrl_base + CCI_PMCR);
-}
-
-/* Should be called with cci_pmu->hw_events->pmu_lock held */
-static void __cci_pmu_enable_sync(struct cci_pmu *cci_pmu)
-{
-	cci_pmu_sync_counters(cci_pmu);
-	__cci_pmu_enable_nosync(cci_pmu);
-}
-
-/* Should be called with cci_pmu->hw_events->pmu_lock held */
-static void __cci_pmu_disable(void)
-{
-	u32 val;
-
-	/* Disable all the PMU counters. */
-	val = readl_relaxed(cci_ctrl_base + CCI_PMCR) & ~CCI_PMCR_CEN;
-	writel(val, cci_ctrl_base + CCI_PMCR);
-}
-
-static ssize_t cci_pmu_format_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-				struct dev_ext_attribute, attr);
-	return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var);
-}
-
-static ssize_t cci_pmu_event_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
-{
-	struct dev_ext_attribute *eattr = container_of(attr,
-				struct dev_ext_attribute, attr);
-	/* source parameter is mandatory for normal PMU events */
-	return snprintf(buf, PAGE_SIZE, "source=?,event=0x%lx\n",
-					 (unsigned long)eattr->var);
-}
-
-static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx)
-{
-	return 0 <= idx && idx <= CCI_PMU_CNTR_LAST(cci_pmu);
-}
-
-static u32 pmu_read_register(struct cci_pmu *cci_pmu, int idx, unsigned int offset)
-{
-	return readl_relaxed(cci_pmu->base +
-			     CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
-}
-
-static void pmu_write_register(struct cci_pmu *cci_pmu, u32 value,
-			       int idx, unsigned int offset)
-{
-	writel_relaxed(value, cci_pmu->base +
-		       CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
-}
-
-static void pmu_disable_counter(struct cci_pmu *cci_pmu, int idx)
-{
-	pmu_write_register(cci_pmu, 0, idx, CCI_PMU_CNTR_CTRL);
-}
-
-static void pmu_enable_counter(struct cci_pmu *cci_pmu, int idx)
-{
-	pmu_write_register(cci_pmu, 1, idx, CCI_PMU_CNTR_CTRL);
-}
-
-static bool __maybe_unused
-pmu_counter_is_enabled(struct cci_pmu *cci_pmu, int idx)
-{
-	return (pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR_CTRL) & 0x1) != 0;
-}
-
-static void pmu_set_event(struct cci_pmu *cci_pmu, int idx, unsigned long event)
-{
-	pmu_write_register(cci_pmu, event, idx, CCI_PMU_EVT_SEL);
-}
-
-/*
- * For all counters on the CCI-PMU, disable any 'enabled' counters,
- * saving the changed counters in the mask, so that we can restore
- * it later using pmu_restore_counters. The mask is private to the
- * caller. We cannot rely on the used_mask maintained by the CCI_PMU
- * as it only tells us if the counter is assigned to perf_event or not.
- * The state of the perf_event cannot be locked by the PMU layer, hence
- * we check the individual counter status (which can be locked by
- * cci_pm->hw_events->pmu_lock).
- *
- * @mask should be initialised to empty by the caller.
- */
-static void __maybe_unused
-pmu_save_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
-{
-	int i;
-
-	for (i = 0; i < cci_pmu->num_cntrs; i++) {
-		if (pmu_counter_is_enabled(cci_pmu, i)) {
-			set_bit(i, mask);
-			pmu_disable_counter(cci_pmu, i);
-		}
-	}
-}
-
-/*
- * Restore the status of the counters. Reversal of the pmu_save_counters().
- * For each counter set in the mask, enable the counter back.
- */
-static void __maybe_unused
-pmu_restore_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
-{
-	int i;
-
-	for_each_set_bit(i, mask, cci_pmu->num_cntrs)
-		pmu_enable_counter(cci_pmu, i);
-}
-
-/*
- * Returns the number of programmable counters actually implemented
- * by the cci
- */
-static u32 pmu_get_max_counters(void)
-{
-	return (readl_relaxed(cci_ctrl_base + CCI_PMCR) &
-		CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT;
-}
-
-static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *event)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	unsigned long cci_event = event->hw.config_base;
-	int idx;
-
-	if (cci_pmu->model->get_event_idx)
-		return cci_pmu->model->get_event_idx(cci_pmu, hw, cci_event);
-
-	/* Generic code to find an unused idx from the mask */
-	for(idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++)
-		if (!test_and_set_bit(idx, hw->used_mask))
-			return idx;
-
-	/* No counters available */
-	return -EAGAIN;
-}
-
-static int pmu_map_event(struct perf_event *event)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-
-	if (event->attr.type < PERF_TYPE_MAX ||
-			!cci_pmu->model->validate_hw_event)
-		return -ENOENT;
-
-	return	cci_pmu->model->validate_hw_event(cci_pmu, event->attr.config);
-}
-
-static int pmu_request_irq(struct cci_pmu *cci_pmu, irq_handler_t handler)
-{
-	int i;
-	struct platform_device *pmu_device = cci_pmu->plat_device;
-
-	if (unlikely(!pmu_device))
-		return -ENODEV;
-
-	if (cci_pmu->nr_irqs < 1) {
-		dev_err(&pmu_device->dev, "no irqs for CCI PMUs defined\n");
-		return -ENODEV;
-	}
-
-	/*
-	 * Register all available CCI PMU interrupts. In the interrupt handler
-	 * we iterate over the counters checking for interrupt source (the
-	 * overflowing counter) and clear it.
-	 *
-	 * This should allow handling of non-unique interrupt for the counters.
-	 */
-	for (i = 0; i < cci_pmu->nr_irqs; i++) {
-		int err = request_irq(cci_pmu->irqs[i], handler, IRQF_SHARED,
-				"arm-cci-pmu", cci_pmu);
-		if (err) {
-			dev_err(&pmu_device->dev, "unable to request IRQ%d for ARM CCI PMU counters\n",
-				cci_pmu->irqs[i]);
-			return err;
-		}
-
-		set_bit(i, &cci_pmu->active_irqs);
-	}
-
-	return 0;
-}
-
-static void pmu_free_irq(struct cci_pmu *cci_pmu)
-{
-	int i;
-
-	for (i = 0; i < cci_pmu->nr_irqs; i++) {
-		if (!test_and_clear_bit(i, &cci_pmu->active_irqs))
-			continue;
-
-		free_irq(cci_pmu->irqs[i], cci_pmu);
-	}
-}
-
-static u32 pmu_read_counter(struct perf_event *event)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	struct hw_perf_event *hw_counter = &event->hw;
-	int idx = hw_counter->idx;
-	u32 value;
-
-	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
-		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
-		return 0;
-	}
-	value = pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR);
-
-	return value;
-}
-
-static void pmu_write_counter(struct cci_pmu *cci_pmu, u32 value, int idx)
-{
-	pmu_write_register(cci_pmu, value, idx, CCI_PMU_CNTR);
-}
-
-static void __pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
-{
-	int i;
-	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;
-
-	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
-		struct perf_event *event = cci_hw->events[i];
-
-		if (WARN_ON(!event))
-			continue;
-		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
-	}
-}
-
-static void pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
-{
-	if (cci_pmu->model->write_counters)
-		cci_pmu->model->write_counters(cci_pmu, mask);
-	else
-		__pmu_write_counters(cci_pmu, mask);
-}
-
-#ifdef CONFIG_ARM_CCI5xx_PMU
-
-/*
- * CCI-500/CCI-550 has advanced power saving policies, which could gate the
- * clocks to the PMU counters, which makes the writes to them ineffective.
- * The only way to write to those counters is when the global counters
- * are enabled and the particular counter is enabled.
- *
- * So we do the following :
- *
- * 1) Disable all the PMU counters, saving their current state
- * 2) Enable the global PMU profiling, now that all counters are
- *    disabled.
- *
- * For each counter to be programmed, repeat steps 3-7:
- *
- * 3) Write an invalid event code to the event control register for the
-      counter, so that the counters are not modified.
- * 4) Enable the counter control for the counter.
- * 5) Set the counter value
- * 6) Disable the counter
- * 7) Restore the event in the target counter
- *
- * 8) Disable the global PMU.
- * 9) Restore the status of the rest of the counters.
- *
- * We choose an event which for CCI-5xx is guaranteed not to count.
- * We use the highest possible event code (0x1f) for the master interface 0.
- */
-#define CCI5xx_INVALID_EVENT	((CCI5xx_PORT_M0 << CCI5xx_PMU_EVENT_SOURCE_SHIFT) | \
-				 (CCI5xx_PMU_EVENT_CODE_MASK << CCI5xx_PMU_EVENT_CODE_SHIFT))
-static void cci5xx_pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
-{
-	int i;
-	DECLARE_BITMAP(saved_mask, cci_pmu->num_cntrs);
-
-	bitmap_zero(saved_mask, cci_pmu->num_cntrs);
-	pmu_save_counters(cci_pmu, saved_mask);
-
-	/*
-	 * Now that all the counters are disabled, we can safely turn the PMU on,
-	 * without syncing the status of the counters
-	 */
-	__cci_pmu_enable_nosync(cci_pmu);
-
-	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
-		struct perf_event *event = cci_pmu->hw_events.events[i];
-
-		if (WARN_ON(!event))
-			continue;
-
-		pmu_set_event(cci_pmu, i, CCI5xx_INVALID_EVENT);
-		pmu_enable_counter(cci_pmu, i);
-		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
-		pmu_disable_counter(cci_pmu, i);
-		pmu_set_event(cci_pmu, i, event->hw.config_base);
-	}
-
-	__cci_pmu_disable();
-
-	pmu_restore_counters(cci_pmu, saved_mask);
-}
-
-#endif	/* CONFIG_ARM_CCI5xx_PMU */
-
-static u64 pmu_event_update(struct perf_event *event)
-{
-	struct hw_perf_event *hwc = &event->hw;
-	u64 delta, prev_raw_count, new_raw_count;
-
-	do {
-		prev_raw_count = local64_read(&hwc->prev_count);
-		new_raw_count = pmu_read_counter(event);
-	} while (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
-		 new_raw_count) != prev_raw_count);
-
-	delta = (new_raw_count - prev_raw_count) & CCI_PMU_CNTR_MASK;
-
-	local64_add(delta, &event->count);
-
-	return new_raw_count;
-}
-
-static void pmu_read(struct perf_event *event)
-{
-	pmu_event_update(event);
-}
-
-static void pmu_event_set_period(struct perf_event *event)
-{
-	struct hw_perf_event *hwc = &event->hw;
-	/*
-	 * The CCI PMU counters have a period of 2^32. To account for the
-	 * possiblity of extreme interrupt latency we program for a period of
-	 * half that. Hopefully we can handle the interrupt before another 2^31
-	 * events occur and the counter overtakes its previous value.
-	 */
-	u64 val = 1ULL << 31;
-	local64_set(&hwc->prev_count, val);
-
-	/*
-	 * CCI PMU uses PERF_HES_ARCH to keep track of the counters, whose
-	 * values needs to be sync-ed with the s/w state before the PMU is
-	 * enabled.
-	 * Mark this counter for sync.
-	 */
-	hwc->state |= PERF_HES_ARCH;
-}
-
-static irqreturn_t pmu_handle_irq(int irq_num, void *dev)
-{
-	unsigned long flags;
-	struct cci_pmu *cci_pmu = dev;
-	struct cci_pmu_hw_events *events = &cci_pmu->hw_events;
-	int idx, handled = IRQ_NONE;
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-
-	/* Disable the PMU while we walk through the counters */
-	__cci_pmu_disable();
-	/*
-	 * Iterate over counters and update the corresponding perf events.
-	 * This should work regardless of whether we have per-counter overflow
-	 * interrupt or a combined overflow interrupt.
-	 */
-	for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) {
-		struct perf_event *event = events->events[idx];
-
-		if (!event)
-			continue;
-
-		/* Did this counter overflow? */
-		if (!(pmu_read_register(cci_pmu, idx, CCI_PMU_OVRFLW) &
-		      CCI_PMU_OVRFLW_FLAG))
-			continue;
-
-		pmu_write_register(cci_pmu, CCI_PMU_OVRFLW_FLAG, idx,
-							CCI_PMU_OVRFLW);
-
-		pmu_event_update(event);
-		pmu_event_set_period(event);
-		handled = IRQ_HANDLED;
-	}
-
-	/* Enable the PMU and sync possibly overflowed counters */
-	__cci_pmu_enable_sync(cci_pmu);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-
-	return IRQ_RETVAL(handled);
-}
-
-static int cci_pmu_get_hw(struct cci_pmu *cci_pmu)
-{
-	int ret = pmu_request_irq(cci_pmu, pmu_handle_irq);
-	if (ret) {
-		pmu_free_irq(cci_pmu);
-		return ret;
-	}
-	return 0;
-}
-
-static void cci_pmu_put_hw(struct cci_pmu *cci_pmu)
-{
-	pmu_free_irq(cci_pmu);
-}
-
-static void hw_perf_event_destroy(struct perf_event *event)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	atomic_t *active_events = &cci_pmu->active_events;
-	struct mutex *reserve_mutex = &cci_pmu->reserve_mutex;
-
-	if (atomic_dec_and_mutex_lock(active_events, reserve_mutex)) {
-		cci_pmu_put_hw(cci_pmu);
-		mutex_unlock(reserve_mutex);
-	}
-}
-
-static void cci_pmu_enable(struct pmu *pmu)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
-	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
-	int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_cntrs);
-	unsigned long flags;
-
-	if (!enabled)
-		return;
-
-	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
-	__cci_pmu_enable_sync(cci_pmu);
-	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
-
-}
-
-static void cci_pmu_disable(struct pmu *pmu)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
-	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
-	__cci_pmu_disable();
-	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
-}
-
-/*
- * Check if the idx represents a non-programmable counter.
- * All the fixed event counters are mapped before the programmable
- * counters.
- */
-static bool pmu_fixed_hw_idx(struct cci_pmu *cci_pmu, int idx)
-{
-	return (idx >= 0) && (idx < cci_pmu->model->fixed_hw_cntrs);
-}
-
-static void cci_pmu_start(struct perf_event *event, int pmu_flags)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
-	struct hw_perf_event *hwc = &event->hw;
-	int idx = hwc->idx;
-	unsigned long flags;
-
-	/*
-	 * To handle interrupt latency, we always reprogram the period
-	 * regardlesss of PERF_EF_RELOAD.
-	 */
-	if (pmu_flags & PERF_EF_RELOAD)
-		WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));
-
-	hwc->state = 0;
-
-	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
-		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
-		return;
-	}
-
-	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
-
-	/* Configure the counter unless you are counting a fixed event */
-	if (!pmu_fixed_hw_idx(cci_pmu, idx))
-		pmu_set_event(cci_pmu, idx, hwc->config_base);
-
-	pmu_event_set_period(event);
-	pmu_enable_counter(cci_pmu, idx);
-
-	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
-}
-
-static void cci_pmu_stop(struct perf_event *event, int pmu_flags)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	struct hw_perf_event *hwc = &event->hw;
-	int idx = hwc->idx;
-
-	if (hwc->state & PERF_HES_STOPPED)
-		return;
-
-	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
-		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
-		return;
-	}
-
-	/*
-	 * We always reprogram the counter, so ignore PERF_EF_UPDATE. See
-	 * cci_pmu_start()
-	 */
-	pmu_disable_counter(cci_pmu, idx);
-	pmu_event_update(event);
-	hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
-}
-
-static int cci_pmu_add(struct perf_event *event, int flags)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
-	struct hw_perf_event *hwc = &event->hw;
-	int idx;
-	int err = 0;
-
-	perf_pmu_disable(event->pmu);
-
-	/* If we don't have a space for the counter then finish early. */
-	idx = pmu_get_event_idx(hw_events, event);
-	if (idx < 0) {
-		err = idx;
-		goto out;
-	}
-
-	event->hw.idx = idx;
-	hw_events->events[idx] = event;
-
-	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
-	if (flags & PERF_EF_START)
-		cci_pmu_start(event, PERF_EF_RELOAD);
-
-	/* Propagate our changes to the userspace mapping. */
-	perf_event_update_userpage(event);
-
-out:
-	perf_pmu_enable(event->pmu);
-	return err;
-}
-
-static void cci_pmu_del(struct perf_event *event, int flags)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
-	struct hw_perf_event *hwc = &event->hw;
-	int idx = hwc->idx;
-
-	cci_pmu_stop(event, PERF_EF_UPDATE);
-	hw_events->events[idx] = NULL;
-	clear_bit(idx, hw_events->used_mask);
-
-	perf_event_update_userpage(event);
-}
-
-static int
-validate_event(struct pmu *cci_pmu,
-               struct cci_pmu_hw_events *hw_events,
-               struct perf_event *event)
-{
-	if (is_software_event(event))
-		return 1;
-
-	/*
-	 * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The
-	 * core perf code won't check that the pmu->ctx == leader->ctx
-	 * until after pmu->event_init(event).
-	 */
-	if (event->pmu != cci_pmu)
-		return 0;
-
-	if (event->state < PERF_EVENT_STATE_OFF)
-		return 1;
-
-	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
-		return 1;
-
-	return pmu_get_event_idx(hw_events, event) >= 0;
-}
-
-static int
-validate_group(struct perf_event *event)
-{
-	struct perf_event *sibling, *leader = event->group_leader;
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)];
-	struct cci_pmu_hw_events fake_pmu = {
-		/*
-		 * Initialise the fake PMU. We only need to populate the
-		 * used_mask for the purposes of validation.
-		 */
-		.used_mask = mask,
-	};
-	memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long));
-
-	if (!validate_event(event->pmu, &fake_pmu, leader))
-		return -EINVAL;
-
-	for_each_sibling_event(sibling, leader) {
-		if (!validate_event(event->pmu, &fake_pmu, sibling))
-			return -EINVAL;
-	}
-
-	if (!validate_event(event->pmu, &fake_pmu, event))
-		return -EINVAL;
-
-	return 0;
-}
-
-static int
-__hw_perf_event_init(struct perf_event *event)
-{
-	struct hw_perf_event *hwc = &event->hw;
-	int mapping;
-
-	mapping = pmu_map_event(event);
-
-	if (mapping < 0) {
-		pr_debug("event %x:%llx not supported\n", event->attr.type,
-			 event->attr.config);
-		return mapping;
-	}
-
-	/*
-	 * We don't assign an index until we actually place the event onto
-	 * hardware. Use -1 to signify that we haven't decided where to put it
-	 * yet.
-	 */
-	hwc->idx		= -1;
-	hwc->config_base	= 0;
-	hwc->config		= 0;
-	hwc->event_base		= 0;
-
-	/*
-	 * Store the event encoding into the config_base field.
-	 */
-	hwc->config_base	    |= (unsigned long)mapping;
-
-	/*
-	 * Limit the sample_period to half of the counter width. That way, the
-	 * new counter value is far less likely to overtake the previous one
-	 * unless you have some serious IRQ latency issues.
-	 */
-	hwc->sample_period  = CCI_PMU_CNTR_MASK >> 1;
-	hwc->last_period    = hwc->sample_period;
-	local64_set(&hwc->period_left, hwc->sample_period);
-
-	if (event->group_leader != event) {
-		if (validate_group(event) != 0)
-			return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int cci_pmu_event_init(struct perf_event *event)
-{
-	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
-	atomic_t *active_events = &cci_pmu->active_events;
-	int err = 0;
-	int cpu;
-
-	if (event->attr.type != event->pmu->type)
-		return -ENOENT;
-
-	/* Shared by all CPUs, no meaningful state to sample */
-	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
-		return -EOPNOTSUPP;
-
-	/* We have no filtering of any kind */
-	if (event->attr.exclude_user	||
-	    event->attr.exclude_kernel	||
-	    event->attr.exclude_hv	||
-	    event->attr.exclude_idle	||
-	    event->attr.exclude_host	||
-	    event->attr.exclude_guest)
-		return -EINVAL;
-
-	/*
-	 * Following the example set by other "uncore" PMUs, we accept any CPU
-	 * and rewrite its affinity dynamically rather than having perf core
-	 * handle cpu == -1 and pid == -1 for this case.
-	 *
-	 * The perf core will pin online CPUs for the duration of this call and
-	 * the event being installed into its context, so the PMU's CPU can't
-	 * change under our feet.
-	 */
-	cpu = cpumask_first(&cci_pmu->cpus);
-	if (event->cpu < 0 || cpu < 0)
-		return -EINVAL;
-	event->cpu = cpu;
-
-	event->destroy = hw_perf_event_destroy;
-	if (!atomic_inc_not_zero(active_events)) {
-		mutex_lock(&cci_pmu->reserve_mutex);
-		if (atomic_read(active_events) == 0)
-			err = cci_pmu_get_hw(cci_pmu);
-		if (!err)
-			atomic_inc(active_events);
-		mutex_unlock(&cci_pmu->reserve_mutex);
-	}
-	if (err)
-		return err;
-
-	err = __hw_perf_event_init(event);
-	if (err)
-		hw_perf_event_destroy(event);
-
-	return err;
-}
-
-static ssize_t pmu_cpumask_attr_show(struct device *dev,
-				     struct device_attribute *attr, char *buf)
-{
-	struct pmu *pmu = dev_get_drvdata(dev);
-	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
-
-	int n = scnprintf(buf, PAGE_SIZE - 1, "%*pbl",
-			  cpumask_pr_args(&cci_pmu->cpus));
-	buf[n++] = '\n';
-	buf[n] = '\0';
-	return n;
-}
-
-static struct device_attribute pmu_cpumask_attr =
-	__ATTR(cpumask, S_IRUGO, pmu_cpumask_attr_show, NULL);
-
-static struct attribute *pmu_attrs[] = {
-	&pmu_cpumask_attr.attr,
-	NULL,
-};
-
-static struct attribute_group pmu_attr_group = {
-	.attrs = pmu_attrs,
-};
-
-static struct attribute_group pmu_format_attr_group = {
-	.name = "format",
-	.attrs = NULL,		/* Filled in cci_pmu_init_attrs */
-};
-
-static struct attribute_group pmu_event_attr_group = {
-	.name = "events",
-	.attrs = NULL,		/* Filled in cci_pmu_init_attrs */
-};
-
-static const struct attribute_group *pmu_attr_groups[] = {
-	&pmu_attr_group,
-	&pmu_format_attr_group,
-	&pmu_event_attr_group,
-	NULL
-};
-
-static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev)
-{
-	const struct cci_pmu_model *model = cci_pmu->model;
-	char *name = model->name;
-	u32 num_cntrs;
-
-	pmu_event_attr_group.attrs = model->event_attrs;
-	pmu_format_attr_group.attrs = model->format_attrs;
-
-	cci_pmu->pmu = (struct pmu) {
-		.name		= cci_pmu->model->name,
-		.task_ctx_nr	= perf_invalid_context,
-		.pmu_enable	= cci_pmu_enable,
-		.pmu_disable	= cci_pmu_disable,
-		.event_init	= cci_pmu_event_init,
-		.add		= cci_pmu_add,
-		.del		= cci_pmu_del,
-		.start		= cci_pmu_start,
-		.stop		= cci_pmu_stop,
-		.read		= pmu_read,
-		.attr_groups	= pmu_attr_groups,
-	};
-
-	cci_pmu->plat_device = pdev;
-	num_cntrs = pmu_get_max_counters();
-	if (num_cntrs > cci_pmu->model->num_hw_cntrs) {
-		dev_warn(&pdev->dev,
-			"PMU implements more counters(%d) than supported by"
-			" the model(%d), truncated.",
-			num_cntrs, cci_pmu->model->num_hw_cntrs);
-		num_cntrs = cci_pmu->model->num_hw_cntrs;
-	}
-	cci_pmu->num_cntrs = num_cntrs + cci_pmu->model->fixed_hw_cntrs;
-
-	return perf_pmu_register(&cci_pmu->pmu, name, -1);
-}
-
-static int cci_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
-{
-	struct cci_pmu *cci_pmu = hlist_entry_safe(node, struct cci_pmu, node);
-	unsigned int target;
-
-	if (!cpumask_test_and_clear_cpu(cpu, &cci_pmu->cpus))
-		return 0;
-	target = cpumask_any_but(cpu_online_mask, cpu);
-	if (target >= nr_cpu_ids)
-		return 0;
-	/*
-	 * TODO: migrate context once core races on event->ctx have
-	 * been fixed.
-	 */
-	cpumask_set_cpu(target, &cci_pmu->cpus);
-	return 0;
-}
-
-static struct cci_pmu_model cci_pmu_models[] = {
-#ifdef CONFIG_ARM_CCI400_PMU
-	[CCI400_R0] = {
-		.name = "CCI_400",
-		.fixed_hw_cntrs = 1,	/* Cycle counter */
-		.num_hw_cntrs = 4,
-		.cntr_size = SZ_4K,
-		.format_attrs = cci400_pmu_format_attrs,
-		.event_attrs = cci400_r0_pmu_event_attrs,
-		.event_ranges = {
-			[CCI_IF_SLAVE] = {
-				CCI400_R0_SLAVE_PORT_MIN_EV,
-				CCI400_R0_SLAVE_PORT_MAX_EV,
-			},
-			[CCI_IF_MASTER] = {
-				CCI400_R0_MASTER_PORT_MIN_EV,
-				CCI400_R0_MASTER_PORT_MAX_EV,
-			},
-		},
-		.validate_hw_event = cci400_validate_hw_event,
-		.get_event_idx = cci400_get_event_idx,
-	},
-	[CCI400_R1] = {
-		.name = "CCI_400_r1",
-		.fixed_hw_cntrs = 1,	/* Cycle counter */
-		.num_hw_cntrs = 4,
-		.cntr_size = SZ_4K,
-		.format_attrs = cci400_pmu_format_attrs,
-		.event_attrs = cci400_r1_pmu_event_attrs,
-		.event_ranges = {
-			[CCI_IF_SLAVE] = {
-				CCI400_R1_SLAVE_PORT_MIN_EV,
-				CCI400_R1_SLAVE_PORT_MAX_EV,
-			},
-			[CCI_IF_MASTER] = {
-				CCI400_R1_MASTER_PORT_MIN_EV,
-				CCI400_R1_MASTER_PORT_MAX_EV,
-			},
-		},
-		.validate_hw_event = cci400_validate_hw_event,
-		.get_event_idx = cci400_get_event_idx,
-	},
-#endif
-#ifdef CONFIG_ARM_CCI5xx_PMU
-	[CCI500_R0] = {
-		.name = "CCI_500",
-		.fixed_hw_cntrs = 0,
-		.num_hw_cntrs = 8,
-		.cntr_size = SZ_64K,
-		.format_attrs = cci5xx_pmu_format_attrs,
-		.event_attrs = cci5xx_pmu_event_attrs,
-		.event_ranges = {
-			[CCI_IF_SLAVE] = {
-				CCI5xx_SLAVE_PORT_MIN_EV,
-				CCI5xx_SLAVE_PORT_MAX_EV,
-			},
-			[CCI_IF_MASTER] = {
-				CCI5xx_MASTER_PORT_MIN_EV,
-				CCI5xx_MASTER_PORT_MAX_EV,
-			},
-			[CCI_IF_GLOBAL] = {
-				CCI5xx_GLOBAL_PORT_MIN_EV,
-				CCI5xx_GLOBAL_PORT_MAX_EV,
-			},
-		},
-		.validate_hw_event = cci500_validate_hw_event,
-		.write_counters	= cci5xx_pmu_write_counters,
-	},
-	[CCI550_R0] = {
-		.name = "CCI_550",
-		.fixed_hw_cntrs = 0,
-		.num_hw_cntrs = 8,
-		.cntr_size = SZ_64K,
-		.format_attrs = cci5xx_pmu_format_attrs,
-		.event_attrs = cci5xx_pmu_event_attrs,
-		.event_ranges = {
-			[CCI_IF_SLAVE] = {
-				CCI5xx_SLAVE_PORT_MIN_EV,
-				CCI5xx_SLAVE_PORT_MAX_EV,
-			},
-			[CCI_IF_MASTER] = {
-				CCI5xx_MASTER_PORT_MIN_EV,
-				CCI5xx_MASTER_PORT_MAX_EV,
-			},
-			[CCI_IF_GLOBAL] = {
-				CCI5xx_GLOBAL_PORT_MIN_EV,
-				CCI5xx_GLOBAL_PORT_MAX_EV,
-			},
-		},
-		.validate_hw_event = cci550_validate_hw_event,
-		.write_counters	= cci5xx_pmu_write_counters,
-	},
-#endif
-};
-
-static const struct of_device_id arm_cci_pmu_matches[] = {
-#ifdef CONFIG_ARM_CCI400_PMU
-	{
-		.compatible = "arm,cci-400-pmu",
-		.data	= NULL,
-	},
-	{
-		.compatible = "arm,cci-400-pmu,r0",
-		.data	= &cci_pmu_models[CCI400_R0],
-	},
-	{
-		.compatible = "arm,cci-400-pmu,r1",
-		.data	= &cci_pmu_models[CCI400_R1],
-	},
-#endif
-#ifdef CONFIG_ARM_CCI5xx_PMU
-	{
-		.compatible = "arm,cci-500-pmu,r0",
-		.data = &cci_pmu_models[CCI500_R0],
-	},
-	{
-		.compatible = "arm,cci-550-pmu,r0",
-		.data = &cci_pmu_models[CCI550_R0],
-	},
-#endif
-	{},
+static const struct of_dev_auxdata arm_cci_auxdata[] = {
+	OF_DEV_AUXDATA("arm,cci-400-pmu", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-400-pmu,r0", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-400-pmu,r1", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-500-pmu,r0", 0, NULL, &cci_ctrl_base),
+	OF_DEV_AUXDATA("arm,cci-550-pmu,r0", 0, NULL, &cci_ctrl_base),
+	{}
};

-static inline const struct cci_pmu_model *get_cci_model(struct platform_device *pdev)
-{
-	const struct of_device_id *match = of_match_node(arm_cci_pmu_matches,
-							pdev->dev.of_node);
-	if (!match)
-		return NULL;
-	if (match->data)
-		return match->data;
-
-	dev_warn(&pdev->dev, "DEPRECATED compatible property,"
-			 "requires secure access to CCI registers");
-	return probe_cci_model(pdev);
-}
-
-static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs)
-{
-	int i;
-
-	for (i = 0; i < nr_irqs; i++)
-		if (irq == irqs[i])
-			return true;
-
-	return false;
-}
-
-static struct cci_pmu *cci_pmu_alloc(struct platform_device *pdev)
-{
-	struct cci_pmu *cci_pmu;
-	const struct cci_pmu_model *model;
-
-	/*
-	 * All allocations are devm_* hence we don't have to free
-	 * them explicitly on an error, as it would end up in driver
-	 * detach.
-	 */
-	model = get_cci_model(pdev);
-	if (!model) {
-		dev_warn(&pdev->dev, "CCI PMU version not supported\n");
-		return ERR_PTR(-ENODEV);
-	}
-
-	cci_pmu = devm_kzalloc(&pdev->dev, sizeof(*cci_pmu), GFP_KERNEL);
-	if (!cci_pmu)
-		return ERR_PTR(-ENOMEM);
-
-	cci_pmu->model = model;
-	cci_pmu->irqs = devm_kcalloc(&pdev->dev, CCI_PMU_MAX_HW_CNTRS(model),
-					sizeof(*cci_pmu->irqs), GFP_KERNEL);
-	if (!cci_pmu->irqs)
-		return ERR_PTR(-ENOMEM);
-	cci_pmu->hw_events.events = devm_kcalloc(&pdev->dev,
-					     CCI_PMU_MAX_HW_CNTRS(model),
-					     sizeof(*cci_pmu->hw_events.events),
-					     GFP_KERNEL);
-	if (!cci_pmu->hw_events.events)
-		return ERR_PTR(-ENOMEM);
-	cci_pmu->hw_events.used_mask = devm_kcalloc(&pdev->dev,
-						BITS_TO_LONGS(CCI_PMU_MAX_HW_CNTRS(model)),
-						sizeof(*cci_pmu->hw_events.used_mask),
-						GFP_KERNEL);
-	if (!cci_pmu->hw_events.used_mask)
-		return ERR_PTR(-ENOMEM);
-
-	return cci_pmu;
-}
-
-
-static int cci_pmu_probe(struct platform_device *pdev)
-{
-	struct resource *res;
-	struct cci_pmu *cci_pmu;
-	int i, ret, irq;
-
-	cci_pmu = cci_pmu_alloc(pdev);
-	if (IS_ERR(cci_pmu))
-		return PTR_ERR(cci_pmu);
-
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	cci_pmu->base = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(cci_pmu->base))
-		return -ENOMEM;
-
-	/*
-	 * CCI PMU has one overflow interrupt per counter; but some may be tied
-	 * together to a common interrupt.
-	 */
-	cci_pmu->nr_irqs = 0;
-	for (i = 0; i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model); i++) {
-		irq = platform_get_irq(pdev, i);
-		if (irq < 0)
-			break;
-
-		if (is_duplicate_irq(irq, cci_pmu->irqs, cci_pmu->nr_irqs))
-			continue;
-
-		cci_pmu->irqs[cci_pmu->nr_irqs++] = irq;
-	}
-
-	/*
-	 * Ensure that the device tree has as many interrupts as the number
-	 * of counters.
-	 */
-	if (i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)) {
-		dev_warn(&pdev->dev, "Incorrect number of interrupts: %d, should be %d\n",
-			i, CCI_PMU_MAX_HW_CNTRS(cci_pmu->model));
-		return -EINVAL;
-	}
-
-	raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock);
-	mutex_init(&cci_pmu->reserve_mutex);
-	atomic_set(&cci_pmu->active_events, 0);
-	cpumask_set_cpu(get_cpu(), &cci_pmu->cpus);
-
-	ret = cci_pmu_init(cci_pmu, pdev);
-	if (ret) {
-		put_cpu();
-		return ret;
-	}
-
-	cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE,
-					 &cci_pmu->node);
-	put_cpu();
-	pr_info("ARM %s PMU driver probed", cci_pmu->model->name);
-	return 0;
-}
+#define DRIVER_NAME		"ARM-CCI"
 
 
static int cci_platform_probe(struct platform_device *pdev)
{
	if (!cci_probed())
		return -ENODEV;

-	return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
+	return of_platform_populate(pdev->dev.of_node, NULL,
+				    arm_cci_auxdata, &pdev->dev);
}

-static struct platform_driver cci_pmu_driver = {
-	.driver = {
-		   .name = DRIVER_NAME_PMU,
-		   .of_match_table = arm_cci_pmu_matches,
-		  },
-	.probe = cci_pmu_probe,
-};
-
static struct platform_driver cci_platform_driver = {
	.driver = {
		   .name = DRIVER_NAME,
@@ -1796,30 +85,9 @@ static struct platform_driver cci_platform_driver = {
 
 
static int __init cci_platform_init(void)
{
-	int ret;
-
-	ret = cpuhp_setup_state_multi(CPUHP_AP_PERF_ARM_CCI_ONLINE,
-				      "perf/arm/cci:online", NULL,
-				      cci_pmu_offline_cpu);
-	if (ret)
-		return ret;
-
-	ret = platform_driver_register(&cci_pmu_driver);
-	if (ret)
-		return ret;
-
	return platform_driver_register(&cci_platform_driver);
}

-#else /* !CONFIG_ARM_CCI_PMU */
-
-static int __init cci_platform_init(void)
-{
-	return 0;
-}
-
-#endif /* CONFIG_ARM_CCI_PMU */
-
#ifdef CONFIG_ARM_CCI400_PORT_CTRL

#define CCI_PORT_CTRL		0x0
@@ -2189,13 +457,10 @@ static int cci_probe_ports(struct device_node *np)
	if (!ports)
		return -ENOMEM;

-	for_each_child_of_node(np, cp) {
+	for_each_available_child_of_node(np, cp) {
		if (!of_match_node(arm_cci_ctrl_if_matches, cp))
			continue;

-		if (!of_device_is_available(cp))
-			continue;
-
		i = nb_ace + nb_ace_lite;

		if (i >= nb_cci_ports)
@@ -2275,7 +540,7 @@ static int cci_probe(void)
	struct resource res;

	np = of_find_matching_node(NULL, arm_cci_matches);
-	if(!np || !of_device_is_available(np))
+	if (!of_device_is_available(np))
		return -ENODEV;

	ret = of_address_to_resource(np, 0, &res);
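With the PMU driver moved to drivers/perf, the remaining bus driver only populates the child devices, passing a pointer to cci_ctrl_base through the arm_cci_auxdata table above as platform_data. A minimal sketch of how the relocated PMU driver can retrieve it (the actual drivers/perf/arm-cci.c probe is not part of this hunk, so the function name here is illustrative):

#include <linux/device.h>
#include <linux/platform_device.h>

static int cci_pmu_probe_sketch(struct platform_device *pdev)
{
	/* platform_data carries &cci_ctrl_base, per arm_cci_auxdata above */
	void __iomem **ctrl_base = dev_get_platdata(&pdev->dev);

	if (!ctrl_base || !*ctrl_base)
		return -ENODEV;

	/* ... probe the PMU, using *ctrl_base to access CCI registers ... */
	return 0;
}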

+ 10 - 0
drivers/clk/Kconfig

@@ -62,6 +62,16 @@ config COMMON_CLK_HI655X
	  multi-function device has one fixed-rate oscillator, clocked
	  at 32KHz.

+config COMMON_CLK_SCMI
+	tristate "Clock driver controlled via SCMI interface"
+	depends on ARM_SCMI_PROTOCOL || COMPILE_TEST
+	  ---help---
+	  This driver provides support for clocks that are controlled
+	  by firmware that implements the SCMI interface.
+
+	  This driver uses SCMI Message Protocol to interact with the
+	  firmware providing all the clock controls.
+
config COMMON_CLK_SCPI
	tristate "Clock driver controlled via SCPI interface"
	depends on ARM_SCPI_PROTOCOL || COMPILE_TEST

+ 1 - 0
drivers/clk/Makefile

@@ -41,6 +41,7 @@ obj-$(CONFIG_CLK_QORIQ)			+= clk-qoriq.o
obj-$(CONFIG_COMMON_CLK_RK808)		+= clk-rk808.o
obj-$(CONFIG_COMMON_CLK_HI655X)		+= clk-hi655x.o
obj-$(CONFIG_COMMON_CLK_S2MPS11)	+= clk-s2mps11.o
+obj-$(CONFIG_COMMON_CLK_SCMI)           += clk-scmi.o
obj-$(CONFIG_COMMON_CLK_SCPI)           += clk-scpi.o
obj-$(CONFIG_COMMON_CLK_SI5351)		+= clk-si5351.o
obj-$(CONFIG_COMMON_CLK_SI514)		+= clk-si514.o

+ 194 - 0
drivers/clk/clk-scmi.c

@@ -0,0 +1,194 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Power Interface (SCMI) Protocol based clock driver
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/scmi_protocol.h>
+#include <asm/div64.h>
+
+struct scmi_clk {
+	u32 id;
+	struct clk_hw hw;
+	const struct scmi_clock_info *info;
+	const struct scmi_handle *handle;
+};
+
+#define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw)
+
+static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
+					  unsigned long parent_rate)
+{
+	int ret;
+	u64 rate;
+	struct scmi_clk *clk = to_scmi_clk(hw);
+
+	ret = clk->handle->clk_ops->rate_get(clk->handle, clk->id, &rate);
+	if (ret)
+		return 0;
+	return rate;
+}
+
+static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+				unsigned long *parent_rate)
+{
+	u64 fmin, fmax, ftmp;
+	struct scmi_clk *clk = to_scmi_clk(hw);
+
+	/*
+	 * We can't figure out what rate it will be, so just return the
+	 * rate back to the caller. scmi_clk_recalc_rate() will be called
+	 * after the rate is set and we'll know what rate the clock is
+	 * running at then.
+	 */
+	if (clk->info->rate_discrete)
+		return rate;
+
+	fmin = clk->info->range.min_rate;
+	fmax = clk->info->range.max_rate;
+	if (rate <= fmin)
+		return fmin;
+	else if (rate >= fmax)
+		return fmax;
+
+	ftmp = rate - fmin;
+	ftmp += clk->info->range.step_size - 1; /* to round up */
+	/* do_div() leaves the quotient (the number of steps) in ftmp */
+	do_div(ftmp, clk->info->range.step_size);
+
+	return ftmp * clk->info->range.step_size + fmin;
+}
+
+static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+			     unsigned long parent_rate)
+{
+	struct scmi_clk *clk = to_scmi_clk(hw);
+
+	return clk->handle->clk_ops->rate_set(clk->handle, clk->id, 0, rate);
+}
+
+static int scmi_clk_enable(struct clk_hw *hw)
+{
+	struct scmi_clk *clk = to_scmi_clk(hw);
+
+	return clk->handle->clk_ops->enable(clk->handle, clk->id);
+}
+
+static void scmi_clk_disable(struct clk_hw *hw)
+{
+	struct scmi_clk *clk = to_scmi_clk(hw);
+
+	clk->handle->clk_ops->disable(clk->handle, clk->id);
+}
+
+static const struct clk_ops scmi_clk_ops = {
+	.recalc_rate = scmi_clk_recalc_rate,
+	.round_rate = scmi_clk_round_rate,
+	.set_rate = scmi_clk_set_rate,
+	/*
+	 * We can't provide enable/disable callbacks because SCMI calls can
+	 * sleep and must not be made from atomic context. Since the clock
+	 * framework offers the standard clk_prepare_enable() API for use in
+	 * non-atomic context, providing prepare/unprepare is sufficient.
+	 */
+	.prepare = scmi_clk_enable,
+	.unprepare = scmi_clk_disable,
+};
+
+static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk)
+{
+	int ret;
+	struct clk_init_data init = {
+		.flags = CLK_GET_RATE_NOCACHE,
+		.num_parents = 0,
+		.ops = &scmi_clk_ops,
+		.name = sclk->info->name,
+	};
+
+	sclk->hw.init = &init;
+	ret = devm_clk_hw_register(dev, &sclk->hw);
+	if (!ret)
+		clk_hw_set_rate_range(&sclk->hw, sclk->info->range.min_rate,
+				      sclk->info->range.max_rate);
+	return ret;
+}
+
+static int scmi_clocks_probe(struct scmi_device *sdev)
+{
+	int idx, count, err;
+	struct clk_hw **hws;
+	struct clk_hw_onecell_data *clk_data;
+	struct device *dev = &sdev->dev;
+	struct device_node *np = dev->of_node;
+	const struct scmi_handle *handle = sdev->handle;
+
+	if (!handle || !handle->clk_ops)
+		return -ENODEV;
+
+	count = handle->clk_ops->count_get(handle);
+	if (count < 0) {
+		dev_err(dev, "%s: invalid clock output count\n", np->name);
+		return -EINVAL;
+	}
+
+	clk_data = devm_kzalloc(dev, sizeof(*clk_data) +
+				sizeof(*clk_data->hws) * count, GFP_KERNEL);
+	if (!clk_data)
+		return -ENOMEM;
+
+	clk_data->num = count;
+	hws = clk_data->hws;
+
+	for (idx = 0; idx < count; idx++) {
+		struct scmi_clk *sclk;
+
+		sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL);
+		if (!sclk)
+			return -ENOMEM;
+
+		sclk->info = handle->clk_ops->info_get(handle, idx);
+		if (!sclk->info) {
+			dev_dbg(dev, "invalid clock info for idx %d\n", idx);
+			continue;
+		}
+
+		sclk->id = idx;
+		sclk->handle = handle;
+
+		err = scmi_clk_ops_init(dev, sclk);
+		if (err) {
+			dev_err(dev, "failed to register clock %d\n", idx);
+			devm_kfree(dev, sclk);
+			hws[idx] = NULL;
+		} else {
+			dev_dbg(dev, "Registered clock:%s\n", sclk->info->name);
+			hws[idx] = &sclk->hw;
+		}
+	}
+
+	return devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get,
+					   clk_data);
+}
+
+static const struct scmi_device_id scmi_id_table[] = {
+	{ SCMI_PROTOCOL_CLOCK },
+	{ },
+};
+MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+
+static struct scmi_driver scmi_clocks_driver = {
+	.name = "scmi-clocks",
+	.probe = scmi_clocks_probe,
+	.id_table = scmi_id_table,
+};
+module_scmi_driver(scmi_clocks_driver);
+
+MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+MODULE_DESCRIPTION("ARM SCMI clock driver");
+MODULE_LICENSE("GPL v2");

+ 12 - 0
drivers/cpufreq/Kconfig.arm

@@ -239,6 +239,18 @@ config ARM_SA1100_CPUFREQ
config ARM_SA1110_CPUFREQ
	bool

+config ARM_SCMI_CPUFREQ
+	tristate "SCMI based CPUfreq driver"
+	depends on ARM_SCMI_PROTOCOL || COMPILE_TEST
+	depends on !CPU_THERMAL || THERMAL
+	select PM_OPP
+	help
+	  This adds the CPUfreq driver support for ARM platforms using SCMI
+	  protocol for CPU power management.
+
+	  This driver uses SCMI Message Protocol driver to interact with the
+	  firmware providing the CPU DVFS functionality.
+
config ARM_SPEAR_CPUFREQ
	bool "SPEAr CPUFreq support"
	depends on PLAT_SPEAR

+ 1 - 0
drivers/cpufreq/Makefile

@@ -75,6 +75,7 @@ obj-$(CONFIG_ARM_S3C24XX_CPUFREQ_DEBUGFS) += s3c24xx-cpufreq-debugfs.o
obj-$(CONFIG_ARM_S5PV210_CPUFREQ)	+= s5pv210-cpufreq.o
obj-$(CONFIG_ARM_SA1100_CPUFREQ)	+= sa1100-cpufreq.o
obj-$(CONFIG_ARM_SA1110_CPUFREQ)	+= sa1110-cpufreq.o
+obj-$(CONFIG_ARM_SCMI_CPUFREQ)		+= scmi-cpufreq.o
obj-$(CONFIG_ARM_SCPI_CPUFREQ)		+= scpi-cpufreq.o
obj-$(CONFIG_ARM_SPEAR_CPUFREQ)		+= spear-cpufreq.o
obj-$(CONFIG_ARM_STI_CPUFREQ)		+= sti-cpufreq.o

+ 264 - 0
drivers/cpufreq/scmi-cpufreq.c

@@ -0,0 +1,264 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Power Interface (SCMI) based CPUFreq Interface driver
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ * Sudeep Holla <sudeep.holla@arm.com>
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/cpu.h>
+#include <linux/cpufreq.h>
+#include <linux/cpumask.h>
+#include <linux/cpu_cooling.h>
+#include <linux/export.h>
+#include <linux/module.h>
+#include <linux/pm_opp.h>
+#include <linux/slab.h>
+#include <linux/scmi_protocol.h>
+#include <linux/types.h>
+
+struct scmi_data {
+	int domain_id;
+	struct device *cpu_dev;
+	struct thermal_cooling_device *cdev;
+};
+
+static const struct scmi_handle *handle;
+
+static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
+{
+	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
+	struct scmi_perf_ops *perf_ops = handle->perf_ops;
+	struct scmi_data *priv = policy->driver_data;
+	unsigned long rate;
+	int ret;
+
+	ret = perf_ops->freq_get(handle, priv->domain_id, &rate, false);
+	if (ret)
+		return 0;
+	return rate / 1000;
+}
+
+/*
+ * perf_ops->freq_set is not synchronous: the actual OPP change happens
+ * asynchronously, and its completion can be notified if the corresponding
+ * events are subscribed to with the SCMI firmware.
+ */
+static int
+scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
+{
+	int ret;
+	struct scmi_data *priv = policy->driver_data;
+	struct scmi_perf_ops *perf_ops = handle->perf_ops;
+	u64 freq = policy->freq_table[index].frequency * 1000;
+
+	ret = perf_ops->freq_set(handle, priv->domain_id, freq, false);
+	if (!ret)
+		arch_set_freq_scale(policy->related_cpus, freq,
+				    policy->cpuinfo.max_freq);
+	return ret;
+}
+
+static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
+					     unsigned int target_freq)
+{
+	struct scmi_data *priv = policy->driver_data;
+	struct scmi_perf_ops *perf_ops = handle->perf_ops;
+
+	if (!perf_ops->freq_set(handle, priv->domain_id,
+				target_freq * 1000, true)) {
+		arch_set_freq_scale(policy->related_cpus, target_freq,
+				    policy->cpuinfo.max_freq);
+		return target_freq;
+	}
+
+	return 0;
+}
+
+static int
+scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
+{
+	int cpu, domain, tdomain;
+	struct device *tcpu_dev;
+
+	domain = handle->perf_ops->device_domain_id(cpu_dev);
+	if (domain < 0)
+		return domain;
+
+	for_each_possible_cpu(cpu) {
+		if (cpu == cpu_dev->id)
+			continue;
+
+		tcpu_dev = get_cpu_device(cpu);
+		if (!tcpu_dev)
+			continue;
+
+		tdomain = handle->perf_ops->device_domain_id(tcpu_dev);
+		if (tdomain == domain)
+			cpumask_set_cpu(cpu, cpumask);
+	}
+
+	return 0;
+}
+
+static int scmi_cpufreq_init(struct cpufreq_policy *policy)
+{
+	int ret;
+	unsigned int latency;
+	struct device *cpu_dev;
+	struct scmi_data *priv;
+	struct cpufreq_frequency_table *freq_table;
+
+	cpu_dev = get_cpu_device(policy->cpu);
+	if (!cpu_dev) {
+		pr_err("failed to get cpu%d device\n", policy->cpu);
+		return -ENODEV;
+	}
+
+	ret = handle->perf_ops->add_opps_to_device(handle, cpu_dev);
+	if (ret) {
+		dev_warn(cpu_dev, "failed to add opps to the device\n");
+		return ret;
+	}
+
+	ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus);
+	if (ret) {
+		dev_warn(cpu_dev, "failed to get sharing cpumask\n");
+		return ret;
+	}
+
+	ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
+	if (ret) {
+		dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
+			__func__, ret);
+		return ret;
+	}
+
+	ret = dev_pm_opp_get_opp_count(cpu_dev);
+	if (ret <= 0) {
+		dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n");
+		ret = -EPROBE_DEFER;
+		goto out_free_opp;
+	}
+
+	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+	if (!priv) {
+		ret = -ENOMEM;
+		goto out_free_opp;
+	}
+
+	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+	if (ret) {
+		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+		goto out_free_priv;
+	}
+
+	priv->cpu_dev = cpu_dev;
+	priv->domain_id = handle->perf_ops->device_domain_id(cpu_dev);
+
+	policy->driver_data = priv;
+
+	ret = cpufreq_table_validate_and_show(policy, freq_table);
+	if (ret) {
+		dev_err(cpu_dev, "%s: invalid frequency table: %d\n", __func__,
+			ret);
+		goto out_free_cpufreq_table;
+	}
+
+	/* SCMI allows DVFS request for any domain from any CPU */
+	policy->dvfs_possible_from_any_cpu = true;
+
+	latency = handle->perf_ops->get_transition_latency(handle, cpu_dev);
+	if (!latency)
+		latency = CPUFREQ_ETERNAL;
+
+	policy->cpuinfo.transition_latency = latency;
+
+	policy->fast_switch_possible = true;
+	return 0;
+
+out_free_cpufreq_table:
+	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+out_free_priv:
+	kfree(priv);
+out_free_opp:
+	dev_pm_opp_cpumask_remove_table(policy->cpus);
+
+	return ret;
+}
+
+static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
+{
+	struct scmi_data *priv = policy->driver_data;
+
+	cpufreq_cooling_unregister(priv->cdev);
+	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+	kfree(priv);
+	dev_pm_opp_cpumask_remove_table(policy->related_cpus);
+
+	return 0;
+}
+
+static void scmi_cpufreq_ready(struct cpufreq_policy *policy)
+{
+	struct scmi_data *priv = policy->driver_data;
+
+	priv->cdev = of_cpufreq_cooling_register(policy);
+}
+
+static struct cpufreq_driver scmi_cpufreq_driver = {
+	.name	= "scmi",
+	.flags	= CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
+		  CPUFREQ_NEED_INITIAL_FREQ_CHECK,
+	.verify	= cpufreq_generic_frequency_table_verify,
+	.attr	= cpufreq_generic_attr,
+	.target_index	= scmi_cpufreq_set_target,
+	.fast_switch	= scmi_cpufreq_fast_switch,
+	.get	= scmi_cpufreq_get_rate,
+	.init	= scmi_cpufreq_init,
+	.exit	= scmi_cpufreq_exit,
+	.ready	= scmi_cpufreq_ready,
+};
+
+static int scmi_cpufreq_probe(struct scmi_device *sdev)
+{
+	int ret;
+
+	handle = sdev->handle;
+
+	if (!handle || !handle->perf_ops)
+		return -ENODEV;
+
+	ret = cpufreq_register_driver(&scmi_cpufreq_driver);
+	if (ret) {
+		dev_err(&sdev->dev, "%s: registering cpufreq failed, err: %d\n",
+			__func__, ret);
+	}
+
+	return ret;
+}
+
+static void scmi_cpufreq_remove(struct scmi_device *sdev)
+{
+	cpufreq_unregister_driver(&scmi_cpufreq_driver);
+}
+
+static const struct scmi_device_id scmi_id_table[] = {
+	{ SCMI_PROTOCOL_PERF },
+	{ },
+};
+MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+
+static struct scmi_driver scmi_cpufreq_drv = {
+	.name		= "scmi-cpufreq",
+	.probe		= scmi_cpufreq_probe,
+	.remove		= scmi_cpufreq_remove,
+	.id_table	= scmi_id_table,
+};
+module_scmi_driver(scmi_cpufreq_drv);
+
+MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+MODULE_DESCRIPTION("ARM SCMI CPUFreq interface driver");
+MODULE_LICENSE("GPL v2");
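One unit convention worth noting in the driver above: cpufreq frequency tables are kept in kHz while the SCMI perf ops work in Hz, hence the multiply/divide by 1000 in set_target/fast_switch and get_rate. As a tiny sketch (helper names invented for illustration):

#include <linux/types.h>

static inline u64 cpufreq_khz_to_scmi_hz(unsigned int khz)
{
	return (u64)khz * 1000;		/* as in scmi_cpufreq_set_target() */
}

static inline unsigned int scmi_hz_to_cpufreq_khz(unsigned long hz)
{
	return hz / 1000;		/* as in scmi_cpufreq_get_rate() */
}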

+ 34 - 0
drivers/firmware/Kconfig

@@ -19,6 +19,40 @@ config ARM_PSCI_CHECKER
	  on and off through hotplug, so for now torture tests and PSCI checker
	  are mutually exclusive.

+config ARM_SCMI_PROTOCOL
+	bool "ARM System Control and Management Interface (SCMI) Message Protocol"
+	depends on ARM || ARM64 || COMPILE_TEST
+	depends on MAILBOX
+	help
+	  ARM System Control and Management Interface (SCMI) protocol is a
+	  set of operating system-independent software interfaces that are
+	  used in system management. SCMI is extensible and currently provides
+	  interfaces for: Discovery and self-description of the interfaces
+	  it supports, Power domain management which is the ability to place
+	  a given device or domain into the various power-saving states that
+	  it supports, Performance management which is the ability to control
+	  the performance of a domain that is composed of compute engines
+	  such as application processors and other accelerators, Clock
+	  management which is the ability to set and inquire rates on platform
+	  managed clocks and Sensor management which is the ability to read
+	  sensor data, and to be notified of sensor value changes.
+
+	  This protocol library provides the interface for all the client drivers
+	  making use of the features offered by the SCMI.
+
+config ARM_SCMI_POWER_DOMAIN
+	tristate "SCMI power domain driver"
+	depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
+	default y
+	select PM_GENERIC_DOMAINS if PM
+	help
+	  This enables support for the SCMI power domains which can be
+	  enabled or disabled via the SCP firmware
+
+	  This driver can also be built as a module.  If so, the module
+	  will be called scmi_pm_domain. Note this may be needed early in
+	  boot, before the rootfs is available.
+
config ARM_SCPI_PROTOCOL
	tristate "ARM System Control and Power Interface (SCPI) Message Protocol"
	depends on ARM || ARM64 || COMPILE_TEST

+ 1 - 0
drivers/firmware/Makefile

@@ -25,6 +25,7 @@ obj-$(CONFIG_QCOM_SCM_32)	+= qcom_scm-32.o
CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a
obj-$(CONFIG_TI_SCI_PROTOCOL)	+= ti_sci.o

+obj-$(CONFIG_ARM_SCMI_PROTOCOL)	+= arm_scmi/
obj-y				+= broadcom/
obj-y				+= meson/
obj-$(CONFIG_GOOGLE_FIRMWARE)	+= google/

+ 5 - 0
drivers/firmware/arm_scmi/Makefile

@@ -0,0 +1,5 @@
+obj-y	= scmi-bus.o scmi-driver.o scmi-protocols.o
+scmi-bus-y = bus.o
+scmi-driver-y = driver.o
+scmi-protocols-y = base.o clock.o perf.o power.o sensors.o
+obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o

+ 253 - 0
drivers/firmware/arm_scmi/base.c

@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Base Protocol
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include "common.h"
+
+enum scmi_base_protocol_cmd {
+	BASE_DISCOVER_VENDOR = 0x3,
+	BASE_DISCOVER_SUB_VENDOR = 0x4,
+	BASE_DISCOVER_IMPLEMENT_VERSION = 0x5,
+	BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
+	BASE_DISCOVER_AGENT = 0x7,
+	BASE_NOTIFY_ERRORS = 0x8,
+};
+
+struct scmi_msg_resp_base_attributes {
+	u8 num_protocols;
+	u8 num_agents;
+	__le16 reserved;
+};
+
+/**
+ * scmi_base_attributes_get() - gets the implementation details
+ *	that are associated with the base protocol.
+ *
+ * @handle - SCMI entity handle
+ *
+ * Return: 0 on success, else appropriate SCMI error.
+ */
+static int scmi_base_attributes_get(const struct scmi_handle *handle)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_base_attributes *attr_info;
+	struct scmi_revision_info *rev = handle->version;
+
+	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
+				 SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t);
+	if (ret)
+		return ret;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		attr_info = t->rx.buf;
+		rev->num_protocols = attr_info->num_protocols;
+		rev->num_agents = attr_info->num_agents;
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+/**
+ * scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string.
+ *
+ * @handle - SCMI entity handle
+ * @sub_vendor - specify true if sub-vendor ID is needed
+ *
+ * Return: 0 on success, else appropriate SCMI error.
+ */
+static int
+scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
+{
+	u8 cmd;
+	int ret, size;
+	char *vendor_id;
+	struct scmi_xfer *t;
+	struct scmi_revision_info *rev = handle->version;
+
+	if (sub_vendor) {
+		cmd = BASE_DISCOVER_SUB_VENDOR;
+		vendor_id = rev->sub_vendor_id;
+		size = ARRAY_SIZE(rev->sub_vendor_id);
+	} else {
+		cmd = BASE_DISCOVER_VENDOR;
+		vendor_id = rev->vendor_id;
+		size = ARRAY_SIZE(rev->vendor_id);
+	}
+
+	ret = scmi_one_xfer_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
+	if (ret)
+		return ret;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret)
+		memcpy(vendor_id, t->rx.buf, size);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+/**
+ * scmi_base_implementation_version_get() - gets a vendor-specific
+ *	implementation 32-bit version. The format of the version number is
+ *	vendor-specific
+ *
+ * @handle - SCMI entity handle
+ *
+ * Return: 0 on success, else appropriate SCMI error.
+ */
+static int
+scmi_base_implementation_version_get(const struct scmi_handle *handle)
+{
+	int ret;
+	__le32 *impl_ver;
+	struct scmi_xfer *t;
+	struct scmi_revision_info *rev = handle->version;
+
+	ret = scmi_one_xfer_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
+				 SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t);
+	if (ret)
+		return ret;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		impl_ver = t->rx.buf;
+		rev->impl_ver = le32_to_cpu(*impl_ver);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+/**
+ * scmi_base_implementation_list_get() - gets the list of protocols that
+ *	the OSPM is allowed to access
+ *
+ * @handle - SCMI entity handle
+ * @protocols_imp - pointer to hold the list of protocol identifiers
+ *
+ * Return: 0 on success, else appropriate SCMI error.
+ */
+static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
+					     u8 *protocols_imp)
+{
+	u8 *list;
+	int ret, loop;
+	struct scmi_xfer *t;
+	__le32 *num_skip, *num_ret;
+	u32 tot_num_ret = 0, loop_num_ret;
+	struct device *dev = handle->dev;
+
+	ret = scmi_one_xfer_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
+				 SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t);
+	if (ret)
+		return ret;
+
+	num_skip = t->tx.buf;
+	num_ret = t->rx.buf;
+	list = t->rx.buf + sizeof(*num_ret);
+
+	do {
+		/* Set the number of protocols to be skipped/already read */
+		*num_skip = cpu_to_le32(tot_num_ret);
+
+		ret = scmi_do_xfer(handle, t);
+		if (ret)
+			break;
+
+		loop_num_ret = le32_to_cpu(*num_ret);
+		if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) {
+			dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP");
+			break;
+		}
+
+		for (loop = 0; loop < loop_num_ret; loop++)
+			protocols_imp[tot_num_ret + loop] = *(list + loop);
+
+		tot_num_ret += loop_num_ret;
+	} while (loop_num_ret);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+/**
+ * scmi_base_discover_agent_get() - discover the name of an agent
+ *
+ * @handle - SCMI entity handle
+ * @id - Agent identifier
+ * @name - Agent identifier ASCII string
+ *
+ * An agent id of 0 is reserved to identify the platform itself.
+ * Generally, the operating system is represented as "OSPM".
+ *
+ * Return: 0 on success, else appropriate SCMI error.
+ */
+static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
+					int id, char *name)
+{
+	int ret;
+	struct scmi_xfer *t;
+
+	ret = scmi_one_xfer_init(handle, BASE_DISCOVER_AGENT,
+				 SCMI_PROTOCOL_BASE, sizeof(__le32),
+				 SCMI_MAX_STR_SIZE, &t);
+	if (ret)
+		return ret;
+
+	*(__le32 *)t->tx.buf = cpu_to_le32(id);
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret)
+		memcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+int scmi_base_protocol_init(struct scmi_handle *h)
+{
+	int id, ret;
+	u8 *prot_imp;
+	u32 version;
+	char name[SCMI_MAX_STR_SIZE];
+	const struct scmi_handle *handle = h;
+	struct device *dev = handle->dev;
+	struct scmi_revision_info *rev = handle->version;
+
+	ret = scmi_version_get(handle, SCMI_PROTOCOL_BASE, &version);
+	if (ret)
+		return ret;
+
+	prot_imp = devm_kcalloc(dev, MAX_PROTOCOLS_IMP, sizeof(u8), GFP_KERNEL);
+	if (!prot_imp)
+		return -ENOMEM;
+
+	rev->major_ver = PROTOCOL_REV_MAJOR(version),
+	rev->minor_ver = PROTOCOL_REV_MINOR(version);
+
+	scmi_base_attributes_get(handle);
+	scmi_base_vendor_id_get(handle, false);
+	scmi_base_vendor_id_get(handle, true);
+	scmi_base_implementation_version_get(handle);
+	scmi_base_implementation_list_get(handle, prot_imp);
+	scmi_setup_protocol_implemented(handle, prot_imp);
+
+	dev_info(dev, "SCMI Protocol v%d.%d '%s:%s' Firmware version 0x%x\n",
+		 rev->major_ver, rev->minor_ver, rev->vendor_id,
+		 rev->sub_vendor_id, rev->impl_ver);
+	dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
+		rev->num_agents);
+
+	for (id = 0; id < rev->num_agents; id++) {
+		scmi_base_discover_agent_get(handle, id, name);
+		dev_dbg(dev, "Agent %d: %s\n", id, name);
+	}
+
+	return 0;
+}
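The BASE_DISCOVER_LIST_PROTOCOLS loop above follows a paged-discovery shape that recurs later in this series (CLOCK_DESCRIBE_RATES uses the same idea): resend the request with a growing skip count, bound the output buffer against buggy firmware, and stop when nothing is returned or remaining. A standalone sketch of that pattern, with fetch_page() as a hypothetical stand-in for one scmi_do_xfer() round trip:

#include <linux/errno.h>
#include <linux/types.h>

static int paged_discover(u8 *out, u32 max,
			  int (*fetch_page)(u32 skip, u8 *buf,
					    u32 *returned, u32 *remaining))
{
	u32 skip = 0, returned, remaining;

	do {
		int ret = fetch_page(skip, out + skip, &returned, &remaining);

		if (ret)
			return ret;
		if (skip + returned > max)
			return -EPROTO;	/* guard against buggy firmware */
		skip += returned;
	} while (returned && remaining);

	return skip;
}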

+ 221 - 0
drivers/firmware/arm_scmi/bus.c

@@ -0,0 +1,221 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Message Protocol bus layer
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+
+#include "common.h"
+
+static DEFINE_IDA(scmi_bus_id);
+static DEFINE_IDR(scmi_protocols);
+static DEFINE_SPINLOCK(protocol_lock);
+
+static const struct scmi_device_id *
+scmi_dev_match_id(struct scmi_device *scmi_dev, struct scmi_driver *scmi_drv)
+{
+	const struct scmi_device_id *id = scmi_drv->id_table;
+
+	if (!id)
+		return NULL;
+
+	for (; id->protocol_id; id++)
+		if (id->protocol_id == scmi_dev->protocol_id)
+			return id;
+
+	return NULL;
+}
+
+static int scmi_dev_match(struct device *dev, struct device_driver *drv)
+{
+	struct scmi_driver *scmi_drv = to_scmi_driver(drv);
+	struct scmi_device *scmi_dev = to_scmi_dev(dev);
+	const struct scmi_device_id *id;
+
+	id = scmi_dev_match_id(scmi_dev, scmi_drv);
+	if (id)
+		return 1;
+
+	return 0;
+}
+
+static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle)
+{
+	scmi_prot_init_fn_t fn = idr_find(&scmi_protocols, protocol_id);
+
+	if (unlikely(!fn))
+		return -EINVAL;
+	return fn(handle);
+}
+
+static int scmi_dev_probe(struct device *dev)
+{
+	struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
+	struct scmi_device *scmi_dev = to_scmi_dev(dev);
+	const struct scmi_device_id *id;
+	int ret;
+
+	id = scmi_dev_match_id(scmi_dev, scmi_drv);
+	if (!id)
+		return -ENODEV;
+
+	if (!scmi_dev->handle)
+		return -EPROBE_DEFER;
+
+	ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle);
+	if (ret)
+		return ret;
+
+	return scmi_drv->probe(scmi_dev);
+}
+
+static int scmi_dev_remove(struct device *dev)
+{
+	struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
+	struct scmi_device *scmi_dev = to_scmi_dev(dev);
+
+	if (scmi_drv->remove)
+		scmi_drv->remove(scmi_dev);
+
+	return 0;
+}
+
+static struct bus_type scmi_bus_type = {
+	.name =	"scmi_protocol",
+	.match = scmi_dev_match,
+	.probe = scmi_dev_probe,
+	.remove = scmi_dev_remove,
+};
+
+int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
+			 const char *mod_name)
+{
+	int retval;
+
+	driver->driver.bus = &scmi_bus_type;
+	driver->driver.name = driver->name;
+	driver->driver.owner = owner;
+	driver->driver.mod_name = mod_name;
+
+	retval = driver_register(&driver->driver);
+	if (!retval)
+		pr_debug("registered new scmi driver %s\n", driver->name);
+
+	return retval;
+}
+EXPORT_SYMBOL_GPL(scmi_driver_register);
+
+void scmi_driver_unregister(struct scmi_driver *driver)
+{
+	driver_unregister(&driver->driver);
+}
+EXPORT_SYMBOL_GPL(scmi_driver_unregister);
+
+struct scmi_device *
+scmi_device_create(struct device_node *np, struct device *parent, int protocol)
+{
+	int id, retval;
+	struct scmi_device *scmi_dev;
+
+	id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL);
+	if (id < 0)
+		return NULL;
+
+	scmi_dev = kzalloc(sizeof(*scmi_dev), GFP_KERNEL);
+	if (!scmi_dev)
+		goto no_mem;
+
+	scmi_dev->id = id;
+	scmi_dev->protocol_id = protocol;
+	scmi_dev->dev.parent = parent;
+	scmi_dev->dev.of_node = np;
+	scmi_dev->dev.bus = &scmi_bus_type;
+	dev_set_name(&scmi_dev->dev, "scmi_dev.%d", id);
+
+	retval = device_register(&scmi_dev->dev);
+	if (!retval)
+		return scmi_dev;
+
+	put_device(&scmi_dev->dev);
+	kfree(scmi_dev);
+no_mem:
+	ida_simple_remove(&scmi_bus_id, id);
+	return NULL;
+}
+
+void scmi_device_destroy(struct scmi_device *scmi_dev)
+{
+	scmi_handle_put(scmi_dev->handle);
+	device_unregister(&scmi_dev->dev);
+	ida_simple_remove(&scmi_bus_id, scmi_dev->id);
+	kfree(scmi_dev);
+}
+
+void scmi_set_handle(struct scmi_device *scmi_dev)
+{
+	scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
+}
+
+int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn)
+{
+	int ret;
+
+	spin_lock(&protocol_lock);
+	ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1,
+			GFP_ATOMIC);
+	if (ret != protocol_id)
+		pr_err("unable to allocate SCMI idr slot, err %d\n", ret);
+	spin_unlock(&protocol_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(scmi_protocol_register);
+
+void scmi_protocol_unregister(int protocol_id)
+{
+	spin_lock(&protocol_lock);
+	idr_remove(&scmi_protocols, protocol_id);
+	spin_unlock(&protocol_lock);
+}
+EXPORT_SYMBOL_GPL(scmi_protocol_unregister);
+
+static int __scmi_devices_unregister(struct device *dev, void *data)
+{
+	struct scmi_device *scmi_dev = to_scmi_dev(dev);
+
+	scmi_device_destroy(scmi_dev);
+	return 0;
+}
+
+static void scmi_devices_unregister(void)
+{
+	bus_for_each_dev(&scmi_bus_type, NULL, NULL, __scmi_devices_unregister);
+}
+
+static int __init scmi_bus_init(void)
+{
+	int retval;
+
+	retval = bus_register(&scmi_bus_type);
+	if (retval)
+		pr_err("scmi protocol bus register failed (%d)\n", retval);
+
+	return retval;
+}
+subsys_initcall(scmi_bus_init);
+
+static void __exit scmi_bus_exit(void)
+{
+	scmi_devices_unregister();
+	bus_unregister(&scmi_bus_type);
+	ida_destroy(&scmi_bus_id);
+}
+module_exit(scmi_bus_exit);

+ 343 - 0
drivers/firmware/arm_scmi/clock.c

@@ -0,0 +1,343 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Clock Protocol
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include "common.h"
+
+enum scmi_clock_protocol_cmd {
+	CLOCK_ATTRIBUTES = 0x3,
+	CLOCK_DESCRIBE_RATES = 0x4,
+	CLOCK_RATE_SET = 0x5,
+	CLOCK_RATE_GET = 0x6,
+	CLOCK_CONFIG_SET = 0x7,
+};
+
+struct scmi_msg_resp_clock_protocol_attributes {
+	__le16 num_clocks;
+	u8 max_async_req;
+	u8 reserved;
+};
+
+struct scmi_msg_resp_clock_attributes {
+	__le32 attributes;
+#define	CLOCK_ENABLE	BIT(0)
+	u8 name[SCMI_MAX_STR_SIZE];
+};
+
+struct scmi_clock_set_config {
+	__le32 id;
+	__le32 attributes;
+};
+
+struct scmi_msg_clock_describe_rates {
+	__le32 id;
+	__le32 rate_index;
+};
+
+struct scmi_msg_resp_clock_describe_rates {
+	__le32 num_rates_flags;
+#define NUM_RETURNED(x)		((x) & 0xfff)
+#define RATE_DISCRETE(x)	!((x) & BIT(12))
+#define NUM_REMAINING(x)	((x) >> 16)
+	struct {
+		__le32 value_low;
+		__le32 value_high;
+	} rate[0];
+#define RATE_TO_U64(X)		\
+({				\
+	typeof(X) x = (X);	\
+	le32_to_cpu((x).value_low) | (u64)le32_to_cpu((x).value_high) << 32; \
+})
+};
+
+struct scmi_clock_set_rate {
+	__le32 flags;
+#define CLOCK_SET_ASYNC		BIT(0)
+#define CLOCK_SET_DELAYED	BIT(1)
+#define CLOCK_SET_ROUND_UP	BIT(2)
+#define CLOCK_SET_ROUND_AUTO	BIT(3)
+	__le32 id;
+	__le32 value_low;
+	__le32 value_high;
+};
+
+struct clock_info {
+	int num_clocks;
+	int max_async_req;
+	struct scmi_clock_info *clk;
+};
+
+static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle,
+					      struct clock_info *ci)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_clock_protocol_attributes *attr;
+
+	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
+				 SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t);
+	if (ret)
+		return ret;
+
+	attr = t->rx.buf;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		ci->num_clocks = le16_to_cpu(attr->num_clocks);
+		ci->max_async_req = attr->max_async_req;
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_clock_attributes_get(const struct scmi_handle *handle,
+				     u32 clk_id, struct scmi_clock_info *clk)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_clock_attributes *attr;
+
+	ret = scmi_one_xfer_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK,
+				 sizeof(clk_id), sizeof(*attr), &t);
+	if (ret)
+		return ret;
+
+	*(__le32 *)t->tx.buf = cpu_to_le32(clk_id);
+	attr = t->rx.buf;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret)
+		memcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE);
+	else
+		clk->name[0] = '\0';
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
+			      struct scmi_clock_info *clk)
+{
+	u64 *rate;
+	int ret, cnt;
+	bool rate_discrete = false;
+	u32 tot_rate_cnt = 0, rates_flag;
+	u16 num_returned, num_remaining;
+	struct scmi_xfer *t;
+	struct scmi_msg_clock_describe_rates *clk_desc;
+	struct scmi_msg_resp_clock_describe_rates *rlist;
+
+	ret = scmi_one_xfer_init(handle, CLOCK_DESCRIBE_RATES,
+				 SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t);
+	if (ret)
+		return ret;
+
+	clk_desc = t->tx.buf;
+	rlist = t->rx.buf;
+
+	do {
+		clk_desc->id = cpu_to_le32(clk_id);
+		/* Set the number of rates to be skipped/already read */
+		clk_desc->rate_index = cpu_to_le32(tot_rate_cnt);
+
+		ret = scmi_do_xfer(handle, t);
+		if (ret)
+			goto err;
+
+		rates_flag = le32_to_cpu(rlist->num_rates_flags);
+		num_remaining = NUM_REMAINING(rates_flag);
+		rate_discrete = RATE_DISCRETE(rates_flag);
+		num_returned = NUM_RETURNED(rates_flag);
+
+		if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) {
+			dev_err(handle->dev, "No. of rates > MAX_NUM_RATES");
+			break;
+		}
+
+		if (!rate_discrete) {
+			clk->range.min_rate = RATE_TO_U64(rlist->rate[0]);
+			clk->range.max_rate = RATE_TO_U64(rlist->rate[1]);
+			clk->range.step_size = RATE_TO_U64(rlist->rate[2]);
+			dev_dbg(handle->dev, "Min %llu Max %llu Step %llu Hz\n",
+				clk->range.min_rate, clk->range.max_rate,
+				clk->range.step_size);
+			break;
+		}
+
+		rate = &clk->list.rates[tot_rate_cnt];
+		for (cnt = 0; cnt < num_returned; cnt++, rate++) {
+			*rate = RATE_TO_U64(rlist->rate[cnt]);
+			dev_dbg(handle->dev, "Rate %llu Hz\n", *rate);
+		}
+
+		tot_rate_cnt += num_returned;
+		/*
+		 * check for both returned and remaining to avoid infinite
+		 * loop due to buggy firmware
+		 */
+	} while (num_returned && num_remaining);
+
+	if (rate_discrete)
+		clk->list.num_rates = tot_rate_cnt;
+
+err:
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value)
+{
+	int ret;
+	struct scmi_xfer *t;
+
+	ret = scmi_one_xfer_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK,
+				 sizeof(__le32), sizeof(u64), &t);
+	if (ret)
+		return ret;
+
+	*(__le32 *)t->tx.buf = cpu_to_le32(clk_id);
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		__le32 *pval = t->rx.buf;
+
+		*value = le32_to_cpu(*pval);
+		*value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
+			       u32 config, u64 rate)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_clock_set_rate *cfg;
+
+	ret = scmi_one_xfer_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
+				 sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->flags = cpu_to_le32(config);
+	cfg->id = cpu_to_le32(clk_id);
+	cfg->value_low = cpu_to_le32(rate & 0xffffffff);
+	cfg->value_high = cpu_to_le32(rate >> 32);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_clock_set_config *cfg;
+
+	ret = scmi_one_xfer_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK,
+				 sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(clk_id);
+	cfg->attributes = cpu_to_le32(config);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_clock_enable(const struct scmi_handle *handle, u32 clk_id)
+{
+	return scmi_clock_config_set(handle, clk_id, CLOCK_ENABLE);
+}
+
+static int scmi_clock_disable(const struct scmi_handle *handle, u32 clk_id)
+{
+	return scmi_clock_config_set(handle, clk_id, 0);
+}
+
+static int scmi_clock_count_get(const struct scmi_handle *handle)
+{
+	struct clock_info *ci = handle->clk_priv;
+
+	return ci->num_clocks;
+}
+
+static const struct scmi_clock_info *
+scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id)
+{
+	struct clock_info *ci = handle->clk_priv;
+	struct scmi_clock_info *clk = ci->clk + clk_id;
+
+	if (!clk->name || !clk->name[0])
+		return NULL;
+
+	return clk;
+}
+
+static struct scmi_clk_ops clk_ops = {
+	.count_get = scmi_clock_count_get,
+	.info_get = scmi_clock_info_get,
+	.rate_get = scmi_clock_rate_get,
+	.rate_set = scmi_clock_rate_set,
+	.enable = scmi_clock_enable,
+	.disable = scmi_clock_disable,
+};
+
+static int scmi_clock_protocol_init(struct scmi_handle *handle)
+{
+	u32 version;
+	int clkid, ret;
+	struct clock_info *cinfo;
+
+	scmi_version_get(handle, SCMI_PROTOCOL_CLOCK, &version);
+
+	dev_dbg(handle->dev, "Clock Version %d.%d\n",
+		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+
+	cinfo = devm_kzalloc(handle->dev, sizeof(*cinfo), GFP_KERNEL);
+	if (!cinfo)
+		return -ENOMEM;
+
+	scmi_clock_protocol_attributes_get(handle, cinfo);
+
+	cinfo->clk = devm_kcalloc(handle->dev, cinfo->num_clocks,
+				  sizeof(*cinfo->clk), GFP_KERNEL);
+	if (!cinfo->clk)
+		return -ENOMEM;
+
+	for (clkid = 0; clkid < cinfo->num_clocks; clkid++) {
+		struct scmi_clock_info *clk = cinfo->clk + clkid;
+
+		ret = scmi_clock_attributes_get(handle, clkid, clk);
+		if (!ret)
+			scmi_clock_describe_rates_get(handle, clkid, clk);
+	}
+
+	handle->clk_ops = &clk_ops;
+	handle->clk_priv = cinfo;
+
+	return 0;
+}
+
+static int __init scmi_clock_init(void)
+{
+	return scmi_protocol_register(SCMI_PROTOCOL_CLOCK,
+				      &scmi_clock_protocol_init);
+}
+subsys_initcall(scmi_clock_init);

+ 105 - 0
drivers/firmware/arm_scmi/common.h

@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Message Protocol
+ * driver common header file containing some definitions, structures
+ * and function prototypes used in all the different SCMI protocols.
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include <linux/completion.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/scmi_protocol.h>
+#include <linux/types.h>
+
+#define PROTOCOL_REV_MINOR_BITS	16
+#define PROTOCOL_REV_MINOR_MASK	((1U << PROTOCOL_REV_MINOR_BITS) - 1)
+#define PROTOCOL_REV_MAJOR(x)	((x) >> PROTOCOL_REV_MINOR_BITS)
+#define PROTOCOL_REV_MINOR(x)	((x) & PROTOCOL_REV_MINOR_MASK)
+#define MAX_PROTOCOLS_IMP	16
+#define MAX_OPPS		16
+
+enum scmi_common_cmd {
+	PROTOCOL_VERSION = 0x0,
+	PROTOCOL_ATTRIBUTES = 0x1,
+	PROTOCOL_MESSAGE_ATTRIBUTES = 0x2,
+};
+
+/**
+ * struct scmi_msg_resp_prot_version - Response for a message
+ *
+ * @major_version: Major version of the ABI that firmware supports
+ * @minor_version: Minor version of the ABI that firmware supports
+ *
+ * In general, ABI version changes follow the rule that minor version increments
+ * are backward compatible. Major revision changes in ABI may not be
+ * backward compatible.
+ *
+ * Response to a generic message with message type SCMI_MSG_VERSION
+ */
+struct scmi_msg_resp_prot_version {
+	__le16 minor_version;
+	__le16 major_version;
+};
+
+/**
+ * struct scmi_msg_hdr - Message(Tx/Rx) header
+ *
+ * @id: The identifier of the command being sent
+ * @protocol_id: The identifier of the protocol used to send @id command
+ * @seq: The token to identify the message. When a message/command returns,
+ *       the platform returns the whole message header unmodified including
+ *	 the token
+ * @status: Status of the transfer once it's complete
+ * @poll_completion: Indicate if the transfer needs to be polled for
+ *	completion or interrupt mode is used
+ */
+struct scmi_msg_hdr {
+	u8 id;
+	u8 protocol_id;
+	u16 seq;
+	u32 status;
+	bool poll_completion;
+};
+
+/**
+ * struct scmi_msg - Message(Tx/Rx) structure
+ *
+ * @buf: Buffer pointer
+ * @len: Length of data in the Buffer
+ */
+struct scmi_msg {
+	void *buf;
+	size_t len;
+};
+
+/**
+ * struct scmi_xfer - Structure representing a message flow
+ *
+ * @hdr: Transmit message header
+ * @tx: Transmit message
+ * @rx: Receive message, the buffer should be pre-allocated to store
+ *	message. If request-ACK protocol is used, we can reuse the same
+ *	buffer for the rx path as we use for the tx path.
+ * @done: completion event
+ */
+struct scmi_xfer {
+	void *con_priv;
+	struct scmi_msg_hdr hdr;
+	struct scmi_msg tx;
+	struct scmi_msg rx;
+	struct completion done;
+};
+
+void scmi_one_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
+int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer);
+int scmi_one_xfer_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
+		       size_t tx_size, size_t rx_size, struct scmi_xfer **p);
+int scmi_handle_put(const struct scmi_handle *handle);
+struct scmi_handle *scmi_handle_get(struct device *dev);
+void scmi_set_handle(struct scmi_device *scmi_dev);
+int scmi_version_get(const struct scmi_handle *h, u8 protocol, u32 *version);
+void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
+				     u8 *prot_imp);
+
+int scmi_base_protocol_init(struct scmi_handle *h);

+ 871 - 0
drivers/firmware/arm_scmi/driver.c

@@ -0,0 +1,871 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Message Protocol driver
+ *
+ * SCMI Message Protocol is used between the System Control Processor (SCP)
+ * and the Application Processors (AP). The Message Handling Unit (MHU)
+ * provides a mechanism for inter-processor communication between the SCP's
+ * Cortex-M3 and the AP.
+ *
+ * The SCP offers control and management of the core/cluster power states,
+ * DVFS for various power domains including the cores/clusters, certain
+ * system clock configuration, thermal sensors, and more.
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include <linux/bitmap.h>
+#include <linux/export.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/ktime.h>
+#include <linux/mailbox_client.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/processor.h>
+#include <linux/semaphore.h>
+#include <linux/slab.h>
+
+#include "common.h"
+
+#define MSG_ID_SHIFT		0
+#define MSG_ID_MASK		0xff
+#define MSG_TYPE_SHIFT		8
+#define MSG_TYPE_MASK		0x3
+#define MSG_PROTOCOL_ID_SHIFT	10
+#define MSG_PROTOCOL_ID_MASK	0xff
+#define MSG_TOKEN_ID_SHIFT	18
+#define MSG_TOKEN_ID_MASK	0x3ff
+#define MSG_XTRACT_TOKEN(header)	\
+	(((header) >> MSG_TOKEN_ID_SHIFT) & MSG_TOKEN_ID_MASK)
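+
+/*
+ * Layout of the 32-bit message header in shared memory, as implied by
+ * the masks and shifts above:
+ *
+ *  31      28 27      18 17       10 9      8 7      0
+ * +----------+----------+-----------+--------+--------+
+ * | reserved |  token   | protocol  |  type  | msg id |
+ * +----------+----------+-----------+--------+--------+
+ */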
+
+enum scmi_error_codes {
+	SCMI_SUCCESS = 0,	/* Success */
+	SCMI_ERR_SUPPORT = -1,	/* Not supported */
+	SCMI_ERR_PARAMS = -2,	/* Invalid Parameters */
+	SCMI_ERR_ACCESS = -3,	/* Invalid access/permission denied */
+	SCMI_ERR_ENTRY = -4,	/* Not found */
+	SCMI_ERR_RANGE = -5,	/* Value out of range */
+	SCMI_ERR_BUSY = -6,	/* Device busy */
+	SCMI_ERR_COMMS = -7,	/* Communication Error */
+	SCMI_ERR_GENERIC = -8,	/* Generic Error */
+	SCMI_ERR_HARDWARE = -9,	/* Hardware Error */
+	SCMI_ERR_PROTOCOL = -10,/* Protocol Error */
+	SCMI_ERR_MAX
+};
+
+/* List of all SCMI devices active in the system */
+static LIST_HEAD(scmi_list);
+/* Protection for the entire list */
+static DEFINE_MUTEX(scmi_list_mutex);
+
+/**
+ * struct scmi_xfers_info - Structure to manage transfer information
+ *
+ * @xfer_block: Preallocated Message array
+ * @xfer_alloc_table: Bitmap table for allocated messages.
+ *	Index of this bitmap table is also used for message
+ *	sequence identifier.
+ * @xfer_lock: Protection for message allocation
+ */
+struct scmi_xfers_info {
+	struct scmi_xfer *xfer_block;
+	unsigned long *xfer_alloc_table;
+	/* protect transfer allocation */
+	spinlock_t xfer_lock;
+};
+
+/**
+ * struct scmi_desc - Description of SoC integration
+ *
+ * @max_rx_timeout_ms: Timeout for communication with SoC (in Milliseconds)
+ * @max_msg: Maximum number of messages that can be pending
+ *	simultaneously in the system
+ * @max_msg_size: Maximum size of data per message that can be handled.
+ */
+struct scmi_desc {
+	int max_rx_timeout_ms;
+	int max_msg;
+	int max_msg_size;
+};
+
+/**
+ * struct scmi_chan_info - Structure representing an SCMI channel
+ *
+ * @cl: Mailbox Client
+ * @chan: Transmit/Receive mailbox channel
+ * @payload: Transmit/Receive mailbox channel payload area
+ * @dev: Reference to device in the SCMI hierarchy corresponding to this
+ *	 channel
+ * @handle: Pointer to SCMI entity handle
+ */
+struct scmi_chan_info {
+	struct mbox_client cl;
+	struct mbox_chan *chan;
+	void __iomem *payload;
+	struct device *dev;
+	struct scmi_handle *handle;
+};
+
+/**
+ * struct scmi_info - Structure representing an SCMI instance
+ *
+ * @dev: Device pointer
+ * @desc: SoC description for this instance
+ * @handle: Instance of SCMI handle to send to clients
+ * @version: SCMI revision information containing protocol version,
+ *	implementation version and (sub-)vendor identification.
+ * @minfo: Message info
+ * @tx_idr: IDR object to map protocol id to channel info pointer
+ * @protocols_imp: list of protocols implemented, currently maximum of
+ *	MAX_PROTOCOLS_IMP elements allocated by the base protocol
+ * @node: list head
+ * @users: Number of users of this instance
+ */
+struct scmi_info {
+	struct device *dev;
+	const struct scmi_desc *desc;
+	struct scmi_revision_info version;
+	struct scmi_handle handle;
+	struct scmi_xfers_info minfo;
+	struct idr tx_idr;
+	u8 *protocols_imp;
+	struct list_head node;
+	int users;
+};
+
+#define client_to_scmi_chan_info(c) container_of(c, struct scmi_chan_info, cl)
+#define handle_to_scmi_info(h)	container_of(h, struct scmi_info, handle)
+
+/*
+ * SCMI specification requires all parameters, message headers, return
+ * arguments or any protocol data to be expressed in little endian
+ * format only.
+ */
+struct scmi_shared_mem {
+	__le32 reserved;
+	__le32 channel_status;
+#define SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR	BIT(1)
+#define SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE	BIT(0)
+	__le32 reserved1[2];
+	__le32 flags;
+#define SCMI_SHMEM_FLAG_INTR_ENABLED	BIT(0)
+	__le32 length;
+	__le32 msg_header;
+	u8 msg_payload[0];
+};
+
+static const int scmi_linux_errmap[] = {
+	/* better than switch case as long as return value is continuous */
+	0,			/* SCMI_SUCCESS */
+	-EOPNOTSUPP,		/* SCMI_ERR_SUPPORT */
+	-EINVAL,		/* SCMI_ERR_PARAMS */
+	-EACCES,		/* SCMI_ERR_ACCESS */
+	-ENOENT,		/* SCMI_ERR_ENTRY */
+	-ERANGE,		/* SCMI_ERR_RANGE */
+	-EBUSY,			/* SCMI_ERR_BUSY */
+	-ECOMM,			/* SCMI_ERR_COMMS */
+	-EIO,			/* SCMI_ERR_GENERIC */
+	-EREMOTEIO,		/* SCMI_ERR_HARDWARE */
+	-EPROTO,		/* SCMI_ERR_PROTOCOL */
+};
+
+static inline int scmi_to_linux_errno(int errno)
+{
+	int err_idx = -errno;
+
+	if (err_idx >= SCMI_SUCCESS && err_idx < ARRAY_SIZE(scmi_linux_errmap))
+		return scmi_linux_errmap[err_idx];
+	return -EIO;
+}
+
+/**
+ * scmi_dump_header_dbg() - Helper to dump a message header.
+ *
+ * @dev: Device pointer corresponding to the SCMI entity
+ * @hdr: pointer to header.
+ */
+static inline void scmi_dump_header_dbg(struct device *dev,
+					struct scmi_msg_hdr *hdr)
+{
+	dev_dbg(dev, "Command ID: %x Sequence ID: %x Protocol: %x\n",
+		hdr->id, hdr->seq, hdr->protocol_id);
+}
+
+static void scmi_fetch_response(struct scmi_xfer *xfer,
+				struct scmi_shared_mem __iomem *mem)
+{
+	xfer->hdr.status = ioread32(mem->msg_payload);
+	/* Skip the length of header and status in payload area i.e. 8 bytes */
+	xfer->rx.len = min_t(size_t, xfer->rx.len, ioread32(&mem->length) - 8);
+
+	/* Take a copy to the rx buffer.. */
+	memcpy_fromio(xfer->rx.buf, mem->msg_payload + 4, xfer->rx.len);
+}
+
+/**
+ * scmi_rx_callback() - mailbox client callback for receive messages
+ *
+ * @cl: client pointer
+ * @m: mailbox message
+ *
+ * Processes one received message to appropriate transfer information and
+ * signals completion of the transfer.
+ *
+ * NOTE: This function will be invoked in IRQ context, hence should be
+ * as optimal as possible.
+ */
+static void scmi_rx_callback(struct mbox_client *cl, void *m)
+{
+	u16 xfer_id;
+	struct scmi_xfer *xfer;
+	struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl);
+	struct device *dev = cinfo->dev;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct scmi_xfers_info *minfo = &info->minfo;
+	struct scmi_shared_mem __iomem *mem = cinfo->payload;
+
+	xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header));
+
+	/*
+	 * Are we even expecting this?
+	 */
+	if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
+		dev_err(dev, "message for %d is not expected!\n", xfer_id);
+		return;
+	}
+
+	xfer = &minfo->xfer_block[xfer_id];
+
+	scmi_dump_header_dbg(dev, &xfer->hdr);
+	/* Is the message of valid length? */
+	if (xfer->rx.len > info->desc->max_msg_size) {
+		dev_err(dev, "unable to handle %zu xfer (max %d)\n",
+			xfer->rx.len, info->desc->max_msg_size);
+		return;
+	}
+
+	scmi_fetch_response(xfer, mem);
+	complete(&xfer->done);
+}
+
+/**
+ * pack_scmi_header() - packs and returns 32-bit header
+ *
+ * @hdr: pointer to header containing all the information on message id,
+ *	protocol id and sequence id.
+ */
+static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr)
+{
+	return ((hdr->id & MSG_ID_MASK) << MSG_ID_SHIFT) |
+	   ((hdr->seq & MSG_TOKEN_ID_MASK) << MSG_TOKEN_ID_SHIFT) |
+	   ((hdr->protocol_id & MSG_PROTOCOL_ID_MASK) << MSG_PROTOCOL_ID_SHIFT);
+}
+
+/**
+ * scmi_tx_prepare() - mailbox client callback to prepare for the transfer
+ *
+ * @cl: client pointer
+ * @m: mailbox message
+ *
+ * This function prepares the shared memory which contains the header and the
+ * payload.
+ */
+static void scmi_tx_prepare(struct mbox_client *cl, void *m)
+{
+	struct scmi_xfer *t = m;
+	struct scmi_chan_info *cinfo = client_to_scmi_chan_info(cl);
+	struct scmi_shared_mem __iomem *mem = cinfo->payload;
+
+	/* Mark channel busy + clear error */
+	iowrite32(0x0, &mem->channel_status);
+	iowrite32(t->hdr.poll_completion ? 0 : SCMI_SHMEM_FLAG_INTR_ENABLED,
+		  &mem->flags);
+	iowrite32(sizeof(mem->msg_header) + t->tx.len, &mem->length);
+	iowrite32(pack_scmi_header(&t->hdr), &mem->msg_header);
+	if (t->tx.buf)
+		memcpy_toio(mem->msg_payload, t->tx.buf, t->tx.len);
+}
+
+/**
+ * scmi_one_xfer_get() - Allocate one message
+ *
+ * @handle: SCMI entity handle
+ *
+ * Helper function which is used by various command functions that are
+ * exposed to clients of this driver for allocating a message traffic event.
+ *
+ * This function does not sleep: if no free message slot is available, it
+ * fails immediately. It briefly holds a spinlock to maintain integrity of
+ * the internal data structures.
+ *
+ * Return: Pointer to the allocated message on success,
+ *	ERR_PTR(-ENOMEM) if no free message slot is available.
+ */
+static struct scmi_xfer *scmi_one_xfer_get(const struct scmi_handle *handle)
+{
+	u16 xfer_id;
+	struct scmi_xfer *xfer;
+	unsigned long flags, bit_pos;
+	struct scmi_info *info = handle_to_scmi_info(handle);
+	struct scmi_xfers_info *minfo = &info->minfo;
+
+	/* Keep the locked section as small as possible */
+	spin_lock_irqsave(&minfo->xfer_lock, flags);
+	bit_pos = find_first_zero_bit(minfo->xfer_alloc_table,
+				      info->desc->max_msg);
+	if (bit_pos == info->desc->max_msg) {
+		spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+		return ERR_PTR(-ENOMEM);
+	}
+	set_bit(bit_pos, minfo->xfer_alloc_table);
+	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+
+	xfer_id = bit_pos;
+
+	xfer = &minfo->xfer_block[xfer_id];
+	xfer->hdr.seq = xfer_id;
+	reinit_completion(&xfer->done);
+
+	return xfer;
+}
+
+/**
+ * scmi_one_xfer_put() - Release a message
+ *
+ * @handle: SCMI entity handle
+ * @xfer: message that was reserved by scmi_one_xfer_get
+ *
+ * This holds a spinlock to maintain integrity of internal data structures.
+ */
+void scmi_one_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer)
+{
+	unsigned long flags;
+	struct scmi_info *info = handle_to_scmi_info(handle);
+	struct scmi_xfers_info *minfo = &info->minfo;
+
+	/*
+	 * Keep the locked section as small as possible
+	 * NOTE: we might escape with smp_mb and no lock here..
+	 * but just be conservative and symmetric.
+	 */
+	spin_lock_irqsave(&minfo->xfer_lock, flags);
+	clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table);
+	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
+}
+
+static bool
+scmi_xfer_poll_done(const struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
+{
+	struct scmi_shared_mem __iomem *mem = cinfo->payload;
+	u16 xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header));
+
+	if (xfer->hdr.seq != xfer_id)
+		return false;
+
+	return ioread32(&mem->channel_status) &
+		(SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR |
+		SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
+}
+
+#define SCMI_MAX_POLL_TO_NS	(100 * NSEC_PER_USEC)
+
+static bool scmi_xfer_done_no_timeout(const struct scmi_chan_info *cinfo,
+				      struct scmi_xfer *xfer, ktime_t stop)
+{
+	ktime_t __cur = ktime_get();
+
+	return scmi_xfer_poll_done(cinfo, xfer) || ktime_after(__cur, stop);
+}
+
+/**
+ * scmi_do_xfer() - Do one transfer
+ *
+ * @handle: Pointer to SCMI entity handle
+ * @xfer: Transfer to initiate and wait for response
+ *
+ * Return: 0 on success, -ETIMEDOUT if no response arrives in time,
+ *	otherwise the transmit error or the Linux error translated
+ *	from the returned SCMI status.
+ */
+int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
+{
+	int ret;
+	int timeout;
+	struct scmi_info *info = handle_to_scmi_info(handle);
+	struct device *dev = info->dev;
+	struct scmi_chan_info *cinfo;
+
+	cinfo = idr_find(&info->tx_idr, xfer->hdr.protocol_id);
+	if (unlikely(!cinfo))
+		return -EINVAL;
+
+	ret = mbox_send_message(cinfo->chan, xfer);
+	if (ret < 0) {
+		dev_dbg(dev, "mbox send fail %d\n", ret);
+		return ret;
+	}
+
+	/* mbox_send_message returns non-negative value on success, so reset */
+	ret = 0;
+
+	if (xfer->hdr.poll_completion) {
+		ktime_t stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLL_TO_NS);
+
+		spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, stop));
+
+		if (ktime_before(ktime_get(), stop))
+			scmi_fetch_response(xfer, cinfo->payload);
+		else
+			ret = -ETIMEDOUT;
+	} else {
+		/* And we wait for the response. */
+		timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
+		if (!wait_for_completion_timeout(&xfer->done, timeout)) {
+			dev_err(dev, "mbox timed out in resp(caller: %pS)\n",
+				(void *)_RET_IP_);
+			ret = -ETIMEDOUT;
+		}
+	}
+
+	if (!ret && xfer->hdr.status)
+		ret = scmi_to_linux_errno(xfer->hdr.status);
+
+	/*
+	 * NOTE: we might prefer not to need the mailbox ticker to manage the
+	 * transfer queueing since the protocol layer queues things by itself.
+	 * Unfortunately, we have to kick the mailbox framework after we have
+	 * received our message.
+	 */
+	mbox_client_txdone(cinfo->chan, ret);
+
+	return ret;
+}
+
+/**
+ * scmi_one_xfer_init() - Allocate and initialise one message
+ *
+ * @handle: SCMI entity handle
+ * @msg_id: Message identifier
+ * @prot_id: Protocol identifier for the message
+ * @tx_size: transmit message size
+ * @rx_size: receive message size
+ * @p: pointer to the allocated and initialised message
+ *
+ * This function allocates the message using scmi_one_xfer_get() and
+ * initialises the header.
+ *
+ * Return: 0 if all went fine with @p pointing to message, else
+ *	corresponding error.
+ */
+int scmi_one_xfer_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id,
+		       size_t tx_size, size_t rx_size, struct scmi_xfer **p)
+{
+	int ret;
+	struct scmi_xfer *xfer;
+	struct scmi_info *info = handle_to_scmi_info(handle);
+	struct device *dev = info->dev;
+
+	/* Ensure we have sane transfer sizes */
+	if (rx_size > info->desc->max_msg_size ||
+	    tx_size > info->desc->max_msg_size)
+		return -ERANGE;
+
+	xfer = scmi_one_xfer_get(handle);
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(dev, "failed to get free message slot (%d)\n", ret);
+		return ret;
+	}
+
+	xfer->tx.len = tx_size;
+	xfer->rx.len = rx_size ? : info->desc->max_msg_size;
+	xfer->hdr.id = msg_id;
+	xfer->hdr.protocol_id = prot_id;
+	xfer->hdr.poll_completion = false;
+
+	*p = xfer;
+	return 0;
+}
+
+/**
+ * scmi_version_get() - command to get the revision of the SCMI entity
+ *
+ * @handle: Handle to SCMI entity information
+ * @protocol: Protocol identifier for the message
+ * @version: Holds the returned version of the protocol
+ *
+ * Updates the SCMI information in the internal data structure.
+ *
+ * Return: 0 if all went fine, else return appropriate error.
+ */
+int scmi_version_get(const struct scmi_handle *handle, u8 protocol,
+		     u32 *version)
+{
+	int ret;
+	__le32 *rev_info;
+	struct scmi_xfer *t;
+
+	ret = scmi_one_xfer_init(handle, PROTOCOL_VERSION, protocol, 0,
+				 sizeof(*version), &t);
+	if (ret)
+		return ret;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		rev_info = t->rx.buf;
+		*version = le32_to_cpu(*rev_info);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
+				     u8 *prot_imp)
+{
+	struct scmi_info *info = handle_to_scmi_info(handle);
+
+	info->protocols_imp = prot_imp;
+}
+
+static bool
+scmi_is_protocol_implemented(const struct scmi_handle *handle, u8 prot_id)
+{
+	int i;
+	struct scmi_info *info = handle_to_scmi_info(handle);
+
+	if (!info->protocols_imp)
+		return false;
+
+	for (i = 0; i < MAX_PROTOCOLS_IMP; i++)
+		if (info->protocols_imp[i] == prot_id)
+			return true;
+	return false;
+}
+
+/**
+ * scmi_handle_get() - Get the SCMI handle for a device
+ *
+ * @dev: pointer to device for which we want SCMI handle
+ *
+ * NOTE: The function does not track individual clients of the framework
+ * and is expected to be maintained by caller of the SCMI protocol library.
+ * scmi_handle_put() must be balanced with a successful scmi_handle_get().
+ *
+ * Return: pointer to handle if successful, NULL on error
+ */
+struct scmi_handle *scmi_handle_get(struct device *dev)
+{
+	struct list_head *p;
+	struct scmi_info *info;
+	struct scmi_handle *handle = NULL;
+
+	mutex_lock(&scmi_list_mutex);
+	list_for_each(p, &scmi_list) {
+		info = list_entry(p, struct scmi_info, node);
+		if (dev->parent == info->dev) {
+			handle = &info->handle;
+			info->users++;
+			break;
+		}
+	}
+	mutex_unlock(&scmi_list_mutex);
+
+	return handle;
+}
+
+/**
+ * scmi_handle_put() - Release the handle acquired by scmi_handle_get
+ *
+ * @handle: handle acquired by scmi_handle_get
+ *
+ * NOTE: The function does not track individual clients of the framework
+ * and is expected to be maintained by caller of the SCMI protocol library.
+ * scmi_handle_put() must be balanced with a successful scmi_handle_get().
+ *
+ * Return: 0 if successfully released,
+ *	-EINVAL if a NULL handle was passed
+ */
+int scmi_handle_put(const struct scmi_handle *handle)
+{
+	struct scmi_info *info;
+
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_scmi_info(handle);
+	mutex_lock(&scmi_list_mutex);
+	if (!WARN_ON(!info->users))
+		info->users--;
+	mutex_unlock(&scmi_list_mutex);
+
+	return 0;
+}
+
+static const struct scmi_desc scmi_generic_desc = {
+	.max_rx_timeout_ms = 30,	/* we may increase this if required */
+	.max_msg = 20,		/* Limited by MBOX_TX_QUEUE_LEN */
+	.max_msg_size = 128,
+};
+
+/* Each compatible listed below must have a descriptor associated with it */
+static const struct of_device_id scmi_of_match[] = {
+	{ .compatible = "arm,scmi", .data = &scmi_generic_desc },
+	{ /* Sentinel */ },
+};
+
+MODULE_DEVICE_TABLE(of, scmi_of_match);
+
+static int scmi_xfer_info_init(struct scmi_info *sinfo)
+{
+	int i;
+	struct scmi_xfer *xfer;
+	struct device *dev = sinfo->dev;
+	const struct scmi_desc *desc = sinfo->desc;
+	struct scmi_xfers_info *info = &sinfo->minfo;
+
+	/* Pre-allocated messages, no more than what hdr.seq can support */
+	if (WARN_ON(desc->max_msg >= (MSG_TOKEN_ID_MASK + 1))) {
+		dev_err(dev, "Maximum message of %d exceeds supported %d\n",
+			desc->max_msg, MSG_TOKEN_ID_MASK + 1);
+		return -EINVAL;
+	}
+
+	info->xfer_block = devm_kcalloc(dev, desc->max_msg,
+					sizeof(*info->xfer_block), GFP_KERNEL);
+	if (!info->xfer_block)
+		return -ENOMEM;
+
+	info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(desc->max_msg),
+					      sizeof(long), GFP_KERNEL);
+	if (!info->xfer_alloc_table)
+		return -ENOMEM;
+
+	bitmap_zero(info->xfer_alloc_table, desc->max_msg);
+
+	/* Pre-initialize the buffer pointer to pre-allocated buffers */
+	for (i = 0, xfer = info->xfer_block; i < desc->max_msg; i++, xfer++) {
+		xfer->rx.buf = devm_kcalloc(dev, desc->max_msg_size, sizeof(u8),
+					    GFP_KERNEL);
+		if (!xfer->rx.buf)
+			return -ENOMEM;
+
+		xfer->tx.buf = xfer->rx.buf;
+		init_completion(&xfer->done);
+	}
+
+	spin_lock_init(&info->xfer_lock);
+
+	return 0;
+}
+
+static int scmi_mailbox_check(struct device_node *np)
+{
+	struct of_phandle_args arg;
+
+	return of_parse_phandle_with_args(np, "mboxes", "#mbox-cells", 0, &arg);
+}
+
+static int scmi_mbox_free_channel(int id, void *p, void *data)
+{
+	struct scmi_chan_info *cinfo = p;
+	struct idr *idr = data;
+
+	if (!IS_ERR_OR_NULL(cinfo->chan)) {
+		mbox_free_channel(cinfo->chan);
+		cinfo->chan = NULL;
+	}
+
+	idr_remove(idr, id);
+
+	return 0;
+}
+
+static int scmi_remove(struct platform_device *pdev)
+{
+	int ret = 0;
+	struct scmi_info *info = platform_get_drvdata(pdev);
+	struct idr *idr = &info->tx_idr;
+
+	mutex_lock(&scmi_list_mutex);
+	if (info->users)
+		ret = -EBUSY;
+	else
+		list_del(&info->node);
+	mutex_unlock(&scmi_list_mutex);
+
+	if (!ret) {
+		/* Safe to free channels since no more users */
+		ret = idr_for_each(idr, scmi_mbox_free_channel, idr);
+		idr_destroy(&info->tx_idr);
+	}
+
+	return ret;
+}
+
+static inline int
+scmi_mbox_chan_setup(struct scmi_info *info, struct device *dev, int prot_id)
+{
+	int ret;
+	struct resource res;
+	resource_size_t size;
+	struct device_node *shmem, *np = dev->of_node;
+	struct scmi_chan_info *cinfo;
+	struct mbox_client *cl;
+
+	if (scmi_mailbox_check(np)) {
+		cinfo = idr_find(&info->tx_idr, SCMI_PROTOCOL_BASE);
+		goto idr_alloc;
+	}
+
+	cinfo = devm_kzalloc(info->dev, sizeof(*cinfo), GFP_KERNEL);
+	if (!cinfo)
+		return -ENOMEM;
+
+	cinfo->dev = dev;
+
+	cl = &cinfo->cl;
+	cl->dev = dev;
+	cl->rx_callback = scmi_rx_callback;
+	cl->tx_prepare = scmi_tx_prepare;
+	cl->tx_block = false;
+	cl->knows_txdone = true;
+
+	shmem = of_parse_phandle(np, "shmem", 0);
+	ret = of_address_to_resource(shmem, 0, &res);
+	of_node_put(shmem);
+	if (ret) {
+		dev_err(dev, "failed to get SCMI Tx payload mem resource\n");
+		return ret;
+	}
+
+	size = resource_size(&res);
+	cinfo->payload = devm_ioremap(info->dev, res.start, size);
+	if (!cinfo->payload) {
+		dev_err(dev, "failed to ioremap SCMI Tx payload\n");
+		return -EADDRNOTAVAIL;
+	}
+
+	/* Transmit channel is first entry i.e. index 0 */
+	cinfo->chan = mbox_request_channel(cl, 0);
+	if (IS_ERR(cinfo->chan)) {
+		ret = PTR_ERR(cinfo->chan);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to request SCMI Tx mailbox\n");
+		return ret;
+	}
+
+idr_alloc:
+	ret = idr_alloc(&info->tx_idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL);
+	if (ret != prot_id) {
+		dev_err(dev, "unable to allocate SCMI idr slot err %d\n", ret);
+		return ret;
+	}
+
+	cinfo->handle = &info->handle;
+	return 0;
+}
+
+static inline void
+scmi_create_protocol_device(struct device_node *np, struct scmi_info *info,
+			    int prot_id)
+{
+	struct scmi_device *sdev;
+
+	sdev = scmi_device_create(np, info->dev, prot_id);
+	if (!sdev) {
+		dev_err(info->dev, "failed to create %d protocol device\n",
+			prot_id);
+		return;
+	}
+
+	if (scmi_mbox_chan_setup(info, &sdev->dev, prot_id)) {
+		dev_err(&sdev->dev, "failed to setup transport\n");
+		scmi_device_destroy(sdev);
+		return;
+	}
+
+	/* setup handle now as the transport is ready */
+	scmi_set_handle(sdev);
+}
+
+static int scmi_probe(struct platform_device *pdev)
+{
+	int ret;
+	struct scmi_handle *handle;
+	const struct scmi_desc *desc;
+	struct scmi_info *info;
+	struct device *dev = &pdev->dev;
+	struct device_node *child, *np = dev->of_node;
+
+	/* Only mailbox method supported, check for the presence of one */
+	if (scmi_mailbox_check(np)) {
+		dev_err(dev, "no mailbox found in %pOF\n", np);
+		return -EINVAL;
+	}
+
+	desc = of_match_device(scmi_of_match, dev)->data;
+
+	info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
+	if (!info)
+		return -ENOMEM;
+
+	info->dev = dev;
+	info->desc = desc;
+	INIT_LIST_HEAD(&info->node);
+
+	ret = scmi_xfer_info_init(info);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, info);
+	idr_init(&info->tx_idr);
+
+	handle = &info->handle;
+	handle->dev = info->dev;
+	handle->version = &info->version;
+
+	ret = scmi_mbox_chan_setup(info, dev, SCMI_PROTOCOL_BASE);
+	if (ret)
+		return ret;
+
+	ret = scmi_base_protocol_init(handle);
+	if (ret) {
+		dev_err(dev, "unable to communicate with SCMI(%d)\n", ret);
+		return ret;
+	}
+
+	mutex_lock(&scmi_list_mutex);
+	list_add_tail(&info->node, &scmi_list);
+	mutex_unlock(&scmi_list_mutex);
+
+	for_each_available_child_of_node(np, child) {
+		u32 prot_id;
+
+		if (of_property_read_u32(child, "reg", &prot_id))
+			continue;
+
+		prot_id &= MSG_PROTOCOL_ID_MASK;
+
+		if (!scmi_is_protocol_implemented(handle, prot_id)) {
+			dev_err(dev, "SCMI protocol %d not implemented\n",
+				prot_id);
+			continue;
+		}
+
+		scmi_create_protocol_device(child, info, prot_id);
+	}
+
+	return 0;
+}
+
+static struct platform_driver scmi_driver = {
+	.driver = {
+		   .name = "arm-scmi",
+		   .of_match_table = scmi_of_match,
+		   },
+	.probe = scmi_probe,
+	.remove = scmi_remove,
+};
+
+module_platform_driver(scmi_driver);
+
+MODULE_ALIAS("platform:arm-scmi");
+MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+MODULE_DESCRIPTION("ARM SCMI protocol driver");
+MODULE_LICENSE("GPL v2");

+ 481 - 0
drivers/firmware/arm_scmi/perf.c

@@ -0,0 +1,481 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Performance Protocol
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <linux/sort.h>
+
+#include "common.h"
+
+enum scmi_performance_protocol_cmd {
+	PERF_DOMAIN_ATTRIBUTES = 0x3,
+	PERF_DESCRIBE_LEVELS = 0x4,
+	PERF_LIMITS_SET = 0x5,
+	PERF_LIMITS_GET = 0x6,
+	PERF_LEVEL_SET = 0x7,
+	PERF_LEVEL_GET = 0x8,
+	PERF_NOTIFY_LIMITS = 0x9,
+	PERF_NOTIFY_LEVEL = 0xa,
+};
+
+struct scmi_opp {
+	u32 perf;
+	u32 power;
+	u32 trans_latency_us;
+};
+
+struct scmi_msg_resp_perf_attributes {
+	__le16 num_domains;
+	__le16 flags;
+#define POWER_SCALE_IN_MILLIWATT(x)	((x) & BIT(0))
+	__le32 stats_addr_low;
+	__le32 stats_addr_high;
+	__le32 stats_size;
+};
+
+struct scmi_msg_resp_perf_domain_attributes {
+	__le32 flags;
+#define SUPPORTS_SET_LIMITS(x)		((x) & BIT(31))
+#define SUPPORTS_SET_PERF_LVL(x)	((x) & BIT(30))
+#define SUPPORTS_PERF_LIMIT_NOTIFY(x)	((x) & BIT(29))
+#define SUPPORTS_PERF_LEVEL_NOTIFY(x)	((x) & BIT(28))
+	__le32 rate_limit_us;
+	__le32 sustained_freq_khz;
+	__le32 sustained_perf_level;
+	    u8 name[SCMI_MAX_STR_SIZE];
+};
+
+struct scmi_msg_perf_describe_levels {
+	__le32 domain;
+	__le32 level_index;
+};
+
+struct scmi_perf_set_limits {
+	__le32 domain;
+	__le32 max_level;
+	__le32 min_level;
+};
+
+struct scmi_perf_get_limits {
+	__le32 max_level;
+	__le32 min_level;
+};
+
+struct scmi_perf_set_level {
+	__le32 domain;
+	__le32 level;
+};
+
+struct scmi_perf_notify_level_or_limits {
+	__le32 domain;
+	__le32 notify_enable;
+};
+
+struct scmi_msg_resp_perf_describe_levels {
+	__le16 num_returned;
+	__le16 num_remaining;
+	struct {
+		__le32 perf_val;
+		__le32 power;
+		__le16 transition_latency_us;
+		__le16 reserved;
+	} opp[0];
+};
+
+struct perf_dom_info {
+	bool set_limits;
+	bool set_perf;
+	bool perf_limit_notify;
+	bool perf_level_notify;
+	u32 opp_count;
+	u32 sustained_freq_khz;
+	u32 sustained_perf_level;
+	u32 mult_factor;
+	char name[SCMI_MAX_STR_SIZE];
+	struct scmi_opp opp[MAX_OPPS];
+};
+
+struct scmi_perf_info {
+	int num_domains;
+	bool power_scale_mw;
+	u64 stats_addr;
+	u32 stats_size;
+	struct perf_dom_info *dom_info;
+};
+
+static int scmi_perf_attributes_get(const struct scmi_handle *handle,
+				    struct scmi_perf_info *pi)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_perf_attributes *attr;
+
+	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
+				 SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t);
+	if (ret)
+		return ret;
+
+	attr = t->rx.buf;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		u16 flags = le16_to_cpu(attr->flags);
+
+		pi->num_domains = le16_to_cpu(attr->num_domains);
+		pi->power_scale_mw = POWER_SCALE_IN_MILLIWATT(flags);
+		pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
+				(u64)le32_to_cpu(attr->stats_addr_high) << 32;
+		pi->stats_size = le32_to_cpu(attr->stats_size);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
+				struct perf_dom_info *dom_info)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_perf_domain_attributes *attr;
+
+	ret = scmi_one_xfer_init(handle, PERF_DOMAIN_ATTRIBUTES,
+				 SCMI_PROTOCOL_PERF, sizeof(domain),
+				 sizeof(*attr), &t);
+	if (ret)
+		return ret;
+
+	*(__le32 *)t->tx.buf = cpu_to_le32(domain);
+	attr = t->rx.buf;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		u32 flags = le32_to_cpu(attr->flags);
+
+		dom_info->set_limits = SUPPORTS_SET_LIMITS(flags);
+		dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags);
+		dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags);
+		dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags);
+		dom_info->sustained_freq_khz =
+					le32_to_cpu(attr->sustained_freq_khz);
+		dom_info->sustained_perf_level =
+					le32_to_cpu(attr->sustained_perf_level);
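+		/*
+		 * Example: a domain reporting sustained_freq_khz = 2000000
+		 * (2 GHz) at sustained_perf_level = 100 gets mult_factor
+		 * 20000000, i.e. each perf level step is worth 20 MHz.
+		 */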
+		dom_info->mult_factor =	(dom_info->sustained_freq_khz * 1000) /
+					dom_info->sustained_perf_level;
+		memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int opp_cmp_func(const void *opp1, const void *opp2)
+{
+	const struct scmi_opp *t1 = opp1, *t2 = opp2;
+
+	return t1->perf - t2->perf;
+}
+
+static int
+scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
+			      struct perf_dom_info *perf_dom)
+{
+	int ret, cnt;
+	u32 tot_opp_cnt = 0;
+	u16 num_returned, num_remaining;
+	struct scmi_xfer *t;
+	struct scmi_opp *opp;
+	struct scmi_msg_perf_describe_levels *dom_info;
+	struct scmi_msg_resp_perf_describe_levels *level_info;
+
+	ret = scmi_one_xfer_init(handle, PERF_DESCRIBE_LEVELS,
+				 SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t);
+	if (ret)
+		return ret;
+
+	dom_info = t->tx.buf;
+	level_info = t->rx.buf;
+
+	do {
+		dom_info->domain = cpu_to_le32(domain);
+		/* Set the number of OPPs to be skipped/already read */
+		dom_info->level_index = cpu_to_le32(tot_opp_cnt);
+
+		ret = scmi_do_xfer(handle, t);
+		if (ret)
+			break;
+
+		num_returned = le16_to_cpu(level_info->num_returned);
+		num_remaining = le16_to_cpu(level_info->num_remaining);
+		if (tot_opp_cnt + num_returned > MAX_OPPS) {
+			dev_err(handle->dev, "No. of OPPs exceeded MAX_OPPS\n");
+			break;
+		}
+
+		opp = &perf_dom->opp[tot_opp_cnt];
+		for (cnt = 0; cnt < num_returned; cnt++, opp++) {
+			opp->perf = le32_to_cpu(level_info->opp[cnt].perf_val);
+			opp->power = le32_to_cpu(level_info->opp[cnt].power);
+			opp->trans_latency_us = le16_to_cpu
+				(level_info->opp[cnt].transition_latency_us);
+
+			dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n",
+				opp->perf, opp->power, opp->trans_latency_us);
+		}
+
+		tot_opp_cnt += num_returned;
+		/*
+		 * check for both returned and remaining to avoid infinite
+		 * loop due to buggy firmware
+		 */
+	} while (num_returned && num_remaining);
+
+	perf_dom->opp_count = tot_opp_cnt;
+	scmi_one_xfer_put(handle, t);
+
+	sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL);
+	return ret;
+}
+
+static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
+				u32 max_perf, u32 min_perf)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_perf_set_limits *limits;
+
+	ret = scmi_one_xfer_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF,
+				 sizeof(*limits), 0, &t);
+	if (ret)
+		return ret;
+
+	limits = t->tx.buf;
+	limits->domain = cpu_to_le32(domain);
+	limits->max_level = cpu_to_le32(max_perf);
+	limits->min_level = cpu_to_le32(min_perf);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
+				u32 *max_perf, u32 *min_perf)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_perf_get_limits *limits;
+
+	ret = scmi_one_xfer_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF,
+				 sizeof(__le32), 0, &t);
+	if (ret)
+		return ret;
+
+	*(__le32 *)t->tx.buf = cpu_to_le32(domain);
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		limits = t->rx.buf;
+
+		*max_perf = le32_to_cpu(limits->max_level);
+		*min_perf = le32_to_cpu(limits->min_level);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
+			       u32 level, bool poll)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_perf_set_level *lvl;
+
+	ret = scmi_one_xfer_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF,
+				 sizeof(*lvl), 0, &t);
+	if (ret)
+		return ret;
+
+	t->hdr.poll_completion = poll;
+	lvl = t->tx.buf;
+	lvl->domain = cpu_to_le32(domain);
+	lvl->level = cpu_to_le32(level);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
+			       u32 *level, bool poll)
+{
+	int ret;
+	struct scmi_xfer *t;
+
+	ret = scmi_one_xfer_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
+				 sizeof(u32), sizeof(u32), &t);
+	if (ret)
+		return ret;
+
+	t->hdr.poll_completion = poll;
+	*(__le32 *)t->tx.buf = cpu_to_le32(domain);
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret)
+		*level = le32_to_cpu(*(__le32 *)t->rx.buf);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+/* Device specific ops */
+static int scmi_dev_domain_id(struct device *dev)
+{
+	struct of_phandle_args clkspec;
+
+	if (of_parse_phandle_with_args(dev->of_node, "clocks", "#clock-cells",
+				       0, &clkspec))
+		return -EINVAL;
+
+	return clkspec.args[0];
+}
+
+static int scmi_dvfs_add_opps_to_device(const struct scmi_handle *handle,
+					struct device *dev)
+{
+	int idx, ret, domain;
+	unsigned long freq;
+	struct scmi_opp *opp;
+	struct perf_dom_info *dom;
+	struct scmi_perf_info *pi = handle->perf_priv;
+
+	domain = scmi_dev_domain_id(dev);
+	if (domain < 0)
+		return domain;
+
+	dom = pi->dom_info + domain;
+	if (!dom)
+		return -EIO;
+
+	for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) {
+		freq = opp->perf * dom->mult_factor;
+
+		ret = dev_pm_opp_add(dev, freq, 0);
+		if (ret) {
+			dev_warn(dev, "failed to add opp %luHz\n", freq);
+
+			while (idx-- > 0) {
+				freq = (--opp)->perf * dom->mult_factor;
+				dev_pm_opp_remove(dev, freq);
+			}
+			return ret;
+		}
+	}
+	return 0;
+}
+
+static int scmi_dvfs_get_transition_latency(const struct scmi_handle *handle,
+					    struct device *dev)
+{
+	struct perf_dom_info *dom;
+	struct scmi_perf_info *pi = handle->perf_priv;
+	int domain = scmi_dev_domain_id(dev);
+
+	if (domain < 0)
+		return domain;
+
+	dom = pi->dom_info + domain;
+	if (!dom)
+		return -EIO;
+
+	/* us to ns */
+	return dom->opp[dom->opp_count - 1].trans_latency_us * 1000;
+}
+
+static int scmi_dvfs_freq_set(const struct scmi_handle *handle, u32 domain,
+			      unsigned long freq, bool poll)
+{
+	struct scmi_perf_info *pi = handle->perf_priv;
+	struct perf_dom_info *dom = pi->dom_info + domain;
+
+	return scmi_perf_level_set(handle, domain, freq / dom->mult_factor,
+				   poll);
+}
+
+static int scmi_dvfs_freq_get(const struct scmi_handle *handle, u32 domain,
+			      unsigned long *freq, bool poll)
+{
+	int ret;
+	u32 level;
+	struct scmi_perf_info *pi = handle->perf_priv;
+	struct perf_dom_info *dom = pi->dom_info + domain;
+
+	ret = scmi_perf_level_get(handle, domain, &level, poll);
+	if (!ret)
+		*freq = level * dom->mult_factor;
+
+	return ret;
+}
+
+static struct scmi_perf_ops perf_ops = {
+	.limits_set = scmi_perf_limits_set,
+	.limits_get = scmi_perf_limits_get,
+	.level_set = scmi_perf_level_set,
+	.level_get = scmi_perf_level_get,
+	.device_domain_id = scmi_dev_domain_id,
+	.get_transition_latency = scmi_dvfs_get_transition_latency,
+	.add_opps_to_device = scmi_dvfs_add_opps_to_device,
+	.freq_set = scmi_dvfs_freq_set,
+	.freq_get = scmi_dvfs_freq_get,
+};
+
+static int scmi_perf_protocol_init(struct scmi_handle *handle)
+{
+	int domain;
+	u32 version;
+	struct scmi_perf_info *pinfo;
+
+	scmi_version_get(handle, SCMI_PROTOCOL_PERF, &version);
+
+	dev_dbg(handle->dev, "Performance Version %d.%d\n",
+		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+
+	pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
+	if (!pinfo)
+		return -ENOMEM;
+
+	scmi_perf_attributes_get(handle, pinfo);
+
+	pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
+				       sizeof(*pinfo->dom_info), GFP_KERNEL);
+	if (!pinfo->dom_info)
+		return -ENOMEM;
+
+	for (domain = 0; domain < pinfo->num_domains; domain++) {
+		struct perf_dom_info *dom = pinfo->dom_info + domain;
+
+		scmi_perf_domain_attributes_get(handle, domain, dom);
+		scmi_perf_describe_levels_get(handle, domain, dom);
+	}
+
+	handle->perf_ops = &perf_ops;
+	handle->perf_priv = pinfo;
+
+	return 0;
+}
+
+static int __init scmi_perf_init(void)
+{
+	return scmi_protocol_register(SCMI_PROTOCOL_PERF,
+				      &scmi_perf_protocol_init);
+}
+subsys_initcall(scmi_perf_init);

+ 221 - 0
drivers/firmware/arm_scmi/power.c

@@ -0,0 +1,221 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Power Protocol
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include "common.h"
+
+enum scmi_power_protocol_cmd {
+	POWER_DOMAIN_ATTRIBUTES = 0x3,
+	POWER_STATE_SET = 0x4,
+	POWER_STATE_GET = 0x5,
+	POWER_STATE_NOTIFY = 0x6,
+};
+
+struct scmi_msg_resp_power_attributes {
+	__le16 num_domains;
+	__le16 reserved;
+	__le32 stats_addr_low;
+	__le32 stats_addr_high;
+	__le32 stats_size;
+};
+
+struct scmi_msg_resp_power_domain_attributes {
+	__le32 flags;
+#define SUPPORTS_STATE_SET_NOTIFY(x)	((x) & BIT(31))
+#define SUPPORTS_STATE_SET_ASYNC(x)	((x) & BIT(30))
+#define SUPPORTS_STATE_SET_SYNC(x)	((x) & BIT(29))
+	    u8 name[SCMI_MAX_STR_SIZE];
+};
+
+struct scmi_power_set_state {
+	__le32 flags;
+#define STATE_SET_ASYNC		BIT(0)
+	__le32 domain;
+	__le32 state;
+};
+
+struct scmi_power_state_notify {
+	__le32 domain;
+	__le32 notify_enable;
+};
+
+struct power_dom_info {
+	bool state_set_sync;
+	bool state_set_async;
+	bool state_set_notify;
+	char name[SCMI_MAX_STR_SIZE];
+};
+
+struct scmi_power_info {
+	int num_domains;
+	u64 stats_addr;
+	u32 stats_size;
+	struct power_dom_info *dom_info;
+};
+
+static int scmi_power_attributes_get(const struct scmi_handle *handle,
+				     struct scmi_power_info *pi)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_power_attributes *attr;
+
+	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
+				 SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t);
+	if (ret)
+		return ret;
+
+	attr = t->rx.buf;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		pi->num_domains = le16_to_cpu(attr->num_domains);
+		pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
+				(u64)le32_to_cpu(attr->stats_addr_high) << 32;
+		pi->stats_size = le32_to_cpu(attr->stats_size);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
+				 struct power_dom_info *dom_info)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_power_domain_attributes *attr;
+
+	ret = scmi_one_xfer_init(handle, POWER_DOMAIN_ATTRIBUTES,
+				 SCMI_PROTOCOL_POWER, sizeof(domain),
+				 sizeof(*attr), &t);
+	if (ret)
+		return ret;
+
+	*(__le32 *)t->tx.buf = cpu_to_le32(domain);
+	attr = t->rx.buf;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		u32 flags = le32_to_cpu(attr->flags);
+
+		dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags);
+		dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags);
+		dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags);
+		memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_power_set_state *st;
+
+	ret = scmi_one_xfer_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
+				 sizeof(*st), 0, &t);
+	if (ret)
+		return ret;
+
+	st = t->tx.buf;
+	st->flags = cpu_to_le32(0);
+	st->domain = cpu_to_le32(domain);
+	st->state = cpu_to_le32(state);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
+{
+	int ret;
+	struct scmi_xfer *t;
+
+	ret = scmi_one_xfer_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
+				 sizeof(u32), sizeof(u32), &t);
+	if (ret)
+		return ret;
+
+	*(__le32 *)t->tx.buf = cpu_to_le32(domain);
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret)
+		*state = le32_to_cpu(*(__le32 *)t->rx.buf);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_power_num_domains_get(const struct scmi_handle *handle)
+{
+	struct scmi_power_info *pi = handle->power_priv;
+
+	return pi->num_domains;
+}
+
+static char *scmi_power_name_get(const struct scmi_handle *handle, u32 domain)
+{
+	struct scmi_power_info *pi = handle->power_priv;
+	struct power_dom_info *dom = pi->dom_info + domain;
+
+	return dom->name;
+}
+
+static struct scmi_power_ops power_ops = {
+	.num_domains_get = scmi_power_num_domains_get,
+	.name_get = scmi_power_name_get,
+	.state_set = scmi_power_state_set,
+	.state_get = scmi_power_state_get,
+};
+
+static int scmi_power_protocol_init(struct scmi_handle *handle)
+{
+	int domain;
+	u32 version;
+	struct scmi_power_info *pinfo;
+
+	scmi_version_get(handle, SCMI_PROTOCOL_POWER, &version);
+
+	dev_dbg(handle->dev, "Power Version %d.%d\n",
+		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+
+	pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
+	if (!pinfo)
+		return -ENOMEM;
+
+	scmi_power_attributes_get(handle, pinfo);
+
+	pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
+				       sizeof(*pinfo->dom_info), GFP_KERNEL);
+	if (!pinfo->dom_info)
+		return -ENOMEM;
+
+	for (domain = 0; domain < pinfo->num_domains; domain++) {
+		struct power_dom_info *dom = pinfo->dom_info + domain;
+
+		scmi_power_domain_attributes_get(handle, domain, dom);
+	}
+
+	handle->power_ops = &power_ops;
+	handle->power_priv = pinfo;
+
+	return 0;
+}
+
+static int __init scmi_power_init(void)
+{
+	return scmi_protocol_register(SCMI_PROTOCOL_POWER,
+				      &scmi_power_protocol_init);
+}
+subsys_initcall(scmi_power_init);

+ 129 - 0
drivers/firmware/arm_scmi/scmi_pm_domain.c

@@ -0,0 +1,129 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * SCMI Generic power domain support.
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/pm_domain.h>
+#include <linux/scmi_protocol.h>
+
+struct scmi_pm_domain {
+	struct generic_pm_domain genpd;
+	const struct scmi_handle *handle;
+	const char *name;
+	u32 domain;
+};
+
+#define to_scmi_pd(gpd) container_of(gpd, struct scmi_pm_domain, genpd)
+
+static int scmi_pd_power(struct generic_pm_domain *domain, bool power_on)
+{
+	int ret;
+	u32 state, ret_state;
+	struct scmi_pm_domain *pd = to_scmi_pd(domain);
+	const struct scmi_power_ops *ops = pd->handle->power_ops;
+
+	if (power_on)
+		state = SCMI_POWER_STATE_GENERIC_ON;
+	else
+		state = SCMI_POWER_STATE_GENERIC_OFF;
+
+	ret = ops->state_set(pd->handle, pd->domain, state);
+	if (!ret)
+		ret = ops->state_get(pd->handle, pd->domain, &ret_state);
+	if (!ret && state != ret_state)
+		return -EIO;
+
+	return ret;
+}
+
+static int scmi_pd_power_on(struct generic_pm_domain *domain)
+{
+	return scmi_pd_power(domain, true);
+}
+
+static int scmi_pd_power_off(struct generic_pm_domain *domain)
+{
+	return scmi_pd_power(domain, false);
+}
+
+static int scmi_pm_domain_probe(struct scmi_device *sdev)
+{
+	int num_domains, i;
+	struct device *dev = &sdev->dev;
+	struct device_node *np = dev->of_node;
+	struct scmi_pm_domain *scmi_pd;
+	struct genpd_onecell_data *scmi_pd_data;
+	struct generic_pm_domain **domains;
+	const struct scmi_handle *handle = sdev->handle;
+
+	if (!handle || !handle->power_ops)
+		return -ENODEV;
+
+	num_domains = handle->power_ops->num_domains_get(handle);
+	if (num_domains < 0) {
+		dev_err(dev, "number of domains not found\n");
+		return num_domains;
+	}
+
+	scmi_pd = devm_kcalloc(dev, num_domains, sizeof(*scmi_pd), GFP_KERNEL);
+	if (!scmi_pd)
+		return -ENOMEM;
+
+	scmi_pd_data = devm_kzalloc(dev, sizeof(*scmi_pd_data), GFP_KERNEL);
+	if (!scmi_pd_data)
+		return -ENOMEM;
+
+	domains = devm_kcalloc(dev, num_domains, sizeof(*domains), GFP_KERNEL);
+	if (!domains)
+		return -ENOMEM;
+
+	for (i = 0; i < num_domains; i++, scmi_pd++) {
+		u32 state;
+
+		domains[i] = &scmi_pd->genpd;
+
+		scmi_pd->domain = i;
+		scmi_pd->handle = handle;
+		scmi_pd->name = handle->power_ops->name_get(handle, i);
+		scmi_pd->genpd.name = scmi_pd->name;
+		scmi_pd->genpd.power_off = scmi_pd_power_off;
+		scmi_pd->genpd.power_on = scmi_pd_power_on;
+
+		if (handle->power_ops->state_get(handle, i, &state)) {
+			dev_warn(dev, "failed to get state for domain %d\n", i);
+			continue;
+		}
+
+		pm_genpd_init(&scmi_pd->genpd, NULL,
+			      state == SCMI_POWER_STATE_GENERIC_OFF);
+	}
+
+	scmi_pd_data->domains = domains;
+	scmi_pd_data->num_domains = num_domains;
+
+	of_genpd_add_provider_onecell(np, scmi_pd_data);
+
+	return 0;
+}
+
+static const struct scmi_device_id scmi_id_table[] = {
+	{ SCMI_PROTOCOL_POWER },
+	{ },
+};
+MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+
+static struct scmi_driver scmi_power_domain_driver = {
+	.name = "scmi-power-domain",
+	.probe = scmi_pm_domain_probe,
+	.id_table = scmi_id_table,
+};
+module_scmi_driver(scmi_power_domain_driver);
+
+MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+MODULE_DESCRIPTION("ARM SCMI power domain driver");
+MODULE_LICENSE("GPL v2");

+ 291 - 0
drivers/firmware/arm_scmi/sensors.c

@@ -0,0 +1,291 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Sensor Protocol
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#include "common.h"
+
+enum scmi_sensor_protocol_cmd {
+	SENSOR_DESCRIPTION_GET = 0x3,
+	SENSOR_CONFIG_SET = 0x4,
+	SENSOR_TRIP_POINT_SET = 0x5,
+	SENSOR_READING_GET = 0x6,
+};
+
+struct scmi_msg_resp_sensor_attributes {
+	__le16 num_sensors;
+	u8 max_requests;
+	u8 reserved;
+	__le32 reg_addr_low;
+	__le32 reg_addr_high;
+	__le32 reg_size;
+};
+
+struct scmi_msg_resp_sensor_description {
+	__le16 num_returned;
+	__le16 num_remaining;
+	struct {
+		__le32 id;
+		__le32 attributes_low;
+#define SUPPORTS_ASYNC_READ(x)	((x) & BIT(31))
+#define NUM_TRIP_POINTS(x)	(((x) >> 4) & 0xff)
+		__le32 attributes_high;
+#define SENSOR_TYPE(x)		((x) & 0xff)
+#define SENSOR_SCALE(x)		(((x) >> 11) & 0x3f)
+#define SENSOR_UPDATE_SCALE(x)	(((x) >> 22) & 0x1f)
+#define SENSOR_UPDATE_BASE(x)	(((x) >> 27) & 0x1f)
+		    u8 name[SCMI_MAX_STR_SIZE];
+	} desc[0];
+};
+
+struct scmi_msg_set_sensor_config {
+	__le32 id;
+	__le32 event_control;
+};
+
+struct scmi_msg_set_sensor_trip_point {
+	__le32 id;
+	__le32 event_control;
+#define SENSOR_TP_EVENT_MASK	(0x3)
+#define SENSOR_TP_DISABLED	0x0
+#define SENSOR_TP_POSITIVE	0x1
+#define SENSOR_TP_NEGATIVE	0x2
+#define SENSOR_TP_BOTH		0x3
+#define SENSOR_TP_ID(x)		(((x) & 0xff) << 4)
+	__le32 value_low;
+	__le32 value_high;
+};
+
+struct scmi_msg_sensor_reading_get {
+	__le32 id;
+	__le32 flags;
+#define SENSOR_READ_ASYNC	BIT(0)
+};
+
+struct sensors_info {
+	int num_sensors;
+	int max_requests;
+	u64 reg_addr;
+	u32 reg_size;
+	struct scmi_sensor_info *sensors;
+};
+
+static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
+				      struct sensors_info *si)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_sensor_attributes *attr;
+
+	ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
+				 SCMI_PROTOCOL_SENSOR, 0, sizeof(*attr), &t);
+	if (ret)
+		return ret;
+
+	attr = t->rx.buf;
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		si->num_sensors = le16_to_cpu(attr->num_sensors);
+		si->max_requests = attr->max_requests;
+		si->reg_addr = le32_to_cpu(attr->reg_addr_low) |
+				(u64)le32_to_cpu(attr->reg_addr_high) << 32;
+		si->reg_size = le32_to_cpu(attr->reg_size);
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_sensor_description_get(const struct scmi_handle *handle,
+				       struct sensors_info *si)
+{
+	int ret, cnt;
+	u32 desc_index = 0;
+	u16 num_returned, num_remaining;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_sensor_description *buf;
+
+	ret = scmi_one_xfer_init(handle, SENSOR_DESCRIPTION_GET,
+				 SCMI_PROTOCOL_SENSOR, sizeof(__le32), 0, &t);
+	if (ret)
+		return ret;
+
+	buf = t->rx.buf;
+
+	do {
+		/* Set the number of sensors to be skipped/already read */
+		*(__le32 *)t->tx.buf = cpu_to_le32(desc_index);
+
+		ret = scmi_do_xfer(handle, t);
+		if (ret)
+			break;
+
+		num_returned = le16_to_cpu(buf->num_returned);
+		num_remaining = le16_to_cpu(buf->num_remaining);
+
+		if (desc_index + num_returned > si->num_sensors) {
+			dev_err(handle->dev, "No. of sensors can't exceed %d\n",
+				si->num_sensors);
+			break;
+		}
+
+		for (cnt = 0; cnt < num_returned; cnt++) {
+			u32 attrh;
+			struct scmi_sensor_info *s;
+
+			attrh = le32_to_cpu(buf->desc[cnt].attributes_high);
+			s = &si->sensors[desc_index + cnt];
+			s->id = le32_to_cpu(buf->desc[cnt].id);
+			s->type = SENSOR_TYPE(attrh);
+			memcpy(s->name, buf->desc[cnt].name, SCMI_MAX_STR_SIZE);
+		}
+
+		desc_index += num_returned;
+		/*
+		 * check for both returned and remaining to avoid infinite
+		 * loop due to buggy firmware
+		 */
+	} while (num_returned && num_remaining);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int
+scmi_sensor_configuration_set(const struct scmi_handle *handle, u32 sensor_id)
+{
+	int ret;
+	u32 evt_cntl = BIT(0);
+	struct scmi_xfer *t;
+	struct scmi_msg_set_sensor_config *cfg;
+
+	ret = scmi_one_xfer_init(handle, SENSOR_CONFIG_SET,
+				 SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(sensor_id);
+	cfg->event_control = cpu_to_le32(evt_cntl);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_sensor_trip_point_set(const struct scmi_handle *handle,
+				      u32 sensor_id, u8 trip_id, u64 trip_value)
+{
+	int ret;
+	u32 evt_cntl = SENSOR_TP_BOTH;
+	struct scmi_xfer *t;
+	struct scmi_msg_set_sensor_trip_point *trip;
+
+	ret = scmi_one_xfer_init(handle, SENSOR_TRIP_POINT_SET,
+				 SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t);
+	if (ret)
+		return ret;
+
+	trip = t->tx.buf;
+	trip->id = cpu_to_le32(sensor_id);
+	trip->event_control = cpu_to_le32(evt_cntl | SENSOR_TP_ID(trip_id));
+	trip->value_low = cpu_to_le32(trip_value & 0xffffffff);
+	trip->value_high = cpu_to_le32(trip_value >> 32);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_sensor_reading_get(const struct scmi_handle *handle,
+				   u32 sensor_id, bool async, u64 *value)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_sensor_reading_get *sensor;
+
+	ret = scmi_one_xfer_init(handle, SENSOR_READING_GET,
+				 SCMI_PROTOCOL_SENSOR, sizeof(*sensor),
+				 sizeof(u64), &t);
+	if (ret)
+		return ret;
+
+	sensor = t->tx.buf;
+	sensor->id = cpu_to_le32(sensor_id);
+	sensor->flags = cpu_to_le32(async ? SENSOR_READ_ASYNC : 0);
+
+	ret = scmi_do_xfer(handle, t);
+	if (!ret) {
+		__le32 *pval = t->rx.buf;
+
+		*value = le32_to_cpu(*pval);
+		*value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
+	}
+
+	scmi_one_xfer_put(handle, t);
+	return ret;
+}
+
+static const struct scmi_sensor_info *
+scmi_sensor_info_get(const struct scmi_handle *handle, u32 sensor_id)
+{
+	struct sensors_info *si = handle->sensor_priv;
+
+	return si->sensors + sensor_id;
+}
+
+static int scmi_sensor_count_get(const struct scmi_handle *handle)
+{
+	struct sensors_info *si = handle->sensor_priv;
+
+	return si->num_sensors;
+}
+
+static struct scmi_sensor_ops sensor_ops = {
+	.count_get = scmi_sensor_count_get,
+	.info_get = scmi_sensor_info_get,
+	.configuration_set = scmi_sensor_configuration_set,
+	.trip_point_set = scmi_sensor_trip_point_set,
+	.reading_get = scmi_sensor_reading_get,
+};
+
+static int scmi_sensors_protocol_init(struct scmi_handle *handle)
+{
+	u32 version;
+	struct sensors_info *sinfo;
+
+	scmi_version_get(handle, SCMI_PROTOCOL_SENSOR, &version);
+
+	dev_dbg(handle->dev, "Sensor Version %d.%d\n",
+		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+
+	sinfo = devm_kzalloc(handle->dev, sizeof(*sinfo), GFP_KERNEL);
+	if (!sinfo)
+		return -ENOMEM;
+
+	scmi_sensor_attributes_get(handle, sinfo);
+
+	sinfo->sensors = devm_kcalloc(handle->dev, sinfo->num_sensors,
+				      sizeof(*sinfo->sensors), GFP_KERNEL);
+	if (!sinfo->sensors)
+		return -ENOMEM;
+
+	scmi_sensor_description_get(handle, sinfo);
+
+	handle->sensor_ops = &sensor_ops;
+	handle->sensor_priv = sinfo;
+
+	return 0;
+}
+
+static int __init scmi_sensors_init(void)
+{
+	return scmi_protocol_register(SCMI_PROTOCOL_SENSOR,
+				      &scmi_sensors_protocol_init);
+}
+subsys_initcall(scmi_sensors_init);

+ 89 - 122
drivers/firmware/arm_scpi.c

@@ -28,6 +28,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 
 #include <linux/bitmap.h>
 #include <linux/bitmap.h>
+#include <linux/bitfield.h>
 #include <linux/device.h>
 #include <linux/device.h>
 #include <linux/err.h>
 #include <linux/err.h>
 #include <linux/export.h>
 #include <linux/export.h>
@@ -45,48 +46,32 @@
 #include <linux/sort.h>
 #include <linux/sort.h>
 #include <linux/spinlock.h>
 #include <linux/spinlock.h>
 
 
-#define CMD_ID_SHIFT		0
-#define CMD_ID_MASK		0x7f
-#define CMD_TOKEN_ID_SHIFT	8
-#define CMD_TOKEN_ID_MASK	0xff
-#define CMD_DATA_SIZE_SHIFT	16
-#define CMD_DATA_SIZE_MASK	0x1ff
-#define CMD_LEGACY_DATA_SIZE_SHIFT	20
-#define CMD_LEGACY_DATA_SIZE_MASK	0x1ff
-#define PACK_SCPI_CMD(cmd_id, tx_sz)			\
-	((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) |	\
-	(((tx_sz) & CMD_DATA_SIZE_MASK) << CMD_DATA_SIZE_SHIFT))
-#define ADD_SCPI_TOKEN(cmd, token)			\
-	((cmd) |= (((token) & CMD_TOKEN_ID_MASK) << CMD_TOKEN_ID_SHIFT))
-#define PACK_LEGACY_SCPI_CMD(cmd_id, tx_sz)				\
-	((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) |			       \
-	(((tx_sz) & CMD_LEGACY_DATA_SIZE_MASK) << CMD_LEGACY_DATA_SIZE_SHIFT))
-
-#define CMD_SIZE(cmd)	(((cmd) >> CMD_DATA_SIZE_SHIFT) & CMD_DATA_SIZE_MASK)
-#define CMD_LEGACY_SIZE(cmd)	(((cmd) >> CMD_LEGACY_DATA_SIZE_SHIFT) & \
-					CMD_LEGACY_DATA_SIZE_MASK)
-#define CMD_UNIQ_MASK	(CMD_TOKEN_ID_MASK << CMD_TOKEN_ID_SHIFT | CMD_ID_MASK)
+#define CMD_ID_MASK		GENMASK(6, 0)
+#define CMD_TOKEN_ID_MASK	GENMASK(15, 8)
+#define CMD_DATA_SIZE_MASK	GENMASK(24, 16)
+#define CMD_LEGACY_DATA_SIZE_MASK	GENMASK(28, 20)
+#define PACK_SCPI_CMD(cmd_id, tx_sz)		\
+	(FIELD_PREP(CMD_ID_MASK, cmd_id) |	\
+	FIELD_PREP(CMD_DATA_SIZE_MASK, tx_sz))
+#define PACK_LEGACY_SCPI_CMD(cmd_id, tx_sz)	\
+	(FIELD_PREP(CMD_ID_MASK, cmd_id) |	\
+	FIELD_PREP(CMD_LEGACY_DATA_SIZE_MASK, tx_sz))
+
+#define CMD_SIZE(cmd)	FIELD_GET(CMD_DATA_SIZE_MASK, cmd)
+#define CMD_UNIQ_MASK	(CMD_TOKEN_ID_MASK | CMD_ID_MASK)
 #define CMD_XTRACT_UNIQ(cmd)	((cmd) & CMD_UNIQ_MASK)
 #define CMD_XTRACT_UNIQ(cmd)	((cmd) & CMD_UNIQ_MASK)
 
 
 #define SCPI_SLOT		0
 #define SCPI_SLOT		0
 
 
 #define MAX_DVFS_DOMAINS	8
 #define MAX_DVFS_DOMAINS	8
 #define MAX_DVFS_OPPS		16
 #define MAX_DVFS_OPPS		16
-#define DVFS_LATENCY(hdr)	(le32_to_cpu(hdr) >> 16)
-#define DVFS_OPP_COUNT(hdr)	((le32_to_cpu(hdr) >> 8) & 0xff)
-
-#define PROTOCOL_REV_MINOR_BITS	16
-#define PROTOCOL_REV_MINOR_MASK	((1U << PROTOCOL_REV_MINOR_BITS) - 1)
-#define PROTOCOL_REV_MAJOR(x)	((x) >> PROTOCOL_REV_MINOR_BITS)
-#define PROTOCOL_REV_MINOR(x)	((x) & PROTOCOL_REV_MINOR_MASK)
-
-#define FW_REV_MAJOR_BITS	24
-#define FW_REV_MINOR_BITS	16
-#define FW_REV_PATCH_MASK	((1U << FW_REV_MINOR_BITS) - 1)
-#define FW_REV_MINOR_MASK	((1U << FW_REV_MAJOR_BITS) - 1)
-#define FW_REV_MAJOR(x)		((x) >> FW_REV_MAJOR_BITS)
-#define FW_REV_MINOR(x)		(((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS)
-#define FW_REV_PATCH(x)		((x) & FW_REV_PATCH_MASK)
+
+#define PROTO_REV_MAJOR_MASK	GENMASK(31, 16)
+#define PROTO_REV_MINOR_MASK	GENMASK(15, 0)
+
+#define FW_REV_MAJOR_MASK	GENMASK(31, 24)
+#define FW_REV_MINOR_MASK	GENMASK(23, 16)
+#define FW_REV_PATCH_MASK	GENMASK(15, 0)
 
 #define MAX_RX_TIMEOUT		(msecs_to_jiffies(30))
 
@@ -311,10 +296,6 @@ struct clk_get_info {
 	u8 name[20];
 } __packed;
 
-struct clk_get_value {
-	__le32 rate;
-} __packed;
-
 struct clk_set_value {
 	__le16 id;
 	__le16 reserved;
@@ -328,7 +309,9 @@ struct legacy_clk_set_value {
 } __packed;
 
 struct dvfs_info {
-	__le32 header;
+	u8 domain;
+	u8 opp_count;
+	__le16 latency;
 	struct {
 		__le32 freq;
 		__le32 m_volt;
@@ -340,10 +323,6 @@ struct dvfs_set {
 	u8 index;
 } __packed;
 
-struct sensor_capabilities {
-	__le16 sensors;
-} __packed;
-
 struct _scpi_sensor_info {
 	__le16 sensor_id;
 	u8 class;
@@ -351,11 +330,6 @@ struct _scpi_sensor_info {
 	char name[20];
 };
 
-struct sensor_value {
-	__le32 lo_val;
-	__le32 hi_val;
-} __packed;
-
 struct dev_pstate_set {
 	__le16 dev_id;
 	u8 pstate;
@@ -419,19 +393,20 @@ static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd)
 		unsigned int len;
 
 		if (scpi_info->is_legacy) {
-			struct legacy_scpi_shared_mem *mem = ch->rx_payload;
+			struct legacy_scpi_shared_mem __iomem *mem =
+							ch->rx_payload;
 
 			/* RX Length is not replied by the legacy Firmware */
 			len = match->rx_len;
 
-			match->status = le32_to_cpu(mem->status);
+			match->status = ioread32(&mem->status);
 			memcpy_fromio(match->rx_buf, mem->payload, len);
 		} else {
-			struct scpi_shared_mem *mem = ch->rx_payload;
+			struct scpi_shared_mem __iomem *mem = ch->rx_payload;
 
-			len = min(match->rx_len, CMD_SIZE(cmd));
+			len = min_t(unsigned int, match->rx_len, CMD_SIZE(cmd));
 
-			match->status = le32_to_cpu(mem->status);
+			match->status = ioread32(&mem->status);
 			memcpy_fromio(match->rx_buf, mem->payload, len);
 		}
 
@@ -445,11 +420,11 @@ static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd)
 static void scpi_handle_remote_msg(struct mbox_client *c, void *msg)
 {
 	struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
-	struct scpi_shared_mem *mem = ch->rx_payload;
+	struct scpi_shared_mem __iomem *mem = ch->rx_payload;
 	u32 cmd = 0;
 
 	if (!scpi_info->is_legacy)
-		cmd = le32_to_cpu(mem->command);
+		cmd = ioread32(&mem->command);
 
 	scpi_process_cmd(ch, cmd);
 }
@@ -459,7 +434,7 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg)
 	unsigned long flags;
 	struct scpi_xfer *t = msg;
 	struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
-	struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload;
+	struct scpi_shared_mem __iomem *mem = ch->tx_payload;
 
 	if (t->tx_buf) {
 		if (scpi_info->is_legacy)
@@ -471,14 +446,14 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg)
 	if (t->rx_buf) {
 		if (!(++ch->token))
 			++ch->token;
-		ADD_SCPI_TOKEN(t->cmd, ch->token);
+		t->cmd |= FIELD_PREP(CMD_TOKEN_ID_MASK, ch->token);
 		spin_lock_irqsave(&ch->rx_lock, flags);
 		list_add_tail(&t->node, &ch->rx_pending);
 		spin_unlock_irqrestore(&ch->rx_lock, flags);
 	}
 
 	if (!scpi_info->is_legacy)
-		mem->command = cpu_to_le32(t->cmd);
+		iowrite32(t->cmd, &mem->command);
 }
 
 static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch)
@@ -583,13 +558,13 @@ scpi_clk_get_range(u16 clk_id, unsigned long *min, unsigned long *max)
 static unsigned long scpi_clk_get_val(u16 clk_id)
 {
 	int ret;
-	struct clk_get_value clk;
+	__le32 rate;
 	__le16 le_clk_id = cpu_to_le16(clk_id);
 
 	ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
-				sizeof(le_clk_id), &clk, sizeof(clk));
+				sizeof(le_clk_id), &rate, sizeof(rate));
 
-	return ret ? ret : le32_to_cpu(clk.rate);
+	return ret ? ret : le32_to_cpu(rate);
 }
 
 static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
@@ -665,8 +640,8 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
 	if (!info)
 		return ERR_PTR(-ENOMEM);
 
-	info->count = DVFS_OPP_COUNT(buf.header);
-	info->latency = DVFS_LATENCY(buf.header) * 1000; /* uS to nS */
+	info->count = buf.opp_count;
+	info->latency = le16_to_cpu(buf.latency) * 1000; /* uS to nS */
 
 	info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL);
 	if (!info->opps) {
@@ -713,9 +688,6 @@ static int scpi_dvfs_get_transition_latency(struct device *dev)
 	if (IS_ERR(info))
 		return PTR_ERR(info);
 
-	if (!info->latency)
-		return 0;
-
 	return info->latency;
 }
 
@@ -746,13 +718,13 @@ static int scpi_dvfs_add_opps_to_device(struct device *dev)
 
 static int scpi_sensor_get_capability(u16 *sensors)
 {
-	struct sensor_capabilities cap_buf;
+	__le16 cap;
 	int ret;
 
-	ret = scpi_send_message(CMD_SENSOR_CAPABILITIES, NULL, 0, &cap_buf,
-				sizeof(cap_buf));
+	ret = scpi_send_message(CMD_SENSOR_CAPABILITIES, NULL, 0, &cap,
+				sizeof(cap));
 	if (!ret)
-		*sensors = le16_to_cpu(cap_buf.sensors);
+		*sensors = le16_to_cpu(cap);
 
 	return ret;
 }
@@ -776,20 +748,19 @@ static int scpi_sensor_get_info(u16 sensor_id, struct scpi_sensor_info *info)
 static int scpi_sensor_get_value(u16 sensor, u64 *val)
 {
 	__le16 id = cpu_to_le16(sensor);
-	struct sensor_value buf;
+	__le64 value;
 	int ret;
 
 	ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id),
-				&buf, sizeof(buf));
+				&value, sizeof(value));
 	if (ret)
 		return ret;
 
 	if (scpi_info->is_legacy)
-		/* only 32-bits supported, hi_val can be junk */
-		*val = le32_to_cpu(buf.lo_val);
+		/* only 32-bits supported, upper 32 bits can be junk */
+		*val = le32_to_cpup((__le32 *)&value);
 	else
-		*val = (u64)le32_to_cpu(buf.hi_val) << 32 |
-			le32_to_cpu(buf.lo_val);
+		*val = le64_to_cpu(value);
 
 	return 0;
 }
@@ -864,9 +835,9 @@ static ssize_t protocol_version_show(struct device *dev,
 {
 	struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
 
-	return sprintf(buf, "%d.%d\n",
-		       PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
-		       PROTOCOL_REV_MINOR(scpi_info->protocol_version));
+	return sprintf(buf, "%lu.%lu\n",
+		FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version),
+		FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version));
 }
 static DEVICE_ATTR_RO(protocol_version);
 
@@ -875,10 +846,10 @@ static ssize_t firmware_version_show(struct device *dev,
 {
 	struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);
 
-	return sprintf(buf, "%d.%d.%d\n",
-		       FW_REV_MAJOR(scpi_info->firmware_version),
-		       FW_REV_MINOR(scpi_info->firmware_version),
-		       FW_REV_PATCH(scpi_info->firmware_version));
+	return sprintf(buf, "%lu.%lu.%lu\n",
+		FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version),
+		FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version),
+		FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version));
 }
 static DEVICE_ATTR_RO(firmware_version);
 
@@ -889,37 +860,26 @@ static struct attribute *versions_attrs[] = {
 };
 ATTRIBUTE_GROUPS(versions);
 
-static void
-scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count)
+static void scpi_free_channels(void *data)
 {
+	struct scpi_drvinfo *info = data;
 	int i;
 
-	for (i = 0; i < count && pchan->chan; i++, pchan++) {
-		mbox_free_channel(pchan->chan);
-		devm_kfree(dev, pchan->xfers);
-		devm_iounmap(dev, pchan->rx_payload);
-	}
+	for (i = 0; i < info->num_chans; i++)
+		mbox_free_channel(info->channels[i].chan);
 }
 
 static int scpi_remove(struct platform_device *pdev)
 {
 	int i;
-	struct device *dev = &pdev->dev;
 	struct scpi_drvinfo *info = platform_get_drvdata(pdev);
 
 	scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */
 
-	of_platform_depopulate(dev);
-	sysfs_remove_groups(&dev->kobj, versions_groups);
-	scpi_free_channels(dev, info->channels, info->num_chans);
-	platform_set_drvdata(pdev, NULL);
-
 	for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) {
 		kfree(info->dvfs[i]->opps);
 		kfree(info->dvfs[i]);
 	}
-	devm_kfree(dev, info->channels);
-	devm_kfree(dev, info);
 
 	return 0;
 }
@@ -952,7 +912,6 @@ static int scpi_probe(struct platform_device *pdev)
 {
 	int count, idx, ret;
 	struct resource res;
-	struct scpi_chan *scpi_chan;
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
 
@@ -969,13 +928,19 @@ static int scpi_probe(struct platform_device *pdev)
 		return -ENODEV;
 	}
 
-	scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL);
-	if (!scpi_chan)
+	scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan),
+					   GFP_KERNEL);
+	if (!scpi_info->channels)
 		return -ENOMEM;
 
-	for (idx = 0; idx < count; idx++) {
+	ret = devm_add_action(dev, scpi_free_channels, scpi_info);
+	if (ret)
+		return ret;
+
+	for (; scpi_info->num_chans < count; scpi_info->num_chans++) {
 		resource_size_t size;
-		struct scpi_chan *pchan = scpi_chan + idx;
+		int idx = scpi_info->num_chans;
+		struct scpi_chan *pchan = scpi_info->channels + idx;
 		struct mbox_client *cl = &pchan->cl;
 		struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
 
@@ -983,15 +948,14 @@ static int scpi_probe(struct platform_device *pdev)
 		of_node_put(shmem);
 		if (ret) {
 			dev_err(dev, "failed to get SCPI payload mem resource\n");
-			goto err;
+			return ret;
 		}
 
 		size = resource_size(&res);
 		pchan->rx_payload = devm_ioremap(dev, res.start, size);
 		if (!pchan->rx_payload) {
 			dev_err(dev, "failed to ioremap SCPI payload\n");
-			ret = -EADDRNOTAVAIL;
-			goto err;
+			return -EADDRNOTAVAIL;
 		}
 		pchan->tx_payload = pchan->rx_payload + (size >> 1);
 
@@ -1017,14 +981,9 @@ static int scpi_probe(struct platform_device *pdev)
 				dev_err(dev, "failed to get channel%d err %d\n",
 				dev_err(dev, "failed to get channel%d err %d\n",
 					idx, ret);
 					idx, ret);
 		}
 		}
-err:
-		scpi_free_channels(dev, scpi_chan, idx);
-		scpi_info = NULL;
 		return ret;
 	}
 
-	scpi_info->channels = scpi_chan;
-	scpi_info->num_chans = count;
 	scpi_info->commands = scpi_std_commands;
 
 	platform_set_drvdata(pdev, scpi_info);
@@ -1043,23 +1002,31 @@ err:
 	ret = scpi_init_versions(scpi_info);
 	if (ret) {
 		dev_err(dev, "incorrect or no SCP firmware found\n");
-		scpi_remove(pdev);
 		return ret;
 	}
 
-	_dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n",
-		  PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
-		  PROTOCOL_REV_MINOR(scpi_info->protocol_version),
-		  FW_REV_MAJOR(scpi_info->firmware_version),
-		  FW_REV_MINOR(scpi_info->firmware_version),
-		  FW_REV_PATCH(scpi_info->firmware_version));
+	if (scpi_info->is_legacy && !scpi_info->protocol_version &&
+	    !scpi_info->firmware_version)
+		dev_info(dev, "SCP Protocol legacy pre-1.0 firmware\n");
+	else
+		dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n",
+			 FIELD_GET(PROTO_REV_MAJOR_MASK,
+				   scpi_info->protocol_version),
+			 FIELD_GET(PROTO_REV_MINOR_MASK,
+				   scpi_info->protocol_version),
+			 FIELD_GET(FW_REV_MAJOR_MASK,
+				   scpi_info->firmware_version),
+			 FIELD_GET(FW_REV_MINOR_MASK,
+				   scpi_info->firmware_version),
+			 FIELD_GET(FW_REV_PATCH_MASK,
+				   scpi_info->firmware_version));
 	scpi_info->scpi_ops = &scpi_ops;
 
-	ret = sysfs_create_groups(&dev->kobj, versions_groups);
+	ret = devm_device_add_groups(dev, versions_groups);
 	if (ret)
 		dev_err(dev, "unable to create sysfs version group\n");
 
-	return of_platform_populate(dev->of_node, NULL, NULL, dev);
+	return devm_of_platform_populate(dev);
 }
 
 static const struct of_device_id scpi_of_match[] = {

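The most mechanical part of the arm_scpi.c rework above is the switch from hand-rolled shift/mask pairs to <linux/bitfield.h>. As a hedged illustration (the helpers below are not part of the driver), FIELD_PREP()/FIELD_GET() derive the shift from the GENMASK() definition itself, so each field is described exactly once:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/* Field layout copied from the new arm_scpi.c masks. */
#define DEMO_CMD_ID_MASK	GENMASK(6, 0)	/* bits 6:0   - command id */
#define DEMO_CMD_TOKEN_MASK	GENMASK(15, 8)	/* bits 15:8  - token      */
#define DEMO_CMD_SIZE_MASK	GENMASK(24, 16)	/* bits 24:16 - data size  */

static u32 demo_pack_cmd(u8 id, u8 token, u16 tx_sz)
{
	/* FIELD_PREP() masks and shifts in one step; shift and mask can
	 * never drift apart as two separate #defines can. */
	return FIELD_PREP(DEMO_CMD_ID_MASK, id) |
	       FIELD_PREP(DEMO_CMD_TOKEN_MASK, token) |
	       FIELD_PREP(DEMO_CMD_SIZE_MASK, tx_sz);
}

static u32 demo_cmd_size(u32 cmd)
{
	return FIELD_GET(DEMO_CMD_SIZE_MASK, cmd);	/* no manual shift */
}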
+ 12 - 13
drivers/firmware/meson/meson_sm.c

@@ -17,8 +17,10 @@
 #include <linux/arm-smccc.h>
 #include <linux/bug.h>
 #include <linux/io.h>
+#include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/platform_device.h>
 #include <linux/printk.h>
 #include <linux/types.h>
 #include <linux/sizes.h>
@@ -217,21 +219,11 @@ static const struct of_device_id meson_sm_ids[] = {
 	{ /* sentinel */ },
 };
 
-int __init meson_sm_init(void)
+static int __init meson_sm_probe(struct platform_device *pdev)
 {
 	const struct meson_sm_chip *chip;
-	const struct of_device_id *matched_np;
-	struct device_node *np;
 
-	np = of_find_matching_node_and_match(NULL, meson_sm_ids, &matched_np);
-	if (!np)
-		return -ENODEV;
-
-	chip = matched_np->data;
-	if (!chip) {
-		pr_err("unable to setup secure-monitor data\n");
-		goto out;
-	}
+	chip = of_match_device(meson_sm_ids, &pdev->dev)->data;
 
 	if (chip->cmd_shmem_in_base) {
 		fw.sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base,
@@ -257,4 +249,11 @@ out_in_base:
 out:
 	return -EINVAL;
 }
-device_initcall(meson_sm_init);
+
+static struct platform_driver meson_sm_driver = {
+	.driver = {
+		.name = "meson-sm",
+		.of_match_table = of_match_ptr(meson_sm_ids),
+	},
+};
+module_platform_driver_probe(meson_sm_driver, meson_sm_probe);

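The meson_sm conversion above relies on module_platform_driver_probe(), which registers via platform_driver_probe(): the probe routine can stay __init and be discarded after boot, at the cost of giving up deferred probing and later rebinding. A minimal sketch of the same pattern, with hypothetical names:

#include <linux/module.h>
#include <linux/platform_device.h>

static int __init demo_probe(struct platform_device *pdev)
{
	/* one-shot setup; this code is freed with the init sections */
	return 0;
}

static struct platform_driver demo_driver = {
	.driver = {
		.name = "demo",
	},
};
/* Registers with platform_driver_probe(): probe runs once, never again. */
module_platform_driver_probe(demo_driver, demo_probe);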
+ 65 - 79
drivers/firmware/tegra/bpmp.c

@@ -70,57 +70,20 @@ void tegra_bpmp_put(struct tegra_bpmp *bpmp)
 }
 EXPORT_SYMBOL_GPL(tegra_bpmp_put);
 
-static int tegra_bpmp_channel_get_index(struct tegra_bpmp_channel *channel)
-{
-	return channel - channel->bpmp->channels;
-}
-
 static int
 tegra_bpmp_channel_get_thread_index(struct tegra_bpmp_channel *channel)
 {
 	struct tegra_bpmp *bpmp = channel->bpmp;
-	unsigned int offset, count;
+	unsigned int count;
 	int index;
 
-	offset = bpmp->soc->channels.thread.offset;
 	count = bpmp->soc->channels.thread.count;
 
-	index = tegra_bpmp_channel_get_index(channel);
-	if (index < 0)
-		return index;
-
-	if (index < offset || index >= offset + count)
+	index = channel - channel->bpmp->threaded_channels;
+	if (index < 0 || index >= count)
 		return -EINVAL;
 
-	return index - offset;
-}
-
-static struct tegra_bpmp_channel *
-tegra_bpmp_channel_get_thread(struct tegra_bpmp *bpmp, unsigned int index)
-{
-	unsigned int offset = bpmp->soc->channels.thread.offset;
-	unsigned int count = bpmp->soc->channels.thread.count;
-
-	if (index >= count)
-		return NULL;
-
-	return &bpmp->channels[offset + index];
-}
-
-static struct tegra_bpmp_channel *
-tegra_bpmp_channel_get_tx(struct tegra_bpmp *bpmp)
-{
-	unsigned int offset = bpmp->soc->channels.cpu_tx.offset;
-
-	return &bpmp->channels[offset + smp_processor_id()];
-}
-
-static struct tegra_bpmp_channel *
-tegra_bpmp_channel_get_rx(struct tegra_bpmp *bpmp)
-{
-	unsigned int offset = bpmp->soc->channels.cpu_rx.offset;
-
-	return &bpmp->channels[offset];
+	return index;
 }
 
 static bool tegra_bpmp_message_valid(const struct tegra_bpmp_message *msg)
@@ -271,11 +234,7 @@ tegra_bpmp_write_threaded(struct tegra_bpmp *bpmp, unsigned int mrq,
 		goto unlock;
 	}
 
-	channel = tegra_bpmp_channel_get_thread(bpmp, index);
-	if (!channel) {
-		err = -EINVAL;
-		goto unlock;
-	}
+	channel = &bpmp->threaded_channels[index];
 
 	if (!tegra_bpmp_master_free(channel)) {
 		err = -EBUSY;
@@ -328,12 +287,18 @@ int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp,
 	if (!tegra_bpmp_message_valid(msg))
 		return -EINVAL;
 
-	channel = tegra_bpmp_channel_get_tx(bpmp);
+	channel = bpmp->tx_channel;
+
+	spin_lock(&bpmp->atomic_tx_lock);
 
 	err = tegra_bpmp_channel_write(channel, msg->mrq, MSG_ACK,
 				       msg->tx.data, msg->tx.size);
-	if (err < 0)
+	if (err < 0) {
+		spin_unlock(&bpmp->atomic_tx_lock);
 		return err;
+	}
+
+	spin_unlock(&bpmp->atomic_tx_lock);
 
 	err = mbox_send_message(bpmp->mbox.channel, NULL);
 	if (err < 0)
@@ -607,7 +572,7 @@ static void tegra_bpmp_handle_rx(struct mbox_client *client, void *data)
 	unsigned int i, count;
 	unsigned long *busy;
 
-	channel = tegra_bpmp_channel_get_rx(bpmp);
+	channel = bpmp->rx_channel;
 	count = bpmp->soc->channels.thread.count;
 	busy = bpmp->threaded.busy;
 
@@ -619,9 +584,7 @@ static void tegra_bpmp_handle_rx(struct mbox_client *client, void *data)
 	for_each_set_bit(i, busy, count) {
 		struct tegra_bpmp_channel *channel;
 
-		channel = tegra_bpmp_channel_get_thread(bpmp, i);
-		if (!channel)
-			continue;
+		channel = &bpmp->threaded_channels[i];
 
 		if (tegra_bpmp_master_acked(channel)) {
 			tegra_bpmp_channel_signal(channel);
@@ -698,7 +661,6 @@ static void tegra_bpmp_channel_cleanup(struct tegra_bpmp_channel *channel)
 
 static int tegra_bpmp_probe(struct platform_device *pdev)
 {
-	struct tegra_bpmp_channel *channel;
 	struct tegra_bpmp *bpmp;
 	unsigned int i;
 	char tag[32];
@@ -732,7 +694,7 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 	}
 
 	bpmp->rx.virt = gen_pool_dma_alloc(bpmp->rx.pool, 4096, &bpmp->rx.phys);
-	if (!bpmp->rx.pool) {
+	if (!bpmp->rx.virt) {
 		dev_err(&pdev->dev, "failed to allocate from RX pool\n");
 		dev_err(&pdev->dev, "failed to allocate from RX pool\n");
 		err = -ENOMEM;
 		err = -ENOMEM;
 		goto free_tx;
 		goto free_tx;
@@ -758,24 +720,45 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 		goto free_rx;
 	}
 
-	bpmp->num_channels = bpmp->soc->channels.cpu_tx.count +
-			     bpmp->soc->channels.thread.count +
-			     bpmp->soc->channels.cpu_rx.count;
+	spin_lock_init(&bpmp->atomic_tx_lock);
+	bpmp->tx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->tx_channel),
+					GFP_KERNEL);
+	if (!bpmp->tx_channel) {
+		err = -ENOMEM;
+		goto free_rx;
+	}
 
-	bpmp->channels = devm_kcalloc(&pdev->dev, bpmp->num_channels,
-				      sizeof(*channel), GFP_KERNEL);
-	if (!bpmp->channels) {
+	bpmp->rx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->rx_channel),
+	                                GFP_KERNEL);
+	if (!bpmp->rx_channel) {
 		err = -ENOMEM;
 		goto free_rx;
 	}
 
-	/* message channel initialization */
-	for (i = 0; i < bpmp->num_channels; i++) {
-		struct tegra_bpmp_channel *channel = &bpmp->channels[i];
+	bpmp->threaded_channels = devm_kcalloc(&pdev->dev, bpmp->threaded.count,
+					       sizeof(*bpmp->threaded_channels),
+					       GFP_KERNEL);
+	if (!bpmp->threaded_channels) {
+		err = -ENOMEM;
+		goto free_rx;
+	}
 
-		err = tegra_bpmp_channel_init(channel, bpmp, i);
+	err = tegra_bpmp_channel_init(bpmp->tx_channel, bpmp,
+				      bpmp->soc->channels.cpu_tx.offset);
+	if (err < 0)
+		goto free_rx;
+
+	err = tegra_bpmp_channel_init(bpmp->rx_channel, bpmp,
+				      bpmp->soc->channels.cpu_rx.offset);
+	if (err < 0)
+		goto cleanup_tx_channel;
+
+	for (i = 0; i < bpmp->threaded.count; i++) {
+		err = tegra_bpmp_channel_init(
+			&bpmp->threaded_channels[i], bpmp,
+			bpmp->soc->channels.thread.offset + i);
 		if (err < 0)
-			goto cleanup_channels;
+			goto cleanup_threaded_channels;
 	}
 
 	/* mbox registration */
@@ -788,15 +771,14 @@ static int tegra_bpmp_probe(struct platform_device *pdev)
 	if (IS_ERR(bpmp->mbox.channel)) {
 		err = PTR_ERR(bpmp->mbox.channel);
 		dev_err(&pdev->dev, "failed to get HSP mailbox: %d\n", err);
-		goto cleanup_channels;
+		goto cleanup_threaded_channels;
 	}
 
 	/* reset message channels */
-	for (i = 0; i < bpmp->num_channels; i++) {
-		struct tegra_bpmp_channel *channel = &bpmp->channels[i];
-
-		tegra_bpmp_channel_reset(channel);
-	}
+	tegra_bpmp_channel_reset(bpmp->tx_channel);
+	tegra_bpmp_channel_reset(bpmp->rx_channel);
+	for (i = 0; i < bpmp->threaded.count; i++)
+		tegra_bpmp_channel_reset(&bpmp->threaded_channels[i]);
 
 	err = tegra_bpmp_request_mrq(bpmp, MRQ_PING,
 				     tegra_bpmp_mrq_handle_ping, bpmp);
@@ -845,9 +827,15 @@ free_mrq:
 	tegra_bpmp_free_mrq(bpmp, MRQ_PING, bpmp);
 free_mbox:
 	mbox_free_channel(bpmp->mbox.channel);
-cleanup_channels:
-	while (i--)
-		tegra_bpmp_channel_cleanup(&bpmp->channels[i]);
+cleanup_threaded_channels:
+	for (i = 0; i < bpmp->threaded.count; i++) {
+		if (bpmp->threaded_channels[i].bpmp)
+			tegra_bpmp_channel_cleanup(&bpmp->threaded_channels[i]);
+	}
+
+	tegra_bpmp_channel_cleanup(bpmp->rx_channel);
+cleanup_tx_channel:
+	tegra_bpmp_channel_cleanup(bpmp->tx_channel);
 free_rx:
 	gen_pool_free(bpmp->rx.pool, (unsigned long)bpmp->rx.virt, 4096);
 free_tx:
@@ -858,18 +846,16 @@ free_tx:
 static const struct tegra_bpmp_soc tegra186_soc = {
 	.channels = {
 		.cpu_tx = {
-			.offset = 0,
-			.count = 6,
+			.offset = 3,
 			.timeout = 60 * USEC_PER_SEC,
 		},
 		.thread = {
-			.offset = 6,
-			.count = 7,
+			.offset = 0,
+			.count = 3,
 			.timeout = 600 * USEC_PER_SEC,
 		},
 		.cpu_rx = {
 			.offset = 13,
-			.count = 1,
 			.timeout = 0,
 		},
 	},

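The net effect of the bpmp.c changes above: the per-CPU cpu_tx channels collapse into one shared TX channel (offset 3 on Tegra186) that is safe from atomic context because writers serialize on atomic_tx_lock, alongside three threaded channels and one RX channel. A hedged sketch of that serialization pattern, with stand-in helpers (demo_write_slot()/demo_ring_doorbell() take the place of tegra_bpmp_channel_write() and mbox_send_message()):

#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(demo_tx_lock);

/* Stubs so the sketch is self-contained. */
static int demo_write_slot(const void *data, size_t size) { return 0; }
static int demo_ring_doorbell(void) { return 0; }

static int demo_atomic_transfer(const void *data, size_t size)
{
	int err;

	/* Fill the single shared slot under the lock so concurrent CPUs
	 * cannot interleave payloads; spin_lock() suffices since nothing
	 * here sleeps. */
	spin_lock(&demo_tx_lock);
	err = demo_write_slot(data, size);
	spin_unlock(&demo_tx_lock);
	if (err < 0)
		return err;

	/* Ringing the doorbell needs no lock of its own. */
	return demo_ring_doorbell();
}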
+ 12 - 0
drivers/hwmon/Kconfig

@@ -317,6 +317,18 @@ config SENSORS_APPLESMC
 	  Say Y here if you have an applicable laptop and want to experience
 	  the awesome power of applesmc.
 
+config SENSORS_ARM_SCMI
+	tristate "ARM SCMI Sensors"
+	depends on ARM_SCMI_PROTOCOL
+	depends on THERMAL || !THERMAL_OF
+	help
+	  This driver provides support for temperature, voltage, current
+	  and power sensors available on SCMI based platforms. The actual
+	  number and type of sensors exported depend on the platform.
+
+	  This driver can also be built as a module.  If so, the module
+	  will be called scmi-hwmon.
+
 config SENSORS_ARM_SCPI
 	tristate "ARM SCPI Sensors"
 	depends on ARM_SCPI_PROTOCOL

+ 1 - 0
drivers/hwmon/Makefile

@@ -46,6 +46,7 @@ obj-$(CONFIG_SENSORS_ADT7462)	+= adt7462.o
 obj-$(CONFIG_SENSORS_ADT7470)	+= adt7470.o
 obj-$(CONFIG_SENSORS_ADT7475)	+= adt7475.o
 obj-$(CONFIG_SENSORS_APPLESMC)	+= applesmc.o
+obj-$(CONFIG_SENSORS_ARM_SCMI)	+= scmi-hwmon.o
 obj-$(CONFIG_SENSORS_ARM_SCPI)	+= scpi-hwmon.o
 obj-$(CONFIG_SENSORS_ASC7621)	+= asc7621.o
 obj-$(CONFIG_SENSORS_ASPEED)	+= aspeed-pwm-tacho.o

+ 225 - 0
drivers/hwmon/scmi-hwmon.c

@@ -0,0 +1,225 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) based hwmon sensor driver
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ * Sudeep Holla <sudeep.holla@arm.com>
+ */
+
+#include <linux/hwmon.h>
+#include <linux/module.h>
+#include <linux/scmi_protocol.h>
+#include <linux/slab.h>
+#include <linux/sysfs.h>
+#include <linux/thermal.h>
+
+struct scmi_sensors {
+	const struct scmi_handle *handle;
+	const struct scmi_sensor_info **info[hwmon_max];
+};
+
+static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
+			   u32 attr, int channel, long *val)
+{
+	int ret;
+	u64 value;
+	const struct scmi_sensor_info *sensor;
+	struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
+	const struct scmi_handle *h = scmi_sensors->handle;
+
+	sensor = *(scmi_sensors->info[type] + channel);
+	ret = h->sensor_ops->reading_get(h, sensor->id, false, &value);
+	if (!ret)
+		*val = value;
+
+	return ret;
+}
+
+static int
+scmi_hwmon_read_string(struct device *dev, enum hwmon_sensor_types type,
+		       u32 attr, int channel, const char **str)
+{
+	const struct scmi_sensor_info *sensor;
+	struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
+
+	sensor = *(scmi_sensors->info[type] + channel);
+	*str = sensor->name;
+
+	return 0;
+}
+
+static umode_t
+scmi_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type,
+		      u32 attr, int channel)
+{
+	const struct scmi_sensor_info *sensor;
+	const struct scmi_sensors *scmi_sensors = drvdata;
+
+	sensor = *(scmi_sensors->info[type] + channel);
+	if (sensor && sensor->name)
+		return S_IRUGO;
+
+	return 0;
+}
+
+static const struct hwmon_ops scmi_hwmon_ops = {
+	.is_visible = scmi_hwmon_is_visible,
+	.read = scmi_hwmon_read,
+	.read_string = scmi_hwmon_read_string,
+};
+
+static struct hwmon_chip_info scmi_chip_info = {
+	.ops = &scmi_hwmon_ops,
+	.info = NULL,
+};
+
+static int scmi_hwmon_add_chan_info(struct hwmon_channel_info *scmi_hwmon_chan,
+				    struct device *dev, int num,
+				    enum hwmon_sensor_types type, u32 config)
+{
+	int i;
+	u32 *cfg = devm_kcalloc(dev, num + 1, sizeof(*cfg), GFP_KERNEL);
+
+	if (!cfg)
+		return -ENOMEM;
+
+	scmi_hwmon_chan->type = type;
+	scmi_hwmon_chan->config = cfg;
+	for (i = 0; i < num; i++, cfg++)
+		*cfg = config;
+
+	return 0;
+}
+
+static enum hwmon_sensor_types scmi_types[] = {
+	[TEMPERATURE_C] = hwmon_temp,
+	[VOLTAGE] = hwmon_in,
+	[CURRENT] = hwmon_curr,
+	[POWER] = hwmon_power,
+	[ENERGY] = hwmon_energy,
+};
+
+static u32 hwmon_attributes[] = {
+	[hwmon_chip] = HWMON_C_REGISTER_TZ,
+	[hwmon_temp] = HWMON_T_INPUT | HWMON_T_LABEL,
+	[hwmon_in] = HWMON_I_INPUT | HWMON_I_LABEL,
+	[hwmon_curr] = HWMON_C_INPUT | HWMON_C_LABEL,
+	[hwmon_power] = HWMON_P_INPUT | HWMON_P_LABEL,
+	[hwmon_energy] = HWMON_E_INPUT | HWMON_E_LABEL,
+};
+
+static int scmi_hwmon_probe(struct scmi_device *sdev)
+{
+	int i, idx;
+	u16 nr_sensors;
+	enum hwmon_sensor_types type;
+	struct scmi_sensors *scmi_sensors;
+	const struct scmi_sensor_info *sensor;
+	int nr_count[hwmon_max] = {0}, nr_types = 0;
+	const struct hwmon_chip_info *chip_info;
+	struct device *hwdev, *dev = &sdev->dev;
+	struct hwmon_channel_info *scmi_hwmon_chan;
+	const struct hwmon_channel_info **ptr_scmi_ci;
+	const struct scmi_handle *handle = sdev->handle;
+
+	if (!handle || !handle->sensor_ops)
+		return -ENODEV;
+
+	nr_sensors = handle->sensor_ops->count_get(handle);
+	if (!nr_sensors)
+		return -EIO;
+
+	scmi_sensors = devm_kzalloc(dev, sizeof(*scmi_sensors), GFP_KERNEL);
+	if (!scmi_sensors)
+		return -ENOMEM;
+
+	scmi_sensors->handle = handle;
+
+	for (i = 0; i < nr_sensors; i++) {
+		sensor = handle->sensor_ops->info_get(handle, i);
+		if (!sensor)
+			return -EINVAL;
+
+		switch (sensor->type) {
+		case TEMPERATURE_C:
+		case VOLTAGE:
+		case CURRENT:
+		case POWER:
+		case ENERGY:
+			type = scmi_types[sensor->type];
+			if (!nr_count[type])
+				nr_types++;
+			nr_count[type]++;
+			break;
+		}
+	}
+
+	if (nr_count[hwmon_temp])
+		nr_count[hwmon_chip]++, nr_types++;
+
+	scmi_hwmon_chan = devm_kcalloc(dev, nr_types, sizeof(*scmi_hwmon_chan),
+				       GFP_KERNEL);
+	if (!scmi_hwmon_chan)
+		return -ENOMEM;
+
+	ptr_scmi_ci = devm_kcalloc(dev, nr_types + 1, sizeof(*ptr_scmi_ci),
+				   GFP_KERNEL);
+	if (!ptr_scmi_ci)
+		return -ENOMEM;
+
+	scmi_chip_info.info = ptr_scmi_ci;
+	chip_info = &scmi_chip_info;
+
+	for (type = 0; type < hwmon_max && nr_count[type]; type++) {
+		scmi_hwmon_add_chan_info(scmi_hwmon_chan, dev, nr_count[type],
+					 type, hwmon_attributes[type]);
+		*ptr_scmi_ci++ = scmi_hwmon_chan++;
+
+		scmi_sensors->info[type] =
+			devm_kcalloc(dev, nr_count[type],
+				     sizeof(*scmi_sensors->info), GFP_KERNEL);
+		if (!scmi_sensors->info[type])
+			return -ENOMEM;
+	}
+
+	for (i = nr_sensors - 1; i >= 0 ; i--) {
+		sensor = handle->sensor_ops->info_get(handle, i);
+		if (!sensor)
+			continue;
+
+		switch (sensor->type) {
+		case TEMPERATURE_C:
+		case VOLTAGE:
+		case CURRENT:
+		case POWER:
+		case ENERGY:
+			type = scmi_types[sensor->type];
+			idx = --nr_count[type];
+			*(scmi_sensors->info[type] + idx) = sensor;
+			break;
+		}
+	}
+
+	hwdev = devm_hwmon_device_register_with_info(dev, "scmi_sensors",
+						     scmi_sensors, chip_info,
+						     NULL);
+
+	return PTR_ERR_OR_ZERO(hwdev);
+}
+
+static const struct scmi_device_id scmi_id_table[] = {
+	{ SCMI_PROTOCOL_SENSOR },
+	{ },
+};
+MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+
+static struct scmi_driver scmi_hwmon_drv = {
+	.name		= "scmi-hwmon",
+	.probe		= scmi_hwmon_probe,
+	.id_table	= scmi_id_table,
+};
+module_scmi_driver(scmi_hwmon_drv);
+
+MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+MODULE_DESCRIPTION("ARM SCMI HWMON interface driver");
+MODULE_LICENSE("GPL v2");

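To make the two-pass probe above concrete: for a hypothetical platform exposing two temperature sensors and one voltage sensor, the dynamically built tables are equivalent to this static initialization (illustrative only; the driver allocates them with devm_kcalloc at runtime):

#include <linux/hwmon.h>

static const u32 demo_chip_cfg[] = { HWMON_C_REGISTER_TZ, 0 };
static const u32 demo_temp_cfg[] = { HWMON_T_INPUT | HWMON_T_LABEL,
				     HWMON_T_INPUT | HWMON_T_LABEL, 0 };
static const u32 demo_in_cfg[]   = { HWMON_I_INPUT | HWMON_I_LABEL, 0 };

static const struct hwmon_channel_info demo_chip = {
	.type = hwmon_chip, .config = demo_chip_cfg,
};
static const struct hwmon_channel_info demo_temp = {
	.type = hwmon_temp, .config = demo_temp_cfg,
};
static const struct hwmon_channel_info demo_in = {
	.type = hwmon_in, .config = demo_in_cfg,
};

/* NULL-terminated array handed to devm_hwmon_device_register_with_info() */
static const struct hwmon_channel_info *demo_info[] = {
	&demo_chip, &demo_temp, &demo_in, NULL,
};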
+ 1 - 1
drivers/memory/emif.c

@@ -127,7 +127,7 @@ static int emif_regdump_show(struct seq_file *s, void *unused)
 
 	for (i = 0; i < EMIF_MAX_NUM_FREQUENCIES && regs_cache[i]; i++) {
 		do_emif_regdump_show(s, emif, regs_cache[i]);
-		seq_printf(s, "\n");
+		seq_putc(s, '\n');
 	}
 
 	return 0;

+ 1 - 0
drivers/memory/samsung/Kconfig

@@ -1,3 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
 config SAMSUNG_MC
 	bool "Samsung Exynos Memory Controller support" if COMPILE_TEST
 	help

+ 1 - 0
drivers/memory/samsung/Makefile

@@ -1 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_EXYNOS_SROM)	+= exynos-srom.o

+ 7 - 11
drivers/memory/samsung/exynos-srom.c

@@ -1,14 +1,10 @@
-/*
- * Copyright (c) 2015 Samsung Electronics Co., Ltd.
- *	      http://www.samsung.com/
- *
- * EXYNOS - SROM Controller support
- * Author: Pankaj Dubey <pankaj.dubey@samsung.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
+// SPDX-License-Identifier: GPL-2.0
+//
+// Copyright (c) 2015 Samsung Electronics Co., Ltd.
+//	      http://www.samsung.com/
+//
+// EXYNOS - SROM Controller support
+// Author: Pankaj Dubey <pankaj.dubey@samsung.com>
 
 #include <linux/io.h>
 #include <linux/init.h>

+ 2 - 5
drivers/memory/samsung/exynos-srom.h

@@ -1,13 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright (c) 2015 Samsung Electronics Co., Ltd.
  *		http://www.samsung.com
  *
  * Exynos SROMC register definitions
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
-*/
+ */
 
 #ifndef __EXYNOS_SROM_H
 #define __EXYNOS_SROM_H __FILE__

+ 0 - 1
drivers/memory/ti-emif-pm.c

@@ -271,7 +271,6 @@ static int ti_emif_probe(struct platform_device *pdev)
 	emif_data->pm_data.ti_emif_base_addr_virt = devm_ioremap_resource(dev,
 									  res);
 	if (IS_ERR(emif_data->pm_data.ti_emif_base_addr_virt)) {
-		dev_err(dev, "could not ioremap emif mem\n");
 		ret = PTR_ERR(emif_data->pm_data.ti_emif_base_addr_virt);
 		return ret;
 	}

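The dropped dev_err() above is redundant because devm_ioremap_resource() already logs a descriptive error and returns an ERR_PTR on failure. A minimal sketch of the intended calling convention (hypothetical driver, error path only):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);	/* already logged by the core */

	return 0;
}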
+ 33 - 0
drivers/perf/Kconfig

@@ -5,6 +5,39 @@
 menu "Performance monitor support"
 menu "Performance monitor support"
 	depends on PERF_EVENTS
 	depends on PERF_EVENTS
 
 
+config ARM_CCI_PMU
+	bool
+	select ARM_CCI
+
+config ARM_CCI400_PMU
+	bool "ARM CCI400 PMU support"
+	depends on (ARM && CPU_V7) || ARM64
+	select ARM_CCI400_COMMON
+	select ARM_CCI_PMU
+	help
+	  Support for PMU events monitoring on the ARM CCI-400 (cache coherent
+	  interconnect). CCI-400 supports counting events related to the
+	  connected slave/master interfaces.
+
+config ARM_CCI5xx_PMU
+	bool "ARM CCI-500/CCI-550 PMU support"
+	depends on (ARM && CPU_V7) || ARM64
+	select ARM_CCI_PMU
+	help
+	  Support for PMU events monitoring on the ARM CCI-500/CCI-550 cache
+	  coherent interconnects. Both of them provide 8 independent event counters,
+	  which can count events pertaining to the slave/master interfaces as well
+	  as the internal events to the CCI.
+
+	  If unsure, say Y
+
+config ARM_CCN
+	tristate "ARM CCN driver support"
+	depends on ARM || ARM64
+	help
+	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
+	  interconnect.
+
 config ARM_PMU
 	depends on ARM || ARM64
 	bool "ARM PMU framework"

+ 2 - 0
drivers/perf/Makefile

@@ -1,4 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_ARM_CCI_PMU) += arm-cci.o
+obj-$(CONFIG_ARM_CCN) += arm-ccn.o
 obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o
 obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
 obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o

+ 1722 - 0
drivers/perf/arm-cci.c

@@ -0,0 +1,1722 @@
+// SPDX-License-Identifier: GPL-2.0
+// CCI Cache Coherent Interconnect PMU driver
+// Copyright (C) 2013-2018 Arm Ltd.
+// Author: Punit Agrawal <punit.agrawal@arm.com>, Suzuki Poulose <suzuki.poulose@arm.com>
+
+#include <linux/arm-cci.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/perf_event.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#define DRIVER_NAME		"ARM-CCI PMU"
+
+#define CCI_PMCR		0x0100
+#define CCI_PID2		0x0fe8
+
+#define CCI_PMCR_CEN		0x00000001
+#define CCI_PMCR_NCNT_MASK	0x0000f800
+#define CCI_PMCR_NCNT_SHIFT	11
+
+#define CCI_PID2_REV_MASK	0xf0
+#define CCI_PID2_REV_SHIFT	4
+
+#define CCI_PMU_EVT_SEL		0x000
+#define CCI_PMU_CNTR		0x004
+#define CCI_PMU_CNTR_CTRL	0x008
+#define CCI_PMU_OVRFLW		0x00c
+
+#define CCI_PMU_OVRFLW_FLAG	1
+
+#define CCI_PMU_CNTR_SIZE(model)	((model)->cntr_size)
+#define CCI_PMU_CNTR_BASE(model, idx)	((idx) * CCI_PMU_CNTR_SIZE(model))
+#define CCI_PMU_CNTR_MASK		((1ULL << 32) -1)
+#define CCI_PMU_CNTR_LAST(cci_pmu)	(cci_pmu->num_cntrs - 1)
+
+#define CCI_PMU_MAX_HW_CNTRS(model) \
+	((model)->num_hw_cntrs + (model)->fixed_hw_cntrs)
+
+/* Types of interfaces that can generate events */
+enum {
+	CCI_IF_SLAVE,
+	CCI_IF_MASTER,
+#ifdef CONFIG_ARM_CCI5xx_PMU
+	CCI_IF_GLOBAL,
+#endif
+	CCI_IF_MAX,
+};
+
+struct event_range {
+	u32 min;
+	u32 max;
+};
+
+struct cci_pmu_hw_events {
+	struct perf_event **events;
+	unsigned long *used_mask;
+	raw_spinlock_t pmu_lock;
+};
+
+struct cci_pmu;
+/*
+ * struct cci_pmu_model:
+ * @fixed_hw_cntrs - Number of fixed event counters
+ * @num_hw_cntrs - Maximum number of programmable event counters
+ * @cntr_size - Size of an event counter mapping
+ */
+struct cci_pmu_model {
+	char *name;
+	u32 fixed_hw_cntrs;
+	u32 num_hw_cntrs;
+	u32 cntr_size;
+	struct attribute **format_attrs;
+	struct attribute **event_attrs;
+	struct event_range event_ranges[CCI_IF_MAX];
+	int (*validate_hw_event)(struct cci_pmu *, unsigned long);
+	int (*get_event_idx)(struct cci_pmu *, struct cci_pmu_hw_events *, unsigned long);
+	void (*write_counters)(struct cci_pmu *, unsigned long *);
+};
+
+static struct cci_pmu_model cci_pmu_models[];
+
+struct cci_pmu {
+	void __iomem *base;
+	void __iomem *ctrl_base;
+	struct pmu pmu;
+	int cpu;
+	int nr_irqs;
+	int *irqs;
+	unsigned long active_irqs;
+	const struct cci_pmu_model *model;
+	struct cci_pmu_hw_events hw_events;
+	struct platform_device *plat_device;
+	int num_cntrs;
+	atomic_t active_events;
+	struct mutex reserve_mutex;
+};
+
+#define to_cci_pmu(c)	(container_of(c, struct cci_pmu, pmu))
+
+static struct cci_pmu *g_cci_pmu;
+
+enum cci_models {
+#ifdef CONFIG_ARM_CCI400_PMU
+	CCI400_R0,
+	CCI400_R1,
+#endif
+#ifdef CONFIG_ARM_CCI5xx_PMU
+	CCI500_R0,
+	CCI550_R0,
+#endif
+	CCI_MODEL_MAX
+};
+
+static void pmu_write_counters(struct cci_pmu *cci_pmu,
+				 unsigned long *mask);
+static ssize_t cci_pmu_format_show(struct device *dev,
+			struct device_attribute *attr, char *buf);
+static ssize_t cci_pmu_event_show(struct device *dev,
+			struct device_attribute *attr, char *buf);
+
+#define CCI_EXT_ATTR_ENTRY(_name, _func, _config) 				\
+	&((struct dev_ext_attribute[]) {					\
+		{ __ATTR(_name, S_IRUGO, _func, NULL), (void *)_config }	\
+	})[0].attr.attr
+
+#define CCI_FORMAT_EXT_ATTR_ENTRY(_name, _config) \
+	CCI_EXT_ATTR_ENTRY(_name, cci_pmu_format_show, (char *)_config)
+#define CCI_EVENT_EXT_ATTR_ENTRY(_name, _config) \
+	CCI_EXT_ATTR_ENTRY(_name, cci_pmu_event_show, (unsigned long)_config)
+
+/* CCI400 PMU Specific definitions */
+
+#ifdef CONFIG_ARM_CCI400_PMU
+
+/* Port ids */
+#define CCI400_PORT_S0		0
+#define CCI400_PORT_S1		1
+#define CCI400_PORT_S2		2
+#define CCI400_PORT_S3		3
+#define CCI400_PORT_S4		4
+#define CCI400_PORT_M0		5
+#define CCI400_PORT_M1		6
+#define CCI400_PORT_M2		7
+
+#define CCI400_R1_PX		5
+
+/*
+ * Instead of an event id to monitor CCI cycles, a dedicated counter is
+ * provided. Use 0xff to represent CCI cycles and hope that no future revisions
+ * make use of this event in hardware.
+ */
+enum cci400_perf_events {
+	CCI400_PMU_CYCLES = 0xff
+};
+
+#define CCI400_PMU_CYCLE_CNTR_IDX	0
+#define CCI400_PMU_CNTR0_IDX		1
+
+/*
+ * CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8
+ * ports and bits 4:0 are event codes. There are different event codes
+ * associated with each port type.
+ *
+ * Additionally, the range of events associated with the port types changed
+ * between Rev0 and Rev1.
+ *
+ * The constants below define the range of valid codes for each port type for
+ * the different revisions and are used to validate the event to be monitored.
+ */
+
+#define CCI400_PMU_EVENT_MASK		0xffUL
+#define CCI400_PMU_EVENT_SOURCE_SHIFT	5
+#define CCI400_PMU_EVENT_SOURCE_MASK	0x7
+#define CCI400_PMU_EVENT_CODE_SHIFT	0
+#define CCI400_PMU_EVENT_CODE_MASK	0x1f
+#define CCI400_PMU_EVENT_SOURCE(event) \
+	((event >> CCI400_PMU_EVENT_SOURCE_SHIFT) & \
+			CCI400_PMU_EVENT_SOURCE_MASK)
+#define CCI400_PMU_EVENT_CODE(event) \
+	((event >> CCI400_PMU_EVENT_CODE_SHIFT) & CCI400_PMU_EVENT_CODE_MASK)
+
+#define CCI400_R0_SLAVE_PORT_MIN_EV	0x00
+#define CCI400_R0_SLAVE_PORT_MAX_EV	0x13
+#define CCI400_R0_MASTER_PORT_MIN_EV	0x14
+#define CCI400_R0_MASTER_PORT_MAX_EV	0x1a
+
+#define CCI400_R1_SLAVE_PORT_MIN_EV	0x00
+#define CCI400_R1_SLAVE_PORT_MAX_EV	0x14
+#define CCI400_R1_MASTER_PORT_MIN_EV	0x00
+#define CCI400_R1_MASTER_PORT_MAX_EV	0x11
+
+#define CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(_name, _config) \
+	CCI_EXT_ATTR_ENTRY(_name, cci400_pmu_cycle_event_show, \
+					(unsigned long)_config)
+
+static ssize_t cci400_pmu_cycle_event_show(struct device *dev,
+			struct device_attribute *attr, char *buf);
+
+static struct attribute *cci400_pmu_format_attrs[] = {
+	CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"),
+	CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-7"),
+	NULL
+};
+
+static struct attribute *cci400_r0_pmu_event_attrs[] = {
+	/* Slave events */
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13),
+	/* Master events */
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x14),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_addr_hazard, 0x15),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_id_hazard, 0x16),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_tt_full, 0x17),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x18),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x19),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_tt_full, 0x1A),
+	/* Special event for cycles counter */
+	CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff),
+	NULL
+};
+
+static struct attribute *cci400_r1_pmu_event_attrs[] = {
+	/* Slave events */
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_slave_id_hazard, 0x14),
+	/* Master events */
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x0),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_stall_cycle_addr_hazard, 0x1),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_master_id_hazard, 0x2),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_hi_prio_rtq_full, 0x3),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x4),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x5),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_wtq_full, 0x6),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_low_prio_rtq_full, 0x7),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_mid_prio_rtq_full, 0x8),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn0, 0x9),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn1, 0xA),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn2, 0xB),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn3, 0xC),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn0, 0xD),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn1, 0xE),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn2, 0xF),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn3, 0x10),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_unique_or_line_unique_addr_hazard, 0x11),
+	/* Special event for cycles counter */
+	CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff),
+	NULL
+};
+
+static ssize_t cci400_pmu_cycle_event_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *eattr = container_of(attr,
+				struct dev_ext_attribute, attr);
+	return snprintf(buf, PAGE_SIZE, "config=0x%lx\n", (unsigned long)eattr->var);
+}
+
+static int cci400_get_event_idx(struct cci_pmu *cci_pmu,
+				struct cci_pmu_hw_events *hw,
+				unsigned long cci_event)
+{
+	int idx;
+
+	/* cycles event idx is fixed */
+	if (cci_event == CCI400_PMU_CYCLES) {
+		if (test_and_set_bit(CCI400_PMU_CYCLE_CNTR_IDX, hw->used_mask))
+			return -EAGAIN;
+
+		return CCI400_PMU_CYCLE_CNTR_IDX;
+	}
+
+	for (idx = CCI400_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx)
+		if (!test_and_set_bit(idx, hw->used_mask))
+			return idx;
+
+	/* No counters available */
+	return -EAGAIN;
+}
+
+static int cci400_validate_hw_event(struct cci_pmu *cci_pmu, unsigned long hw_event)
+{
+	u8 ev_source = CCI400_PMU_EVENT_SOURCE(hw_event);
+	u8 ev_code = CCI400_PMU_EVENT_CODE(hw_event);
+	int if_type;
+
+	if (hw_event & ~CCI400_PMU_EVENT_MASK)
+		return -ENOENT;
+
+	if (hw_event == CCI400_PMU_CYCLES)
+		return hw_event;
+
+	switch (ev_source) {
+	case CCI400_PORT_S0:
+	case CCI400_PORT_S1:
+	case CCI400_PORT_S2:
+	case CCI400_PORT_S3:
+	case CCI400_PORT_S4:
+		/* Slave Interface */
+		if_type = CCI_IF_SLAVE;
+		break;
+	case CCI400_PORT_M0:
+	case CCI400_PORT_M1:
+	case CCI400_PORT_M2:
+		/* Master Interface */
+		if_type = CCI_IF_MASTER;
+		break;
+	default:
+		return -ENOENT;
+	}
+
+	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
+		ev_code <= cci_pmu->model->event_ranges[if_type].max)
+		return hw_event;
+
+	return -ENOENT;
+}
+
+static int probe_cci400_revision(struct cci_pmu *cci_pmu)
+{
+	int rev;
+	rev = readl_relaxed(cci_pmu->ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK;
+	rev >>= CCI_PID2_REV_SHIFT;
+
+	if (rev < CCI400_R1_PX)
+		return CCI400_R0;
+	else
+		return CCI400_R1;
+}
+
+static const struct cci_pmu_model *probe_cci_model(struct cci_pmu *cci_pmu)
+{
+	if (platform_has_secure_cci_access())
+		return &cci_pmu_models[probe_cci400_revision(cci_pmu)];
+	return NULL;
+}
+#else	/* !CONFIG_ARM_CCI400_PMU */
+static inline struct cci_pmu_model *probe_cci_model(struct cci_pmu *cci_pmu)
+{
+	return NULL;
+}
+#endif	/* CONFIG_ARM_CCI400_PMU */
+
+#ifdef CONFIG_ARM_CCI5xx_PMU
+
+/*
+ * CCI5xx PMU event id is a 9-bit value made of two parts.
+ *	 bits [8:5] - Source for the event
+ *	 bits [4:0] - Event code (specific to type of interface)
+ */
+
+/* Port ids */
+#define CCI5xx_PORT_S0			0x0
+#define CCI5xx_PORT_S1			0x1
+#define CCI5xx_PORT_S2			0x2
+#define CCI5xx_PORT_S3			0x3
+#define CCI5xx_PORT_S4			0x4
+#define CCI5xx_PORT_S5			0x5
+#define CCI5xx_PORT_S6			0x6
+
+#define CCI5xx_PORT_M0			0x8
+#define CCI5xx_PORT_M1			0x9
+#define CCI5xx_PORT_M2			0xa
+#define CCI5xx_PORT_M3			0xb
+#define CCI5xx_PORT_M4			0xc
+#define CCI5xx_PORT_M5			0xd
+#define CCI5xx_PORT_M6			0xe
+
+#define CCI5xx_PORT_GLOBAL		0xf
+
+#define CCI5xx_PMU_EVENT_MASK		0x1ffUL
+#define CCI5xx_PMU_EVENT_SOURCE_SHIFT	0x5
+#define CCI5xx_PMU_EVENT_SOURCE_MASK	0xf
+#define CCI5xx_PMU_EVENT_CODE_SHIFT	0x0
+#define CCI5xx_PMU_EVENT_CODE_MASK	0x1f
+
+#define CCI5xx_PMU_EVENT_SOURCE(event)	\
+	((event >> CCI5xx_PMU_EVENT_SOURCE_SHIFT) & CCI5xx_PMU_EVENT_SOURCE_MASK)
+#define CCI5xx_PMU_EVENT_CODE(event)	\
+	((event >> CCI5xx_PMU_EVENT_CODE_SHIFT) & CCI5xx_PMU_EVENT_CODE_MASK)
+
+#define CCI5xx_SLAVE_PORT_MIN_EV	0x00
+#define CCI5xx_SLAVE_PORT_MAX_EV	0x1f
+#define CCI5xx_MASTER_PORT_MIN_EV	0x00
+#define CCI5xx_MASTER_PORT_MAX_EV	0x06
+#define CCI5xx_GLOBAL_PORT_MIN_EV	0x00
+#define CCI5xx_GLOBAL_PORT_MAX_EV	0x0f
+
+
+#define CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(_name, _config) \
+	CCI_EXT_ATTR_ENTRY(_name, cci5xx_pmu_global_event_show, \
+					(unsigned long) _config)
+
+static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
+				struct device_attribute *attr, char *buf);
+
+static struct attribute *cci5xx_pmu_format_attrs[] = {
+	CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"),
+	CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-8"),
+	NULL,
+};
+
+static struct attribute *cci5xx_pmu_event_attrs[] = {
+	/* Slave events */
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_arvalid, 0x0),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_dev, 0x1),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_nonshareable, 0x2),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_non_alloc, 0x3),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_alloc, 0x4),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_invalidate, 0x5),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maint, 0x6),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rval, 0x8),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rlast_snoop, 0x9),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_awalid, 0xA),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_dev, 0xB),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_non_shareable, 0xC),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wb, 0xD),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wlu, 0xE),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wunique, 0xF),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_evict, 0x10),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_wrevict, 0x11),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_beat, 0x12),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_acvalid, 0x13),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_read, 0x14),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_clean, 0x15),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_data_transfer_low, 0x16),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_arvalid, 0x17),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall, 0x18),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall, 0x19),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_stall, 0x1A),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_w_resp_stall, 0x1B),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_srq_stall, 0x1C),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_s_data_stall, 0x1D),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_rq_stall_ot_limit, 0x1E),
+	CCI_EVENT_EXT_ATTR_ENTRY(si_r_stall_arbit, 0x1F),
+
+	/* Master events */
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_beat_any, 0x0),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_beat_any, 0x1),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall, 0x2),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_stall, 0x3),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall, 0x4),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_stall, 0x5),
+	CCI_EVENT_EXT_ATTR_ENTRY(mi_w_resp_stall, 0x6),
+
+	/* Global events */
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_0_1, 0x0),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_2_3, 0x1),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_4_5, 0x2),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_6_7, 0x3),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_0_1, 0x4),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_2_3, 0x5),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_4_5, 0x6),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_6_7, 0x7),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_back_invalidation, 0x8),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_alloc_busy, 0x9),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_tt_full, 0xA),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE),
+	CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF),
+	NULL
+};
+
+static ssize_t cci5xx_pmu_global_event_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *eattr = container_of(attr,
+					struct dev_ext_attribute, attr);
+	/* Global events have single fixed source code */
+	return snprintf(buf, PAGE_SIZE, "event=0x%lx,source=0x%x\n",
+				(unsigned long)eattr->var, CCI5xx_PORT_GLOBAL);
+}
+
+/*
+ * CCI500 provides 8 independent event counters that can count
+ * any of the events available.
+ * CCI500 PMU event source ids
+ *	0x0-0x6 - Slave interfaces
+ *	0x8-0xD - Master interfaces
+ *	0xf     - Global Events
+ *	0x7,0xe - Reserved
+ */
+static int cci500_validate_hw_event(struct cci_pmu *cci_pmu,
+					unsigned long hw_event)
+{
+	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
+	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
+	int if_type;
+
+	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
+		return -ENOENT;
+
+	switch (ev_source) {
+	case CCI5xx_PORT_S0:
+	case CCI5xx_PORT_S1:
+	case CCI5xx_PORT_S2:
+	case CCI5xx_PORT_S3:
+	case CCI5xx_PORT_S4:
+	case CCI5xx_PORT_S5:
+	case CCI5xx_PORT_S6:
+		if_type = CCI_IF_SLAVE;
+		break;
+	case CCI5xx_PORT_M0:
+	case CCI5xx_PORT_M1:
+	case CCI5xx_PORT_M2:
+	case CCI5xx_PORT_M3:
+	case CCI5xx_PORT_M4:
+	case CCI5xx_PORT_M5:
+		if_type = CCI_IF_MASTER;
+		break;
+	case CCI5xx_PORT_GLOBAL:
+		if_type = CCI_IF_GLOBAL;
+		break;
+	default:
+		return -ENOENT;
+	}
+
+	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
+		ev_code <= cci_pmu->model->event_ranges[if_type].max)
+		return hw_event;
+
+	return -ENOENT;
+}
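
As an illustrative sketch (not part of the commit): given the format attributes above (event in config:0-4, source in config:5-8), counting event 0x0 (si_rrq_hs_arvalid) on slave interface 0 amounts to building the config value the same way CCI5xx_INVALID_EVENT does further below; the variable name here is hypothetical.

	/* source = S0 (0x0), event code = 0x0, using this file's shift macros */
	unsigned long hw_event = (CCI5xx_PORT_S0 << CCI5xx_PMU_EVENT_SOURCE_SHIFT) |
				 (0x0 << CCI5xx_PMU_EVENT_CODE_SHIFT);
	/* cci500_validate_hw_event(cci_pmu, hw_event) returns hw_event when valid */

From userspace the same encoding is requested via the registered PMU name, e.g. perf stat -a -e CCI_500/source=0x0,event=0x0/ sleep 1.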
+
+/*
+ * CCI550 provides 8 independent event counters that can count
+ * any of the events available.
+ * CCI550 PMU event source ids
+ *	0x0-0x6 - Slave interfaces
+ *	0x8-0xe - Master interfaces
+ *	0xf     - Global Events
+ *	0x7	- Reserved
+ */
+static int cci550_validate_hw_event(struct cci_pmu *cci_pmu,
+					unsigned long hw_event)
+{
+	u32 ev_source = CCI5xx_PMU_EVENT_SOURCE(hw_event);
+	u32 ev_code = CCI5xx_PMU_EVENT_CODE(hw_event);
+	int if_type;
+
+	if (hw_event & ~CCI5xx_PMU_EVENT_MASK)
+		return -ENOENT;
+
+	switch (ev_source) {
+	case CCI5xx_PORT_S0:
+	case CCI5xx_PORT_S1:
+	case CCI5xx_PORT_S2:
+	case CCI5xx_PORT_S3:
+	case CCI5xx_PORT_S4:
+	case CCI5xx_PORT_S5:
+	case CCI5xx_PORT_S6:
+		if_type = CCI_IF_SLAVE;
+		break;
+	case CCI5xx_PORT_M0:
+	case CCI5xx_PORT_M1:
+	case CCI5xx_PORT_M2:
+	case CCI5xx_PORT_M3:
+	case CCI5xx_PORT_M4:
+	case CCI5xx_PORT_M5:
+	case CCI5xx_PORT_M6:
+		if_type = CCI_IF_MASTER;
+		break;
+	case CCI5xx_PORT_GLOBAL:
+		if_type = CCI_IF_GLOBAL;
+		break;
+	default:
+		return -ENOENT;
+	}
+
+	if (ev_code >= cci_pmu->model->event_ranges[if_type].min &&
+		ev_code <= cci_pmu->model->event_ranges[if_type].max)
+		return hw_event;
+
+	return -ENOENT;
+}
+
+#endif	/* CONFIG_ARM_CCI5xx_PMU */
+
+/*
+ * Program the CCI PMU counters which have PERF_HES_ARCH set
+ * with the event period and mark them ready before we enable
+ * PMU.
+ */
+static void cci_pmu_sync_counters(struct cci_pmu *cci_pmu)
+{
+	int i;
+	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;
+
+	DECLARE_BITMAP(mask, cci_pmu->num_cntrs);
+
+	bitmap_zero(mask, cci_pmu->num_cntrs);
+	for_each_set_bit(i, cci_pmu->hw_events.used_mask, cci_pmu->num_cntrs) {
+		struct perf_event *event = cci_hw->events[i];
+
+		if (WARN_ON(!event))
+			continue;
+
+		/* Leave the events which are not counting */
+		if (event->hw.state & PERF_HES_STOPPED)
+			continue;
+		if (event->hw.state & PERF_HES_ARCH) {
+			set_bit(i, mask);
+			event->hw.state &= ~PERF_HES_ARCH;
+		}
+	}
+
+	pmu_write_counters(cci_pmu, mask);
+}
+
+/* Should be called with cci_pmu->hw_events->pmu_lock held */
+static void __cci_pmu_enable_nosync(struct cci_pmu *cci_pmu)
+{
+	u32 val;
+
+	/* Enable all the PMU counters. */
+	val = readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) | CCI_PMCR_CEN;
+	writel(val, cci_pmu->ctrl_base + CCI_PMCR);
+}
+
+/* Should be called with cci_pmu->hw_events->pmu_lock held */
+static void __cci_pmu_enable_sync(struct cci_pmu *cci_pmu)
+{
+	cci_pmu_sync_counters(cci_pmu);
+	__cci_pmu_enable_nosync(cci_pmu);
+}
+
+/* Should be called with cci_pmu->hw_events->pmu_lock held */
+static void __cci_pmu_disable(struct cci_pmu *cci_pmu)
+{
+	u32 val;
+
+	/* Disable all the PMU counters. */
+	val = readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) & ~CCI_PMCR_CEN;
+	writel(val, cci_pmu->ctrl_base + CCI_PMCR);
+}
+
+static ssize_t cci_pmu_format_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *eattr = container_of(attr,
+				struct dev_ext_attribute, attr);
+	return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var);
+}
+
+static ssize_t cci_pmu_event_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	struct dev_ext_attribute *eattr = container_of(attr,
+				struct dev_ext_attribute, attr);
+	/* source parameter is mandatory for normal PMU events */
+	return snprintf(buf, PAGE_SIZE, "source=?,event=0x%lx\n",
+					 (unsigned long)eattr->var);
+}
+
+static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx)
+{
+	return 0 <= idx && idx <= CCI_PMU_CNTR_LAST(cci_pmu);
+}
+
+static u32 pmu_read_register(struct cci_pmu *cci_pmu, int idx, unsigned int offset)
+{
+	return readl_relaxed(cci_pmu->base +
+			     CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
+}
+
+static void pmu_write_register(struct cci_pmu *cci_pmu, u32 value,
+			       int idx, unsigned int offset)
+{
+	writel_relaxed(value, cci_pmu->base +
+		       CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset);
+}
+
+static void pmu_disable_counter(struct cci_pmu *cci_pmu, int idx)
+{
+	pmu_write_register(cci_pmu, 0, idx, CCI_PMU_CNTR_CTRL);
+}
+
+static void pmu_enable_counter(struct cci_pmu *cci_pmu, int idx)
+{
+	pmu_write_register(cci_pmu, 1, idx, CCI_PMU_CNTR_CTRL);
+}
+
+static bool __maybe_unused
+pmu_counter_is_enabled(struct cci_pmu *cci_pmu, int idx)
+{
+	return (pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR_CTRL) & 0x1) != 0;
+}
+
+static void pmu_set_event(struct cci_pmu *cci_pmu, int idx, unsigned long event)
+{
+	pmu_write_register(cci_pmu, event, idx, CCI_PMU_EVT_SEL);
+}
+
+/*
+ * For all counters on the CCI-PMU, disable any 'enabled' counters,
+ * saving the changed counters in the mask, so that we can restore
+ * it later using pmu_restore_counters. The mask is private to the
+ * caller. We cannot rely on the used_mask maintained by the CCI_PMU
+ * as it only tells us if the counter is assigned to perf_event or not.
+ * The state of the perf_event cannot be locked by the PMU layer, hence
+ * we check the individual counter status (which can be locked by
+ * cci_pmu->hw_events->pmu_lock).
+ *
+ * @mask should be initialised to empty by the caller.
+ */
+static void __maybe_unused
+pmu_save_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
+{
+	int i;
+
+	for (i = 0; i < cci_pmu->num_cntrs; i++) {
+		if (pmu_counter_is_enabled(cci_pmu, i)) {
+			set_bit(i, mask);
+			pmu_disable_counter(cci_pmu, i);
+		}
+	}
+}
+
+/*
+ * Restore the status of the counters. Reversal of the pmu_save_counters().
+ * For each counter set in the mask, enable the counter back.
+ */
+static void __maybe_unused
+pmu_restore_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
+{
+	int i;
+
+	for_each_set_bit(i, mask, cci_pmu->num_cntrs)
+		pmu_enable_counter(cci_pmu, i);
+}
+
+/*
+ * Returns the number of programmable counters actually implemented
+ * by the CCI.
+ */
+static u32 pmu_get_max_counters(struct cci_pmu *cci_pmu)
+{
+	return (readl_relaxed(cci_pmu->ctrl_base + CCI_PMCR) &
+		CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT;
+}
+
+static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *event)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	unsigned long cci_event = event->hw.config_base;
+	int idx;
+
+	if (cci_pmu->model->get_event_idx)
+		return cci_pmu->model->get_event_idx(cci_pmu, hw, cci_event);
+
+	/* Generic code to find an unused idx from the mask */
+	for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++)
+		if (!test_and_set_bit(idx, hw->used_mask))
+			return idx;
+
+	/* No counters available */
+	return -EAGAIN;
+}
+
+static int pmu_map_event(struct perf_event *event)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+
+	if (event->attr.type < PERF_TYPE_MAX ||
+			!cci_pmu->model->validate_hw_event)
+		return -ENOENT;
+
+	return	cci_pmu->model->validate_hw_event(cci_pmu, event->attr.config);
+}
+
+static int pmu_request_irq(struct cci_pmu *cci_pmu, irq_handler_t handler)
+{
+	int i;
+	struct platform_device *pmu_device = cci_pmu->plat_device;
+
+	if (unlikely(!pmu_device))
+		return -ENODEV;
+
+	if (cci_pmu->nr_irqs < 1) {
+		dev_err(&pmu_device->dev, "no irqs for CCI PMUs defined\n");
+		return -ENODEV;
+	}
+
+	/*
+	 * Register all available CCI PMU interrupts. In the interrupt handler
+	 * we iterate over the counters checking for interrupt source (the
+	 * overflowing counter) and clear it.
+	 *
+	 * This should allow handling of non-unique interrupts for the counters.
+	 */
+	for (i = 0; i < cci_pmu->nr_irqs; i++) {
+		int err = request_irq(cci_pmu->irqs[i], handler, IRQF_SHARED,
+				"arm-cci-pmu", cci_pmu);
+		if (err) {
+			dev_err(&pmu_device->dev, "unable to request IRQ%d for ARM CCI PMU counters\n",
+				cci_pmu->irqs[i]);
+			return err;
+		}
+
+		set_bit(i, &cci_pmu->active_irqs);
+	}
+
+	return 0;
+}
+
+static void pmu_free_irq(struct cci_pmu *cci_pmu)
+{
+	int i;
+
+	for (i = 0; i < cci_pmu->nr_irqs; i++) {
+		if (!test_and_clear_bit(i, &cci_pmu->active_irqs))
+			continue;
+
+		free_irq(cci_pmu->irqs[i], cci_pmu);
+	}
+}
+
+static u32 pmu_read_counter(struct perf_event *event)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	struct hw_perf_event *hw_counter = &event->hw;
+	int idx = hw_counter->idx;
+	u32 value;
+
+	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
+		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
+		return 0;
+	}
+	value = pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR);
+
+	return value;
+}
+
+static void pmu_write_counter(struct cci_pmu *cci_pmu, u32 value, int idx)
+{
+	pmu_write_register(cci_pmu, value, idx, CCI_PMU_CNTR);
+}
+
+static void __pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
+{
+	int i;
+	struct cci_pmu_hw_events *cci_hw = &cci_pmu->hw_events;
+
+	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
+		struct perf_event *event = cci_hw->events[i];
+
+		if (WARN_ON(!event))
+			continue;
+		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
+	}
+}
+
+static void pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
+{
+	if (cci_pmu->model->write_counters)
+		cci_pmu->model->write_counters(cci_pmu, mask);
+	else
+		__pmu_write_counters(cci_pmu, mask);
+}
+
+#ifdef CONFIG_ARM_CCI5xx_PMU
+
+/*
+ * CCI-500/CCI-550 has advanced power saving policies, which could gate the
+ * clocks to the PMU counters, which makes the writes to them ineffective.
+ * The only way to write to those counters is when the global counters
+ * are enabled and the particular counter is enabled.
+ *
+ * So we do the following :
+ *
+ * 1) Disable all the PMU counters, saving their current state
+ * 2) Enable the global PMU profiling, now that all counters are
+ *    disabled.
+ *
+ * For each counter to be programmed, repeat steps 3-7:
+ *
+ * 3) Write an invalid event code to the event control register for the
+ *    counter, so that the counters are not modified.
+ * 4) Enable the counter control for the counter.
+ * 5) Set the counter value
+ * 6) Disable the counter
+ * 7) Restore the event in the target counter
+ *
+ * 8) Disable the global PMU.
+ * 9) Restore the status of the rest of the counters.
+ *
+ * We choose an event which for CCI-5xx is guaranteed not to count.
+ * We use the highest possible event code (0x1f) for the master interface 0.
+ */
+#define CCI5xx_INVALID_EVENT	((CCI5xx_PORT_M0 << CCI5xx_PMU_EVENT_SOURCE_SHIFT) | \
+				 (CCI5xx_PMU_EVENT_CODE_MASK << CCI5xx_PMU_EVENT_CODE_SHIFT))
+static void cci5xx_pmu_write_counters(struct cci_pmu *cci_pmu, unsigned long *mask)
+{
+	int i;
+	DECLARE_BITMAP(saved_mask, cci_pmu->num_cntrs);
+
+	bitmap_zero(saved_mask, cci_pmu->num_cntrs);
+	pmu_save_counters(cci_pmu, saved_mask);
+
+	/*
+	 * Now that all the counters are disabled, we can safely turn the PMU on,
+	 * without syncing the status of the counters
+	 */
+	__cci_pmu_enable_nosync(cci_pmu);
+
+	for_each_set_bit(i, mask, cci_pmu->num_cntrs) {
+		struct perf_event *event = cci_pmu->hw_events.events[i];
+
+		if (WARN_ON(!event))
+			continue;
+
+		pmu_set_event(cci_pmu, i, CCI5xx_INVALID_EVENT);
+		pmu_enable_counter(cci_pmu, i);
+		pmu_write_counter(cci_pmu, local64_read(&event->hw.prev_count), i);
+		pmu_disable_counter(cci_pmu, i);
+		pmu_set_event(cci_pmu, i, event->hw.config_base);
+	}
+
+	__cci_pmu_disable(cci_pmu);
+
+	pmu_restore_counters(cci_pmu, saved_mask);
+}
+
+#endif	/* CONFIG_ARM_CCI5xx_PMU */
+
+static u64 pmu_event_update(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	u64 delta, prev_raw_count, new_raw_count;
+
+	do {
+		prev_raw_count = local64_read(&hwc->prev_count);
+		new_raw_count = pmu_read_counter(event);
+	} while (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
+		 new_raw_count) != prev_raw_count);
+
+	delta = (new_raw_count - prev_raw_count) & CCI_PMU_CNTR_MASK;
+
+	local64_add(delta, &event->count);
+
+	return new_raw_count;
+}
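
A short worked example of the masked subtraction above (illustrative values, assuming the usual 32-bit CCI_PMU_CNTR_MASK defined earlier in this file): if the counter wrapped between two reads,

	u64 prev_raw_count = 0xfffffff0;
	u64 new_raw_count  = 0x00000010;
	u64 delta = (new_raw_count - prev_raw_count) & CCI_PMU_CNTR_MASK;	/* = 0x20 */

so the 0x20 events that occurred across the 32-bit wrap are still accumulated correctly.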
+
+static void pmu_read(struct perf_event *event)
+{
+	pmu_event_update(event);
+}
+
+static void pmu_event_set_period(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	/*
+	 * The CCI PMU counters have a period of 2^32. To account for the
+	 * possibility of extreme interrupt latency we program for a period of
+	 * half that. Hopefully we can handle the interrupt before another 2^31
+	 * events occur and the counter overtakes its previous value.
+	 */
+	u64 val = 1ULL << 31;
+	local64_set(&hwc->prev_count, val);
+
+	/*
+	 * CCI PMU uses PERF_HES_ARCH to keep track of the counters, whose
+	 * values need to be synced with the s/w state before the PMU is
+	 * enabled.
+	 * Mark this counter for sync.
+	 */
+	hwc->state |= PERF_HES_ARCH;
+}
+
+static irqreturn_t pmu_handle_irq(int irq_num, void *dev)
+{
+	unsigned long flags;
+	struct cci_pmu *cci_pmu = dev;
+	struct cci_pmu_hw_events *events = &cci_pmu->hw_events;
+	int idx, handled = IRQ_NONE;
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+
+	/* Disable the PMU while we walk through the counters */
+	__cci_pmu_disable(cci_pmu);
+	/*
+	 * Iterate over counters and update the corresponding perf events.
+	 * This should work regardless of whether we have per-counter overflow
+	 * interrupt or a combined overflow interrupt.
+	 */
+	for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) {
+		struct perf_event *event = events->events[idx];
+
+		if (!event)
+			continue;
+
+		/* Did this counter overflow? */
+		if (!(pmu_read_register(cci_pmu, idx, CCI_PMU_OVRFLW) &
+		      CCI_PMU_OVRFLW_FLAG))
+			continue;
+
+		pmu_write_register(cci_pmu, CCI_PMU_OVRFLW_FLAG, idx,
+							CCI_PMU_OVRFLW);
+
+		pmu_event_update(event);
+		pmu_event_set_period(event);
+		handled = IRQ_HANDLED;
+	}
+
+	/* Enable the PMU and sync possibly overflowed counters */
+	__cci_pmu_enable_sync(cci_pmu);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+
+	return IRQ_RETVAL(handled);
+}
+
+static int cci_pmu_get_hw(struct cci_pmu *cci_pmu)
+{
+	int ret = pmu_request_irq(cci_pmu, pmu_handle_irq);
+	if (ret) {
+		pmu_free_irq(cci_pmu);
+		return ret;
+	}
+	return 0;
+}
+
+static void cci_pmu_put_hw(struct cci_pmu *cci_pmu)
+{
+	pmu_free_irq(cci_pmu);
+}
+
+static void hw_perf_event_destroy(struct perf_event *event)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	atomic_t *active_events = &cci_pmu->active_events;
+	struct mutex *reserve_mutex = &cci_pmu->reserve_mutex;
+
+	if (atomic_dec_and_mutex_lock(active_events, reserve_mutex)) {
+		cci_pmu_put_hw(cci_pmu);
+		mutex_unlock(reserve_mutex);
+	}
+}
+
+static void cci_pmu_enable(struct pmu *pmu)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
+	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
+	int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_cntrs);
+	unsigned long flags;
+
+	if (!enabled)
+		return;
+
+	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
+	__cci_pmu_enable_sync(cci_pmu);
+	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
+
+}
+
+static void cci_pmu_disable(struct pmu *pmu)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
+	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
+	__cci_pmu_disable(cci_pmu);
+	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
+}
+
+/*
+ * Check if the idx represents a non-programmable counter.
+ * All the fixed event counters are mapped before the programmable
+ * counters.
+ */
+static bool pmu_fixed_hw_idx(struct cci_pmu *cci_pmu, int idx)
+{
+	return (idx >= 0) && (idx < cci_pmu->model->fixed_hw_cntrs);
+}
+
+static void cci_pmu_start(struct perf_event *event, int pmu_flags)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+	unsigned long flags;
+
+	/*
+	 * To handle interrupt latency, we always reprogram the period
+	 * regardless of PERF_EF_RELOAD.
+	 */
+	if (pmu_flags & PERF_EF_RELOAD)
+		WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));
+
+	hwc->state = 0;
+
+	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
+		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
+		return;
+	}
+
+	raw_spin_lock_irqsave(&hw_events->pmu_lock, flags);
+
+	/* Configure the counter unless you are counting a fixed event */
+	if (!pmu_fixed_hw_idx(cci_pmu, idx))
+		pmu_set_event(cci_pmu, idx, hwc->config_base);
+
+	pmu_event_set_period(event);
+	pmu_enable_counter(cci_pmu, idx);
+
+	raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags);
+}
+
+static void cci_pmu_stop(struct perf_event *event, int pmu_flags)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+
+	if (hwc->state & PERF_HES_STOPPED)
+		return;
+
+	if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) {
+		dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx);
+		return;
+	}
+
+	/*
+	 * We always reprogram the counter, so ignore PERF_EF_UPDATE. See
+	 * cci_pmu_start()
+	 */
+	pmu_disable_counter(cci_pmu, idx);
+	pmu_event_update(event);
+	hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
+}
+
+static int cci_pmu_add(struct perf_event *event, int flags)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
+	struct hw_perf_event *hwc = &event->hw;
+	int idx;
+	int err = 0;
+
+	perf_pmu_disable(event->pmu);
+
+	/* If we don't have space for the counter then finish early. */
+	idx = pmu_get_event_idx(hw_events, event);
+	if (idx < 0) {
+		err = idx;
+		goto out;
+	}
+
+	event->hw.idx = idx;
+	hw_events->events[idx] = event;
+
+	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+	if (flags & PERF_EF_START)
+		cci_pmu_start(event, PERF_EF_RELOAD);
+
+	/* Propagate our changes to the userspace mapping. */
+	perf_event_update_userpage(event);
+
+out:
+	perf_pmu_enable(event->pmu);
+	return err;
+}
+
+static void cci_pmu_del(struct perf_event *event, int flags)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events;
+	struct hw_perf_event *hwc = &event->hw;
+	int idx = hwc->idx;
+
+	cci_pmu_stop(event, PERF_EF_UPDATE);
+	hw_events->events[idx] = NULL;
+	clear_bit(idx, hw_events->used_mask);
+
+	perf_event_update_userpage(event);
+}
+
+static int validate_event(struct pmu *cci_pmu,
+			  struct cci_pmu_hw_events *hw_events,
+			  struct perf_event *event)
+{
+	if (is_software_event(event))
+		return 1;
+
+	/*
+	 * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The
+	 * core perf code won't check that the pmu->ctx == leader->ctx
+	 * until after pmu->event_init(event).
+	 */
+	if (event->pmu != cci_pmu)
+		return 0;
+
+	if (event->state < PERF_EVENT_STATE_OFF)
+		return 1;
+
+	if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
+		return 1;
+
+	return pmu_get_event_idx(hw_events, event) >= 0;
+}
+
+static int validate_group(struct perf_event *event)
+{
+	struct perf_event *sibling, *leader = event->group_leader;
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)];
+	struct cci_pmu_hw_events fake_pmu = {
+		/*
+		 * Initialise the fake PMU. We only need to populate the
+		 * used_mask for the purposes of validation.
+		 */
+		.used_mask = mask,
+	};
+	memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long));
+
+	if (!validate_event(event->pmu, &fake_pmu, leader))
+		return -EINVAL;
+
+	for_each_sibling_event(sibling, leader) {
+		if (!validate_event(event->pmu, &fake_pmu, sibling))
+			return -EINVAL;
+	}
+
+	if (!validate_event(event->pmu, &fake_pmu, event))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int __hw_perf_event_init(struct perf_event *event)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	int mapping;
+
+	mapping = pmu_map_event(event);
+
+	if (mapping < 0) {
+		pr_debug("event %x:%llx not supported\n", event->attr.type,
+			 event->attr.config);
+		return mapping;
+	}
+
+	/*
+	 * We don't assign an index until we actually place the event onto
+	 * hardware. Use -1 to signify that we haven't decided where to put it
+	 * yet.
+	 */
+	hwc->idx		= -1;
+	hwc->config_base	= 0;
+	hwc->config		= 0;
+	hwc->event_base		= 0;
+
+	/*
+	 * Store the event encoding into the config_base field.
+	 */
+	hwc->config_base	    |= (unsigned long)mapping;
+
+	/*
+	 * Limit the sample_period to half of the counter width. That way, the
+	 * new counter value is far less likely to overtake the previous one
+	 * unless you have some serious IRQ latency issues.
+	 */
+	hwc->sample_period  = CCI_PMU_CNTR_MASK >> 1;
+	hwc->last_period    = hwc->sample_period;
+	local64_set(&hwc->period_left, hwc->sample_period);
+
+	if (event->group_leader != event) {
+		if (validate_group(event) != 0)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int cci_pmu_event_init(struct perf_event *event)
+{
+	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
+	atomic_t *active_events = &cci_pmu->active_events;
+	int err = 0;
+
+	if (event->attr.type != event->pmu->type)
+		return -ENOENT;
+
+	/* Shared by all CPUs, no meaningful state to sample */
+	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
+		return -EOPNOTSUPP;
+
+	/* We have no filtering of any kind */
+	if (event->attr.exclude_user	||
+	    event->attr.exclude_kernel	||
+	    event->attr.exclude_hv	||
+	    event->attr.exclude_idle	||
+	    event->attr.exclude_host	||
+	    event->attr.exclude_guest)
+		return -EINVAL;
+
+	/*
+	 * Following the example set by other "uncore" PMUs, we accept any CPU
+	 * and rewrite its affinity dynamically rather than having perf core
+	 * handle cpu == -1 and pid == -1 for this case.
+	 *
+	 * The perf core will pin online CPUs for the duration of this call and
+	 * the event being installed into its context, so the PMU's CPU can't
+	 * change under our feet.
+	 */
+	if (event->cpu < 0)
+		return -EINVAL;
+	event->cpu = cci_pmu->cpu;
+
+	event->destroy = hw_perf_event_destroy;
+	if (!atomic_inc_not_zero(active_events)) {
+		mutex_lock(&cci_pmu->reserve_mutex);
+		if (atomic_read(active_events) == 0)
+			err = cci_pmu_get_hw(cci_pmu);
+		if (!err)
+			atomic_inc(active_events);
+		mutex_unlock(&cci_pmu->reserve_mutex);
+	}
+	if (err)
+		return err;
+
+	err = __hw_perf_event_init(event);
+	if (err)
+		hw_perf_event_destroy(event);
+
+	return err;
+}
+
+static ssize_t pmu_cpumask_attr_show(struct device *dev,
+				     struct device_attribute *attr, char *buf)
+{
+	struct pmu *pmu = dev_get_drvdata(dev);
+	struct cci_pmu *cci_pmu = to_cci_pmu(pmu);
+
+	return cpumap_print_to_pagebuf(true, buf, cpumask_of(cci_pmu->cpu));
+}
+
+static struct device_attribute pmu_cpumask_attr =
+	__ATTR(cpumask, S_IRUGO, pmu_cpumask_attr_show, NULL);
+
+static struct attribute *pmu_attrs[] = {
+	&pmu_cpumask_attr.attr,
+	NULL,
+};
+
+static struct attribute_group pmu_attr_group = {
+	.attrs = pmu_attrs,
+};
+
+static struct attribute_group pmu_format_attr_group = {
+	.name = "format",
+	.attrs = NULL,		/* Filled in cci_pmu_init_attrs */
+};
+
+static struct attribute_group pmu_event_attr_group = {
+	.name = "events",
+	.attrs = NULL,		/* Filled in cci_pmu_init_attrs */
+};
+
+static const struct attribute_group *pmu_attr_groups[] = {
+	&pmu_attr_group,
+	&pmu_format_attr_group,
+	&pmu_event_attr_group,
+	NULL
+};
+
+static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev)
+{
+	const struct cci_pmu_model *model = cci_pmu->model;
+	char *name = model->name;
+	u32 num_cntrs;
+
+	pmu_event_attr_group.attrs = model->event_attrs;
+	pmu_format_attr_group.attrs = model->format_attrs;
+
+	cci_pmu->pmu = (struct pmu) {
+		.name		= cci_pmu->model->name,
+		.task_ctx_nr	= perf_invalid_context,
+		.pmu_enable	= cci_pmu_enable,
+		.pmu_disable	= cci_pmu_disable,
+		.event_init	= cci_pmu_event_init,
+		.add		= cci_pmu_add,
+		.del		= cci_pmu_del,
+		.start		= cci_pmu_start,
+		.stop		= cci_pmu_stop,
+		.read		= pmu_read,
+		.attr_groups	= pmu_attr_groups,
+	};
+
+	cci_pmu->plat_device = pdev;
+	num_cntrs = pmu_get_max_counters(cci_pmu);
+	if (num_cntrs > cci_pmu->model->num_hw_cntrs) {
+		dev_warn(&pdev->dev,
+			"PMU implements more counters(%d) than supported by"
+			" the model(%d), truncated.",
+			num_cntrs, cci_pmu->model->num_hw_cntrs);
+		num_cntrs = cci_pmu->model->num_hw_cntrs;
+	}
+	cci_pmu->num_cntrs = num_cntrs + cci_pmu->model->fixed_hw_cntrs;
+
+	return perf_pmu_register(&cci_pmu->pmu, name, -1);
+}
+
+static int cci_pmu_offline_cpu(unsigned int cpu)
+{
+	int target;
+
+	if (!g_cci_pmu || cpu != g_cci_pmu->cpu)
+		return 0;
+
+	target = cpumask_any_but(cpu_online_mask, cpu);
+	if (target >= nr_cpu_ids)
+		return 0;
+
+	perf_pmu_migrate_context(&g_cci_pmu->pmu, cpu, target);
+	g_cci_pmu->cpu = target;
+	return 0;
+}
+
+static struct cci_pmu_model cci_pmu_models[] = {
+#ifdef CONFIG_ARM_CCI400_PMU
+	[CCI400_R0] = {
+		.name = "CCI_400",
+		.fixed_hw_cntrs = 1,	/* Cycle counter */
+		.num_hw_cntrs = 4,
+		.cntr_size = SZ_4K,
+		.format_attrs = cci400_pmu_format_attrs,
+		.event_attrs = cci400_r0_pmu_event_attrs,
+		.event_ranges = {
+			[CCI_IF_SLAVE] = {
+				CCI400_R0_SLAVE_PORT_MIN_EV,
+				CCI400_R0_SLAVE_PORT_MAX_EV,
+			},
+			[CCI_IF_MASTER] = {
+				CCI400_R0_MASTER_PORT_MIN_EV,
+				CCI400_R0_MASTER_PORT_MAX_EV,
+			},
+		},
+		.validate_hw_event = cci400_validate_hw_event,
+		.get_event_idx = cci400_get_event_idx,
+	},
+	[CCI400_R1] = {
+		.name = "CCI_400_r1",
+		.fixed_hw_cntrs = 1,	/* Cycle counter */
+		.num_hw_cntrs = 4,
+		.cntr_size = SZ_4K,
+		.format_attrs = cci400_pmu_format_attrs,
+		.event_attrs = cci400_r1_pmu_event_attrs,
+		.event_ranges = {
+			[CCI_IF_SLAVE] = {
+				CCI400_R1_SLAVE_PORT_MIN_EV,
+				CCI400_R1_SLAVE_PORT_MAX_EV,
+			},
+			[CCI_IF_MASTER] = {
+				CCI400_R1_MASTER_PORT_MIN_EV,
+				CCI400_R1_MASTER_PORT_MAX_EV,
+			},
+		},
+		.validate_hw_event = cci400_validate_hw_event,
+		.get_event_idx = cci400_get_event_idx,
+	},
+#endif
+#ifdef CONFIG_ARM_CCI5xx_PMU
+	[CCI500_R0] = {
+		.name = "CCI_500",
+		.fixed_hw_cntrs = 0,
+		.num_hw_cntrs = 8,
+		.cntr_size = SZ_64K,
+		.format_attrs = cci5xx_pmu_format_attrs,
+		.event_attrs = cci5xx_pmu_event_attrs,
+		.event_ranges = {
+			[CCI_IF_SLAVE] = {
+				CCI5xx_SLAVE_PORT_MIN_EV,
+				CCI5xx_SLAVE_PORT_MAX_EV,
+			},
+			[CCI_IF_MASTER] = {
+				CCI5xx_MASTER_PORT_MIN_EV,
+				CCI5xx_MASTER_PORT_MAX_EV,
+			},
+			[CCI_IF_GLOBAL] = {
+				CCI5xx_GLOBAL_PORT_MIN_EV,
+				CCI5xx_GLOBAL_PORT_MAX_EV,
+			},
+		},
+		.validate_hw_event = cci500_validate_hw_event,
+		.write_counters	= cci5xx_pmu_write_counters,
+	},
+	[CCI550_R0] = {
+		.name = "CCI_550",
+		.fixed_hw_cntrs = 0,
+		.num_hw_cntrs = 8,
+		.cntr_size = SZ_64K,
+		.format_attrs = cci5xx_pmu_format_attrs,
+		.event_attrs = cci5xx_pmu_event_attrs,
+		.event_ranges = {
+			[CCI_IF_SLAVE] = {
+				CCI5xx_SLAVE_PORT_MIN_EV,
+				CCI5xx_SLAVE_PORT_MAX_EV,
+			},
+			[CCI_IF_MASTER] = {
+				CCI5xx_MASTER_PORT_MIN_EV,
+				CCI5xx_MASTER_PORT_MAX_EV,
+			},
+			[CCI_IF_GLOBAL] = {
+				CCI5xx_GLOBAL_PORT_MIN_EV,
+				CCI5xx_GLOBAL_PORT_MAX_EV,
+			},
+		},
+		.validate_hw_event = cci550_validate_hw_event,
+		.write_counters	= cci5xx_pmu_write_counters,
+	},
+#endif
+};
+
+static const struct of_device_id arm_cci_pmu_matches[] = {
+#ifdef CONFIG_ARM_CCI400_PMU
+	{
+		.compatible = "arm,cci-400-pmu",
+		.data	= NULL,
+	},
+	{
+		.compatible = "arm,cci-400-pmu,r0",
+		.data	= &cci_pmu_models[CCI400_R0],
+	},
+	{
+		.compatible = "arm,cci-400-pmu,r1",
+		.data	= &cci_pmu_models[CCI400_R1],
+	},
+#endif
+#ifdef CONFIG_ARM_CCI5xx_PMU
+	{
+		.compatible = "arm,cci-500-pmu,r0",
+		.data = &cci_pmu_models[CCI500_R0],
+	},
+	{
+		.compatible = "arm,cci-550-pmu,r0",
+		.data = &cci_pmu_models[CCI550_R0],
+	},
+#endif
+	{},
+};
+
+static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs)
+{
+	int i;
+
+	for (i = 0; i < nr_irqs; i++)
+		if (irq == irqs[i])
+			return true;
+
+	return false;
+}
+
+static struct cci_pmu *cci_pmu_alloc(struct device *dev)
+{
+	struct cci_pmu *cci_pmu;
+	const struct cci_pmu_model *model;
+
+	/*
+	 * All allocations are devm_* hence we don't have to free
+	 * them explicitly on an error, as it would end up in driver
+	 * detach.
+	 */
+	cci_pmu = devm_kzalloc(dev, sizeof(*cci_pmu), GFP_KERNEL);
+	if (!cci_pmu)
+		return ERR_PTR(-ENOMEM);
+
+	cci_pmu->ctrl_base = *(void __iomem **)dev->platform_data;
+
+	model = of_device_get_match_data(dev);
+	if (!model) {
+		dev_warn(dev,
+			 "DEPRECATED compatible property, requires secure access to CCI registers");
+		model = probe_cci_model(cci_pmu);
+	}
+	if (!model) {
+		dev_warn(dev, "CCI PMU version not supported\n");
+		return ERR_PTR(-ENODEV);
+	}
+
+	cci_pmu->model = model;
+	cci_pmu->irqs = devm_kcalloc(dev, CCI_PMU_MAX_HW_CNTRS(model),
+					sizeof(*cci_pmu->irqs), GFP_KERNEL);
+	if (!cci_pmu->irqs)
+		return ERR_PTR(-ENOMEM);
+	cci_pmu->hw_events.events = devm_kcalloc(dev,
+					     CCI_PMU_MAX_HW_CNTRS(model),
+					     sizeof(*cci_pmu->hw_events.events),
+					     GFP_KERNEL);
+	if (!cci_pmu->hw_events.events)
+		return ERR_PTR(-ENOMEM);
+	cci_pmu->hw_events.used_mask = devm_kcalloc(dev,
+						BITS_TO_LONGS(CCI_PMU_MAX_HW_CNTRS(model)),
+						sizeof(*cci_pmu->hw_events.used_mask),
+						GFP_KERNEL);
+	if (!cci_pmu->hw_events.used_mask)
+		return ERR_PTR(-ENOMEM);
+
+	return cci_pmu;
+}
+
+static int cci_pmu_probe(struct platform_device *pdev)
+{
+	struct resource *res;
+	struct cci_pmu *cci_pmu;
+	int i, ret, irq;
+
+	cci_pmu = cci_pmu_alloc(&pdev->dev);
+	if (IS_ERR(cci_pmu))
+		return PTR_ERR(cci_pmu);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	cci_pmu->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(cci_pmu->base))
+		return PTR_ERR(cci_pmu->base);
+
+	/*
+	 * CCI PMU has one overflow interrupt per counter; but some may be tied
+	 * together to a common interrupt.
+	 */
+	cci_pmu->nr_irqs = 0;
+	for (i = 0; i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model); i++) {
+		irq = platform_get_irq(pdev, i);
+		if (irq < 0)
+			break;
+
+		if (is_duplicate_irq(irq, cci_pmu->irqs, cci_pmu->nr_irqs))
+			continue;
+
+		cci_pmu->irqs[cci_pmu->nr_irqs++] = irq;
+	}
+
+	/*
+	 * Ensure that the device tree has as many interrupts as the number
+	 * of counters.
+	 */
+	if (i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)) {
+		dev_warn(&pdev->dev, "Incorrect number of interrupts: %d, should be %d\n",
+			i, CCI_PMU_MAX_HW_CNTRS(cci_pmu->model));
+		return -EINVAL;
+	}
+
+	raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock);
+	mutex_init(&cci_pmu->reserve_mutex);
+	atomic_set(&cci_pmu->active_events, 0);
+	cci_pmu->cpu = get_cpu();
+
+	ret = cci_pmu_init(cci_pmu, pdev);
+	if (ret) {
+		put_cpu();
+		return ret;
+	}
+
+	cpuhp_setup_state_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE,
+				  "perf/arm/cci:online", NULL,
+				  cci_pmu_offline_cpu);
+	put_cpu();
+	g_cci_pmu = cci_pmu;
+	pr_info("ARM %s PMU driver probed\n", cci_pmu->model->name);
+	return 0;
+}
+
+static struct platform_driver cci_pmu_driver = {
+	.driver = {
+		   .name = DRIVER_NAME,
+		   .of_match_table = arm_cci_pmu_matches,
+		  },
+	.probe = cci_pmu_probe,
+};
+
+builtin_platform_driver(cci_pmu_driver);
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ARM CCI PMU support");

+ 0 - 0
drivers/bus/arm-ccn.c → drivers/perf/arm-ccn.c


+ 14 - 3
drivers/reset/Kconfig

@@ -49,6 +49,7 @@ config RESET_HSDK
 
 config RESET_IMX7
 	bool "i.MX7 Reset Driver" if COMPILE_TEST
+	depends on HAS_IOMEM
 	default SOC_IMX7D
 	select MFD_SYSCON
 	help
@@ -83,14 +84,24 @@ config RESET_PISTACHIO
 
 config RESET_SIMPLE
 	bool "Simple Reset Controller Driver" if COMPILE_TEST
-	default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX
+	default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED
 	help
 	  This enables a simple reset controller driver for reset lines that
 	  can be asserted and deasserted by toggling bits in a contiguous,
 	  exclusive register space.
 
-	  Currently this driver supports Altera SoCFPGAs, the RCC reset
-	  controller in STM32 MCUs, Allwinner SoCs, and ZTE's zx2967 family.
+	  Currently this driver supports:
+	   - Altera SoCFPGAs
+	   - ASPEED BMC SoCs
+	   - RCC reset controller in STM32 MCUs
+	   - Allwinner SoCs
+	   - ZTE's zx2967 family
+
+config RESET_STM32MP157
+	bool "STM32MP157 Reset Driver" if COMPILE_TEST
+	default MACH_STM32MP157
+	help
+	  This enables the RCC reset controller driver for STM32 MPUs.
 
 config RESET_SUNXI
 	bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI

+ 1 - 0
drivers/reset/Makefile

@@ -15,6 +15,7 @@ obj-$(CONFIG_RESET_MESON) += reset-meson.o
 obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o
 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o
 obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o
+obj-$(CONFIG_RESET_STM32MP157) += reset-stm32mp1.o
 obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o
 obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o
 obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o

+ 95 - 1
drivers/reset/core.c

@@ -23,6 +23,9 @@
 static DEFINE_MUTEX(reset_list_mutex);
 static LIST_HEAD(reset_controller_list);
 
+static DEFINE_MUTEX(reset_lookup_mutex);
+static LIST_HEAD(reset_lookup_list);
+
 /**
  * struct reset_control - a reset control
  * @rcdev: a pointer to the reset controller device
@@ -148,6 +151,33 @@ int devm_reset_controller_register(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(devm_reset_controller_register);
 
+/**
+ * reset_controller_add_lookup - register a set of lookup entries
+ * @lookup: array of reset lookup entries
+ * @num_entries: number of entries in the lookup array
+ */
+void reset_controller_add_lookup(struct reset_control_lookup *lookup,
+				 unsigned int num_entries)
+{
+	struct reset_control_lookup *entry;
+	unsigned int i;
+
+	mutex_lock(&reset_lookup_mutex);
+	for (i = 0; i < num_entries; i++) {
+		entry = &lookup[i];
+
+		if (!entry->dev_id || !entry->provider) {
+			pr_warn("%s(): reset lookup entry badly specified, skipping\n",
+				__func__);
+			continue;
+		}
+
+		list_add_tail(&entry->list, &reset_lookup_list);
+	}
+	mutex_unlock(&reset_lookup_mutex);
+}
+EXPORT_SYMBOL_GPL(reset_controller_add_lookup);
+
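
As a usage sketch (not part of this diff), a non-DT board file could register a lookup entry as below; the provider string must match dev_name() of the reset controller's device, dev_id must match dev_name() of the consumer, and both names here are hypothetical.

	static struct reset_control_lookup board_reset_lookup[] = {
		{
			.provider = "1c002000.reset",	/* hypothetical provider dev_name() */
			.index    = 3,
			.dev_id   = "foo-device.0",	/* hypothetical consumer dev_name() */
			.con_id   = NULL,
		},
	};

	static int __init board_init(void)
	{
		reset_controller_add_lookup(board_reset_lookup,
					    ARRAY_SIZE(board_reset_lookup));
		return 0;
	}

A later reset_control_get(dev, NULL) from "foo-device.0" then resolves through __reset_control_get_from_lookup() below.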
 static inline struct reset_control_array *
 rstc_to_array(struct reset_control *rstc) {
 	return container_of(rstc, struct reset_control_array, base);
@@ -493,6 +523,70 @@ struct reset_control *__of_reset_control_get(struct device_node *node,
 }
 EXPORT_SYMBOL_GPL(__of_reset_control_get);
 
+static struct reset_controller_dev *
+__reset_controller_by_name(const char *name)
+{
+	struct reset_controller_dev *rcdev;
+
+	lockdep_assert_held(&reset_list_mutex);
+
+	list_for_each_entry(rcdev, &reset_controller_list, list) {
+		if (!rcdev->dev)
+			continue;
+
+		if (!strcmp(name, dev_name(rcdev->dev)))
+			return rcdev;
+	}
+
+	return NULL;
+}
+
+static struct reset_control *
+__reset_control_get_from_lookup(struct device *dev, const char *con_id,
+				bool shared, bool optional)
+{
+	const struct reset_control_lookup *lookup;
+	struct reset_controller_dev *rcdev;
+	const char *dev_id;
+	struct reset_control *rstc = NULL;
+
+	if (!dev)
+		return ERR_PTR(-EINVAL);
+
+	dev_id = dev_name(dev);
+
+	mutex_lock(&reset_lookup_mutex);
+
+	list_for_each_entry(lookup, &reset_lookup_list, list) {
+		if (strcmp(lookup->dev_id, dev_id))
+			continue;
+
+		if ((!con_id && !lookup->con_id) ||
+		    ((con_id && lookup->con_id) &&
+		     !strcmp(con_id, lookup->con_id))) {
+			mutex_lock(&reset_list_mutex);
+			rcdev = __reset_controller_by_name(lookup->provider);
+			if (!rcdev) {
+				mutex_unlock(&reset_list_mutex);
+				mutex_unlock(&reset_lookup_mutex);
+				/* Reset provider may not be ready yet. */
+				return ERR_PTR(-EPROBE_DEFER);
+			}
+
+			rstc = __reset_control_get_internal(rcdev,
+							    lookup->index,
+							    shared);
+			mutex_unlock(&reset_list_mutex);
+			break;
+		}
+	}
+
+	mutex_unlock(&reset_lookup_mutex);
+
+	if (!rstc)
+		return optional ? NULL : ERR_PTR(-ENOENT);
+
+	return rstc;
+}
+
 struct reset_control *__reset_control_get(struct device *dev, const char *id,
 					  int index, bool shared, bool optional)
 {
@@ -500,7 +594,7 @@ struct reset_control *__reset_control_get(struct device *dev, const char *id,
 		return __of_reset_control_get(dev->of_node, id, index, shared,
 					      optional);
 
-	return optional ? NULL : ERR_PTR(-EINVAL);
+	return __reset_control_get_from_lookup(dev, id, shared, optional);
 }
 EXPORT_SYMBOL_GPL(__reset_control_get);
 

+ 5 - 17
drivers/reset/reset-meson.c

@@ -124,29 +124,21 @@ static int meson_reset_deassert(struct reset_controller_dev *rcdev,
 	return meson_reset_level(rcdev, id, false);
 }
 
-static const struct reset_control_ops meson_reset_meson8_ops = {
-	.reset		= meson_reset_reset,
-};
-
-static const struct reset_control_ops meson_reset_gx_ops = {
+static const struct reset_control_ops meson_reset_ops = {
 	.reset		= meson_reset_reset,
 	.assert		= meson_reset_assert,
 	.deassert	= meson_reset_deassert,
 };
 
 static const struct of_device_id meson_reset_dt_ids[] = {
-	 { .compatible = "amlogic,meson8b-reset",
-	   .data = &meson_reset_meson8_ops, },
-	 { .compatible = "amlogic,meson-gxbb-reset",
-	   .data = &meson_reset_gx_ops, },
-	 { .compatible = "amlogic,meson-axg-reset",
-	   .data = &meson_reset_gx_ops, },
+	 { .compatible = "amlogic,meson8b-reset" },
+	 { .compatible = "amlogic,meson-gxbb-reset" },
+	 { .compatible = "amlogic,meson-axg-reset" },
 	 { /* sentinel */ },
 };
 
 static int meson_reset_probe(struct platform_device *pdev)
 {
-	const struct reset_control_ops *ops;
 	struct meson_reset *data;
 	struct resource *res;
 
@@ -154,10 +146,6 @@ static int meson_reset_probe(struct platform_device *pdev)
 	if (!data)
 		return -ENOMEM;
 
-	ops = of_device_get_match_data(&pdev->dev);
-	if (!ops)
-		return -EINVAL;
-
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	data->reg_base = devm_ioremap_resource(&pdev->dev, res);
 	if (IS_ERR(data->reg_base))
@@ -169,7 +157,7 @@ static int meson_reset_probe(struct platform_device *pdev)
 
 	data->rcdev.owner = THIS_MODULE;
 	data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG;
-	data->rcdev.ops = ops;
+	data->rcdev.ops = &meson_reset_ops;
 	data->rcdev.of_node = pdev->dev.of_node;
 
 	return devm_reset_controller_register(&pdev->dev, &data->rcdev);

+ 2 - 0
drivers/reset/reset-simple.c

@@ -125,6 +125,8 @@ static const struct of_device_id reset_simple_dt_ids[] = {
 		.data = &reset_simple_active_low },
 	{ .compatible = "zte,zx296718-reset",
 		.data = &reset_simple_active_low },
+	{ .compatible = "aspeed,ast2400-lpc-reset" },
+	{ .compatible = "aspeed,ast2500-lpc-reset" },
 	{ /* sentinel */ },
 };
 

+ 115 - 0
drivers/reset/reset-stm32mp1.c

@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) STMicroelectronics 2018 - All Rights Reserved
+ * Author: Gabriel Fernandez <gabriel.fernandez@st.com> for STMicroelectronics.
+ */
+
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/reset-controller.h>
+
+#define CLR_OFFSET 0x4
+
+struct stm32_reset_data {
+	struct reset_controller_dev	rcdev;
+	void __iomem			*membase;
+};
+
+static inline struct stm32_reset_data *
+to_stm32_reset_data(struct reset_controller_dev *rcdev)
+{
+	return container_of(rcdev, struct stm32_reset_data, rcdev);
+}
+
+static int stm32_reset_update(struct reset_controller_dev *rcdev,
+			      unsigned long id, bool assert)
+{
+	struct stm32_reset_data *data = to_stm32_reset_data(rcdev);
+	int reg_width = sizeof(u32);
+	int bank = id / (reg_width * BITS_PER_BYTE);
+	int offset = id % (reg_width * BITS_PER_BYTE);
+	void __iomem *addr;
+
+	addr = data->membase + (bank * reg_width);
+	if (!assert)
+		addr += CLR_OFFSET;
+
+	writel(BIT(offset), addr);
+
+	return 0;
+}
+
+static int stm32_reset_assert(struct reset_controller_dev *rcdev,
+			      unsigned long id)
+{
+	return stm32_reset_update(rcdev, id, true);
+}
+
+static int stm32_reset_deassert(struct reset_controller_dev *rcdev,
+				unsigned long id)
+{
+	return stm32_reset_update(rcdev, id, false);
+}
+
+static int stm32_reset_status(struct reset_controller_dev *rcdev,
+			      unsigned long id)
+{
+	struct stm32_reset_data *data = to_stm32_reset_data(rcdev);
+	int reg_width = sizeof(u32);
+	int bank = id / (reg_width * BITS_PER_BYTE);
+	int offset = id % (reg_width * BITS_PER_BYTE);
+	u32 reg;
+
+	reg = readl(data->membase + (bank * reg_width));
+
+	return !!(reg & BIT(offset));
+}
+
+static const struct reset_control_ops stm32_reset_ops = {
+	.assert		= stm32_reset_assert,
+	.deassert	= stm32_reset_deassert,
+	.status		= stm32_reset_status,
+};
+
+static const struct of_device_id stm32_reset_dt_ids[] = {
+	{ .compatible = "st,stm32mp1-rcc"},
+	{ /* sentinel */ },
+};
+
+static int stm32_reset_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct stm32_reset_data *data;
+	void __iomem *membase;
+	struct resource *res;
+
+	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	membase = devm_ioremap_resource(dev, res);
+	if (IS_ERR(membase))
+		return PTR_ERR(membase);
+
+	data->membase = membase;
+	data->rcdev.owner = THIS_MODULE;
+	data->rcdev.nr_resets = resource_size(res) * BITS_PER_BYTE;
+	data->rcdev.ops = &stm32_reset_ops;
+	data->rcdev.of_node = dev->of_node;
+
+	return devm_reset_controller_register(dev, &data->rcdev);
+}
+
+static struct platform_driver stm32_reset_driver = {
+	.probe	= stm32_reset_probe,
+	.driver = {
+		.name		= "stm32mp1-reset",
+		.of_match_table	= stm32_reset_dt_ids,
+	},
+};
+
+builtin_platform_driver(stm32_reset_driver);
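
To make the register arithmetic in stm32_reset_update() and stm32_reset_status() concrete (illustrative id, not a real STM32MP1 reset line):

	/* id 53 with 32 bits per bank */
	int bank   = 53 / 32;	/* = 1 */
	int offset = 53 % 32;	/* = 21 */
	/* assert:   writel(BIT(21), membase + 1 * sizeof(u32));              */
	/* deassert: writel(BIT(21), membase + 1 * sizeof(u32) + CLR_OFFSET); */

Each bank is thus a set register with its paired clear register CLR_OFFSET (0x4) after it, matching the set/clear reset register pairs in the STM32MP1 RCC.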

+ 5 - 0
drivers/reset/reset-uniphier.c

@@ -63,6 +63,7 @@ static const struct uniphier_reset_data uniphier_pro4_sys_reset_data[] = {
 	UNIPHIER_RESETX(12, 0x2000, 6),		/* GIO (Ether, SATA, USB3) */
 	UNIPHIER_RESETX(14, 0x2000, 17),	/* USB30 */
 	UNIPHIER_RESETX(15, 0x2004, 17),	/* USB31 */
+	UNIPHIER_RESETX(40, 0x2000, 13),	/* AIO */
 	UNIPHIER_RESET_END,
 };
 
@@ -72,6 +73,7 @@ static const struct uniphier_reset_data uniphier_pro5_sys_reset_data[] = {
 	UNIPHIER_RESETX(12, 0x2000, 6),		/* GIO (PCIe, USB3) */
 	UNIPHIER_RESETX(14, 0x2000, 17),	/* USB30 */
 	UNIPHIER_RESETX(15, 0x2004, 17),	/* USB31 */
+	UNIPHIER_RESETX(40, 0x2000, 13),	/* AIO */
 	UNIPHIER_RESET_END,
 };
 
@@ -88,6 +90,7 @@ static const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = {
 	UNIPHIER_RESETX(21, 0x2014, 1),		/* USB31-PHY1 */
 	UNIPHIER_RESETX(28, 0x2014, 12),	/* SATA */
 	UNIPHIER_RESET(29, 0x2014, 8),		/* SATA-PHY (active high) */
+	UNIPHIER_RESETX(40, 0x2000, 13),	/* AIO */
 	UNIPHIER_RESET_END,
 };
 
@@ -121,6 +124,8 @@ static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = {
 static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = {
 	UNIPHIER_RESETX(2, 0x200c, 0),		/* NAND */
 	UNIPHIER_RESETX(4, 0x200c, 2),		/* eMMC */
+	UNIPHIER_RESETX(6, 0x200c, 9),		/* Ether0 */
+	UNIPHIER_RESETX(7, 0x200c, 10),		/* Ether1 */
 	UNIPHIER_RESETX(8, 0x200c, 12),		/* STDMAC */
 	UNIPHIER_RESETX(12, 0x200c, 4),		/* USB30 link (GIO0) */
 	UNIPHIER_RESETX(13, 0x200c, 5),		/* USB31 link (GIO1) */

+ 7 - 2
drivers/soc/amlogic/meson-gx-pwrc-vpu.c

@@ -184,7 +184,8 @@ static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev)
 
 	rstc = devm_reset_control_array_get(&pdev->dev, false, false);
 	if (IS_ERR(rstc)) {
-		dev_err(&pdev->dev, "failed to get reset lines\n");
+		if (PTR_ERR(rstc) != -EPROBE_DEFER)
+			dev_err(&pdev->dev, "failed to get reset lines\n");
 		return PTR_ERR(rstc);
 	}
 
@@ -224,7 +225,11 @@ static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev)
 
 static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev)
 {
-	meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd);
+	bool powered_off;
+
+	powered_off = meson_gx_pwrc_vpu_get_power(&vpu_hdmi_pd);
+	if (!powered_off)
+		meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd);
 }
 
 static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = {

+ 10 - 1
drivers/soc/amlogic/meson-gx-socinfo.c

@@ -33,6 +33,10 @@ static const struct meson_gx_soc_id {
 	{ "GXL", 0x21 },
 	{ "GXM", 0x22 },
 	{ "TXL", 0x23 },
+	{ "TXLX", 0x24 },
+	{ "AXG", 0x25 },
+	{ "GXLX", 0x26 },
+	{ "TXHD", 0x27 },
 };
 
 static const struct meson_gx_package_id {
@@ -45,9 +49,14 @@ static const struct meson_gx_package_id {
 	{ "S905M", 0x1f, 0x20 },
 	{ "S905D", 0x21, 0 },
 	{ "S905X", 0x21, 0x80 },
+	{ "S905W", 0x21, 0xa0 },
 	{ "S905L", 0x21, 0xc0 },
 	{ "S905M2", 0x21, 0xe0 },
 	{ "S912", 0x22, 0 },
+	{ "962X", 0x24, 0x10 },
+	{ "962E", 0x24, 0x20 },
+	{ "A113X", 0x25, 0x37 },
+	{ "A113D", 0x25, 0x22 },
 };
 
 static inline unsigned int socinfo_to_major(u32 socinfo)
@@ -98,7 +107,7 @@ static const char *socinfo_to_soc_id(u32 socinfo)
 	return "Unknown";
 }
 
-int __init meson_gx_socinfo_init(void)
+static int __init meson_gx_socinfo_init(void)
 {
 	struct soc_device_attribute *soc_dev_attr;
 	struct soc_device *soc_dev;

+ 1 - 1
drivers/soc/amlogic/meson-mx-socinfo.c

@@ -104,7 +104,7 @@ static const struct of_device_id meson_mx_socinfo_analog_top_ids[] = {
 	{ /* sentinel */ }
 };
 
-int __init meson_mx_socinfo_init(void)
+static int __init meson_mx_socinfo_init(void)
 {
 	struct soc_device_attribute *soc_dev_attr;
 	struct soc_device *soc_dev;

+ 1 - 0
drivers/soc/imx/gpc.c

@@ -254,6 +254,7 @@ static struct imx_pm_domain imx_gpc_domains[] = {
 	{
 		.base = {
 			.name = "ARM",
+			.flags = GENPD_FLAG_ALWAYS_ON,
 		},
 	}, {
 		.base = {

+ 99 - 5
drivers/soc/mediatek/mtk-scpsys.c

@@ -24,6 +24,7 @@
 #include <dt-bindings/power/mt2712-power.h>
 #include <dt-bindings/power/mt6797-power.h>
 #include <dt-bindings/power/mt7622-power.h>
+#include <dt-bindings/power/mt7623a-power.h>
 #include <dt-bindings/power/mt8173-power.h>
 
 #define SPM_VDE_PWR_CON			0x0210
@@ -518,7 +519,8 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
 		.name = "conn",
 		.sta_mask = PWR_STATUS_CONN,
 		.ctl_offs = SPM_CONN_PWR_CON,
-		.bus_prot_mask = 0x0104,
+		.bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M |
+				 MT2701_TOP_AXI_PROT_EN_CONN_S,
 		.clk_id = {CLK_NONE},
 		.active_wakeup = true,
 	},
@@ -528,7 +530,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
 		.ctl_offs = SPM_DIS_PWR_CON,
 		.sram_pdn_bits = GENMASK(11, 8),
 		.clk_id = {CLK_MM},
-		.bus_prot_mask = 0x0002,
+		.bus_prot_mask = MT2701_TOP_AXI_PROT_EN_MM_M0,
 		.active_wakeup = true,
 	},
 	[MT2701_POWER_DOMAIN_MFG] = {
@@ -664,12 +666,48 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
 		.name = "mfg",
 		.sta_mask = PWR_STATUS_MFG,
 		.ctl_offs = SPM_MFG_PWR_CON,
-		.sram_pdn_bits = GENMASK(11, 8),
-		.sram_pdn_ack_bits = GENMASK(19, 16),
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(16, 16),
 		.clk_id = {CLK_MFG},
 		.bus_prot_mask = BIT(14) | BIT(21) | BIT(23),
 		.active_wakeup = true,
 	},
+	[MT2712_POWER_DOMAIN_MFG_SC1] = {
+		.name = "mfg_sc1",
+		.sta_mask = BIT(22),
+		.ctl_offs = 0x02c0,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(16, 16),
+		.clk_id = {CLK_NONE},
+		.active_wakeup = true,
+	},
+	[MT2712_POWER_DOMAIN_MFG_SC2] = {
+		.name = "mfg_sc2",
+		.sta_mask = BIT(23),
+		.ctl_offs = 0x02c4,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(16, 16),
+		.clk_id = {CLK_NONE},
+		.active_wakeup = true,
+	},
+	[MT2712_POWER_DOMAIN_MFG_SC3] = {
+		.name = "mfg_sc3",
+		.sta_mask = BIT(30),
+		.ctl_offs = 0x01f8,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(16, 16),
+		.clk_id = {CLK_NONE},
+		.active_wakeup = true,
+	},
+};
+
+static const struct scp_subdomain scp_subdomain_mt2712[] = {
+	{MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_VDEC},
+	{MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_VENC},
+	{MT2712_POWER_DOMAIN_MM, MT2712_POWER_DOMAIN_ISP},
+	{MT2712_POWER_DOMAIN_MFG, MT2712_POWER_DOMAIN_MFG_SC1},
+	{MT2712_POWER_DOMAIN_MFG_SC1, MT2712_POWER_DOMAIN_MFG_SC2},
+	{MT2712_POWER_DOMAIN_MFG_SC2, MT2712_POWER_DOMAIN_MFG_SC3},
 };
 
 /*
@@ -793,6 +831,47 @@ static const struct scp_domain_data scp_domain_data_mt7622[] = {
 	},
 };
 
+/*
+ * MT7623A power domain support
+ */
+
+static const struct scp_domain_data scp_domain_data_mt7623a[] = {
+	[MT7623A_POWER_DOMAIN_CONN] = {
+		.name = "conn",
+		.sta_mask = PWR_STATUS_CONN,
+		.ctl_offs = SPM_CONN_PWR_CON,
+		.bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M |
+				 MT2701_TOP_AXI_PROT_EN_CONN_S,
+		.clk_id = {CLK_NONE},
+		.active_wakeup = true,
+	},
+	[MT7623A_POWER_DOMAIN_ETH] = {
+		.name = "eth",
+		.sta_mask = PWR_STATUS_ETH,
+		.ctl_offs = SPM_ETH_PWR_CON,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(15, 12),
+		.clk_id = {CLK_ETHIF},
+		.active_wakeup = true,
+	},
+	[MT7623A_POWER_DOMAIN_HIF] = {
+		.name = "hif",
+		.sta_mask = PWR_STATUS_HIF,
+		.ctl_offs = SPM_HIF_PWR_CON,
+		.sram_pdn_bits = GENMASK(11, 8),
+		.sram_pdn_ack_bits = GENMASK(15, 12),
+		.clk_id = {CLK_ETHIF},
+		.active_wakeup = true,
+	},
+	[MT7623A_POWER_DOMAIN_IFR_MSC] = {
+		.name = "ifr_msc",
+		.sta_mask = PWR_STATUS_IFR_MSC,
+		.ctl_offs = SPM_IFR_MSC_PWR_CON,
+		.clk_id = {CLK_NONE},
+		.active_wakeup = true,
+	},
+};
+
 /*
  * MT8173 power domain support
  */
@@ -905,6 +984,8 @@ static const struct scp_soc_data mt2701_data = {
 static const struct scp_soc_data mt2712_data = {
 	.domains = scp_domain_data_mt2712,
 	.num_domains = ARRAY_SIZE(scp_domain_data_mt2712),
+	.subdomains = scp_subdomain_mt2712,
+	.num_subdomains = ARRAY_SIZE(scp_subdomain_mt2712),
 	.regs = {
 		.pwr_sta_offs = SPM_PWR_STATUS,
 		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND
@@ -934,6 +1015,16 @@ static const struct scp_soc_data mt7622_data = {
 	.bus_prot_reg_update = true,
 };
 
+static const struct scp_soc_data mt7623a_data = {
+	.domains = scp_domain_data_mt7623a,
+	.num_domains = ARRAY_SIZE(scp_domain_data_mt7623a),
+	.regs = {
+		.pwr_sta_offs = SPM_PWR_STATUS,
+		.pwr_sta2nd_offs = SPM_PWR_STATUS_2ND
+	},
+	.bus_prot_reg_update = true,
+};
+
 static const struct scp_soc_data mt8173_data = {
 	.domains = scp_domain_data_mt8173,
 	.num_domains = ARRAY_SIZE(scp_domain_data_mt8173),
@@ -963,6 +1054,9 @@ static const struct of_device_id of_scpsys_match_tbl[] = {
 	}, {
 		.compatible = "mediatek,mt7622-scpsys",
 		.data = &mt7622_data,
+	}, {
+		.compatible = "mediatek,mt7623a-scpsys",
+		.data = &mt7623a_data,
 	}, {
 		.compatible = "mediatek,mt8173-scpsys",
 		.data = &mt8173_data,
@@ -992,7 +1086,7 @@ static int scpsys_probe(struct platform_device *pdev)
 
 	pd_data = &scp->pd_data;
 
-	for (i = 0, sd = soc->subdomains ; i < soc->num_subdomains ; i++) {
+	for (i = 0, sd = soc->subdomains; i < soc->num_subdomains; i++, sd++) {
 		ret = pm_genpd_add_subdomain(pd_data->domains[sd->origin],
 					     pd_data->domains[sd->subdomain]);
 		if (ret && IS_ENABLED(CONFIG_PM))

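For context, pm_genpd_add_subdomain() registers its second argument as a child of the first, so the fixed loop above builds the MT2712 power-domain hierarchy encoded by scp_subdomain_mt2712[]; a sketch of the resulting tree, not driver code:

	/*
	 * Resulting genpd hierarchy on MT2712 (parent -> child):
	 *
	 *   MM  -> VDEC, VENC, ISP
	 *   MFG -> MFG_SC1 -> MFG_SC2 -> MFG_SC3
	 *
	 * Without the sd++, the loop re-linked the first table entry on
	 * every iteration instead of walking the subdomain table.
	 */
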
+ 1 - 0
drivers/soc/qcom/Kconfig

@@ -47,6 +47,7 @@ config QCOM_QMI_HELPERS
 config QCOM_RMTFS_MEM
 	tristate "Qualcomm Remote Filesystem memory driver"
 	depends on ARCH_QCOM
+	select QCOM_SCM
 	help
 	  The Qualcomm remote filesystem memory driver is used for allocating
 	  and exposing regions of shared memory with remote processors for the

+ 34 - 0
drivers/soc/qcom/rmtfs_mem.c

@@ -37,6 +37,8 @@ struct qcom_rmtfs_mem {
 	phys_addr_t size;
 
 	unsigned int client_id;
+
+	unsigned int perms;
 };
 
 static ssize_t qcom_rmtfs_mem_show(struct device *dev,
@@ -151,9 +153,11 @@ static void qcom_rmtfs_mem_release_device(struct device *dev)
 static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
 {
 	struct device_node *node = pdev->dev.of_node;
+	struct qcom_scm_vmperm perms[2];
 	struct reserved_mem *rmem;
 	struct qcom_rmtfs_mem *rmtfs_mem;
 	u32 client_id;
+	u32 vmid;
 	int ret;
 
 	rmem = of_reserved_mem_lookup(node);
@@ -204,10 +208,31 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
 
 	rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
 
+	ret = of_property_read_u32(node, "qcom,vmid", &vmid);
+	if (ret < 0 && ret != -EINVAL) {
+		dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
+		goto remove_cdev;
+	} else if (!ret) {
+		perms[0].vmid = QCOM_SCM_VMID_HLOS;
+		perms[0].perm = QCOM_SCM_PERM_RW;
+		perms[1].vmid = vmid;
+		perms[1].perm = QCOM_SCM_PERM_RW;
+
+		rmtfs_mem->perms = BIT(QCOM_SCM_VMID_HLOS);
+		ret = qcom_scm_assign_mem(rmtfs_mem->addr, rmtfs_mem->size,
+					  &rmtfs_mem->perms, perms, 2);
+		if (ret < 0) {
+			dev_err(&pdev->dev, "assign memory failed\n");
+			goto remove_cdev;
+		}
+	}
+
 	dev_set_drvdata(&pdev->dev, rmtfs_mem);
 
 	return 0;
 
+remove_cdev:
+	cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev);
 put_device:
 	put_device(&rmtfs_mem->dev);
 
@@ -217,6 +242,15 @@ put_device:
 static int qcom_rmtfs_mem_remove(struct platform_device *pdev)
 {
 	struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev);
+	struct qcom_scm_vmperm perm;
+
+	if (rmtfs_mem->perms) {
+		perm.vmid = QCOM_SCM_VMID_HLOS;
+		perm.perm = QCOM_SCM_PERM_RW;
+
+		qcom_scm_assign_mem(rmtfs_mem->addr, rmtfs_mem->size,
+				    &rmtfs_mem->perms, &perm, 1);
+	}
 
 	cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev);
 	put_device(&rmtfs_mem->dev);

+ 1 - 1
drivers/soc/qcom/wcnss_ctrl.c

@@ -249,7 +249,7 @@ static int wcnss_download_nv(struct wcnss_ctrl *wcnss, bool *expect_cbc)
 		/* Increment for next fragment */
 		req->seq++;
 
-		data += req->hdr.len;
+		data += NV_FRAGMENT_SIZE;
 		left -= NV_FRAGMENT_SIZE;
 	} while (left > 0);
 

+ 28 - 0
drivers/soc/rockchip/grf.c

@@ -43,6 +43,28 @@ static const struct rockchip_grf_info rk3036_grf __initconst = {
 	.num_values = ARRAY_SIZE(rk3036_defaults),
 };
 
+#define RK3128_GRF_SOC_CON0		0x140
+
+static const struct rockchip_grf_value rk3128_defaults[] __initconst = {
+	{ "jtag switching", RK3128_GRF_SOC_CON0, HIWORD_UPDATE(0, 1, 8) },
+};
+
+static const struct rockchip_grf_info rk3128_grf __initconst = {
+	.values = rk3128_defaults,
+	.num_values = ARRAY_SIZE(rk3128_defaults),
+};
+
+#define RK3228_GRF_SOC_CON6		0x418
+
+static const struct rockchip_grf_value rk3228_defaults[] __initconst = {
+	{ "jtag switching", RK3228_GRF_SOC_CON6, HIWORD_UPDATE(0, 1, 8) },
+};
+
+static const struct rockchip_grf_info rk3228_grf __initconst = {
+	.values = rk3228_defaults,
+	.num_values = ARRAY_SIZE(rk3228_defaults),
+};
+
 #define RK3288_GRF_SOC_CON0		0x244
 
 static const struct rockchip_grf_value rk3288_defaults[] __initconst = {
@@ -91,6 +113,12 @@ static const struct of_device_id rockchip_grf_dt_match[] __initconst = {
 	{
 		.compatible = "rockchip,rk3036-grf",
 		.data = (void *)&rk3036_grf,
+	}, {
+		.compatible = "rockchip,rk3128-grf",
+		.data = (void *)&rk3128_grf,
+	}, {
+		.compatible = "rockchip,rk3228-grf",
+		.data = (void *)&rk3228_grf,
 	}, {
 		.compatible = "rockchip,rk3288-grf",
 		.data = (void *)&rk3288_grf,

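The rk3128/rk3228 entries above rely on the GRF write-mask convention: the upper 16 bits of a write select which of the lower 16 bits take effect. A minimal sketch of what HIWORD_UPDATE() is assumed to expand to in this driver:

	/* Assumed helper: write 'val' to the 'mask' bits at 'shift', with the
	 * matching write-enable mask placed in the upper half-word. */
	#define HIWORD_UPDATE(val, mask, shift) \
			((val) << (shift) | (mask) << ((shift) + 16))

	/* HIWORD_UPDATE(0, 1, 8) == BIT(24): clear bit 8, i.e. force the
	 * "jtag switching" function off while leaving other bits untouched. */
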
+ 47 - 48
drivers/soc/rockchip/pm_domains.c

@@ -67,7 +67,7 @@ struct rockchip_pm_domain {
 	struct regmap **qos_regmap;
 	u32 *qos_save_regs[MAX_QOS_REGS_NUM];
 	int num_clks;
-	struct clk *clks[];
+	struct clk_bulk_data *clks;
 };
 
 struct rockchip_pmu {
@@ -274,13 +278,18 @@ static void rockchip_do_pmu_set_power_domain(struct rockchip_pm_domain *pd,
 
 static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on)
 {
-	int i;
+	struct rockchip_pmu *pmu = pd->pmu;
+	int ret;
 
-	mutex_lock(&pd->pmu->mutex);
+	mutex_lock(&pmu->mutex);
 
 	if (rockchip_pmu_domain_is_on(pd) != power_on) {
-		for (i = 0; i < pd->num_clks; i++)
-			clk_enable(pd->clks[i]);
+		ret = clk_bulk_enable(pd->num_clks, pd->clks);
+		if (ret < 0) {
+			dev_err(pmu->dev, "failed to enable clocks\n");
+			mutex_unlock(&pmu->mutex);
+			return ret;
+		}
 
 		if (!power_on) {
 			rockchip_pmu_save_qos(pd);
@@ -298,11 +303,10 @@ static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on)
 			rockchip_pmu_restore_qos(pd);
 		}
 
-		for (i = pd->num_clks - 1; i >= 0; i--)
-			clk_disable(pd->clks[i]);
+		clk_bulk_disable(pd->num_clks, pd->clks);
 	}
 
-	mutex_unlock(&pd->pmu->mutex);
+	mutex_unlock(&pmu->mutex);
 	return 0;
 }
 
@@ -364,8 +368,6 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 	const struct rockchip_domain_info *pd_info;
 	struct rockchip_pm_domain *pd;
 	struct device_node *qos_node;
-	struct clk *clk;
-	int clk_cnt;
 	int i, j;
 	u32 id;
 	int error;
@@ -391,41 +393,41 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 		return -EINVAL;
 	}
 
-	clk_cnt = of_count_phandle_with_args(node, "clocks", "#clock-cells");
-	pd = devm_kzalloc(pmu->dev,
-			  sizeof(*pd) + clk_cnt * sizeof(pd->clks[0]),
-			  GFP_KERNEL);
+	pd = devm_kzalloc(pmu->dev, sizeof(*pd), GFP_KERNEL);
 	if (!pd)
 		return -ENOMEM;
 
 	pd->info = pd_info;
 	pd->pmu = pmu;
 
-	for (i = 0; i < clk_cnt; i++) {
-		clk = of_clk_get(node, i);
-		if (IS_ERR(clk)) {
-			error = PTR_ERR(clk);
+	pd->num_clks = of_count_phandle_with_args(node, "clocks",
+						  "#clock-cells");
+	if (pd->num_clks > 0) {
+		pd->clks = devm_kcalloc(pmu->dev, pd->num_clks,
+					sizeof(*pd->clks), GFP_KERNEL);
+		if (!pd->clks)
+			return -ENOMEM;
+	} else {
+		dev_dbg(pmu->dev, "%s: doesn't have clocks: %d\n",
+			node->name, pd->num_clks);
+		pd->num_clks = 0;
+	}
+
+	for (i = 0; i < pd->num_clks; i++) {
+		pd->clks[i].clk = of_clk_get(node, i);
+		if (IS_ERR(pd->clks[i].clk)) {
+			error = PTR_ERR(pd->clks[i].clk);
 			dev_err(pmu->dev,
 				"%s: failed to get clk at index %d: %d\n",
 				node->name, i, error);
-			goto err_out;
-		}
-
-		error = clk_prepare(clk);
-		if (error) {
-			dev_err(pmu->dev,
-				"%s: failed to prepare clk %pC (index %d): %d\n",
-				node->name, clk, i, error);
-			clk_put(clk);
-			goto err_out;
+			return error;
 		}
-
-		pd->clks[pd->num_clks++] = clk;
-
-		dev_dbg(pmu->dev, "added clock '%pC' to domain '%s'\n",
-			clk, node->name);
 	}
 
+	error = clk_bulk_prepare(pd->num_clks, pd->clks);
+	if (error)
+		goto err_put_clocks;
+
 	pd->num_qos = of_count_phandle_with_args(node, "pm_qos",
 						 NULL);
 
@@ -435,7 +437,7 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 					      GFP_KERNEL);
 		if (!pd->qos_regmap) {
 			error = -ENOMEM;
-			goto err_out;
+			goto err_unprepare_clocks;
 		}
 
 		for (j = 0; j < MAX_QOS_REGS_NUM; j++) {
@@ -445,7 +447,7 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 							    GFP_KERNEL);
 			if (!pd->qos_save_regs[j]) {
 				error = -ENOMEM;
-				goto err_out;
+				goto err_unprepare_clocks;
 			}
 		}
 
@@ -453,13 +455,13 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 			qos_node = of_parse_phandle(node, "pm_qos", j);
 			if (!qos_node) {
 				error = -ENODEV;
-				goto err_out;
+				goto err_unprepare_clocks;
 			}
 			pd->qos_regmap[j] = syscon_node_to_regmap(qos_node);
 			if (IS_ERR(pd->qos_regmap[j])) {
 				error = -ENODEV;
 				of_node_put(qos_node);
-				goto err_out;
+				goto err_unprepare_clocks;
 			}
 			of_node_put(qos_node);
 		}
@@ -470,7 +472,7 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 		dev_err(pmu->dev,
 			"failed to power on domain '%s': %d\n",
 			node->name, error);
-		goto err_out;
+		goto err_unprepare_clocks;
 	}
 
 	pd->genpd.name = node->name;
@@ -486,17 +488,16 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 	pmu->genpd_data.domains[id] = &pd->genpd;
 	return 0;
 
-err_out:
-	while (--i >= 0) {
-		clk_unprepare(pd->clks[i]);
-		clk_put(pd->clks[i]);
-	}
+err_unprepare_clocks:
+	clk_bulk_unprepare(pd->num_clks, pd->clks);
+err_put_clocks:
+	clk_bulk_put(pd->num_clks, pd->clks);
 	return error;
 }
 
 static void rockchip_pm_remove_one_domain(struct rockchip_pm_domain *pd)
 {
-	int i, ret;
+	int ret;
 
 	/*
 	 * We're in the error cleanup already, so we only complain,
@@ -507,10 +508,8 @@ static void rockchip_pm_remove_one_domain(struct rockchip_pm_domain *pd)
 		dev_err(pd->pmu->dev, "failed to remove domain '%s' : %d - state may be inconsistent\n",
 			pd->genpd.name, ret);
 
-	for (i = 0; i < pd->num_clks; i++) {
-		clk_unprepare(pd->clks[i]);
-		clk_put(pd->clks[i]);
-	}
+	clk_bulk_unprepare(pd->num_clks, pd->clks);
+	clk_bulk_put(pd->num_clks, pd->clks);
 
 	/* protect the zeroing of pm->num_clks */
 	mutex_lock(&pd->pmu->mutex);

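The clk_bulk_* conversion above follows the usual get/prepare/enable pairing of that API; a minimal, self-contained sketch of the pattern (hypothetical consumer code and clock names, not part of this driver):

	#include <linux/clk.h>

	static struct clk_bulk_data foo_clks[] = {
		{ .id = "aclk" },	/* hypothetical clock names */
		{ .id = "pclk" },
	};

	static int foo_enable_clocks(struct device *dev)
	{
		int ret;

		/* Look up all clocks in one call instead of per-index of_clk_get(). */
		ret = clk_bulk_get(dev, ARRAY_SIZE(foo_clks), foo_clks);
		if (ret)
			return ret;

		ret = clk_bulk_prepare(ARRAY_SIZE(foo_clks), foo_clks);
		if (ret)
			goto put;

		ret = clk_bulk_enable(ARRAY_SIZE(foo_clks), foo_clks);
		if (ret)
			goto unprepare;

		return 0;

	unprepare:
		clk_bulk_unprepare(ARRAY_SIZE(foo_clks), foo_clks);
	put:
		clk_bulk_put(ARRAY_SIZE(foo_clks), foo_clks);
		return ret;
	}
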
+ 7 - 0
drivers/soc/samsung/exynos-pmu.c

@@ -84,11 +84,15 @@ static const struct of_device_id exynos_pmu_of_device_ids[] = {
 	}, {
 		.compatible = "samsung,exynos5250-pmu",
 		.data = exynos_pmu_data_arm_ptr(exynos5250_pmu_data),
+	}, {
+		.compatible = "samsung,exynos5410-pmu",
 	}, {
 		.compatible = "samsung,exynos5420-pmu",
 		.data = exynos_pmu_data_arm_ptr(exynos5420_pmu_data),
 	}, {
 		.compatible = "samsung,exynos5433-pmu",
+	}, {
+		.compatible = "samsung,exynos7-pmu",
 	},
 	{ /*sentinel*/ },
 };
@@ -126,6 +130,9 @@ static int exynos_pmu_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, pmu_context);
 
+	if (devm_of_platform_populate(dev))
+		dev_err(dev, "Error populating children, reboot and poweroff might not work properly\n");
+
 	dev_dbg(dev, "Exynos PMU Driver probe done\n");
 	return 0;
 }

+ 10 - 0
drivers/soc/tegra/Kconfig

@@ -104,6 +104,16 @@ config ARCH_TEGRA_186_SOC
 	  multi-format support, ISP for image capture processing and BPMP for
 	  power management.
 
+config ARCH_TEGRA_194_SOC
+	bool "NVIDIA Tegra194 SoC"
+	select MAILBOX
+	select TEGRA_BPMP
+	select TEGRA_HSP_MBOX
+	select TEGRA_IVC
+	select SOC_TEGRA_PMC
+	help
+	  Enable support for the NVIDIA Tegra194 SoC.
+
 endif
 endif
 

+ 28 - 70
drivers/soc/tegra/pmc.c

@@ -127,8 +127,7 @@ struct tegra_powergate {
 	unsigned int id;
 	struct clk **clks;
 	unsigned int num_clks;
-	struct reset_control **resets;
-	unsigned int num_resets;
+	struct reset_control *reset;
 };
 
 struct tegra_io_pad_soc {
@@ -153,6 +152,7 @@ struct tegra_pmc_soc {
 
 	bool has_tsense_reset;
 	bool has_gpu_clamps;
+	bool needs_mbist_war;
 
 	const struct tegra_io_pad_soc *io_pads;
 	unsigned int num_io_pads;
@@ -368,31 +368,8 @@ out:
 	return err;
 }
 
-static int tegra_powergate_reset_assert(struct tegra_powergate *pg)
+int __weak tegra210_clk_handle_mbist_war(unsigned int id)
 {
-	unsigned int i;
-	int err;
-
-	for (i = 0; i < pg->num_resets; i++) {
-		err = reset_control_assert(pg->resets[i]);
-		if (err)
-			return err;
-	}
-
-	return 0;
-}
-
-static int tegra_powergate_reset_deassert(struct tegra_powergate *pg)
-{
-	unsigned int i;
-	int err;
-
-	for (i = 0; i < pg->num_resets; i++) {
-		err = reset_control_deassert(pg->resets[i]);
-		if (err)
-			return err;
-	}
-
 	return 0;
 }
 
@@ -401,7 +378,7 @@ static int tegra_powergate_power_up(struct tegra_powergate *pg,
 {
 	int err;
 
-	err = tegra_powergate_reset_assert(pg);
+	err = reset_control_assert(pg->reset);
 	if (err)
 		return err;
 
@@ -425,12 +402,17 @@ static int tegra_powergate_power_up(struct tegra_powergate *pg,
 
 	usleep_range(10, 20);
 
-	err = tegra_powergate_reset_deassert(pg);
+	err = reset_control_deassert(pg->reset);
 	if (err)
 		goto powergate_off;
 
 	usleep_range(10, 20);
 
+	if (pg->pmc->soc->needs_mbist_war)
+		err = tegra210_clk_handle_mbist_war(pg->id);
+	if (err)
+		goto disable_clks;
+
 	if (disable_clocks)
 		tegra_powergate_disable_clocks(pg);
 
@@ -456,7 +438,7 @@ static int tegra_powergate_power_down(struct tegra_powergate *pg)
 
 	usleep_range(10, 20);
 
-	err = tegra_powergate_reset_assert(pg);
+	err = reset_control_assert(pg->reset);
 	if (err)
 		goto disable_clks;
 
@@ -475,7 +457,7 @@ static int tegra_powergate_power_down(struct tegra_powergate *pg)
 assert_resets:
 	tegra_powergate_enable_clocks(pg);
 	usleep_range(10, 20);
-	tegra_powergate_reset_deassert(pg);
+	reset_control_deassert(pg->reset);
 	usleep_range(10, 20);
 
 disable_clks:
@@ -586,8 +568,8 @@ int tegra_powergate_sequence_power_up(unsigned int id, struct clk *clk,
 	pg.id = id;
 	pg.clks = &clk;
 	pg.num_clks = 1;
-	pg.resets = &rst;
-	pg.num_resets = 1;
+	pg.reset = rst;
+	pg.pmc = pmc;
 
 	err = tegra_powergate_power_up(&pg, false);
 	if (err)
@@ -775,45 +757,22 @@ err:
 static int tegra_powergate_of_get_resets(struct tegra_powergate *pg,
 					 struct device_node *np, bool off)
 {
-	struct reset_control *rst;
-	unsigned int i, count;
 	int err;
 
-	count = of_count_phandle_with_args(np, "resets", "#reset-cells");
-	if (count == 0)
-		return -ENODEV;
-
-	pg->resets = kcalloc(count, sizeof(rst), GFP_KERNEL);
-	if (!pg->resets)
-		return -ENOMEM;
-
-	for (i = 0; i < count; i++) {
-		pg->resets[i] = of_reset_control_get_by_index(np, i);
-		if (IS_ERR(pg->resets[i])) {
-			err = PTR_ERR(pg->resets[i]);
-			goto error;
-		}
-
-		if (off)
-			err = reset_control_assert(pg->resets[i]);
-		else
-			err = reset_control_deassert(pg->resets[i]);
-
-		if (err) {
-			reset_control_put(pg->resets[i]);
-			goto error;
-		}
+	pg->reset = of_reset_control_array_get_exclusive(np);
+	if (IS_ERR(pg->reset)) {
+		err = PTR_ERR(pg->reset);
+		pr_err("failed to get device resets: %d\n", err);
+		return err;
 	}
 
-	pg->num_resets = count;
-
-	return 0;
-
-error:
-	while (i--)
-		reset_control_put(pg->resets[i]);
+	if (off)
+		err = reset_control_assert(pg->reset);
+	else
+		err = reset_control_deassert(pg->reset);
 
-	kfree(pg->resets);
+	if (err)
+		reset_control_put(pg->reset);
 
 	return err;
 }
@@ -905,10 +864,7 @@ remove_genpd:
 	pm_genpd_remove(&pg->genpd);
 
 remove_resets:
-	while (pg->num_resets--)
-		reset_control_put(pg->resets[pg->num_resets]);
-
-	kfree(pg->resets);
+	reset_control_put(pg->reset);
 
 remove_clks:
 	while (pg->num_clks--)
@@ -1815,6 +1771,7 @@ static const struct tegra_pmc_soc tegra210_pmc_soc = {
 	.cpu_powergates = tegra210_cpu_powergates,
 	.has_tsense_reset = true,
 	.has_gpu_clamps = true,
+	.needs_mbist_war = true,
 	.num_io_pads = ARRAY_SIZE(tegra210_io_pads),
 	.io_pads = tegra210_io_pads,
 	.regs = &tegra20_pmc_regs,
@@ -1920,6 +1877,7 @@ static const struct tegra_pmc_soc tegra186_pmc_soc = {
 };
 
 static const struct of_device_id tegra_pmc_match[] = {
+	{ .compatible = "nvidia,tegra194-pmc", .data = &tegra186_pmc_soc },
 	{ .compatible = "nvidia,tegra186-pmc", .data = &tegra186_pmc_soc },
 	{ .compatible = "nvidia,tegra210-pmc", .data = &tegra210_pmc_soc },
 	{ .compatible = "nvidia,tegra132-pmc", .data = &tegra124_pmc_soc },

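The __weak definition of tegra210_clk_handle_mbist_war() above is the usual weak-symbol fallback: the PMC driver links and runs even when the Tegra210 clock code that provides the real implementation is not built in. A minimal sketch of the pattern (hypothetical names):

	/* Default no-op; a strong definition elsewhere, when present,
	 * overrides this one at link time. */
	int __weak foo_handle_quirk(unsigned int id)
	{
		return 0;
	}
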
+ 23 - 0
drivers/tee/optee/core.c

@@ -356,6 +356,27 @@ static bool optee_msg_api_uid_is_optee_api(optee_invoke_fn *invoke_fn)
 	return false;
 }
 
+static void optee_msg_get_os_revision(optee_invoke_fn *invoke_fn)
+{
+	union {
+		struct arm_smccc_res smccc;
+		struct optee_smc_call_get_os_revision_result result;
+	} res = {
+		.result = {
+			.build_id = 0
+		}
+	};
+
+	invoke_fn(OPTEE_SMC_CALL_GET_OS_REVISION, 0, 0, 0, 0, 0, 0, 0,
+		  &res.smccc);
+
+	if (res.result.build_id)
+		pr_info("revision %lu.%lu (%08lx)", res.result.major,
+			res.result.minor, res.result.build_id);
+	else
+		pr_info("revision %lu.%lu", res.result.major, res.result.minor);
+}
+
 static bool optee_msg_api_revision_is_compatible(optee_invoke_fn *invoke_fn)
 {
 	union {
@@ -547,6 +568,8 @@ static struct optee *optee_probe(struct device_node *np)
 		return ERR_PTR(-EINVAL);
 	}
 
+	optee_msg_get_os_revision(invoke_fn);
+
 	if (!optee_msg_api_revision_is_compatible(invoke_fn)) {
 		pr_warn("api revision mismatch\n");
 		return ERR_PTR(-EINVAL);

+ 9 - 1
drivers/tee/optee/optee_smc.h

@@ -112,12 +112,20 @@ struct optee_smc_calls_revision_result {
  * Trusted OS, not of the API.
  *
  * Returns revision in a0-1 in the same way as OPTEE_SMC_CALLS_REVISION
- * described above.
+ * described above. May optionally return a 32-bit build identifier in a2,
+ * with zero meaning unspecified.
  */
 #define OPTEE_SMC_FUNCID_GET_OS_REVISION OPTEE_MSG_FUNCID_GET_OS_REVISION
 #define OPTEE_SMC_CALL_GET_OS_REVISION \
 	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_OS_REVISION)
 
+struct optee_smc_call_get_os_revision_result {
+	unsigned long major;
+	unsigned long minor;
+	unsigned long build_id;
+	unsigned long reserved1;
+};
+
 /*
  * Call with struct optee_msg_arg as argument
  *

+ 9 - 5
drivers/tee/tee_core.c

@@ -693,7 +693,7 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc,
 {
 	struct tee_device *teedev;
 	void *ret;
-	int rc;
+	int rc, max_id;
 	int offs = 0;
 
 	if (!teedesc || !teedesc->name || !teedesc->ops ||
@@ -707,16 +707,20 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc,
 		goto err;
 	}
 
-	if (teedesc->flags & TEE_DESC_PRIVILEGED)
+	max_id = TEE_NUM_DEVICES / 2;
+
+	if (teedesc->flags & TEE_DESC_PRIVILEGED) {
 		offs = TEE_NUM_DEVICES / 2;
+		max_id = TEE_NUM_DEVICES;
+	}
 
 	spin_lock(&driver_lock);
-	teedev->id = find_next_zero_bit(dev_mask, TEE_NUM_DEVICES, offs);
-	if (teedev->id < TEE_NUM_DEVICES)
+	teedev->id = find_next_zero_bit(dev_mask, max_id, offs);
+	if (teedev->id < max_id)
 		set_bit(teedev->id, dev_mask);
 	spin_unlock(&driver_lock);
 
-	if (teedev->id >= max_id) {
+	if (teedev->id >= max_id) {
 		ret = ERR_PTR(-ENOMEM);
 		goto err;
 	}

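With max_id in place, the device ID space is partitioned cleanly; previously a client device could be handed an ID from the privileged range once the lower half filled up. A sketch of the resulting split (assuming the usual TEE_NUM_DEVICES of 32):

	/*
	 * IDs [0, TEE_NUM_DEVICES/2):               client devices
	 * IDs [TEE_NUM_DEVICES/2, TEE_NUM_DEVICES): privileged devices
	 */
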
+ 3 - 0
include/dt-bindings/power/mt2712-power.h

@@ -22,5 +22,8 @@
 #define MT2712_POWER_DOMAIN_USB		5
 #define MT2712_POWER_DOMAIN_USB2	6
 #define MT2712_POWER_DOMAIN_MFG		7
+#define MT2712_POWER_DOMAIN_MFG_SC1	8
+#define MT2712_POWER_DOMAIN_MFG_SC2	9
+#define MT2712_POWER_DOMAIN_MFG_SC3	10
 
 #endif /* _DT_BINDINGS_POWER_MT2712_POWER_H */

+ 10 - 0
include/dt-bindings/power/mt7623a-power.h

@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _DT_BINDINGS_POWER_MT7623A_POWER_H
+#define _DT_BINDINGS_POWER_MT7623A_POWER_H
+
+#define MT7623A_POWER_DOMAIN_CONN	0
+#define MT7623A_POWER_DOMAIN_ETH	1
+#define MT7623A_POWER_DOMAIN_HIF	2
+#define MT7623A_POWER_DOMAIN_IFR_MSC	3
+
+#endif /* _DT_BINDINGS_POWER_MT7623A_POWER_H */

+ 108 - 0
include/dt-bindings/reset/stm32mp1-resets.h

@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 or BSD-3-Clause */
+/*
+ * Copyright (C) STMicroelectronics 2018 - All Rights Reserved
+ * Author: Gabriel Fernandez <gabriel.fernandez@st.com> for STMicroelectronics.
+ */
+
+#ifndef _DT_BINDINGS_STM32MP1_RESET_H_
+#define _DT_BINDINGS_STM32MP1_RESET_H_
+
+#define LTDC_R		3072
+#define DSI_R		3076
+#define DDRPERFM_R	3080
+#define USBPHY_R	3088
+#define SPI6_R		3136
+#define I2C4_R		3138
+#define I2C6_R		3139
+#define USART1_R	3140
+#define STGEN_R		3156
+#define GPIOZ_R		3200
+#define CRYP1_R		3204
+#define HASH1_R		3205
+#define RNG1_R		3206
+#define AXIM_R		3216
+#define GPU_R		3269
+#define ETHMAC_R	3274
+#define FMC_R		3276
+#define QSPI_R		3278
+#define SDMMC1_R	3280
+#define SDMMC2_R	3281
+#define CRC1_R		3284
+#define USBH_R		3288
+#define MDMA_R		3328
+#define MCU_R		8225
+#define TIM2_R		19456
+#define TIM3_R		19457
+#define TIM4_R		19458
+#define TIM5_R		19459
+#define TIM6_R		19460
+#define TIM7_R		19461
+#define TIM12_R		16462
+#define TIM13_R		16463
+#define TIM14_R		16464
+#define LPTIM1_R	19465
+#define SPI2_R		19467
+#define SPI3_R		19468
+#define USART2_R	19470
+#define USART3_R	19471
+#define UART4_R		19472
+#define UART5_R		19473
+#define UART7_R		19474
+#define UART8_R		19475
+#define I2C1_R		19477
+#define I2C2_R		19478
+#define I2C3_R		19479
+#define I2C5_R		19480
+#define SPDIF_R		19482
+#define CEC_R		19483
+#define DAC12_R		19485
+#define MDIO_R		19847
+#define TIM1_R		19520
+#define TIM8_R		19521
+#define TIM15_R		19522
+#define TIM16_R		19523
+#define TIM17_R		19524
+#define SPI1_R		19528
+#define SPI4_R		19529
+#define SPI5_R		19530
+#define USART6_R	19533
+#define SAI1_R		19536
+#define SAI2_R		19537
+#define SAI3_R		19538
+#define DFSDM_R		19540
+#define FDCAN_R		19544
+#define LPTIM2_R	19584
+#define LPTIM3_R	19585
+#define LPTIM4_R	19586
+#define LPTIM5_R	19587
+#define SAI4_R		19592
+#define SYSCFG_R	19595
+#define VREF_R		19597
+#define TMPSENS_R	19600
+#define PMBCTRL_R	19601
+#define DMA1_R		19648
+#define DMA2_R		19649
+#define DMAMUX_R	19650
+#define ADC12_R		19653
+#define USBO_R		19656
+#define SDMMC3_R	19664
+#define CAMITF_R	19712
+#define CRYP2_R		19716
+#define HASH2_R		19717
+#define RNG2_R		19718
+#define CRC2_R		19719
+#define HSEM_R		19723
+#define MBOX_R		19724
+#define GPIOA_R		19776
+#define GPIOB_R		19777
+#define GPIOC_R		19778
+#define GPIOD_R		19779
+#define GPIOE_R		19780
+#define GPIOF_R		19781
+#define GPIOG_R		19782
+#define GPIOH_R		19783
+#define GPIOI_R		19784
+#define GPIOJ_R		19785
+#define GPIOK_R		19786
+
+#endif /* _DT_BINDINGS_STM32MP1_RESET_H_ */

+ 1 - 0
include/linux/hwmon.h

@@ -29,6 +29,7 @@ enum hwmon_sensor_types {
 	hwmon_humidity,
 	hwmon_fan,
 	hwmon_pwm,
+	hwmon_max,
 };
 
 enum hwmon_chip_attributes {

+ 30 - 0
include/linux/reset-controller.h

@@ -26,6 +26,31 @@ struct module;
 struct device_node;
 struct of_phandle_args;
 
+/**
+ * struct reset_control_lookup - represents a single lookup entry
+ *
+ * @list: internal list of all reset lookup entries
+ * @provider: name of the reset controller device controlling this reset line
+ * @index: ID of the reset controller in the reset controller device
+ * @dev_id: name of the device associated with this reset line
+ * @con_id: name of the reset line (can be NULL)
+ */
+struct reset_control_lookup {
+	struct list_head list;
+	const char *provider;
+	unsigned int index;
+	const char *dev_id;
+	const char *con_id;
+};
+
+#define RESET_LOOKUP(_provider, _index, _dev_id, _con_id)		\
+	{								\
+		.provider = _provider,					\
+		.index = _index,					\
+		.dev_id = _dev_id,					\
+		.con_id = _con_id,					\
+	}
+
 /**
  * struct reset_controller_dev - reset controller entity that might
  *                               provide multiple reset controls
@@ -33,6 +58,7 @@ struct of_phandle_args;
  * @owner: kernel module of the reset controller driver
  * @list: internal list of reset controller devices
  * @reset_control_head: head of internal list of requested reset controls
+ * @dev: corresponding driver model device struct
  * @of_node: corresponding device tree node as phandle target
  * @of_reset_n_cells: number of cells in reset line specifiers
  * @of_xlate: translation function to translate from specifier as found in the
@@ -44,6 +70,7 @@ struct reset_controller_dev {
 	struct module *owner;
 	struct list_head list;
 	struct list_head reset_control_head;
+	struct device *dev;
 	struct device_node *of_node;
 	int of_reset_n_cells;
 	int (*of_xlate)(struct reset_controller_dev *rcdev,
@@ -58,4 +85,7 @@ struct device;
 int devm_reset_controller_register(struct device *dev,
 				   struct reset_controller_dev *rcdev);
 
+void reset_controller_add_lookup(struct reset_control_lookup *lookup,
+				 unsigned int num_entries);
+
 #endif

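A minimal sketch of how a non-DT board file might use the lookup API declared above (provider and device names are hypothetical):

	static struct reset_control_lookup foo_reset_lookup[] = {
		/* provider device name, reset index, consumer dev_id, con_id */
		RESET_LOOKUP("foo-reset-controller.0", 3, "foo-device.0", "phy"),
	};

	static void __init foo_board_init(void)
	{
		reset_controller_add_lookup(foo_reset_lookup,
					    ARRAY_SIZE(foo_reset_lookup));
	}
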
+ 277 - 0
include/linux/scmi_protocol.h

@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * SCMI Message Protocol driver header
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ */
+#include <linux/device.h>
+#include <linux/types.h>
+
+#define SCMI_MAX_STR_SIZE	16
+#define SCMI_MAX_NUM_RATES	16
+
+/**
+ * struct scmi_revision_info - version information structure
+ *
+ * @major_ver: Major ABI version. Change here implies risk of backward
+ *	compatibility break.
+ * @minor_ver: Minor ABI version. Change here implies new feature addition,
+ *	or compatible change in ABI.
+ * @num_protocols: Number of protocols that are implemented, excluding the
+ *	base protocol.
+ * @num_agents: Number of agents in the system.
+ * @impl_ver: A vendor-specific implementation version.
+ * @vendor_id: A vendor identifier (null-terminated ASCII string)
+ * @sub_vendor_id: A sub-vendor identifier (null-terminated ASCII string)
+ */
+struct scmi_revision_info {
+	u16 major_ver;
+	u16 minor_ver;
+	u8 num_protocols;
+	u8 num_agents;
+	u32 impl_ver;
+	char vendor_id[SCMI_MAX_STR_SIZE];
+	char sub_vendor_id[SCMI_MAX_STR_SIZE];
+};
+
+struct scmi_clock_info {
+	char name[SCMI_MAX_STR_SIZE];
+	bool rate_discrete;
+	union {
+		struct {
+			int num_rates;
+			u64 rates[SCMI_MAX_NUM_RATES];
+		} list;
+		struct {
+			u64 min_rate;
+			u64 max_rate;
+			u64 step_size;
+		} range;
+	};
+};
+
+struct scmi_handle;
+
+/**
+ * struct scmi_clk_ops - represents the various operations provided
+ *	by SCMI Clock Protocol
+ *
+ * @count_get: get the count of clocks provided by SCMI
+ * @info_get: get the information of the specified clock
+ * @rate_get: request the current clock rate of a clock
+ * @rate_set: set the clock rate of a clock
+ * @enable: enables the specified clock
+ * @disable: disables the specified clock
+ */
+struct scmi_clk_ops {
+	int (*count_get)(const struct scmi_handle *handle);
+
+	const struct scmi_clock_info *(*info_get)
+		(const struct scmi_handle *handle, u32 clk_id);
+	int (*rate_get)(const struct scmi_handle *handle, u32 clk_id,
+			u64 *rate);
+	int (*rate_set)(const struct scmi_handle *handle, u32 clk_id,
+			u32 config, u64 rate);
+	int (*enable)(const struct scmi_handle *handle, u32 clk_id);
+	int (*disable)(const struct scmi_handle *handle, u32 clk_id);
+};
+
+/**
+ * struct scmi_perf_ops - represents the various operations provided
+ *	by SCMI Performance Protocol
+ *
+ * @limits_set: sets limits on the performance level of a domain
+ * @limits_get: gets limits on the performance level of a domain
+ * @level_set: sets the performance level of a domain
+ * @level_get: gets the performance level of a domain
+ * @device_domain_id: gets the scmi domain id for a given device
+ * @get_transition_latency: gets the DVFS transition latency for a given device
+ * @add_opps_to_device: adds all the OPPs for a given device
+ * @freq_set: sets the frequency for a given device using sustained frequency
+ *	to sustained performance level mapping
+ * @freq_get: gets the frequency for a given device using sustained frequency
+ *	to sustained performance level mapping
+ */
+struct scmi_perf_ops {
+	int (*limits_set)(const struct scmi_handle *handle, u32 domain,
+			  u32 max_perf, u32 min_perf);
+	int (*limits_get)(const struct scmi_handle *handle, u32 domain,
+			  u32 *max_perf, u32 *min_perf);
+	int (*level_set)(const struct scmi_handle *handle, u32 domain,
+			 u32 level, bool poll);
+	int (*level_get)(const struct scmi_handle *handle, u32 domain,
+			 u32 *level, bool poll);
+	int (*device_domain_id)(struct device *dev);
+	int (*get_transition_latency)(const struct scmi_handle *handle,
+				      struct device *dev);
+	int (*add_opps_to_device)(const struct scmi_handle *handle,
+				  struct device *dev);
+	int (*freq_set)(const struct scmi_handle *handle, u32 domain,
+			unsigned long rate, bool poll);
+	int (*freq_get)(const struct scmi_handle *handle, u32 domain,
+			unsigned long *rate, bool poll);
+};
+
+/**
+ * struct scmi_power_ops - represents the various operations provided
+ *	by SCMI Power Protocol
+ *
+ * @num_domains_get: get the count of power domains provided by SCMI
+ * @name_get: gets the name of a power domain
+ * @state_set: sets the power state of a power domain
+ * @state_get: gets the power state of a power domain
+ */
+struct scmi_power_ops {
+	int (*num_domains_get)(const struct scmi_handle *handle);
+	char *(*name_get)(const struct scmi_handle *handle, u32 domain);
+#define SCMI_POWER_STATE_TYPE_SHIFT	30
+#define SCMI_POWER_STATE_ID_MASK	(BIT(28) - 1)
+#define SCMI_POWER_STATE_PARAM(type, id) \
+	((((type) & BIT(0)) << SCMI_POWER_STATE_TYPE_SHIFT) | \
+		((id) & SCMI_POWER_STATE_ID_MASK))
+#define SCMI_POWER_STATE_GENERIC_ON	SCMI_POWER_STATE_PARAM(0, 0)
+#define SCMI_POWER_STATE_GENERIC_OFF	SCMI_POWER_STATE_PARAM(1, 0)
+	int (*state_set)(const struct scmi_handle *handle, u32 domain,
+			 u32 state);
+	int (*state_get)(const struct scmi_handle *handle, u32 domain,
+			 u32 *state);
+};
+
+struct scmi_sensor_info {
+	u32 id;
+	u8 type;
+	char name[SCMI_MAX_STR_SIZE];
+};
+
+/*
+ * Partial list from Distributed Management Task Force (DMTF) specification:
+ * DSP0249 (Platform Level Data Model specification)
+ */
+enum scmi_sensor_class {
+	NONE = 0x0,
+	TEMPERATURE_C = 0x2,
+	VOLTAGE = 0x5,
+	CURRENT = 0x6,
+	POWER = 0x7,
+	ENERGY = 0x8,
+};
+
+/**
+ * struct scmi_sensor_ops - represents the various operations provided
+ *	by SCMI Sensor Protocol
+ *
+ * @count_get: get the count of sensors provided by SCMI
+ * @info_get: get the information of the specified sensor
+ * @configuration_set: control notifications on cross-over events for
+ *	the trip-points
+ * @trip_point_set: selects and configures a trip-point of interest
+ * @reading_get: gets the current value of the sensor
+ */
+struct scmi_sensor_ops {
+	int (*count_get)(const struct scmi_handle *handle);
+
+	const struct scmi_sensor_info *(*info_get)
+		(const struct scmi_handle *handle, u32 sensor_id);
+	int (*configuration_set)(const struct scmi_handle *handle,
+				 u32 sensor_id);
+	int (*trip_point_set)(const struct scmi_handle *handle, u32 sensor_id,
+			      u8 trip_id, u64 trip_value);
+	int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id,
+			   bool async, u64 *value);
+};
+
+/**
+ * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
+ *
+ * @dev: pointer to the SCMI device
+ * @version: pointer to the structure containing SCMI version information
+ * @power_ops: pointer to set of power protocol operations
+ * @perf_ops: pointer to set of performance protocol operations
+ * @clk_ops: pointer to set of clock protocol operations
+ * @sensor_ops: pointer to set of sensor protocol operations
+ */
+struct scmi_handle {
+	struct device *dev;
+	struct scmi_revision_info *version;
+	struct scmi_perf_ops *perf_ops;
+	struct scmi_clk_ops *clk_ops;
+	struct scmi_power_ops *power_ops;
+	struct scmi_sensor_ops *sensor_ops;
+	/* for protocol internal use */
+	void *perf_priv;
+	void *clk_priv;
+	void *power_priv;
+	void *sensor_priv;
+};
+
+enum scmi_std_protocol {
+	SCMI_PROTOCOL_BASE = 0x10,
+	SCMI_PROTOCOL_POWER = 0x11,
+	SCMI_PROTOCOL_SYSTEM = 0x12,
+	SCMI_PROTOCOL_PERF = 0x13,
+	SCMI_PROTOCOL_CLOCK = 0x14,
+	SCMI_PROTOCOL_SENSOR = 0x15,
+};
+
+struct scmi_device {
+	u32 id;
+	u8 protocol_id;
+	struct device dev;
+	struct scmi_handle *handle;
+};
+
+#define to_scmi_dev(d) container_of(d, struct scmi_device, dev)
+
+struct scmi_device *
+scmi_device_create(struct device_node *np, struct device *parent, int protocol);
+void scmi_device_destroy(struct scmi_device *scmi_dev);
+
+struct scmi_device_id {
+	u8 protocol_id;
+};
+
+struct scmi_driver {
+	const char *name;
+	int (*probe)(struct scmi_device *sdev);
+	void (*remove)(struct scmi_device *sdev);
+	const struct scmi_device_id *id_table;
+
+	struct device_driver driver;
+};
+
+#define to_scmi_driver(d) container_of(d, struct scmi_driver, driver)
+
+#ifdef CONFIG_ARM_SCMI_PROTOCOL
+int scmi_driver_register(struct scmi_driver *driver,
+			 struct module *owner, const char *mod_name);
+void scmi_driver_unregister(struct scmi_driver *driver);
+#else
+static inline int
+scmi_driver_register(struct scmi_driver *driver, struct module *owner,
+		     const char *mod_name)
+{
+	return -EINVAL;
+}
+
+static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
+#endif /* CONFIG_ARM_SCMI_PROTOCOL */
+
+#define scmi_register(driver) \
+	scmi_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
+#define scmi_unregister(driver) \
+	scmi_driver_unregister(driver)
+
+/**
+ * module_scmi_driver() - Helper macro for registering a scmi driver
+ * @__scmi_driver: scmi_driver structure
+ *
+ * Helper macro for scmi drivers to set up proper module init / exit
+ * functions.  Replaces module_init() and module_exit() and keeps people from
+ * printing pointless things to the kernel log when their driver is loaded.
+ */
+#define module_scmi_driver(__scmi_driver)	\
+	module_driver(__scmi_driver, scmi_register, scmi_unregister)
+
+typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
+int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
+void scmi_protocol_unregister(int protocol_id);

+ 4 - 0
include/linux/soc/mediatek/infracfg.h

@@ -21,6 +21,10 @@
 #define MT8173_TOP_AXI_PROT_EN_MFG_M1		BIT(22)
 #define MT8173_TOP_AXI_PROT_EN_MFG_SNOOP_OUT	BIT(23)
 
+#define MT2701_TOP_AXI_PROT_EN_MM_M0		BIT(1)
+#define MT2701_TOP_AXI_PROT_EN_CONN_M		BIT(2)
+#define MT2701_TOP_AXI_PROT_EN_CONN_S		BIT(8)
+
 #define MT7622_TOP_AXI_PROT_EN_ETHSYS		(BIT(3) | BIT(17))
 #define MT7622_TOP_AXI_PROT_EN_HIF0		(BIT(24) | BIT(25))
 #define MT7622_TOP_AXI_PROT_EN_HIF1		(BIT(26) | BIT(27) | \

+ 1 - 4
include/linux/soc/samsung/exynos-pmu.h

@@ -1,12 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright (c) 2014 Samsung Electronics Co., Ltd.
  *		http://www.samsung.com
  *
  * Header for EXYNOS PMU Driver support
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
 */
 
 #ifndef __LINUX_SOC_EXYNOS_PMU_H

+ 1 - 5
include/linux/soc/samsung/exynos-regs-pmu.h

@@ -1,14 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright (c) 2010-2015 Samsung Electronics Co., Ltd.
  *		http://www.samsung.com
  *
  * EXYNOS - Power management unit definition
  *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- *
  * Notice:
  * This is not a list of all Exynos Power Management Unit SFRs.
  * There are too many of them, not mentioning subtle differences

+ 2 - 2
include/soc/tegra/bpmp.h

@@ -75,8 +75,8 @@ struct tegra_bpmp {
 		struct mbox_chan *channel;
 	} mbox;
 
-	struct tegra_bpmp_channel *channels;
-	unsigned int num_channels;
+	spinlock_t atomic_tx_lock;
+	struct tegra_bpmp_channel *tx_channel, *rx_channel, *threaded_channels;
 
 	struct {
 		unsigned long *allocated;