
Merge tag 'char-misc-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc updates from Greg KH:
 "Here is the "big" char/misc driver patchset for 4.13-rc1.

  Lots of stuff in here, a large thunderbolt update, w1 driver header
  reorg, the new mux driver subsystem, google firmware driver updates,
  and a raft of other smaller things. Full details in the shortlog.

  All of these have been in linux-next for a while with the only
  reported issue being a merge problem with this tree and the jc-docs
  tree in the w1 documentation area"

* tag 'char-misc-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (147 commits)
  misc: apds990x: Use sysfs_match_string() helper
  mei: drop unreachable code in mei_start
  mei: validate the message header only in first fragment.
  DocBook: w1: Update W1 file locations and names in DocBook
  mux: adg792a: always require I2C support
  nvmem: rockchip-efuse: add support for rk322x-efuse
  nvmem: core: add locking to nvmem_find_cell
  nvmem: core: Call put_device() in nvmem_unregister()
  nvmem: core: fix leaks on registration errors
  nvmem: correct Broadcom OTP controller driver writes
  w1: Add subsystem kernel public interface
  drivers/fsi: Add module license to core driver
  drivers/fsi: Use asynchronous slave mode
  drivers/fsi: Add hub master support
  drivers/fsi: Add SCOM FSI client device driver
  drivers/fsi/gpio: Add tracepoints for GPIO master
  drivers/fsi: Add GPIO based FSI master
  drivers/fsi: Document FSI master sysfs files in ABI
  drivers/fsi: Add error handling for slave
  drivers/fsi: Add tracepoints for low-level operations
  ...
Linus Torvalds, 8 years ago
Commit f4dd029ee0
100 files changed, 8108 insertions(+), 873 deletions(-)
+ 38 - 0  Documentation/ABI/testing/sysfs-bus-fsi
+ 110 - 0  Documentation/ABI/testing/sysfs-bus-thunderbolt
+ 16 - 0  Documentation/ABI/testing/sysfs-class-mux
+ 9 - 9  Documentation/DocBook/w1.tmpl
+ 3 - 1  Documentation/admin-guide/devices.txt
+ 1 - 0  Documentation/admin-guide/index.rst
+ 7 - 0  Documentation/admin-guide/kernel-parameters.txt
+ 199 - 0  Documentation/admin-guide/thunderbolt.rst
+ 49 - 0  Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt
+ 24 - 0  Documentation/devicetree/bindings/fsi/fsi-master-gpio.txt
+ 99 - 0  Documentation/devicetree/bindings/i2c/i2c-mux-gpmux.txt
+ 39 - 0  Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt
+ 75 - 0  Documentation/devicetree/bindings/mux/adi,adg792a.txt
+ 69 - 0  Documentation/devicetree/bindings/mux/gpio-mux.txt
+ 60 - 0  Documentation/devicetree/bindings/mux/mmio-mux.txt
+ 157 - 0  Documentation/devicetree/bindings/mux/mux-controller.txt
+ 1 - 0  Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt
+ 6 - 1  Documentation/driver-model/devres.txt
+ 175 - 0  Documentation/trace/coresight-cpu-debug.txt
+ 22 - 0  MAINTAINERS
+ 64 - 0  arch/arm64/boot/dts/hisilicon/hi6220.dtsi
+ 32 - 0  arch/arm64/boot/dts/qcom/msm8916.dtsi
+ 0 - 1  arch/x86/include/asm/mshyperv.h
+ 2 - 0  drivers/Kconfig
+ 1 - 0  drivers/Makefile
+ 1 - 4  drivers/auxdisplay/panel.c
+ 47 - 7  drivers/firmware/google/memconsole-coreboot.c
+ 15 - 3  drivers/firmware/google/memconsole-x86-legacy.c
+ 6 - 8  drivers/firmware/google/memconsole.c
+ 4 - 3  drivers/firmware/google/memconsole.h
+ 17 - 22  drivers/firmware/google/vpd.c
+ 26 - 0  drivers/fsi/Kconfig
+ 3 - 0  drivers/fsi/Makefile
+ 841 - 0  drivers/fsi/fsi-core.c
+ 604 - 0  drivers/fsi/fsi-master-gpio.c
+ 327 - 0  drivers/fsi/fsi-master-hub.c
+ 43 - 0  drivers/fsi/fsi-master.h
+ 263 - 0  drivers/fsi/fsi-scom.c
+ 6 - 2  drivers/hv/channel.c
+ 53 - 16  drivers/hv/channel_mgmt.c
+ 7 - 4  drivers/hv/connection.c
+ 7 - 2  drivers/hv/hv.c
+ 8 - 6  drivers/hv/hv_kvp.c
+ 54 - 110  drivers/hv/hv_util.c
+ 11 - 0  drivers/hv/hyperv_vmbus.h
+ 38 - 42  drivers/hv/vmbus_drv.c
+ 14 - 0  drivers/hwtracing/coresight/Kconfig
+ 1 - 0  drivers/hwtracing/coresight/Makefile
+ 700 - 0  drivers/hwtracing/coresight/coresight-cpu-debug.c
+ 2 - 5  drivers/hwtracing/coresight/coresight-etb10.c
+ 1 - 2  drivers/hwtracing/coresight/coresight-etm-perf.c
+ 17 - 8  drivers/hwtracing/coresight/coresight-tmc-etf.c
+ 7 - 0  drivers/hwtracing/coresight/coresight-tmc.c
+ 26 - 8  drivers/hwtracing/coresight/coresight.c
+ 32 - 15  drivers/hwtracing/coresight/of_coresight.c
+ 13 - 0  drivers/i2c/muxes/Kconfig
+ 1 - 0  drivers/i2c/muxes/Makefile
+ 173 - 0  drivers/i2c/muxes/i2c-mux-gpmux.c
+ 1 - 0  drivers/iio/Kconfig
+ 1 - 0  drivers/iio/Makefile
+ 60 - 0  drivers/iio/inkern.c
+ 18 - 0  drivers/iio/multiplexer/Kconfig
+ 6 - 0  drivers/iio/multiplexer/Makefile
+ 459 - 0  drivers/iio/multiplexer/iio-mux.c
+ 1 - 2  drivers/ipack/ipack.c
+ 4 - 1  drivers/memory/ti-aemif.c
+ 8 - 0  drivers/misc/Kconfig
+ 1 - 0  drivers/misc/Makefile
+ 8 - 8  drivers/misc/apds990x.c
+ 261 - 0  drivers/misc/aspeed-lpc-snoop.c
+ 1 - 1  drivers/misc/bh1770glc.c
+ 1 - 1  drivers/misc/mei/bus.c
+ 1 - 1  drivers/misc/mei/hw.h
+ 0 - 6  drivers/misc/mei/init.c
+ 19 - 7  drivers/misc/mei/interrupt.c
+ 0 - 1  drivers/misc/mei/mei_dev.h
+ 20 - 7  drivers/misc/sram-exec.c
+ 59 - 0  drivers/mux/Kconfig
+ 8 - 0  drivers/mux/Makefile
+ 157 - 0  drivers/mux/mux-adg792a.c
+ 547 - 0  drivers/mux/mux-core.c
+ 114 - 0  drivers/mux/mux-gpio.c
+ 141 - 0  drivers/mux/mux-mmio.c
+ 2 - 2  drivers/nvmem/bcm-ocotp.c
+ 16 - 6  drivers/nvmem/core.c
+ 4 - 0  drivers/nvmem/rockchip-efuse.c
+ 1 - 1  drivers/platform/goldfish/goldfish_pipe.c
+ 1 - 1  drivers/power/supply/ds2760_battery.c
+ 1 - 1  drivers/power/supply/ds2780_battery.c
+ 1 - 1  drivers/power/supply/ds2781_battery.c
+ 3 - 9  drivers/pps/Kconfig
+ 2 - 4  drivers/pps/clients/Kconfig
+ 2 - 1  drivers/pps/generators/Kconfig
+ 369 - 239  drivers/spmi/spmi-pmic-arb.c
+ 7 - 6  drivers/thunderbolt/Kconfig
+ 1 - 1  drivers/thunderbolt/Makefile
+ 91 - 78  drivers/thunderbolt/cap.c
+ 475 - 190  drivers/thunderbolt/ctl.c
+ 86 - 19  drivers/thunderbolt/ctl.h
+ 524 - 0  drivers/thunderbolt/dma_port.c

+ 38 - 0
Documentation/ABI/testing/sysfs-bus-fsi

@@ -0,0 +1,38 @@
+What:           /sys/bus/platform/devices/fsi-master/rescan
+Date:		May 2017
+KernelVersion:  4.12
+Contact:        cbostic@linux.vnet.ibm.com
+Description:
+                Initiates an FSI master scan for all connected slave devices
+		on its links.
+
+What:           /sys/bus/platform/devices/fsi-master/break
+Date:		May 2017
+KernelVersion:  4.12
+Contact:        cbostic@linux.vnet.ibm.com
+Description:
+		Sends an FSI BREAK command on a master's communication
+		link to any connected slaves.  A BREAK resets a connected
+		device's logic and prepares it to receive further commands
+		from the master.
+
+What:           /sys/bus/platform/devices/fsi-master/slave@00:00/term
+Date:		May 2017
+KernelVersion:  4.12
+Contact:        cbostic@linux.vnet.ibm.com
+Description:
+		Sends an FSI terminate command from the master to its
+		connected slave. A terminate resets the slave's state machines
+		that control access to the internally connected engines.  In
+		addition the slave freezes its internal error register for
+		debugging purposes.  This command is also needed to abort any
+		ongoing operation in case of an expired 'Master Time Out'
+		timer.
+
+What:           /sys/bus/platform/devices/fsi-master/slave@00:00/raw
+Date:		May 2017
+KernelVersion:  4.12
+Contact:        cbostic@linux.vnet.ibm.com
+Description:
+		Provides a means of reading/writing a 32 bit value from/to a
+		specified FSI bus address.
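
As a rough illustration of how these attributes might be exercised from
userspace, here is a hedged C sketch. The write payload "1" is an assumption;
the ABI entries above do not spell out the accepted values.

	/* Hypothetical userspace helper poking the FSI sysfs files above. */
	#include <fcntl.h>
	#include <unistd.h>

	static void fsi_poke(const char *path)
	{
		int fd = open(path, O_WRONLY);

		if (fd >= 0) {
			(void)write(fd, "1", 1);	/* assumed trigger value */
			close(fd);
		}
	}

	int main(void)
	{
		/* Rescan the master's links, then reset connected slave logic. */
		fsi_poke("/sys/bus/platform/devices/fsi-master/rescan");
		fsi_poke("/sys/bus/platform/devices/fsi-master/break");
		return 0;
	}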

+ 110 - 0
Documentation/ABI/testing/sysfs-bus-thunderbolt

@@ -0,0 +1,110 @@
+What: /sys/bus/thunderbolt/devices/.../domainX/security
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute holds current Thunderbolt security level
+		set by the system BIOS. Possible values are:
+
+		none: All devices are automatically authorized
+		user: Devices are only authorized based on writing
+		      the appropriate value to the authorized attribute
+		secure: Require devices that support secure connect at
+			minimum. User needs to authorize each device.
+		dponly: Automatically tunnel Display port (and USB). No
+			PCIe tunnels are created.
+
+What: /sys/bus/thunderbolt/devices/.../authorized
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute is used to authorize Thunderbolt devices
+		after they have been connected. If the device is not
+		authorized, no devices such as PCIe and Display port are
+		available to the system.
+
+		Contents of this attribute will be 0 when the device is not
+		yet authorized.
+
+		The following values are supported:
+		1: The device will be authorized and connected
+
+		When the key attribute contains a 32 byte hex string, the possible
+		values are:
+		1: The 32 byte hex string is added to the device NVM and
+		   the device is authorized.
+		2: Send a challenge based on the 32 byte hex string. If the
+		   challenge response from device is valid, the device is
+		   authorized. In case of failure errno will be ENOKEY if
+		   the device did not contain a key at all, and
+		   EKEYREJECTED if the challenge response did not match.
+
+What: /sys/bus/thunderbolt/devices/.../key
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	When a device supports Thunderbolt secure connect it will
+		have this attribute. Writing a 32 byte hex string changes
+		authorization to use the secure connection method instead.
+
+What:		/sys/bus/thunderbolt/devices/.../device
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute contains id of this device extracted from
+		the device DROM.
+
+What:		/sys/bus/thunderbolt/devices/.../device_name
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute contains name of this device extracted from
+		the device DROM.
+
+What:		/sys/bus/thunderbolt/devices/.../vendor
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute contains vendor id of this device extracted
+		from the device DROM.
+
+What:		/sys/bus/thunderbolt/devices/.../vendor_name
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute contains vendor name of this device extracted
+		from the device DROM.
+
+What:		/sys/bus/thunderbolt/devices/.../unique_id
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	This attribute contains unique_id string of this device.
+		This is either read from hardware registers (UUID on
+		newer hardware) or based on UID from the device DROM.
+		Can be used to uniquely identify particular device.
+
+What:		/sys/bus/thunderbolt/devices/.../nvm_version
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	If the device has upgradeable firmware the version
+		number is available here. Format: %x.%x, major.minor.
+		If the device is in safe mode reading the file returns
+		-ENODATA instead as the NVM version is not available.
+
+What:		/sys/bus/thunderbolt/devices/.../nvm_authenticate
+Date:		Sep 2017
+KernelVersion:	4.13
+Contact:	thunderbolt-software@lists.01.org
+Description:	When new NVM image is written to the non-active NVM
+		area (through non_activeX NVMem device), the
+		authentication procedure is started by writing 1 to
+		this file. If everything goes well, the device is
+		restarted with the new NVM firmware. If the image
+		verification fails an error code is returned instead.
+
+		When read, this file holds the status of the last
+		authentication operation if an error occurred during the
+		process. This
+		is directly the status value from the DMA configuration
+		based mailbox before the device is power cycled. Writing
+		0 here clears the status.

+ 16 - 0
Documentation/ABI/testing/sysfs-class-mux

@@ -0,0 +1,16 @@
+What:		/sys/class/mux/
+Date:		April 2017
+KernelVersion:	4.13
+Contact:	Peter Rosin <peda@axentia.se>
+Description:
+		The mux/ class sub-directory belongs to the Generic MUX
+		Framework and provides a sysfs interface for using MUX
+		controllers.
+
+What:		/sys/class/mux/muxchipN/
+Date:		April 2017
+KernelVersion:	4.13
+Contact:	Peter Rosin <peda@axentia.se>
+Description:
+		A /sys/class/mux/muxchipN directory is created for each
+		probed MUX chip where N is a simple enumeration.

+ 9 - 9
Documentation/DocBook/w1.tmpl

@@ -51,9 +51,9 @@
     <sect1 id="w1_internal_api">
       <title>W1 API internal to the kernel</title>
       <sect2 id="w1.h">
-        <title>drivers/w1/w1.h</title>
-        <para>W1 core functions.</para>
-!Idrivers/w1/w1.h
+        <title>include/linux/w1.h</title>
+        <para>W1 kernel API functions.</para>
+!Iinclude/linux/w1.h
       </sect2>

       <sect2 id="w1.c">
@@ -62,18 +62,18 @@
 !Idrivers/w1/w1.c
       </sect2>

-      <sect2 id="w1_family.h">
-        <title>drivers/w1/w1_family.h</title>
-        <para>Allows registering device family operations.</para>
-!Idrivers/w1/w1_family.h
-      </sect2>
-
       <sect2 id="w1_family.c">
         <title>drivers/w1/w1_family.c</title>
         <para>Allows registering device family operations.</para>
 !Edrivers/w1/w1_family.c
       </sect2>

+      <sect2 id="w1_internal.h">
+        <title>drivers/w1/w1_internal.h</title>
+        <para>W1 internal initialization for master devices.</para>
+!Idrivers/w1/w1_internal.h
+      </sect2>
+
       <sect2 id="w1_int.c">
         <title>drivers/w1/w1_int.c</title>
         <para>W1 internal initialization for master devices.</para>

+ 3 - 1
Documentation/admin-guide/devices.txt

@@ -369,8 +369,10 @@
 		237 = /dev/loop-control Loopback control device
 		238 = /dev/vhost-net	Host kernel accelerator for virtio net
 		239 = /dev/uhid		User-space I/O driver support for HID subsystem
+		240 = /dev/userio	Serio driver testing device
+		241 = /dev/vhost-vsock	Host kernel driver for virtio vsock

-		240-254			Reserved for local use
+		242-254			Reserved for local use
 		255			Reserved for MISC_DYNAMIC_MINOR

   11 char	Raw keyboard device	(Linux/SPARC only)

+ 1 - 0
Documentation/admin-guide/index.rst

@@ -61,6 +61,7 @@ configure specific aspects of kernel behavior to your liking.
    java
    ras
    pm/index
+   thunderbolt

 .. only::  subproject and html

+ 7 - 0
Documentation/admin-guide/kernel-parameters.txt

@@ -649,6 +649,13 @@
 			/proc/<pid>/coredump_filter.
 			See also Documentation/filesystems/proc.txt.

+	coresight_cpu_debug.enable
+			[ARM,ARM64]
+			Format: <bool>
+			Enable/disable the CPU sampling based debugging.
+			0: default value, disable debugging
+			1: enable debugging at boot time
+
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system


+ 199 - 0
Documentation/admin-guide/thunderbolt.rst

@@ -0,0 +1,199 @@
+=============
+ Thunderbolt
+=============
+The interface presented here is not meant for end users. Instead, there
+should be a userspace tool that handles all the low-level details, keeps
+a database of the authorized devices, and prompts the user for new
+connections.
+
+More details about the sysfs interface for Thunderbolt devices can be
+found in ``Documentation/ABI/testing/sysfs-bus-thunderbolt``.
+
+Those users who just want to connect any device without any sort of
+manual work can add the following line to
+``/etc/udev/rules.d/99-local.rules``::
+
+  ACTION=="add", SUBSYSTEM=="thunderbolt", ATTR{authorized}=="0", ATTR{authorized}="1"
+
+This will authorize all devices automatically when they appear. However,
+keep in mind that this bypasses the security levels and makes the system
+vulnerable to DMA attacks.
+
+Security levels and how to use them
+-----------------------------------
+Starting from the Intel Falcon Ridge Thunderbolt controller, there are 4
+security levels available. The reason for these is the fact that
+connected devices can be DMA masters and thus read the contents of host
+memory without the CPU and OS knowing about it. There are ways to prevent
+this by setting up an IOMMU but it is not always available for various
+reasons.
+
+The security levels are as follows:
+
+  none
+    All devices are automatically connected by the firmware. No user
+    approval is needed. In BIOS settings this is typically called
+    *Legacy mode*.
+
+  user
+    User is asked whether the device is allowed to be connected.
+    Based on the device identification information available through
+    ``/sys/bus/thunderbolt/devices``, the user can then make the decision.
+    In BIOS settings this is typically called *Unique ID*.
+
+  secure
+    User is asked whether the device is allowed to be connected. In
+    addition to UUID the device (if it supports secure connect) is sent
+    a challenge that should match the expected one based on a random key
+    written to ``key`` sysfs attribute. In BIOS settings this is
+    typically called *One time saved key*.
+
+  dponly
+    The firmware automatically creates tunnels for Display Port and
+    USB. No PCIe tunneling is done. In BIOS settings this is
+    typically called *Display Port Only*.
+
+The current security level can be read from
+``/sys/bus/thunderbolt/devices/domainX/security`` where ``domainX`` is
+the Thunderbolt domain the host controller manages. There is typically
+one domain per Thunderbolt host controller.
+
+If the security level reads as ``user`` or ``secure`` the connected
+device must be authorized by the user before PCIe tunnels are created
+(e.g. the PCIe device appears).
+
+Each Thunderbolt device plugged in will appear in sysfs under
+``/sys/bus/thunderbolt/devices``. The device directory carries
+information that can be used to identify the particular device,
+including its name and UUID.
+
+Authorizing devices when security level is ``user`` or ``secure``
+-----------------------------------------------------------------
+When a device is plugged in it will appear in sysfs as follows::
+
+  /sys/bus/thunderbolt/devices/0-1/authorized	- 0
+  /sys/bus/thunderbolt/devices/0-1/device	- 0x8004
+  /sys/bus/thunderbolt/devices/0-1/device_name	- Thunderbolt to FireWire Adapter
+  /sys/bus/thunderbolt/devices/0-1/vendor	- 0x1
+  /sys/bus/thunderbolt/devices/0-1/vendor_name	- Apple, Inc.
+  /sys/bus/thunderbolt/devices/0-1/unique_id	- e0376f00-0300-0100-ffff-ffffffffffff
+
+The ``authorized`` attribute reads 0 which means no PCIe tunnels are
+created yet. The user can authorize the device by simply::
+
+  # echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized
+
+This will create the PCIe tunnels and the device is now connected.
+
+If the device supports secure connect, and the domain security level is
+set to ``secure``, it has an additional attribute ``key`` which can hold
+a random 32 byte value used for authorization and challenging the device in
+future connects::
+
+  /sys/bus/thunderbolt/devices/0-3/authorized	- 0
+  /sys/bus/thunderbolt/devices/0-3/device	- 0x305
+  /sys/bus/thunderbolt/devices/0-3/device_name	- AKiTiO Thunder3 PCIe Box
+  /sys/bus/thunderbolt/devices/0-3/key		-
+  /sys/bus/thunderbolt/devices/0-3/vendor	- 0x41
+  /sys/bus/thunderbolt/devices/0-3/vendor_name	- inXtron
+  /sys/bus/thunderbolt/devices/0-3/unique_id	- dc010000-0000-8508-a22d-32ca6421cb16
+
+Notice the key is empty by default.
+
+If the user does not want to use secure connect, they can just ``echo 1``
+to the ``authorized`` attribute, and the PCIe tunnels will be created in
+the same way as in the ``user`` security level.
+
+If the user wants to use secure connect, the first time the device is
+plugged in, a key needs to be created and sent to the device::
+
+  # key=$(openssl rand -hex 32)
+  # echo $key > /sys/bus/thunderbolt/devices/0-3/key
+  # echo 1 > /sys/bus/thunderbolt/devices/0-3/authorized
+
+Now the device is connected (PCIe tunnels are created) and in addition
+the key is stored on the device NVM.
+
+Next time the device is plugged in the user can verify (challenge) the
+device using the same key::
+
+  # echo $key > /sys/bus/thunderbolt/devices/0-3/key
+  # echo 2 > /sys/bus/thunderbolt/devices/0-3/authorized
+
+If the challenge the device returns matches the one we expect based
+on the key, the device is connected and the PCIe tunnels are created.
+However, if the challenge fails, no tunnels are created and an error is
+returned to the user.
+
+If the user still wants to connect the device, they can either approve
+the device without a key, or write a new key and then write 1 to the
+``authorized`` file to get the new key stored on the device NVM.
+
+Upgrading NVM on Thunderbolt device or host
+-------------------------------------------
+Since most of the functionality is handled in firmware running on a
+host controller or a device, it is important that the firmware can be
+upgraded to the latest version, in which known bugs have been fixed.
+Typically OEMs provide this firmware on their support site.
+
+There is also a central site with links to firmware downloads for some
+machines:
+
+  `Thunderbolt Updates <https://thunderbolttechnology.net/updates>`_
+
+Before you upgrade firmware on a device or host, please make sure it is
+the correct one. Failing to do so may leave the device (or host) in a
+state where it cannot be used properly anymore without special tools!
+
+Host NVM upgrade on Apple Macs is not supported.
+
+Once the NVM image has been downloaded, you need to plug in a
+Thunderbolt device so that the host controller appears. It does not
+matter which device is connected (unless you are upgrading NVM on a
+device - then you need to connect that particular device).
+
+Note that an OEM-specific method to power the controller up ("force
+power") may be available for your system, in which case there is no need
+to plug in a Thunderbolt device.
+
+After that, we can write the firmware to the non-active parts of the NVM
+of the host or device. As an example, here is how the Intel NUC6i7KYK (Skull
+Canyon) Thunderbolt controller NVM is upgraded::
+
+  # dd if=KYK_TBT_FW_0018.bin of=/sys/bus/thunderbolt/devices/0-0/nvm_non_active0/nvmem
+
+Once the operation completes, we can trigger the NVM authentication and
+upgrade process as follows::
+
+  # echo 1 > /sys/bus/thunderbolt/devices/0-0/nvm_authenticate
+
+If no errors are returned, the host controller shortly disappears. Once
+it comes back the driver notices it and initiates a full power cycle.
+After a while the host controller appears again and this time it should
+be fully functional.
+
+We can verify that the new NVM firmware is active by running the
+following commands::
+
+  # cat /sys/bus/thunderbolt/devices/0-0/nvm_authenticate
+  0x0
+  # cat /sys/bus/thunderbolt/devices/0-0/nvm_version
+  18.0
+
+If ``nvm_authenticate`` contains anything other than 0x0, it is the error
+code from the last authentication cycle, which means the authentication
+of the NVM image failed.
+
+Note that the names of the NVMem devices ``nvm_activeN`` and
+``nvm_non_activeN`` depend on the order in which they are registered in
+the NVMem subsystem. The N in the name is the identifier added by the
+NVMem subsystem.
+
+Upgrading NVM when host controller is in safe mode
+--------------------------------------------------
+If the existing NVM is not properly authenticated (or is missing), the
+host controller goes into safe mode, which means that the only available
+functionality is flashing a new NVM image. When in this mode, reading
+``nvm_version`` fails with ``ENODATA`` and the device identification
+information is missing.
+
+To recover from this mode, one needs to flash a valid NVM image to the
+host controller in the same way as described in the previous chapter.

+ 49 - 0
Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt

@@ -0,0 +1,49 @@
+* CoreSight CPU Debug Component:
+
+CoreSight CPU debug components are compliant with the ARMv8 architecture
+reference manual (ARM DDI 0487A.k), Chapter 'Part H: External debug'. The
+external debug module is mainly used in two modes: self-hosted debug and
+external debug. It can be accessed from an MMIO region through CoreSight,
+and ultimately connects with the CPU for debugging. The debug module also
+provides a sample-based profiling extension, which can be used to sample
+the CPU program counter, secure state, exception level, etc; usually
+every CPU has one dedicated debug module connected.
+
+Required properties:
+
+- compatible : should be "arm,coresight-cpu-debug"; supplemented with
+               "arm,primecell" since this driver is using the AMBA bus
+	       interface.
+
+- reg : physical base address and length of the register set.
+
+- clocks : the clock associated with this component.
+
+- clock-names : the name of the clock referenced by the code. Since we are
+                using the AMBA framework, the name of the clock providing
+		the interconnect should be "apb_pclk" and the clock is
+		mandatory. The interface between the debug logic and the
+		processor core is clocked by the internal CPU clock, so it
+		is enabled with CPU clock by default.
+
+- cpu : the CPU phandle the debug module is affined to. When omitted
+	the module is considered to belong to CPU0.
+
+Optional properties:
+
+- power-domains: a phandle to the debug power domain. We use the
+                 "power-domains" binding to turn on the debug logic if it
+		 has its own dedicated power domain. If necessary, use
+		 "cpuidle.off=1" or "nohlt" on the kernel command line, or
+		 the sysfs node, to constrain idle states and ensure that
+		 registers in the CPU power domain are accessible.
+
+Example:
+
+	debug@f6590000 {
+		compatible = "arm,coresight-cpu-debug","arm,primecell";
+		reg = <0 0xf6590000 0 0x1000>;
+		clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+		clock-names = "apb_pclk";
+		cpu = <&cpu0>;
+	};

+ 24 - 0
Documentation/devicetree/bindings/fsi/fsi-master-gpio.txt

@@ -0,0 +1,24 @@
+Device-tree bindings for gpio-based FSI master driver
+-----------------------------------------------------
+
+Required properties:
+ - compatible = "fsi-master-gpio";
+ - clock-gpios = <gpio-descriptor>;	: GPIO for FSI clock
+ - data-gpios = <gpio-descriptor>;	: GPIO for FSI data signal
+
+Optional properties:
+ - enable-gpios = <gpio-descriptor>;	: GPIO for enable signal
+ - trans-gpios = <gpio-descriptor>;	: GPIO for voltage translator enable
+ - mux-gpios = <gpio-descriptor>;	: GPIO for pin multiplexing with other
+                                          functions (e.g. external FSI masters)
+
+Examples:
+
+    fsi-master {
+        compatible = "fsi-master-gpio", "fsi-master";
+        clock-gpios = <&gpio 0>;
+        data-gpios = <&gpio 1>;
+        enable-gpios = <&gpio 2>;
+        trans-gpios = <&gpio 3>;
+        mux-gpios = <&gpio 4>;
+    };

+ 99 - 0
Documentation/devicetree/bindings/i2c/i2c-mux-gpmux.txt

@@ -0,0 +1,99 @@
+General Purpose I2C Bus Mux
+
+This binding describes an I2C bus multiplexer that uses a mux controller
+from the mux subsystem to route the I2C signals.
+
+                                  .-----.  .-----.
+                                  | dev |  | dev |
+    .------------.                '-----'  '-----'
+    | SoC        |                   |        |
+    |            |          .--------+--------'
+    |   .------. |  .------+    child bus A, on MUX value set to 0
+    |   | I2C  |-|--| Mux  |
+    |   '------' |  '--+---+    child bus B, on MUX value set to 1
+    |   .------. |     |    '----------+--------+--------.
+    |   | MUX- | |     |               |        |        |
+    |   | Ctrl |-|-----+            .-----.  .-----.  .-----.
+    |   '------' |                  | dev |  | dev |  | dev |
+    '------------'                  '-----'  '-----'  '-----'
+
+Required properties:
+- compatible: i2c-mux
+- i2c-parent: The phandle of the I2C bus that this multiplexer's master-side
+  port is connected to.
+- mux-controls: The phandle of the mux controller to use for operating the
+  mux.
+* Standard I2C mux properties. See i2c-mux.txt in this directory.
+* I2C child bus nodes. See i2c-mux.txt in this directory. The sub-bus number
+  is also the mux-controller state described in ../mux/mux-controller.txt
+
+Optional properties:
+- mux-locked: If present, explicitly allow unrelated I2C transactions on the
+  parent I2C adapter at these times:
+   + during setup of the multiplexer
+   + between setup of the multiplexer and the child bus I2C transaction
+   + between the child bus I2C transaction and releasing of the multiplexer
+   + during releasing of the multiplexer
+  However, I2C transactions to devices behind all I2C multiplexers connected
+  to the same parent adapter that this multiplexer is connected to are blocked
+  for the full duration of the complete multiplexed I2C transaction (i.e.
+  including the times covered by the above list).
+  If mux-locked is not present, the multiplexer is assumed to be parent-locked.
+  This means that no unrelated I2C transactions are allowed on the parent I2C
+  adapter for the complete multiplexed I2C transaction.
+  The properties of mux-locked and parent-locked multiplexers are discussed
+  in more detail in Documentation/i2c/i2c-topology.
+
+For each i2c child node, an I2C child bus will be created. They will
+be numbered based on their order in the device tree.
+
+Whenever an access is made to a device on a child bus, the value set
+in the relevant node's reg property will be set as the state in the
+mux controller.
+
+Example:
+	mux: mux-controller {
+		compatible = "gpio-mux";
+		#mux-control-cells = <0>;
+
+		mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,
+			    <&pioA 1 GPIO_ACTIVE_HIGH>;
+	};
+
+	i2c-mux {
+		compatible = "i2c-mux";
+		mux-locked;
+		i2c-parent = <&i2c1>;
+
+		mux-controls = <&mux>;
+
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		i2c@1 {
+			reg = <1>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			ssd1307: oled@3c {
+				compatible = "solomon,ssd1307fb-i2c";
+				reg = <0x3c>;
+				pwms = <&pwm 4 3000>;
+				reset-gpios = <&gpio2 7 1>;
+				reset-active-low;
+			};
+		};
+
+		i2c@3 {
+			reg = <3>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			pca9555: pca9555@20 {
+				compatible = "nxp,pca9555";
+				gpio-controller;
+				#gpio-cells = <2>;
+				reg = <0x20>;
+			};
+		};
+	};

+ 39 - 0
Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt

@@ -0,0 +1,39 @@
+I/O channel multiplexer bindings
+
+If a multiplexer is used to select which hardware signal is fed to
+e.g. an ADC channel, these bindings describe that situation.
+
+Required properties:
+- compatible : "io-channel-mux"
+- io-channels : Channel node of the parent channel that has multiplexed
+		input.
+- io-channel-names : Should be "parent".
+- #address-cells = <1>;
+- #size-cells = <0>;
+- mux-controls : Mux controller node to use for operating the mux
+- channels : List of strings, labeling the mux controller states.
+
+For each non-empty string in the channels property, an io-channel will
+be created. The number of this io-channel is the same as the index into
+the list of strings in the channels property, and also matches the mux
+controller state. The mux controller state is described in
+../mux/mux-controller.txt
+
+Example:
+	mux: mux-controller {
+		compatible = "gpio-mux";
+		#mux-control-cells = <0>;
+
+		mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,
+			    <&pioA 1 GPIO_ACTIVE_HIGH>;
+	};
+
+	adc-mux {
+		compatible = "io-channel-mux";
+		io-channels = <&adc 0>;
+		io-channel-names = "parent";
+
+		mux-controls = <&mux>;
+
+		channels = "sync", "in", "system-regulator";
+	};

+ 75 - 0
Documentation/devicetree/bindings/mux/adi,adg792a.txt

@@ -0,0 +1,75 @@
+Bindings for Analog Devices ADG792A/G Triple 4:1 Multiplexers
+
+Required properties:
+- compatible : "adi,adg792a" or "adi,adg792g"
+- #mux-control-cells : <0> if parallel (the three muxes are bound together
+  with a single mux controller controlling all three muxes), or <1> if
+  not (one mux controller for each mux).
+* Standard mux-controller bindings as described in mux-controller.txt
+
+Optional properties for ADG792G:
+- gpio-controller : if present, #gpio-cells below is required.
+- #gpio-cells : should be <2>
+			  - First cell is the GPO line number, i.e. 0 or 1
+			  - Second cell is used to specify active high (0)
+			    or active low (1)
+
+Optional properties:
+- idle-state : if present, array of states that the mux controllers will have
+  when idle. The special state MUX_IDLE_AS_IS is the default and
+  MUX_IDLE_DISCONNECT is also supported.
+
+States 0 through 3 correspond to signals A through D in the datasheet.
+
+Example:
+
+	/*
+	 * Three independent mux controllers (of which one is used).
+	 * Mux 0 is disconnected when idle, mux 1 idles in the previously
+	 * selected state and mux 2 idles with signal B.
+	 */
+	&i2c0 {
+		mux: mux-controller@50 {
+			compatible = "adi,adg792a";
+			reg = <0x50>;
+			#mux-control-cells = <1>;
+
+			idle-state = <MUX_IDLE_DISCONNECT MUX_IDLE_AS_IS 1>;
+		};
+	};
+
+	adc-mux {
+		compatible = "io-channel-mux";
+		io-channels = <&adc 0>;
+		io-channel-names = "parent";
+
+		mux-controls = <&mux 2>;
+
+		channels = "sync-1", "", "out";
+	};
+
+
+	/*
+	 * Three parallel muxes with one mux controller, useful e.g. if
+	 * the adc is differential, thus needing two signals to be muxed
+	 * simultaneously for correct operation.
+	 */
+	&i2c0 {
+		pmux: mux-controller@50 {
+			compatible = "adi,adg792a";
+			reg = <0x50>;
+			#mux-control-cells = <0>;
+
+			idle-state = <1>;
+		};
+	};
+
+	diff-adc-mux {
+		compatible = "io-channel-mux";
+		io-channels = <&adc 0>;
+		io-channel-names = "parent";
+
+		mux-controls = <&pmux>;
+
+		channels = "sync-1", "", "out";
+	};

+ 69 - 0
Documentation/devicetree/bindings/mux/gpio-mux.txt

@@ -0,0 +1,69 @@
+GPIO-based multiplexer controller bindings
+
+Define what GPIO pins are used to control a multiplexer. Or several
+multiplexers, if the same pins control more than one multiplexer.
+
+Required properties:
+- compatible : "gpio-mux"
+- mux-gpios : list of gpios used to control the multiplexer, least
+	      significant bit first.
+- #mux-control-cells : <0>
+* Standard mux-controller bindings as described in mux-controller.txt
+
+Optional properties:
+- idle-state : if present, the state the mux will have when idle. The
+	       special state MUX_IDLE_AS_IS is the default.
+
+The multiplexer state is defined as the number represented by the
+multiplexer GPIO pins, where the first pin is the least significant
+bit. An active pin is a binary 1, an inactive pin is a binary 0.
+
+Example:
+
+	mux: mux-controller {
+		compatible = "gpio-mux";
+		#mux-control-cells = <0>;
+
+		mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,
+			    <&pioA 1 GPIO_ACTIVE_HIGH>;
+	};
+
+	adc-mux {
+		compatible = "io-channel-mux";
+		io-channels = <&adc 0>;
+		io-channel-names = "parent";
+
+		mux-controls = <&mux>;
+
+		channels = "sync-1", "in", "out", "sync-2";
+	};
+
+	i2c-mux {
+		compatible = "i2c-mux";
+		i2c-parent = <&i2c1>;
+
+		mux-controls = <&mux>;
+
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		i2c@0 {
+			reg = <0>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			ssd1307: oled@3c {
+				/* ... */
+			};
+		};
+
+		i2c@3 {
+			reg = <3>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			pca9555: pca9555@20 {
+				/* ... */
+			};
+		};
+	};

+ 60 - 0
Documentation/devicetree/bindings/mux/mmio-mux.txt

@@ -0,0 +1,60 @@
+MMIO register bitfield-based multiplexer controller bindings
+
+Define register bitfields to be used to control multiplexers. The parent
+device tree node must be a syscon node to provide register access.
+
+Required properties:
+- compatible : "mmio-mux"
+- #mux-control-cells : <1>
+- mux-reg-masks : an array of register offset and pre-shifted bitfield mask
+                  pairs, each describing a single mux control.
+* Standard mux-controller bindings as described in mux-controller.txt
+
+Optional properties:
+- idle-states : if present, the state the muxes will have when idle. The
+		special state MUX_IDLE_AS_IS is the default.
+
+The multiplexer state of each multiplexer is defined as the value of the
+bitfield described by the corresponding register offset and bitfield mask pair
+in the mux-reg-masks array, accessed through the parent syscon.
+
+Example:
+
+	syscon {
+		compatible = "syscon";
+
+		mux: mux-controller {
+			compatible = "mmio-mux";
+			#mux-control-cells = <1>;
+
+			mux-reg-masks = <0x3 0x30>, /* 0: reg 0x3, bits 5:4 */
+					<0x3 0x40>; /* 1: reg 0x3, bit 6 */
+			idle-states = <MUX_IDLE_AS_IS>, <0>;
+		};
+	};
+
+	video-mux {
+		compatible = "video-mux";
+		mux-controls = <&mux 0>;
+
+		ports {
+			/* inputs 0..3 */
+			port@0 {
+				reg = <0>;
+			};
+			port@1 {
+				reg = <1>;
+			};
+			port@2 {
+				reg = <2>;
+			};
+			port@3 {
+				reg = <3>;
+			};
+
+			/* output */
+			port@4 {
+				reg = <4>;
+			};
+		};
+	};

+ 157 - 0
Documentation/devicetree/bindings/mux/mux-controller.txt

@@ -0,0 +1,157 @@
+Common multiplexer controller bindings
+======================================
+
+A multiplexer (or mux) controller will have one, or several, consumer devices
+that use the mux controller. Thus, a mux controller can possibly control
+several parallel multiplexers. Presumably there will be at least one
+multiplexer needed by each consumer, but a single mux controller can of course
+control several multiplexers for a single consumer.
+
+A mux controller provides a number of states to its consumers, and the state
+space is a simple zero-based enumeration. I.e. 0-1 for a 2-way multiplexer,
+0-7 for an 8-way multiplexer, etc.
+
+
+Consumers
+---------
+
+Mux controller consumers should specify a list of mux controllers that they
+want to use with a property containing a 'mux-ctrl-list':
+
+	mux-ctrl-list ::= <single-mux-ctrl> [mux-ctrl-list]
+	single-mux-ctrl ::= <mux-ctrl-phandle> [mux-ctrl-specifier]
+	mux-ctrl-phandle : phandle to mux controller node
+	mux-ctrl-specifier : array of #mux-control-cells specifying the
+			     given mux controller (controller specific)
+
+Mux controller properties should be named "mux-controls". The exact meaning of
+each mux controller property must be documented in the device tree binding for
+each consumer. An optional property "mux-control-names" may contain a list of
+strings to label each of the mux controllers listed in the "mux-controls"
+property.
+
+Drivers for devices that use more than a single mux controller can use the
+"mux-control-names" property to map the name of the requested mux controller
+to an index into the list given by the "mux-controls" property.
+
+mux-ctrl-specifier typically encodes the chip-relative mux controller number.
+If the mux controller chip only provides a single mux controller, the
+mux-ctrl-specifier can typically be left out.
+
+Example:
+
+	/* One consumer of a 2-way mux controller (one GPIO-line) */
+	mux: mux-controller {
+		compatible = "gpio-mux";
+		#mux-control-cells = <0>;
+
+		mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>;
+	};
+
+	adc-mux {
+		compatible = "io-channel-mux";
+		io-channels = <&adc 0>;
+		io-channel-names = "parent";
+
+		mux-controls = <&mux>;
+		mux-control-names = "adc";
+
+		channels = "sync", "in";
+	};
+
+Note that in the example above, specifying the "mux-control-names" is redundant
+because there is only one mux controller in the list. However, if the driver
+for the consumer node in fact asks for a named mux controller, that name is of
+course still required.
+
+	/*
+	 * Two consumers (one for an ADC line and one for an i2c bus) of
+	 * parallel 4-way multiplexers controlled by the same two GPIO-lines.
+	 */
+	mux: mux-controller {
+		compatible = "gpio-mux";
+		#mux-control-cells = <0>;
+
+		mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>,
+			    <&pioA 1 GPIO_ACTIVE_HIGH>;
+	};
+
+	adc-mux {
+		compatible = "io-channel-mux";
+		io-channels = <&adc 0>;
+		io-channel-names = "parent";
+
+		mux-controls = <&mux>;
+
+		channels = "sync-1", "in", "out", "sync-2";
+	};
+
+	i2c-mux {
+		compatible = "i2c-mux";
+		i2c-parent = <&i2c1>;
+
+		mux-controls = <&mux>;
+
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		i2c@0 {
+			reg = <0>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			ssd1307: oled@3c {
+				/* ... */
+			};
+		};
+
+		i2c@3 {
+			reg = <3>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			pca9555: pca9555@20 {
+				/* ... */
+			};
+		};
+	};
+
+
+Mux controller nodes
+--------------------
+
+Mux controller nodes must specify the number of cells used for the
+specifier using the '#mux-control-cells' property.
+
+Optionally, mux controller nodes can also specify the state the mux should
+have when it is idle. The idle-state property is used for this. If the
+idle-state is not present, the mux controller is typically left as is when
+it is idle. For multiplexer chips that expose several mux controllers, the
+idle-state property is an array with one idle state for each mux controller.
+
+The special value (-1) may be used to indicate that the mux should be left
+as is when it is idle. This is the default, but can still be useful for
+mux controller chips with more than one mux controller, particularly when
+there is a need to "step past" a mux controller and set some other idle
+state for a mux controller with a higher index.
+
+Some mux controllers have the ability to disconnect the input/output of the
+multiplexer. Using this disconnected high-impedance state as the idle state
+is indicated with idle state (-2).
+
+These constants are available in
+
+      #include <dt-bindings/mux/mux.h>
+
+as MUX_IDLE_AS_IS (-1) and MUX_IDLE_DISCONNECT (-2).
+
+An example mux controller node looks like this (the adg792a chip is a triple
+4-way multiplexer):
+
+	mux: mux-controller@50 {
+		compatible = "adi,adg792a";
+		reg = <0x50>;
+		#mux-control-cells = <1>;
+
+		idle-state = <MUX_IDLE_DISCONNECT MUX_IDLE_AS_IS 2>;
+	};

+ 1 - 0
Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt

@@ -4,6 +4,7 @@ Required properties:
 - compatible: Should be one of the following.
   - "rockchip,rk3066a-efuse" - for RK3066a SoCs.
   - "rockchip,rk3188-efuse" - for RK3188 SoCs.
+  - "rockchip,rk322x-efuse" - for RK322x SoCs.
   - "rockchip,rk3288-efuse" - for RK3288 SoCs.
   - "rockchip,rk3399-efuse" - for RK3399 SoCs.
 - reg: Should contain the registers location and exact eFuse size

+ 6 - 1
Documentation/driver-model/devres.txt

@@ -337,7 +337,12 @@ MEM
   devm_kzalloc()

 MFD
- devm_mfd_add_devices()
+  devm_mfd_add_devices()
+
+MUX
+  devm_mux_chip_alloc()
+  devm_mux_chip_register()
+  devm_mux_control_get()

 PER-CPU MEM
   devm_alloc_percpu()
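
As a consumer-side sketch of the new MUX devres helper, based on the mux
consumer API added by this series (the driver shape, device name, and error
handling here are illustrative, not from the tree):

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/mux/consumer.h>

	static int demo_probe(struct device *dev)
	{
		struct mux_control *mux;
		int ret;

		/* NULL picks the first entry in the "mux-controls" property. */
		mux = devm_mux_control_get(dev, NULL);
		if (IS_ERR(mux))
			return PTR_ERR(mux);

		/* Route state 1 through the mux; sleeps until the mux is free. */
		ret = mux_control_select(mux, 1);
		if (ret < 0)
			return ret;

		/* ... perform the transfer that needed the mux ... */

		/* Let other consumers grab the mux again. */
		return mux_control_deselect(mux);
	}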

+ 175 - 0
Documentation/trace/coresight-cpu-debug.txt

@@ -0,0 +1,175 @@
+		Coresight CPU Debug Module
+		==========================
+
+   Author:   Leo Yan <leo.yan@linaro.org>
+   Date:     April 5th, 2017
+
+Introduction
+------------
+
+The Coresight CPU debug module is defined in the ARMv8-a architecture
+reference manual (ARM DDI 0487A.k), Chapter 'Part H: External debug'. The CPU
+can integrate a debug module, which is mainly used in two modes: self-hosted
+debug and external debug. The external debug mode is the well-known case
+where an external debugger connects to the SoC through a JTAG port; a program
+can instead rely on the self-hosted debug mode, which is what this document
+focuses on.
+
+The debug module provides a sample-based profiling extension, which can be
+used to sample the CPU program counter, secure state, exception level, etc;
+usually every CPU has one dedicated debug module connected. Based on the
+self-hosted debug mechanism, the Linux kernel can access the related
+registers from an MMIO region when a kernel panic happens. The kernel panic
+callback notifier will dump the related registers for every CPU, which
+assists in analyzing the panic.
+
+
+Implementation
+--------------
+
+- During driver registration, it uses EDDEVID and EDDEVID1 - two device ID
+  registers to decide if sample-based profiling is implemented or not. On some
+  platforms this hardware feature is fully or partially implemented; and if
+  this feature is not supported then registration will fail.
+
+- At the time this documentation was written, the debug driver mainly relies on
+  information gathered by the kernel panic callback notifier from three
+  sampling registers: EDPCSR, EDVIDSR and EDCIDSR. From EDPCSR we can get the
+  program counter; EDVIDSR has information on secure state, exception level,
+  bit width, etc; EDCIDSR is the context ID value, which contains the sampled
+  value of CONTEXTIDR_EL1.
+
+- The driver supports a CPU running in either AArch64 or AArch32 mode. The
+  registers naming convention is a bit different between them, AArch64 uses
+  'ED' for register prefix (ARM DDI 0487A.k, chapter H9.1) and AArch32 uses
+  'DBG' as prefix (ARM DDI 0487A.k, chapter G5.1). The driver is unified to
+  use AArch64 naming convention.
+
+- ARMv8-a (ARM DDI 0487A.k) and ARMv7-a (ARM DDI 0406C.b) have different
+  register bit definitions. So the driver consolidates the two differences:
+
+  If PCSROffset=0b0000, on ARMv8-a the feature of EDPCSR is not implemented;
+  but ARMv7-a defines "PCSR samples are offset by a value that depends on the
+  instruction set state". For ARMv7-a, the driver furthermore checks whether
+  the CPU runs with the ARM or Thumb instruction set and calibrates the PCSR
+  value; the detailed description of the offset is in the ARMv7-a ARM
+  (ARM DDI 0406C.b) chapter
+  C11.11.34 "DBGPCSR, Program Counter Sampling Register".
+
+  If PCSROffset=0b0010, ARMv8-a defines "EDPCSR implemented, and samples have
+  no offset applied and do not sample the instruction set state in AArch32
+  state". So on ARMv8 if EDDEVID1.PCSROffset is 0b0010 and the CPU operates
+  in AArch32 state, EDPCSR is not sampled; when the CPU operates in AArch64
+  state EDPCSR is sampled and no offset is applied.
+
+
+Clock and power domain
+----------------------
+
+Before accessing debug registers, we should ensure the clock and power domain
+have been enabled properly. In ARMv8-a ARM (ARM DDI 0487A.k) chapter 'H9.1
+Debug registers', the debug registers are spread into two domains: the debug
+domain and the CPU domain.
+
+                                +---------------+
+                                |               |
+                                |               |
+                     +----------+--+            |
+        dbg_clock -->|          |**|            |<-- cpu_clock
+                     |    Debug |**|   CPU      |
+ dbg_power_domain -->|          |**|            |<-- cpu_power_domain
+                     +----------+--+            |
+                                |               |
+                                |               |
+                                +---------------+
+
+For debug domain, the user uses DT binding "clocks" and "power-domains" to
+specify the corresponding clock source and power supply for the debug logic.
+The driver calls the pm_runtime_{put|get} operations as needed to handle the
+debug power domain.
+
+For the CPU domain, different SoC designs have different power management
+schemes, and this heavily impacts the external debug module. So we can
+divide the designs into the cases below:
+
+- On systems with a sane power controller which can behave correctly with
+  respect to CPU power domain, the CPU power domain can be controlled by
+  register EDPRCR in the driver. The driver first writes bit EDPRCR.COREPURQ
+  to power up the CPU, and then writes bit EDPRCR.CORENPDRQ for emulation
+  of CPU power down. As a result, this ensures the CPU power domain is
+  powered on properly while debug related registers are accessed;
+
+- Some designs will power down an entire cluster if all CPUs on the cluster
+  are powered down - including the parts of the debug registers that should
+  remain powered in the debug power domain. The bits in EDPRCR are not
+  respected in these cases, so these designs do not support debug over
+  power down in the way that the CoreSight / Debug designers anticipated.
+  This means that even checking EDPRSR has the potential to cause a bus hang
+  if the target register is unpowered.
+
+  In this case, accessing the debug registers while they are not powered
+  is a recipe for disaster; so we need to prevent CPU low power states at
+  boot time, or when the user enables the module at run time. Please see the
+  chapter "How to use the module" for detailed usage info.
+
+
+Device Tree Bindings
+--------------------
+
+See Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt for details.
+
+
+How to use the module
+---------------------
+
+If you want to enable debugging functionality at boot time, you can add
+"coresight_cpu_debug.enable=1" to the kernel command line parameter.
+
+The driver can also be built as a module, in which case debugging can be
+enabled when loading it:
+# insmod coresight_cpu_debug.ko debug=1
+
+If debugging was not enabled at boot time or at module load time, the driver
+provides a knob in the debugfs file system to dynamically enable or disable
+it:
+
+To enable it, write a '1' into /sys/kernel/debug/coresight_cpu_debug/enable:
+# echo 1 > /sys/kernel/debug/coresight_cpu_debug/enable
+
+To disable it, write a '0' into /sys/kernel/debug/coresight_cpu_debug/enable:
+# echo 0 > /sys/kernel/debug/coresight_cpu_debug/enable
+
+As explained in the chapter "Clock and power domain", if you are working on a
+platform which has idle states that power off the debug logic, and the power
+controller does not handle the request from EDPRCR correctly, then you should
+first constrain CPU idle states before enabling the CPU debugging feature, to
+ensure that the debug logic remains accessible.
+
+If you want to limit idle states at boot time, you can use "nohlt" or
+"cpuidle.off=1" in the kernel command line.
+
+At run time you can disable idle states with the methods below:
+
+Set a latency request via /dev/cpu_dma_latency to disable CPU-specific idle
+states (a latency of 0 uS disables all idle states):
+# echo "what_ever_latency_you_need_in_uS" > /dev/cpu_dma_latency
+
+Disable a specific idle state of a specific CPU:
+# echo 1 > /sys/devices/system/cpu/cpu$cpu/cpuidle/state$state/disable
+
+
+Output format
+-------------
+
+Here is an example of the debugging output format:
+
+ARM external debug module:
+coresight-cpu-debug 850000.debug: CPU[0]:
+coresight-cpu-debug 850000.debug:  EDPRSR:  00000001 (Power:On DLK:Unlock)
+coresight-cpu-debug 850000.debug:  EDPCSR:  [<ffff00000808e9bc>] handle_IPI+0x174/0x1d8
+coresight-cpu-debug 850000.debug:  EDCIDSR: 00000000
+coresight-cpu-debug 850000.debug:  EDVIDSR: 90000000 (State:Non-secure Mode:EL1/0 Width:64bits VMID:0)
+coresight-cpu-debug 852000.debug: CPU[1]:
+coresight-cpu-debug 852000.debug:  EDPRSR:  00000001 (Power:On DLK:Unlock)
+coresight-cpu-debug 852000.debug:  EDPCSR:  [<ffff0000087fab34>] debug_notifier_call+0x23c/0x358
+coresight-cpu-debug 852000.debug:  EDCIDSR: 00000000
+coresight-cpu-debug 852000.debug:  EDVIDSR: 90000000 (State:Non-secure Mode:EL1/0 Width:64bits VMID:0)

+ 22 - 0
MAINTAINERS

@@ -1207,7 +1207,9 @@ L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/hwtracing/coresight/*
 F:	Documentation/trace/coresight.txt
+F:	Documentation/trace/coresight-cpu-debug.txt
 F:	Documentation/devicetree/bindings/arm/coresight.txt
+F:	Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt
 F:	Documentation/ABI/testing/sysfs-bus-coresight-devices-*
 F:	tools/perf/arch/arm/util/pmu.c
 F:	tools/perf/arch/arm/util/auxtrace.c
@@ -6489,6 +6491,13 @@ F:	Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector
 F:	Documentation/devicetree/bindings/iio/adc/envelope-detector.txt
 F:	drivers/iio/adc/envelope-detector.c

+IIO MULTIPLEXER
+M:	Peter Rosin <peda@axentia.se>
+L:	linux-iio@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/iio/multiplexer/iio-mux.txt
+F:	drivers/iio/multiplexer/iio-mux.c
+
 IIO SUBSYSTEM AND DRIVERS
 M:	Jonathan Cameron <jic23@kernel.org>
 R:	Hartmut Knaack <knaack.h@gmx.de>
@@ -8724,6 +8733,15 @@ S:	Orphan
 F:	drivers/mmc/host/mmc_spi.c
 F:	include/linux/spi/mmc_spi.h

+MULTIPLEXER SUBSYSTEM
+M:	Peter Rosin <peda@axentia.se>
+S:	Maintained
+F:	Documentation/ABI/testing/mux/sysfs-class-mux*
+F:	Documentation/devicetree/bindings/mux/
+F:	include/linux/dt-bindings/mux/
+F:	include/linux/mux/
+F:	drivers/mux/
+
 MULTISOUND SOUND DRIVER
 M:	Andrew Veliath <andrewtv@usa.net>
 S:	Maintained
@@ -11336,6 +11354,9 @@ F:	Documentation/tee.txt

 THUNDERBOLT DRIVER
 M:	Andreas Noever <andreas.noever@gmail.com>
+M:	Michael Jamet <michael.jamet@intel.com>
+M:	Mika Westerberg <mika.westerberg@linux.intel.com>
+M:	Yehezkel Bernat <yehezkel.bernat@intel.com>
 S:	Maintained
 F:	drivers/thunderbolt/

@@ -13789,6 +13810,7 @@ M:	Evgeniy Polyakov <zbr@ioremap.net>
 S:	Maintained
 F:	Documentation/w1/
 F:	drivers/w1/
+F:	include/linux/w1.h

 W83791D HARDWARE MONITORING DRIVER
 M:	Marc Hulsman <m.hulsman@tudelft.nl>

+ 64 - 0
arch/arm64/boot/dts/hisilicon/hi6220.dtsi

@@ -887,5 +887,69 @@
 				};
 			};
 		};
+
+		debug@f6590000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf6590000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu0>;
+		};
+
+		debug@f6592000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf6592000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu1>;
+		};
+
+		debug@f6594000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf6594000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu2>;
+		};
+
+		debug@f6596000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf6596000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu3>;
+		};
+
+		debug@f65d0000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf65d0000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu4>;
+		};
+
+		debug@f65d2000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf65d2000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu5>;
+		};
+
+		debug@f65d4000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf65d4000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu6>;
+		};
+
+		debug@f65d6000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0 0xf65d6000 0 0x1000>;
+			clocks = <&sys_ctrl HI6220_DAPB_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&cpu7>;
+		};
 	};
 };

+ 32 - 0
arch/arm64/boot/dts/qcom/msm8916.dtsi

@@ -1116,6 +1116,38 @@
 			};
 		};

+		debug@850000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0x850000 0x1000>;
+			clocks = <&rpmcc RPM_QDSS_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&CPU0>;
+		};
+
+		debug@852000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0x852000 0x1000>;
+			clocks = <&rpmcc RPM_QDSS_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&CPU1>;
+		};
+
+		debug@854000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0x854000 0x1000>;
+			clocks = <&rpmcc RPM_QDSS_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&CPU2>;
+		};
+
+		debug@856000 {
+			compatible = "arm,coresight-cpu-debug","arm,primecell";
+			reg = <0x856000 0x1000>;
+			clocks = <&rpmcc RPM_QDSS_CLK>;
+			clock-names = "apb_pclk";
+			cpu = <&CPU3>;
+		};
+
 		etm@85c000 {
 			compatible = "arm,coresight-etm4x", "arm,primecell";
 			reg = <0x85c000 0x1000>;

+ 0 - 1
arch/x86/include/asm/mshyperv.h

@@ -136,7 +136,6 @@ static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
 	}
 }

-#define hv_get_current_tick(tick) rdmsrl(HV_X64_MSR_TIME_REF_COUNT, tick)
 #define hv_init_timer(timer, tick) wrmsrl(timer, tick)
 #define hv_init_timer_config(config, val) wrmsrl(config, val)


+ 2 - 0
drivers/Kconfig

@@ -206,4 +206,6 @@ source "drivers/fsi/Kconfig"

 source "drivers/tee/Kconfig"

+source "drivers/mux/Kconfig"
+
 endmenu

+ 1 - 0
drivers/Makefile

@@ -181,3 +181,4 @@ obj-$(CONFIG_NVMEM)		+= nvmem/
 obj-$(CONFIG_FPGA)		+= fpga/
 obj-$(CONFIG_FSI)		+= fsi/
 obj-$(CONFIG_TEE)		+= tee/
+obj-$(CONFIG_MULTIPLEXER)	+= mux/

+ 1 - 4
drivers/auxdisplay/panel.c

@@ -1345,14 +1345,11 @@ static inline void input_state_falling(struct logical_input *input)

 static void panel_process_inputs(void)
 {
-	struct list_head *item;
 	struct logical_input *input;

 	keypressed = 0;
 	inputs_stable = 1;
-	list_for_each(item, &logical_inputs) {
-		input = list_entry(item, struct logical_input, list);
-
+	list_for_each_entry(input, &logical_inputs, list) {
 		switch (input->state) {
 		case INPUT_ST_LOW:
 			if ((phys_curr & input->mask) != input->value)

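The panel.c change above replaces an open-coded list_for_each()/list_entry() pair with list_for_each_entry(), which folds the entry lookup into the iteration itself. A minimal sketch of the same idiom, with a hypothetical node type standing in for the driver's struct logical_input:

#include <linux/list.h>
#include <linux/printk.h>

struct example_input {			/* hypothetical stand-in */
	struct list_head list;
	int state;
};

static LIST_HEAD(example_inputs);

static void example_process_inputs(void)
{
	struct example_input *input;

	/* No separate struct list_head cursor and no list_entry() call:
	 * the macro hands back the containing structure directly. */
	list_for_each_entry(input, &example_inputs, list)
		pr_debug("input state: %d\n", input->state);
}
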
+ 47 - 7
drivers/firmware/google/memconsole-coreboot.c

@@ -26,12 +26,52 @@

 /* CBMEM firmware console log descriptor. */
 struct cbmem_cons {
-	u32 buffer_size;
-	u32 buffer_cursor;
-	u8  buffer_body[0];
+	u32 size_dont_access_after_boot;
+	u32 cursor;
+	u8  body[0];
 } __packed;

+#define CURSOR_MASK ((1 << 28) - 1)
+#define OVERFLOW (1 << 31)
+
 static struct cbmem_cons __iomem *cbmem_console;
+static u32 cbmem_console_size;
+
+/*
+ * The cbmem_console structure is read again on every access because it may
+ * change at any time if runtime firmware logs new messages. This may rarely
+ * lead to race conditions where the firmware overwrites the beginning of the
+ * ring buffer with more lines after we have already read |cursor|. It should be
+ * rare and harmless enough that we don't spend extra effort working around it.
+ */
+static ssize_t memconsole_coreboot_read(char *buf, loff_t pos, size_t count)
+{
+	u32 cursor = cbmem_console->cursor & CURSOR_MASK;
+	u32 flags = cbmem_console->cursor & ~CURSOR_MASK;
+	u32 size = cbmem_console_size;
+	struct seg {	/* describes ring buffer segments in logical order */
+		u32 phys;	/* physical offset from start of mem buffer */
+		u32 len;	/* length of segment */
+	} seg[2] = { {0}, {0} };
+	size_t done = 0;
+	int i;
+
+	if (flags & OVERFLOW) {
+		if (cursor > size)	/* Shouldn't really happen, but... */
+			cursor = 0;
+		seg[0] = (struct seg){.phys = cursor, .len = size - cursor};
+		seg[1] = (struct seg){.phys = 0, .len = cursor};
+	} else {
+		seg[0] = (struct seg){.phys = 0, .len = min(cursor, size)};
+	}
+
+	for (i = 0; i < ARRAY_SIZE(seg) && count > done; i++) {
+		done += memory_read_from_buffer(buf + done, count - done, &pos,
+			cbmem_console->body + seg[i].phys, seg[i].len);
+		pos -= seg[i].len;
+	}
+	return done;
+}

 static int memconsole_coreboot_init(phys_addr_t physaddr)
 {
@@ -42,17 +82,17 @@ static int memconsole_coreboot_init(phys_addr_t physaddr)
 	if (!tmp_cbmc)
 		return -ENOMEM;

+	/* Read size only once to prevent overrun attack through /dev/mem. */
+	cbmem_console_size = tmp_cbmc->size_dont_access_after_boot;
 	cbmem_console = memremap(physaddr,
-				 tmp_cbmc->buffer_size + sizeof(*cbmem_console),
+				 cbmem_console_size + sizeof(*cbmem_console),
 				 MEMREMAP_WB);
 	memunmap(tmp_cbmc);

 	if (!cbmem_console)
 		return -ENOMEM;

-	memconsole_setup(cbmem_console->buffer_body,
-		min(cbmem_console->buffer_cursor, cbmem_console->buffer_size));
-
+	memconsole_setup(memconsole_coreboot_read);
 	return 0;
 }


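The read path above treats the CBMEM console as a ring buffer: when the OVERFLOW flag is set, the oldest data starts at the cursor and wraps around to offset zero. A self-contained user-space sketch of the same segment arithmetic (names are illustrative; only the driver's masks and guard are mirrored):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define CURSOR_MASK   ((1u << 28) - 1)	/* same masks as the driver */
#define OVERFLOW_FLAG (1u << 31)	/* OVERFLOW in the driver */

struct seg { uint32_t phys, len; };	/* ring buffer segment */

/* Split the buffer into logically ordered segments, mirroring
 * memconsole_coreboot_read(): on overflow, [cursor, size) holds the
 * oldest data and [0, cursor) the newest; otherwise only [0, cursor). */
static void split_segments(uint32_t raw_cursor, uint32_t size,
			   struct seg seg[2])
{
	uint32_t cursor = raw_cursor & CURSOR_MASK;

	seg[0] = (struct seg){ 0, 0 };
	seg[1] = (struct seg){ 0, 0 };

	if (raw_cursor & OVERFLOW_FLAG) {
		if (cursor > size)	/* same guard as the driver */
			cursor = 0;
		seg[0] = (struct seg){ cursor, size - cursor };
		seg[1] = (struct seg){ 0, cursor };
	} else {
		seg[0] = (struct seg){ 0, cursor < size ? cursor : size };
	}
}

int main(void)
{
	struct seg s[2];

	split_segments(OVERFLOW_FLAG | 0x100, 0x1000, s);	/* wrapped */
	printf("seg0: %" PRIu32 "+%" PRIu32 ", seg1: %" PRIu32 "+%" PRIu32 "\n",
	       s[0].phys, s[0].len, s[1].phys, s[1].len);
	return 0;
}
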
+ 15 - 3
drivers/firmware/google/memconsole-x86-legacy.c

@@ -48,6 +48,15 @@ struct biosmemcon_ebda {
 	};
 } __packed;

+static char *memconsole_baseaddr;
+static size_t memconsole_length;
+
+static ssize_t memconsole_read(char *buf, loff_t pos, size_t count)
+{
+	return memory_read_from_buffer(buf, count, &pos, memconsole_baseaddr,
+				       memconsole_length);
+}
+
 static void found_v1_header(struct biosmemcon_ebda *hdr)
 {
 	pr_info("memconsole: BIOS console v1 EBDA structure found at %p\n",
@@ -56,7 +65,9 @@ static void found_v1_header(struct biosmemcon_ebda *hdr)
 		hdr->v1.buffer_addr, hdr->v1.start,
 		hdr->v1.end, hdr->v1.num_chars);

-	memconsole_setup(phys_to_virt(hdr->v1.buffer_addr), hdr->v1.num_chars);
+	memconsole_baseaddr = phys_to_virt(hdr->v1.buffer_addr);
+	memconsole_length = hdr->v1.num_chars;
+	memconsole_setup(memconsole_read);
 }

 static void found_v2_header(struct biosmemcon_ebda *hdr)
@@ -67,8 +78,9 @@ static void found_v2_header(struct biosmemcon_ebda *hdr)
 		hdr->v2.buffer_addr, hdr->v2.start,
 		hdr->v2.end, hdr->v2.num_bytes);

-	memconsole_setup(phys_to_virt(hdr->v2.buffer_addr + hdr->v2.start),
-			 hdr->v2.end - hdr->v2.start);
+	memconsole_baseaddr = phys_to_virt(hdr->v2.buffer_addr + hdr->v2.start);
+	memconsole_length = hdr->v2.end - hdr->v2.start;
+	memconsole_setup(memconsole_read);
 }

 /*

+ 6 - 8
drivers/firmware/google/memconsole.c

@@ -22,15 +22,15 @@

 #include "memconsole.h"

-static char *memconsole_baseaddr;
-static size_t memconsole_length;
+static ssize_t (*memconsole_read_func)(char *, loff_t, size_t);

 static ssize_t memconsole_read(struct file *filp, struct kobject *kobp,
 			       struct bin_attribute *bin_attr, char *buf,
 			       loff_t pos, size_t count)
 {
-	return memory_read_from_buffer(buf, count, &pos, memconsole_baseaddr,
-				       memconsole_length);
+	if (WARN_ON_ONCE(!memconsole_read_func))
+		return -EIO;
+	return memconsole_read_func(buf, pos, count);
 }

 static struct bin_attribute memconsole_bin_attr = {
@@ -38,16 +38,14 @@ static struct bin_attribute memconsole_bin_attr = {
 	.read = memconsole_read,
 };

-void memconsole_setup(void *baseaddr, size_t length)
+void memconsole_setup(ssize_t (*read_func)(char *, loff_t, size_t))
 {
-	memconsole_baseaddr = baseaddr;
-	memconsole_length = length;
+	memconsole_read_func = read_func;
 }
 EXPORT_SYMBOL(memconsole_setup);

 int memconsole_sysfs_init(void)
 {
-	memconsole_bin_attr.size = memconsole_length;
 	return sysfs_create_bin_file(firmware_kobj, &memconsole_bin_attr);
 }
 EXPORT_SYMBOL(memconsole_sysfs_init);

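After this change, memconsole_setup() takes a read callback instead of a fixed buffer, so each backend decides how its console memory is read on every access. A minimal sketch of a hypothetical backend registering with the new interface (the buffer and function names below are illustrative, not part of the patch):

#include <linux/init.h>
#include <linux/string.h>

#include "memconsole.h"

static char example_log[] = "firmware log (illustrative)\n";

/* Matches the new callback prototype taken by memconsole_setup(). */
static ssize_t example_console_read(char *buf, loff_t pos, size_t count)
{
	/* Flat buffers can keep using the same helper as the legacy
	 * x86 backend. */
	return memory_read_from_buffer(buf, count, &pos, example_log,
				       sizeof(example_log) - 1);
}

static int __init example_backend_init(void)
{
	memconsole_setup(example_console_read);
	return memconsole_sysfs_init();
}
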
+ 4 - 3
drivers/firmware/google/memconsole.h

@@ -18,13 +18,14 @@
 #ifndef __FIRMWARE_GOOGLE_MEMCONSOLE_H
 #define __FIRMWARE_GOOGLE_MEMCONSOLE_H

+#include <linux/types.h>
+
 /*
  * memconsole_setup
  *
- * Initialize the memory console from raw (virtual) base
- * address and length.
+ * Initialize the memory console, passing the function to handle read accesses.
  */
-void memconsole_setup(void *baseaddr, size_t length);
+void memconsole_setup(ssize_t (*read_func)(char *, loff_t, size_t));

 /*
  * memconsole_sysfs_init

+ 17 - 22
drivers/firmware/google/vpd.c

@@ -118,14 +118,13 @@ static int vpd_section_attrib_add(const u8 *key, s32 key_len,
 	info = kzalloc(sizeof(*info), GFP_KERNEL);
 	if (!info)
 		return -ENOMEM;
-	info->key = kzalloc(key_len + 1, GFP_KERNEL);
+
+	info->key = kstrndup(key, key_len, GFP_KERNEL);
 	if (!info->key) {
 		ret = -ENOMEM;
 		goto free_info;
 	}

-	memcpy(info->key, key, key_len);
-
 	sysfs_bin_attr_init(&info->bin_attr);
 	info->bin_attr.attr.name = info->key;
 	info->bin_attr.attr.mode = 0444;
@@ -191,8 +190,7 @@ static int vpd_section_create_attribs(struct vpd_section *sec)
 static int vpd_section_init(const char *name, struct vpd_section *sec,
 			    phys_addr_t physaddr, size_t size)
 {
-	int ret;
-	int raw_len;
+	int err;

 	sec->baseaddr = memremap(physaddr, size, MEMREMAP_WB);
 	if (!sec->baseaddr)
@@ -201,10 +199,11 @@ static int vpd_section_init(const char *name, struct vpd_section *sec,
 	sec->name = name;

 	/* We want to export the raw partition with name ${name}_raw */
-	raw_len = strlen(name) + 5;
-	sec->raw_name = kzalloc(raw_len, GFP_KERNEL);
-	strncpy(sec->raw_name, name, raw_len);
-	strncat(sec->raw_name, "_raw", raw_len);
+	sec->raw_name = kasprintf(GFP_KERNEL, "%s_raw", name);
+	if (!sec->raw_name) {
+		err = -ENOMEM;
+		goto err_iounmap;
+	}

 	sysfs_bin_attr_init(&sec->bin_attr);
 	sec->bin_attr.attr.name = sec->raw_name;
@@ -213,14 +212,14 @@ static int vpd_section_init(const char *name, struct vpd_section *sec,
 	sec->bin_attr.read = vpd_section_read;
 	sec->bin_attr.private = sec;

-	ret = sysfs_create_bin_file(vpd_kobj, &sec->bin_attr);
-	if (ret)
-		goto free_sec;
+	err = sysfs_create_bin_file(vpd_kobj, &sec->bin_attr);
+	if (err)
+		goto err_free_raw_name;

 	sec->kobj = kobject_create_and_add(name, vpd_kobj);
 	if (!sec->kobj) {
-		ret = -EINVAL;
-		goto sysfs_remove;
+		err = -EINVAL;
+		goto err_sysfs_remove;
 	}

 	INIT_LIST_HEAD(&sec->attribs);
@@ -230,14 +229,13 @@ static int vpd_section_init(const char *name, struct vpd_section *sec,

 	return 0;

-sysfs_remove:
+err_sysfs_remove:
 	sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
-
-free_sec:
+err_free_raw_name:
 	kfree(sec->raw_name);
+err_iounmap:
 	iounmap(sec->baseaddr);
-
-	return ret;
+	return err;
 }

 static int vpd_section_destroy(struct vpd_section *sec)
@@ -319,9 +317,6 @@ static int __init vpd_platform_init(void)
 	if (!vpd_kobj)
 		return -ENOMEM;

-	memset(&ro_vpd, 0, sizeof(ro_vpd));
-	memset(&rw_vpd, 0, sizeof(rw_vpd));
-
 	platform_driver_register(&vpd_driver);

 	return 0;

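The vpd_section_init() rework above also renames the error labels so each one names the cleanup it starts, unwound in reverse order of acquisition. A condensed, self-contained sketch of that shape (resource names are illustrative):

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static int example_section_init(const char *name)
{
	char *raw_name;
	void *attr;
	int err;

	raw_name = kasprintf(GFP_KERNEL, "%s_raw", name);
	if (!raw_name)
		return -ENOMEM;

	attr = kzalloc(64, GFP_KERNEL);	/* stands in for the sysfs file */
	if (!attr) {
		err = -ENOMEM;
		goto err_free_raw_name;	/* label names the cleanup step */
	}

	return 0;

err_free_raw_name:
	kfree(raw_name);
	return err;
}
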
+ 26 - 0
drivers/fsi/Kconfig

@@ -6,7 +6,33 @@ menu "FSI support"

 config FSI
 	tristate "FSI support"
+	select CRC4
 	---help---
 	  FSI - the FRU Support Interface - is a simple bus for low-level
 	  access to POWER-based hardware.
+
+if FSI
+
+config FSI_MASTER_GPIO
+	tristate "GPIO-based FSI master"
+	depends on GPIOLIB
+	select CRC4
+	---help---
+	This option enables an FSI master driver using GPIO lines.
+
+config FSI_MASTER_HUB
+	tristate "FSI hub master"
+	---help---
+	This option enables an FSI hub master driver.  A hub is a type of FSI
+	master that is connected to the upstream master via a slave.  Hubs
+	allow chaining of FSI links to an arbitrary depth.  This allows for
+	a high target device fanout.
+
+config FSI_SCOM
+	tristate "SCOM FSI client device driver"
+	---help---
+	This option enables an FSI based SCOM device driver.
+
+endif
+
 endmenu

+ 3 - 0
drivers/fsi/Makefile

@@ -1,2 +1,5 @@

 obj-$(CONFIG_FSI) += fsi-core.o
+obj-$(CONFIG_FSI_MASTER_HUB) += fsi-master-hub.o
+obj-$(CONFIG_FSI_MASTER_GPIO) += fsi-master-gpio.o
+obj-$(CONFIG_FSI_SCOM) += fsi-scom.o

+ 841 - 0
drivers/fsi/fsi-core.c

@@ -13,9 +13,830 @@
  * GNU General Public License for more details.
  */

+#include <linux/crc4.h>
 #include <linux/device.h>
 #include <linux/fsi.h>
+#include <linux/idr.h>
 #include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/bitops.h>
+
+#include "fsi-master.h"
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/fsi.h>
+
+#define FSI_SLAVE_CONF_NEXT_MASK	GENMASK(31, 31)
+#define FSI_SLAVE_CONF_SLOTS_MASK	GENMASK(23, 16)
+#define FSI_SLAVE_CONF_SLOTS_SHIFT	16
+#define FSI_SLAVE_CONF_VERSION_MASK	GENMASK(15, 12)
+#define FSI_SLAVE_CONF_VERSION_SHIFT	12
+#define FSI_SLAVE_CONF_TYPE_MASK	GENMASK(11, 4)
+#define FSI_SLAVE_CONF_TYPE_SHIFT	4
+#define FSI_SLAVE_CONF_CRC_SHIFT	4
+#define FSI_SLAVE_CONF_CRC_MASK		GENMASK(3, 0)
+#define FSI_SLAVE_CONF_DATA_BITS	28
+
+#define FSI_PEEK_BASE			0x410
+
+static const int engine_page_size = 0x400;
+
+#define FSI_SLAVE_BASE			0x800
+
+/*
+ * FSI slave engine control register offsets
+ */
+#define FSI_SMODE		0x0	/* R/W: Mode register */
+#define FSI_SISC		0x8	/* R/W: Interrupt condition */
+#define FSI_SSTAT		0x14	/* R  : Slave status */
+#define FSI_LLMODE		0x100	/* R/W: Link layer mode register */
+
+/*
+ * SMODE fields
+ */
+#define FSI_SMODE_WSC		0x80000000	/* Warm start done */
+#define FSI_SMODE_ECRC		0x20000000	/* Hw CRC check */
+#define FSI_SMODE_SID_SHIFT	24		/* ID shift */
+#define FSI_SMODE_SID_MASK	3		/* ID Mask */
+#define FSI_SMODE_ED_SHIFT	20		/* Echo delay shift */
+#define FSI_SMODE_ED_MASK	0xf		/* Echo delay mask */
+#define FSI_SMODE_SD_SHIFT	16		/* Send delay shift */
+#define FSI_SMODE_SD_MASK	0xf		/* Send delay mask */
+#define FSI_SMODE_LBCRR_SHIFT	8		/* Clk ratio shift */
+#define FSI_SMODE_LBCRR_MASK	0xf		/* Clk ratio mask */
+
+/*
+ * LLMODE fields
+ */
+#define FSI_LLMODE_ASYNC	0x1
+
+#define FSI_SLAVE_SIZE_23b		0x800000
+
+static DEFINE_IDA(master_ida);
+
+struct fsi_slave {
+	struct device		dev;
+	struct fsi_master	*master;
+	int			id;
+	int			link;
+	uint32_t		size;	/* size of slave address space */
+};
+
+#define to_fsi_master(d) container_of(d, struct fsi_master, dev)
+#define to_fsi_slave(d) container_of(d, struct fsi_slave, dev)
+
+static const int slave_retries = 2;
+static int discard_errors;
+
+static int fsi_master_read(struct fsi_master *master, int link,
+		uint8_t slave_id, uint32_t addr, void *val, size_t size);
+static int fsi_master_write(struct fsi_master *master, int link,
+		uint8_t slave_id, uint32_t addr, const void *val, size_t size);
+static int fsi_master_break(struct fsi_master *master, int link);
+
+/*
+ * fsi_device_read() / fsi_device_write() / fsi_device_peek()
+ *
+ * FSI endpoint-device support
+ *
+ * Read / write / peek accessors for a client
+ *
+ * Parameters:
+ * dev:  Structure passed to FSI client device drivers on probe().
+ * addr: FSI address of given device.  Client should pass in its base address
+ *       plus desired offset to access its register space.
+ * val:  For read/peek this is the value read at the specified address. For
+ *       write this is value to write to the specified address.
+ *       The data in val must be FSI bus endian (big endian).
+ * size: Size in bytes of the operation.  Sizes supported are 1, 2 and 4 bytes.
+ *       Addresses must be aligned on size boundaries or an error will result.
+ */
+int fsi_device_read(struct fsi_device *dev, uint32_t addr, void *val,
+		size_t size)
+{
+	if (addr > dev->size || size > dev->size || addr > dev->size - size)
+		return -EINVAL;
+
+	return fsi_slave_read(dev->slave, dev->addr + addr, val, size);
+}
+EXPORT_SYMBOL_GPL(fsi_device_read);
+
+int fsi_device_write(struct fsi_device *dev, uint32_t addr, const void *val,
+		size_t size)
+{
+	if (addr > dev->size || size > dev->size || addr > dev->size - size)
+		return -EINVAL;
+
+	return fsi_slave_write(dev->slave, dev->addr + addr, val, size);
+}
+EXPORT_SYMBOL_GPL(fsi_device_write);
+
+int fsi_device_peek(struct fsi_device *dev, void *val)
+{
+	uint32_t addr = FSI_PEEK_BASE + ((dev->unit - 2) * sizeof(uint32_t));
+
+	return fsi_slave_read(dev->slave, addr, val, sizeof(uint32_t));
+}
+
+static void fsi_device_release(struct device *_device)
+{
+	struct fsi_device *device = to_fsi_dev(_device);
+
+	kfree(device);
+}
+
+static struct fsi_device *fsi_create_device(struct fsi_slave *slave)
+{
+	struct fsi_device *dev;
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return NULL;
+
+	dev->dev.parent = &slave->dev;
+	dev->dev.bus = &fsi_bus_type;
+	dev->dev.release = fsi_device_release;
+
+	return dev;
+}
+
+/* FSI slave support */
+static int fsi_slave_calc_addr(struct fsi_slave *slave, uint32_t *addrp,
+		uint8_t *idp)
+{
+	uint32_t addr = *addrp;
+	uint8_t id = *idp;
+
+	if (addr > slave->size)
+		return -EINVAL;
+
+	/* For 23 bit addressing, we encode the extra two bits in the slave
+	 * id (and the slave's actual ID needs to be 0).
+	 */
+	if (addr > 0x1fffff) {
+		if (slave->id != 0)
+			return -EINVAL;
+		id = (addr >> 21) & 0x3;
+		addr &= 0x1fffff;
+	}
+
+	*addrp = addr;
+	*idp = id;
+	return 0;
+}
+
+int fsi_slave_report_and_clear_errors(struct fsi_slave *slave)
+{
+	struct fsi_master *master = slave->master;
+	uint32_t irq, stat;
+	int rc, link;
+	uint8_t id;
+
+	link = slave->link;
+	id = slave->id;
+
+	rc = fsi_master_read(master, link, id, FSI_SLAVE_BASE + FSI_SISC,
+			&irq, sizeof(irq));
+	if (rc)
+		return rc;
+
+	rc =  fsi_master_read(master, link, id, FSI_SLAVE_BASE + FSI_SSTAT,
+			&stat, sizeof(stat));
+	if (rc)
+		return rc;
+
+	dev_info(&slave->dev, "status: 0x%08x, sisc: 0x%08x\n",
+			be32_to_cpu(stat), be32_to_cpu(irq));
+
+	/* clear interrupts */
+	return fsi_master_write(master, link, id, FSI_SLAVE_BASE + FSI_SISC,
+			&irq, sizeof(irq));
+}
+
+static int fsi_slave_set_smode(struct fsi_master *master, int link, int id);
+
+int fsi_slave_handle_error(struct fsi_slave *slave, bool write, uint32_t addr,
+		size_t size)
+{
+	struct fsi_master *master = slave->master;
+	int rc, link;
+	uint32_t reg;
+	uint8_t id;
+
+	if (discard_errors)
+		return -1;
+
+	link = slave->link;
+	id = slave->id;
+
+	dev_dbg(&slave->dev, "handling error on %s to 0x%08x[%zd]",
+			write ? "write" : "read", addr, size);
+
+	/* try a simple clear of error conditions, which may fail if we've lost
+	 * communication with the slave
+	 */
+	rc = fsi_slave_report_and_clear_errors(slave);
+	if (!rc)
+		return 0;
+
+	/* send a TERM and retry */
+	if (master->term) {
+		rc = master->term(master, link, id);
+		if (!rc) {
+			rc = fsi_master_read(master, link, id, 0,
+					&reg, sizeof(reg));
+			if (!rc)
+				rc = fsi_slave_report_and_clear_errors(slave);
+			if (!rc)
+				return 0;
+		}
+	}
+
+	/* getting serious, reset the slave via BREAK */
+	rc = fsi_master_break(master, link);
+	if (rc)
+		return rc;
+
+	rc = fsi_slave_set_smode(master, link, id);
+	if (rc)
+		return rc;
+
+	return fsi_slave_report_and_clear_errors(slave);
+}
+
+int fsi_slave_read(struct fsi_slave *slave, uint32_t addr,
+			void *val, size_t size)
+{
+	uint8_t id = slave->id;
+	int rc, err_rc, i;
+
+	rc = fsi_slave_calc_addr(slave, &addr, &id);
+	if (rc)
+		return rc;
+
+	for (i = 0; i < slave_retries; i++) {
+		rc = fsi_master_read(slave->master, slave->link,
+				id, addr, val, size);
+		if (!rc)
+			break;
+
+		err_rc = fsi_slave_handle_error(slave, false, addr, size);
+		if (err_rc)
+			break;
+	}
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(fsi_slave_read);
+
+int fsi_slave_write(struct fsi_slave *slave, uint32_t addr,
+			const void *val, size_t size)
+{
+	uint8_t id = slave->id;
+	int rc, err_rc, i;
+
+	rc = fsi_slave_calc_addr(slave, &addr, &id);
+	if (rc)
+		return rc;
+
+	for (i = 0; i < slave_retries; i++) {
+		rc = fsi_master_write(slave->master, slave->link,
+				id, addr, val, size);
+		if (!rc)
+			break;
+
+		err_rc = fsi_slave_handle_error(slave, true, addr, size);
+		if (err_rc)
+			break;
+	}
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(fsi_slave_write);
+
+extern int fsi_slave_claim_range(struct fsi_slave *slave,
+		uint32_t addr, uint32_t size)
+{
+	if (addr + size < addr)
+		return -EINVAL;
+
+	if (addr + size > slave->size)
+		return -EINVAL;
+
+	/* todo: check for overlapping claims */
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fsi_slave_claim_range);
+
+extern void fsi_slave_release_range(struct fsi_slave *slave,
+		uint32_t addr, uint32_t size)
+{
+}
+EXPORT_SYMBOL_GPL(fsi_slave_release_range);
+
+static int fsi_slave_scan(struct fsi_slave *slave)
+{
+	uint32_t engine_addr;
+	uint32_t conf;
+	int rc, i;
+
+	/*
+	 * scan engines
+	 *
+	 * We keep the peek mode and slave engines for the core; so start
+	 * at the third slot in the configuration table. We also need to
+	 * skip the chip ID entry at the start of the address space.
+	 */
+	engine_addr = engine_page_size * 3;
+	for (i = 2; i < engine_page_size / sizeof(uint32_t); i++) {
+		uint8_t slots, version, type, crc;
+		struct fsi_device *dev;
+
+		rc = fsi_slave_read(slave, (i + 1) * sizeof(conf),
+				&conf, sizeof(conf));
+		if (rc) {
+			dev_warn(&slave->dev,
+				"error reading slave registers\n");
+			return -1;
+		}
+		conf = be32_to_cpu(conf);
+
+		crc = crc4(0, conf, 32);
+		if (crc) {
+			dev_warn(&slave->dev,
+				"crc error in slave register at 0x%04x\n",
+				i);
+			return -1;
+		}
+
+		slots = (conf & FSI_SLAVE_CONF_SLOTS_MASK)
+			>> FSI_SLAVE_CONF_SLOTS_SHIFT;
+		version = (conf & FSI_SLAVE_CONF_VERSION_MASK)
+			>> FSI_SLAVE_CONF_VERSION_SHIFT;
+		type = (conf & FSI_SLAVE_CONF_TYPE_MASK)
+			>> FSI_SLAVE_CONF_TYPE_SHIFT;
+
+		/*
+		 * Unused address areas are marked by a zero type value; this
+		 * skips the defined address areas
+		 */
+		if (type != 0 && slots != 0) {
+
+			/* create device */
+			dev = fsi_create_device(slave);
+			if (!dev)
+				return -ENOMEM;
+
+			dev->slave = slave;
+			dev->engine_type = type;
+			dev->version = version;
+			dev->unit = i;
+			dev->addr = engine_addr;
+			dev->size = slots * engine_page_size;
+
+			dev_dbg(&slave->dev,
+			"engine[%i]: type %x, version %x, addr %x size %x\n",
+					dev->unit, dev->engine_type, version,
+					dev->addr, dev->size);
+
+			dev_set_name(&dev->dev, "%02x:%02x:%02x:%02x",
+					slave->master->idx, slave->link,
+					slave->id, i - 2);
+
+			rc = device_register(&dev->dev);
+			if (rc) {
+				dev_warn(&slave->dev, "add failed: %d\n", rc);
+				put_device(&dev->dev);
+			}
+		}
+
+		engine_addr += slots * engine_page_size;
+
+		if (!(conf & FSI_SLAVE_CONF_NEXT_MASK))
+			break;
+	}
+
+	return 0;
+}
+
+static ssize_t fsi_slave_sysfs_raw_read(struct file *file,
+		struct kobject *kobj, struct bin_attribute *attr, char *buf,
+		loff_t off, size_t count)
+{
+	struct fsi_slave *slave = to_fsi_slave(kobj_to_dev(kobj));
+	size_t total_len, read_len;
+	int rc;
+
+	if (off < 0)
+		return -EINVAL;
+
+	if (off > 0xffffffff || count > 0xffffffff || off + count > 0xffffffff)
+		return -EINVAL;
+
+	for (total_len = 0; total_len < count; total_len += read_len) {
+		read_len = min_t(size_t, count, 4);
+		read_len -= off & 0x3;
+
+		rc = fsi_slave_read(slave, off, buf + total_len, read_len);
+		if (rc)
+			return rc;
+
+		off += read_len;
+	}
+
+	return count;
+}
+
+static ssize_t fsi_slave_sysfs_raw_write(struct file *file,
+		struct kobject *kobj, struct bin_attribute *attr,
+		char *buf, loff_t off, size_t count)
+{
+	struct fsi_slave *slave = to_fsi_slave(kobj_to_dev(kobj));
+	size_t total_len, write_len;
+	int rc;
+
+	if (off < 0)
+		return -EINVAL;
+
+	if (off > 0xffffffff || count > 0xffffffff || off + count > 0xffffffff)
+		return -EINVAL;
+
+	for (total_len = 0; total_len < count; total_len += write_len) {
+		write_len = min_t(size_t, count, 4);
+		write_len -= off & 0x3;
+
+		rc = fsi_slave_write(slave, off, buf + total_len, write_len);
+		if (rc)
+			return rc;
+
+		off += write_len;
+	}
+
+	return count;
+}
+
+static struct bin_attribute fsi_slave_raw_attr = {
+	.attr = {
+		.name = "raw",
+		.mode = 0600,
+	},
+	.size = 0,
+	.read = fsi_slave_sysfs_raw_read,
+	.write = fsi_slave_sysfs_raw_write,
+};
+
+static ssize_t fsi_slave_sysfs_term_write(struct file *file,
+		struct kobject *kobj, struct bin_attribute *attr,
+		char *buf, loff_t off, size_t count)
+{
+	struct fsi_slave *slave = to_fsi_slave(kobj_to_dev(kobj));
+	struct fsi_master *master = slave->master;
+
+	if (!master->term)
+		return -ENODEV;
+
+	master->term(master, slave->link, slave->id);
+	return count;
+}
+
+static struct bin_attribute fsi_slave_term_attr = {
+	.attr = {
+		.name = "term",
+		.mode = 0200,
+	},
+	.size = 0,
+	.write = fsi_slave_sysfs_term_write,
+};
+
+/* Encode slave local bus echo delay */
+static inline uint32_t fsi_smode_echodly(int x)
+{
+	return (x & FSI_SMODE_ED_MASK) << FSI_SMODE_ED_SHIFT;
+}
+
+/* Encode slave local bus send delay */
+static inline uint32_t fsi_smode_senddly(int x)
+{
+	return (x & FSI_SMODE_SD_MASK) << FSI_SMODE_SD_SHIFT;
+}
+
+/* Encode slave local bus clock rate ratio */
+static inline uint32_t fsi_smode_lbcrr(int x)
+{
+	return (x & FSI_SMODE_LBCRR_MASK) << FSI_SMODE_LBCRR_SHIFT;
+}
+
+/* Encode slave ID */
+static inline uint32_t fsi_smode_sid(int x)
+{
+	return (x & FSI_SMODE_SID_MASK) << FSI_SMODE_SID_SHIFT;
+}
+
+static const uint32_t fsi_slave_smode(int id)
+{
+	return FSI_SMODE_WSC | FSI_SMODE_ECRC
+		| fsi_smode_sid(id)
+		| fsi_smode_echodly(0xf) | fsi_smode_senddly(0xf)
+		| fsi_smode_lbcrr(0x8);
+}
+
+static int fsi_slave_set_smode(struct fsi_master *master, int link, int id)
+{
+	uint32_t smode;
+
+	/* set our smode register with the slave ID field to 0; this enables
+	 * extended slave addressing
+	 */
+	smode = fsi_slave_smode(id);
+	smode = cpu_to_be32(smode);
+
+	return fsi_master_write(master, link, id, FSI_SLAVE_BASE + FSI_SMODE,
+			&smode, sizeof(smode));
+}
+
+static void fsi_slave_release(struct device *dev)
+{
+	struct fsi_slave *slave = to_fsi_slave(dev);
+
+	kfree(slave);
+}
+
+static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
+{
+	uint32_t chip_id, llmode;
+	struct fsi_slave *slave;
+	uint8_t crc;
+	int rc;
+
+	/* Currently, we only support single slaves on a link, and use the
+	 * full 23-bit address range
+	 */
+	if (id != 0)
+		return -EINVAL;
+
+	rc = fsi_master_read(master, link, id, 0, &chip_id, sizeof(chip_id));
+	if (rc) {
+		dev_dbg(&master->dev, "can't read slave %02x:%02x %d\n",
+				link, id, rc);
+		return -ENODEV;
+	}
+	chip_id = be32_to_cpu(chip_id);
+
+	crc = crc4(0, chip_id, 32);
+	if (crc) {
+		dev_warn(&master->dev, "slave %02x:%02x invalid chip id CRC!\n",
+				link, id);
+		return -EIO;
+	}
+
+	dev_info(&master->dev, "fsi: found chip %08x at %02x:%02x:%02x\n",
+			chip_id, master->idx, link, id);
+
+	rc = fsi_slave_set_smode(master, link, id);
+	if (rc) {
+		dev_warn(&master->dev,
+				"can't set smode on slave:%02x:%02x %d\n",
+				link, id, rc);
+		return -ENODEV;
+	}
+
+	/* If we're behind a master that doesn't provide a self-running bus
+	 * clock, put the slave into async mode
+	 */
+	if (master->flags & FSI_MASTER_FLAG_SWCLOCK) {
+		llmode = cpu_to_be32(FSI_LLMODE_ASYNC);
+		rc = fsi_master_write(master, link, id,
+				FSI_SLAVE_BASE + FSI_LLMODE,
+				&llmode, sizeof(llmode));
+		if (rc)
+			dev_warn(&master->dev,
+				"can't set llmode on slave:%02x:%02x %d\n",
+				link, id, rc);
+	}
+
+	/* We can communicate with a slave; create the slave device and
+	 * register.
+	 */
+	slave = kzalloc(sizeof(*slave), GFP_KERNEL);
+	if (!slave)
+		return -ENOMEM;
+
+	slave->master = master;
+	slave->dev.parent = &master->dev;
+	slave->dev.release = fsi_slave_release;
+	slave->link = link;
+	slave->id = id;
+	slave->size = FSI_SLAVE_SIZE_23b;
+
+	dev_set_name(&slave->dev, "slave@%02x:%02x", link, id);
+	rc = device_register(&slave->dev);
+	if (rc < 0) {
+		dev_warn(&master->dev, "failed to create slave device: %d\n",
+				rc);
+		put_device(&slave->dev);
+		return rc;
+	}
+
+	rc = device_create_bin_file(&slave->dev, &fsi_slave_raw_attr);
+	if (rc)
+		dev_warn(&slave->dev, "failed to create raw attr: %d\n", rc);
+
+	rc = device_create_bin_file(&slave->dev, &fsi_slave_term_attr);
+	if (rc)
+		dev_warn(&slave->dev, "failed to create term attr: %d\n", rc);
+
+	rc = fsi_slave_scan(slave);
+	if (rc)
+		dev_dbg(&master->dev, "failed during slave scan with: %d\n",
+				rc);
+
+	return rc;
+}
+
+/* FSI master support */
+static int fsi_check_access(uint32_t addr, size_t size)
+{
+	if (size != 1 && size != 2 && size != 4)
+		return -EINVAL;
+
+	if ((addr & 0x3) != (size & 0x3))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int fsi_master_read(struct fsi_master *master, int link,
+		uint8_t slave_id, uint32_t addr, void *val, size_t size)
+{
+	int rc;
+
+	trace_fsi_master_read(master, link, slave_id, addr, size);
+
+	rc = fsi_check_access(addr, size);
+	if (!rc)
+		rc = master->read(master, link, slave_id, addr, val, size);
+
+	trace_fsi_master_rw_result(master, link, slave_id, addr, size,
+			false, val, rc);
+
+	return rc;
+}
+
+static int fsi_master_write(struct fsi_master *master, int link,
+		uint8_t slave_id, uint32_t addr, const void *val, size_t size)
+{
+	int rc;
+
+	trace_fsi_master_write(master, link, slave_id, addr, size, val);
+
+	rc = fsi_check_access(addr, size);
+	if (!rc)
+		rc = master->write(master, link, slave_id, addr, val, size);
+
+	trace_fsi_master_rw_result(master, link, slave_id, addr, size,
+			true, val, rc);
+
+	return rc;
+}
+
+static int fsi_master_link_enable(struct fsi_master *master, int link)
+{
+	if (master->link_enable)
+		return master->link_enable(master, link);
+
+	return 0;
+}
+
+/*
+ * Issue a break command on this link
+ */
+static int fsi_master_break(struct fsi_master *master, int link)
+{
+	trace_fsi_master_break(master, link);
+
+	if (master->send_break)
+		return master->send_break(master, link);
+
+	return 0;
+}
+
+static int fsi_master_scan(struct fsi_master *master)
+{
+	int link, rc;
+
+	for (link = 0; link < master->n_links; link++) {
+		rc = fsi_master_link_enable(master, link);
+		if (rc) {
+			dev_dbg(&master->dev,
+				"enable link %d failed: %d\n", link, rc);
+			continue;
+		}
+		rc = fsi_master_break(master, link);
+		if (rc) {
+			dev_dbg(&master->dev,
+				"break to link %d failed: %d\n", link, rc);
+			continue;
+		}
+
+		fsi_slave_init(master, link, 0);
+	}
+
+	return 0;
+}
+
+static int fsi_slave_remove_device(struct device *dev, void *arg)
+{
+	device_unregister(dev);
+	return 0;
+}
+
+static int fsi_master_remove_slave(struct device *dev, void *arg)
+{
+	device_for_each_child(dev, NULL, fsi_slave_remove_device);
+	device_unregister(dev);
+	return 0;
+}
+
+static void fsi_master_unscan(struct fsi_master *master)
+{
+	device_for_each_child(&master->dev, NULL, fsi_master_remove_slave);
+}
+
+static ssize_t master_rescan_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct fsi_master *master = to_fsi_master(dev);
+	int rc;
+
+	fsi_master_unscan(master);
+	rc = fsi_master_scan(master);
+	if (rc < 0)
+		return rc;
+
+	return count;
+}
+
+static DEVICE_ATTR(rescan, 0200, NULL, master_rescan_store);
+
+static ssize_t master_break_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct fsi_master *master = to_fsi_master(dev);
+
+	fsi_master_break(master, 0);
+
+	return count;
+}
+
+static DEVICE_ATTR(break, 0200, NULL, master_break_store);
+
+int fsi_master_register(struct fsi_master *master)
+{
+	int rc;
+
+	if (!master)
+		return -EINVAL;
+
+	master->idx = ida_simple_get(&master_ida, 0, INT_MAX, GFP_KERNEL);
+	dev_set_name(&master->dev, "fsi%d", master->idx);
+
+	rc = device_register(&master->dev);
+	if (rc) {
+		ida_simple_remove(&master_ida, master->idx);
+		return rc;
+	}
+
+	rc = device_create_file(&master->dev, &dev_attr_rescan);
+	if (rc) {
+		device_unregister(&master->dev);
+		ida_simple_remove(&master_ida, master->idx);
+		return rc;
+	}
+
+	rc = device_create_file(&master->dev, &dev_attr_break);
+	if (rc) {
+		device_unregister(&master->dev);
+		ida_simple_remove(&master_ida, master->idx);
+		return rc;
+	}
+
+	fsi_master_scan(master);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fsi_master_register);
+
+void fsi_master_unregister(struct fsi_master *master)
+{
+	if (master->idx >= 0) {
+		ida_simple_remove(&master_ida, master->idx);
+		master->idx = -1;
+	}
+
+	fsi_master_unscan(master);
+	device_unregister(&master->dev);
+}
+EXPORT_SYMBOL_GPL(fsi_master_unregister);
 

 /* FSI core & Linux bus type definitions */

 	return 0;
 	return 0;
 }

+{
+	if (!fsi_drv)
+		return -EINVAL;
+	if (!fsi_drv->id_table)
+		return -EINVAL;
+
+	return driver_register(&fsi_drv->drv);
+}
+EXPORT_SYMBOL_GPL(fsi_driver_register);
+
+void fsi_driver_unregister(struct fsi_driver *fsi_drv)
+{
+	driver_unregister(&fsi_drv->drv);
+}
+EXPORT_SYMBOL_GPL(fsi_driver_unregister);
+
 struct bus_type fsi_bus_type = {
 	.name		= "fsi",
 	.match		= fsi_bus_match,
@@ -57,3 +895,6 @@ static void fsi_exit(void)

 module_init(fsi_init);
 module_exit(fsi_exit);
+module_param(discard_errors, int, 0664);
+MODULE_LICENSE("GPL");
+MODULE_PARM_DESC(discard_errors, "Don't invoke error handling on bus accesses");

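Per the fsi_device_read() documentation above, data crosses the bus in FSI (big) endian, so client drivers convert after a successful read. A hedged sketch of a client-side register read (the register offset and names are illustrative, not from the patch):

#include <linux/fsi.h>
#include <linux/kernel.h>

#define EXAMPLE_REG_STATUS	0x10	/* illustrative register offset */

/* Read one naturally-aligned 32-bit register from the device's
 * address space and convert from bus endian to CPU endian. */
static int example_read_status(struct fsi_device *dev, u32 *out)
{
	__be32 raw;
	int rc;

	rc = fsi_device_read(dev, EXAMPLE_REG_STATUS, &raw, sizeof(raw));
	if (rc)
		return rc;

	*out = be32_to_cpu(raw);
	return 0;
}
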
+ 604 - 0
drivers/fsi/fsi-master-gpio.c

@@ -0,0 +1,604 @@
+/*
+ * An FSI master controller, using a simple GPIO bit-banging interface
+ */
+
+#include <linux/crc4.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/fsi.h>
+#include <linux/gpio/consumer.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "fsi-master.h"
+
+#define	FSI_GPIO_STD_DLY	1	/* Standard pin delay in nS */
+#define	FSI_ECHO_DELAY_CLOCKS	16	/* Number clocks for echo delay */
+#define	FSI_PRE_BREAK_CLOCKS	50	/* Number clocks to prep for break */
+#define	FSI_BREAK_CLOCKS	256	/* Number of clocks to issue break */
+#define	FSI_POST_BREAK_CLOCKS	16000	/* Number clocks to set up cfam */
+#define	FSI_INIT_CLOCKS		5000	/* Clock out any old data */
+#define	FSI_GPIO_STD_DELAY	10	/* Standard GPIO delay in nS */
+					/* todo: adjust down as low as */
+					/* possible or eliminate */
+#define	FSI_GPIO_CMD_DPOLL      0x2
+#define	FSI_GPIO_CMD_TERM	0x3f
+#define FSI_GPIO_CMD_ABS_AR	0x4
+
+#define	FSI_GPIO_DPOLL_CLOCKS	100      /* < 21 will cause slave to hang */
+
+/* Bus errors */
+#define	FSI_GPIO_ERR_BUSY	1	/* Slave stuck in busy state */
+#define	FSI_GPIO_RESP_ERRA	2	/* Any (misc) Error */
+#define	FSI_GPIO_RESP_ERRC	3	/* Slave reports master CRC error */
+#define	FSI_GPIO_MTOE		4	/* Master time out error */
+#define	FSI_GPIO_CRC_INVAL	5	/* Master reports slave CRC error */
+
+/* Normal slave responses */
+#define	FSI_GPIO_RESP_BUSY	1
+#define	FSI_GPIO_RESP_ACK	0
+#define	FSI_GPIO_RESP_ACKD	4
+
+#define	FSI_GPIO_MAX_BUSY	100
+#define	FSI_GPIO_MTOE_COUNT	1000
+#define	FSI_GPIO_DRAIN_BITS	20
+#define	FSI_GPIO_CRC_SIZE	4
+#define	FSI_GPIO_MSG_ID_SIZE		2
+#define	FSI_GPIO_MSG_RESPID_SIZE	2
+#define	FSI_GPIO_PRIME_SLAVE_CLOCKS	100
+
+struct fsi_master_gpio {
+	struct fsi_master	master;
+	struct device		*dev;
+	spinlock_t		cmd_lock;	/* Lock for commands */
+	struct gpio_desc	*gpio_clk;
+	struct gpio_desc	*gpio_data;
+	struct gpio_desc	*gpio_trans;	/* Voltage translator */
+	struct gpio_desc	*gpio_enable;	/* FSI enable */
+	struct gpio_desc	*gpio_mux;	/* Mux control */
+};
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/fsi_master_gpio.h>
+
+#define to_fsi_master_gpio(m) container_of(m, struct fsi_master_gpio, master)
+
+struct fsi_gpio_msg {
+	uint64_t	msg;
+	uint8_t		bits;
+};
+
+static void clock_toggle(struct fsi_master_gpio *master, int count)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		ndelay(FSI_GPIO_STD_DLY);
+		gpiod_set_value(master->gpio_clk, 0);
+		ndelay(FSI_GPIO_STD_DLY);
+		gpiod_set_value(master->gpio_clk, 1);
+	}
+}
+
+static int sda_in(struct fsi_master_gpio *master)
+{
+	int in;
+
+	ndelay(FSI_GPIO_STD_DLY);
+	in = gpiod_get_value(master->gpio_data);
+	return in ? 1 : 0;
+}
+
+static void sda_out(struct fsi_master_gpio *master, int value)
+{
+	gpiod_set_value(master->gpio_data, value);
+}
+
+static void set_sda_input(struct fsi_master_gpio *master)
+{
+	gpiod_direction_input(master->gpio_data);
+	gpiod_set_value(master->gpio_trans, 0);
+}
+
+static void set_sda_output(struct fsi_master_gpio *master, int value)
+{
+	gpiod_set_value(master->gpio_trans, 1);
+	gpiod_direction_output(master->gpio_data, value);
+}
+
+static void clock_zeros(struct fsi_master_gpio *master, int count)
+{
+	set_sda_output(master, 1);
+	clock_toggle(master, count);
+}
+
+static void serial_in(struct fsi_master_gpio *master, struct fsi_gpio_msg *msg,
+			uint8_t num_bits)
+{
+	uint8_t bit, in_bit;
+
+	set_sda_input(master);
+
+	for (bit = 0; bit < num_bits; bit++) {
+		clock_toggle(master, 1);
+		in_bit = sda_in(master);
+		msg->msg <<= 1;
+		msg->msg |= ~in_bit & 0x1;	/* Data is active low */
+	}
+	msg->bits += num_bits;
+
+	trace_fsi_master_gpio_in(master, num_bits, msg->msg);
+}
+
+static void serial_out(struct fsi_master_gpio *master,
+			const struct fsi_gpio_msg *cmd)
+{
+	uint8_t bit;
+	uint64_t msg = ~cmd->msg;	/* Data is active low */
+	uint64_t sda_mask = 0x1ULL << (cmd->bits - 1);
+	uint64_t last_bit = ~0;
+	int next_bit;
+
+	trace_fsi_master_gpio_out(master, cmd->bits, cmd->msg);
+
+	if (!cmd->bits) {
+		dev_warn(master->dev, "trying to output 0 bits\n");
+		return;
+	}
+	set_sda_output(master, 0);
+
+	/* Send the start bit */
+	sda_out(master, 0);
+	clock_toggle(master, 1);
+
+	/* Send the message */
+	for (bit = 0; bit < cmd->bits; bit++) {
+		next_bit = (msg & sda_mask) >> (cmd->bits - 1);
+		if (last_bit ^ next_bit) {
+			sda_out(master, next_bit);
+			last_bit = next_bit;
+		}
+		clock_toggle(master, 1);
+		msg <<= 1;
+	}
+}
+
+static void msg_push_bits(struct fsi_gpio_msg *msg, uint64_t data, int bits)
+{
+	msg->msg <<= bits;
+	msg->msg |= data & ((1ull << bits) - 1);
+	msg->bits += bits;
+}
+
+static void msg_push_crc(struct fsi_gpio_msg *msg)
+{
+	uint8_t crc;
+	int top;
+
+	top = msg->bits & 0x3;
+
+	/* start bit, and any non-aligned top bits */
+	crc = crc4(0, 1 << top | msg->msg >> (msg->bits - top), top + 1);
+
+	/* aligned bits */
+	crc = crc4(crc, msg->msg, msg->bits - top);
+
+	msg_push_bits(msg, crc, 4);
+}
+
+/*
+ * Encode an Absolute Address command
+ */
+static void build_abs_ar_command(struct fsi_gpio_msg *cmd,
+		uint8_t id, uint32_t addr, size_t size, const void *data)
+{
+	bool write = !!data;
+	uint8_t ds;
+	int i;
+
+	cmd->bits = 0;
+	cmd->msg = 0;
+
+	msg_push_bits(cmd, id, 2);
+	msg_push_bits(cmd, FSI_GPIO_CMD_ABS_AR, 3);
+	msg_push_bits(cmd, write ? 0 : 1, 1);
+
+	/*
+	 * The read/write size is encoded in the lower bits of the address
+	 * (as it must be naturally-aligned), and the following ds bit.
+	 *
+	 *	size	addr:1	addr:0	ds
+	 *	1	x	x	0
+	 *	2	x	0	1
+	 *	4	0	1	1
+	 *
+	 */
+	ds = size > 1 ? 1 : 0;
+	addr &= ~(size - 1);
+	if (size == 4)
+		addr |= 1;
+
+	msg_push_bits(cmd, addr & ((1 << 21) - 1), 21);
+	msg_push_bits(cmd, ds, 1);
+	for (i = 0; write && i < size; i++)
+		msg_push_bits(cmd, ((uint8_t *)data)[i], 8);
+
+	msg_push_crc(cmd);
+}
+
+static void build_dpoll_command(struct fsi_gpio_msg *cmd, uint8_t slave_id)
+{
+	cmd->bits = 0;
+	cmd->msg = 0;
+
+	msg_push_bits(cmd, slave_id, 2);
+	msg_push_bits(cmd, FSI_GPIO_CMD_DPOLL, 3);
+	msg_push_crc(cmd);
+}
+
+static void echo_delay(struct fsi_master_gpio *master)
+{
+	set_sda_output(master, 1);
+	clock_toggle(master, FSI_ECHO_DELAY_CLOCKS);
+}
+
+static void build_term_command(struct fsi_gpio_msg *cmd, uint8_t slave_id)
+{
+	cmd->bits = 0;
+	cmd->msg = 0;
+
+	msg_push_bits(cmd, slave_id, 2);
+	msg_push_bits(cmd, FSI_GPIO_CMD_TERM, 6);
+	msg_push_crc(cmd);
+}
+
+/*
+ * Store information on master errors so handler can detect and clean
+ * up the bus
+ */
+static void fsi_master_gpio_error(struct fsi_master_gpio *master, int error)
+{
+
+}
+
+static int read_one_response(struct fsi_master_gpio *master,
+		uint8_t data_size, struct fsi_gpio_msg *msgp, uint8_t *tagp)
+{
+	struct fsi_gpio_msg msg;
+	uint8_t id, tag;
+	uint32_t crc;
+	int i;
+
+	/* wait for the start bit */
+	for (i = 0; i < FSI_GPIO_MTOE_COUNT; i++) {
+		msg.bits = 0;
+		msg.msg = 0;
+		serial_in(master, &msg, 1);
+		if (msg.msg)
+			break;
+	}
+	if (i == FSI_GPIO_MTOE_COUNT) {
+		dev_dbg(master->dev,
+			"Master time out waiting for response\n");
+		fsi_master_gpio_error(master, FSI_GPIO_MTOE);
+		return -EIO;
+	}
+
+	msg.bits = 0;
+	msg.msg = 0;
+
+	/* Read slave ID & response tag */
+	serial_in(master, &msg, 4);
+
+	id = (msg.msg >> FSI_GPIO_MSG_RESPID_SIZE) & 0x3;
+	tag = msg.msg & 0x3;
+
+	/* If we have an ACK and we're expecting data, clock the data in too */
+	if (tag == FSI_GPIO_RESP_ACK && data_size)
+		serial_in(master, &msg, data_size * 8);
+
+	/* read CRC */
+	serial_in(master, &msg, FSI_GPIO_CRC_SIZE);
+
+	/* we have a whole message now; check CRC */
+	crc = crc4(0, 1, 1);
+	crc = crc4(crc, msg.msg, msg.bits);
+	if (crc) {
+		dev_dbg(master->dev, "ERR response CRC\n");
+		fsi_master_gpio_error(master, FSI_GPIO_CRC_INVAL);
+		return -EIO;
+	}
+
+	if (msgp)
+		*msgp = msg;
+	if (tagp)
+		*tagp = tag;
+
+	return 0;
+}
+
+static int issue_term(struct fsi_master_gpio *master, uint8_t slave)
+{
+	struct fsi_gpio_msg cmd;
+	uint8_t tag;
+	int rc;
+
+	build_term_command(&cmd, slave);
+	serial_out(master, &cmd);
+	echo_delay(master);
+
+	rc = read_one_response(master, 0, NULL, &tag);
+	if (rc < 0) {
+		dev_err(master->dev,
+				"TERM failed; lost communication with slave\n");
+		return -EIO;
+	} else if (tag != FSI_GPIO_RESP_ACK) {
+		dev_err(master->dev, "TERM failed; response %d\n", tag);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int poll_for_response(struct fsi_master_gpio *master,
+		uint8_t slave, uint8_t size, void *data)
+{
+	struct fsi_gpio_msg response, cmd;
+	int busy_count = 0, rc, i;
+	uint8_t tag;
+	uint8_t *data_byte = data;
+
+retry:
+	rc = read_one_response(master, size, &response, &tag);
+	if (rc)
+		return rc;
+
+	switch (tag) {
+	case FSI_GPIO_RESP_ACK:
+		if (size && data) {
+			uint64_t val = response.msg;
+			/* clear crc & mask */
+			val >>= 4;
+			val &= (1ull << (size * 8)) - 1;
+
+			for (i = 0; i < size; i++) {
+				data_byte[size-i-1] = val;
+				val >>= 8;
+			}
+		}
+		break;
+	case FSI_GPIO_RESP_BUSY:
+		/*
+		 * It's necessary to clock the slave before issuing a
+		 * d-poll; this is not indicated in the hardware protocol
+		 * spec. Fewer than 21 clocks cause the slave to hang;
+		 * 21 is OK.
+		 */
+		clock_zeros(master, FSI_GPIO_DPOLL_CLOCKS);
+		if (busy_count++ < FSI_GPIO_MAX_BUSY) {
+			build_dpoll_command(&cmd, slave);
+			serial_out(master, &cmd);
+			echo_delay(master);
+			goto retry;
+		}
+		dev_warn(master->dev,
+			"ERR slave is stuck in busy state, issuing TERM\n");
+		issue_term(master, slave);
+		rc = -EIO;
+		break;
+
+	case FSI_GPIO_RESP_ERRA:
+	case FSI_GPIO_RESP_ERRC:
+		dev_dbg(master->dev, "ERR%c received: 0x%x\n",
+			tag == FSI_GPIO_RESP_ERRA ? 'A' : 'C',
+			(int)response.msg);
+		fsi_master_gpio_error(master, response.msg);
+		rc = -EIO;
+		break;
+	}
+
+	/* Clock the slave enough to be ready for next operation */
+	clock_zeros(master, FSI_GPIO_PRIME_SLAVE_CLOCKS);
+	return rc;
+}
+
+static int fsi_master_gpio_xfer(struct fsi_master_gpio *master, uint8_t slave,
+		struct fsi_gpio_msg *cmd, size_t resp_len, void *resp)
+{
+	unsigned long flags;
+	int rc;
+
+	spin_lock_irqsave(&master->cmd_lock, flags);
+	serial_out(master, cmd);
+	echo_delay(master);
+	rc = poll_for_response(master, slave, resp_len, resp);
+	spin_unlock_irqrestore(&master->cmd_lock, flags);
+
+	return rc;
+}
+
+static int fsi_master_gpio_read(struct fsi_master *_master, int link,
+		uint8_t id, uint32_t addr, void *val, size_t size)
+{
+	struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
+	struct fsi_gpio_msg cmd;
+
+	if (link != 0)
+		return -ENODEV;
+
+	build_abs_ar_command(&cmd, id, addr, size, NULL);
+	return fsi_master_gpio_xfer(master, id, &cmd, size, val);
+}
+
+static int fsi_master_gpio_write(struct fsi_master *_master, int link,
+		uint8_t id, uint32_t addr, const void *val, size_t size)
+{
+	struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
+	struct fsi_gpio_msg cmd;
+
+	if (link != 0)
+		return -ENODEV;
+
+	build_abs_ar_command(&cmd, id, addr, size, val);
+	return fsi_master_gpio_xfer(master, id, &cmd, 0, NULL);
+}
+
+static int fsi_master_gpio_term(struct fsi_master *_master,
+		int link, uint8_t id)
+{
+	struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
+	struct fsi_gpio_msg cmd;
+
+	if (link != 0)
+		return -ENODEV;
+
+	build_term_command(&cmd, id);
+	return fsi_master_gpio_xfer(master, id, &cmd, 0, NULL);
+}
+
+static int fsi_master_gpio_break(struct fsi_master *_master, int link)
+{
+	struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
+
+	if (link != 0)
+		return -ENODEV;
+
+	trace_fsi_master_gpio_break(master);
+
+	set_sda_output(master, 1);
+	sda_out(master, 1);
+	clock_toggle(master, FSI_PRE_BREAK_CLOCKS);
+	sda_out(master, 0);
+	clock_toggle(master, FSI_BREAK_CLOCKS);
+	echo_delay(master);
+	sda_out(master, 1);
+	clock_toggle(master, FSI_POST_BREAK_CLOCKS);
+
+	/* Wait for logic reset to take effect */
+	udelay(200);
+
+	return 0;
+}
+
+static void fsi_master_gpio_init(struct fsi_master_gpio *master)
+{
+	gpiod_direction_output(master->gpio_mux, 1);
+	gpiod_direction_output(master->gpio_trans, 1);
+	gpiod_direction_output(master->gpio_enable, 1);
+	gpiod_direction_output(master->gpio_clk, 1);
+	gpiod_direction_output(master->gpio_data, 1);
+
+	/* todo: evaluate if clocks can be reduced */
+	clock_zeros(master, FSI_INIT_CLOCKS);
+}
+
+static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link)
+{
+	struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
+
+	if (link != 0)
+		return -ENODEV;
+	gpiod_set_value(master->gpio_enable, 1);
+
+	return 0;
+}
+
+static int fsi_master_gpio_probe(struct platform_device *pdev)
+{
+	struct fsi_master_gpio *master;
+	struct gpio_desc *gpio;
+
+	master = devm_kzalloc(&pdev->dev, sizeof(*master), GFP_KERNEL);
+	if (!master)
+		return -ENOMEM;
+
+	master->dev = &pdev->dev;
+	master->master.dev.parent = master->dev;
+
+	gpio = devm_gpiod_get(&pdev->dev, "clock", 0);
+	if (IS_ERR(gpio)) {
+		dev_err(&pdev->dev, "failed to get clock gpio\n");
+		return PTR_ERR(gpio);
+	}
+	master->gpio_clk = gpio;
+
+	gpio = devm_gpiod_get(&pdev->dev, "data", 0);
+	if (IS_ERR(gpio)) {
+		dev_err(&pdev->dev, "failed to get data gpio\n");
+		return PTR_ERR(gpio);
+	}
+	master->gpio_data = gpio;
+
+	/* Optional GPIOs */
+	gpio = devm_gpiod_get_optional(&pdev->dev, "trans", 0);
+	if (IS_ERR(gpio)) {
+		dev_err(&pdev->dev, "failed to get trans gpio\n");
+		return PTR_ERR(gpio);
+	}
+	master->gpio_trans = gpio;
+
+	gpio = devm_gpiod_get_optional(&pdev->dev, "enable", 0);
+	if (IS_ERR(gpio)) {
+		dev_err(&pdev->dev, "failed to get enable gpio\n");
+		return PTR_ERR(gpio);
+	}
+	master->gpio_enable = gpio;
+
+	gpio = devm_gpiod_get_optional(&pdev->dev, "mux", 0);
+	if (IS_ERR(gpio)) {
+		dev_err(&pdev->dev, "failed to get mux gpio\n");
+		return PTR_ERR(gpio);
+	}
+	master->gpio_mux = gpio;
+
+	master->master.n_links = 1;
+	master->master.flags = FSI_MASTER_FLAG_SWCLOCK;
+	master->master.read = fsi_master_gpio_read;
+	master->master.write = fsi_master_gpio_write;
+	master->master.term = fsi_master_gpio_term;
+	master->master.send_break = fsi_master_gpio_break;
+	master->master.link_enable = fsi_master_gpio_link_enable;
+	platform_set_drvdata(pdev, master);
+	spin_lock_init(&master->cmd_lock);
+
+	fsi_master_gpio_init(master);
+
+	return fsi_master_register(&master->master);
+}
+
+
+static int fsi_master_gpio_remove(struct platform_device *pdev)
+{
+	struct fsi_master_gpio *master = platform_get_drvdata(pdev);
+
+	devm_gpiod_put(&pdev->dev, master->gpio_clk);
+	devm_gpiod_put(&pdev->dev, master->gpio_data);
+	if (master->gpio_trans)
+		devm_gpiod_put(&pdev->dev, master->gpio_trans);
+	if (master->gpio_enable)
+		devm_gpiod_put(&pdev->dev, master->gpio_enable);
+	if (master->gpio_mux)
+		devm_gpiod_put(&pdev->dev, master->gpio_mux);
+	fsi_master_unregister(&master->master);
+
+	return 0;
+}
+
+static const struct of_device_id fsi_master_gpio_match[] = {
+	{ .compatible = "fsi-master-gpio" },
+	{ },
+};
+
+static struct platform_driver fsi_master_gpio_driver = {
+	.driver = {
+		.name		= "fsi-master-gpio",
+		.of_match_table	= fsi_master_gpio_match,
+	},
+	.probe	= fsi_master_gpio_probe,
+	.remove = fsi_master_gpio_remove,
+};
+
+module_platform_driver(fsi_master_gpio_driver);
+MODULE_LICENSE("GPL");

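The GPIO master builds every command MSB-first with msg_push_bits() before clocking it out. A standalone illustration of the same packing for the D-poll command (2-bit slave id followed by the 3-bit opcode; the driver then appends a 4-bit CRC via msg_push_crc(), which is omitted here):

#include <stdint.h>
#include <stdio.h>

struct example_msg {	/* mirrors the driver's struct fsi_gpio_msg */
	uint64_t msg;
	uint8_t bits;
};

/* Identical logic to the driver's msg_push_bits(): shift the message
 * left and append the new field in the low-order bits. */
static void push_bits(struct example_msg *m, uint64_t data, int bits)
{
	m->msg <<= bits;
	m->msg |= data & ((1ull << bits) - 1);
	m->bits += bits;
}

int main(void)
{
	struct example_msg cmd = { 0, 0 };

	push_bits(&cmd, 0, 2);		/* slave id 0 */
	push_bits(&cmd, 0x2, 3);	/* FSI_GPIO_CMD_DPOLL */

	/* Expected: msg=0x2 over 5 bits */
	printf("msg=0x%llx bits=%u\n",
	       (unsigned long long)cmd.msg, cmd.bits);
	return 0;
}
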
+ 327 - 0
drivers/fsi/fsi-master-hub.c

@@ -0,0 +1,327 @@
+/*
+ * FSI hub master driver
+ *
+ * Copyright (C) IBM Corporation 2016
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+#include <linux/fsi.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "fsi-master.h"
+
+/* Control Registers */
+#define FSI_MMODE		0x0		/* R/W: mode */
+#define FSI_MDLYR		0x4		/* R/W: delay */
+#define FSI_MCRSP		0x8		/* R/W: clock rate */
+#define FSI_MENP0		0x10		/* R/W: enable */
+#define FSI_MLEVP0		0x18		/* R: plug detect */
+#define FSI_MSENP0		0x18		/* S: Set enable */
+#define FSI_MCENP0		0x20		/* C: Clear enable */
+#define FSI_MAEB		0x70		/* R: Error address */
+#define FSI_MVER		0x74		/* R: master version/type */
+#define FSI_MRESP0		0xd0		/* W: Port reset */
+#define FSI_MESRB0		0x1d0		/* R: Master error status */
+#define FSI_MRESB0		0x1d0		/* W: Reset bridge */
+#define FSI_MECTRL		0x2e0		/* W: Error control */
+
+/* MMODE: Mode control */
+#define FSI_MMODE_EIP		0x80000000	/* Enable interrupt polling */
+#define FSI_MMODE_ECRC		0x40000000	/* Enable error recovery */
+#define FSI_MMODE_EPC		0x10000000	/* Enable parity checking */
+#define FSI_MMODE_P8_TO_LSB	0x00000010	/* Timeout value LSB */
+						/*   MSB=1, LSB=0 is 0.8 ms */
+						/*   MSB=0, LSB=1 is 0.9 ms */
+#define FSI_MMODE_CRS0SHFT	18		/* Clk rate selection 0 shift */
+#define FSI_MMODE_CRS0MASK	0x3ff		/* Clk rate selection 0 mask */
+#define FSI_MMODE_CRS1SHFT	8		/* Clk rate selection 1 shift */
+#define FSI_MMODE_CRS1MASK	0x3ff		/* Clk rate selection 1 mask */
+
+/* MRESB: Reset bridge */
+#define FSI_MRESB_RST_GEN	0x80000000	/* General reset */
+#define FSI_MRESB_RST_ERR	0x40000000	/* Error Reset */
+
+/* MRESP: Reset port */
+#define FSI_MRESP_RST_ALL_MASTER 0x20000000	/* Reset all FSI masters */
+#define FSI_MRESP_RST_ALL_LINK	0x10000000	/* Reset all FSI port contr. */
+#define FSI_MRESP_RST_MCR	0x08000000	/* Reset FSI master reg. */
+#define FSI_MRESP_RST_PYE	0x04000000	/* Reset FSI parity error */
+#define FSI_MRESP_RST_ALL	0xfc000000	/* Reset any error */
+
+/* MECTRL: Error control */
+#define FSI_MECTRL_EOAE		0x8000		/* Enable machine check when */
+						/* master 0 in error */
+#define FSI_MECTRL_P8_AUTO_TERM	0x4000		/* Auto terminate */
+
+#define FSI_ENGID_HUB_MASTER		0x1c
+#define FSI_HUB_LINK_OFFSET		0x80000
+#define FSI_HUB_LINK_SIZE		0x80000
+#define FSI_HUB_MASTER_MAX_LINKS	8
+
+#define FSI_LINK_ENABLE_SETUP_TIME	10	/* in mS */
+
+/*
+ * FSI hub master support
+ *
+ * A hub master increases the number of potential target devices that the
+ * primary FSI master can access. For each link a primary master supports,
+ * each of those links can in turn be chained to a hub master with multiple
+ * links of its own.
+ *
+ * The hub is controlled by a set of control registers exposed as a regular fsi
+ * device (the hub->upstream device), and provides access to the downstream FSI
+ * bus as through an address range on the slave itself (->addr and ->size).
+ *
+ * [This differs from "cascaded" masters, which expose the entire downstream
+ * bus entirely through the fsi device address range, and so have a smaller
+ * accessible address space.]
+ */
+struct fsi_master_hub {
+	struct fsi_master	master;
+	struct fsi_device	*upstream;
+	uint32_t		addr, size;	/* slave-relative addr of */
+						/* master address space */
+};
+
+#define to_fsi_master_hub(m) container_of(m, struct fsi_master_hub, master)
+
+static int hub_master_read(struct fsi_master *master, int link,
+			uint8_t id, uint32_t addr, void *val, size_t size)
+{
+	struct fsi_master_hub *hub = to_fsi_master_hub(master);
+
+	if (id != 0)
+		return -EINVAL;
+
+	addr += hub->addr + (link * FSI_HUB_LINK_SIZE);
+	return fsi_slave_read(hub->upstream->slave, addr, val, size);
+}
+
+static int hub_master_write(struct fsi_master *master, int link,
+			uint8_t id, uint32_t addr, const void *val, size_t size)
+{
+	struct fsi_master_hub *hub = to_fsi_master_hub(master);
+
+	if (id != 0)
+		return -EINVAL;
+
+	addr += hub->addr + (link * FSI_HUB_LINK_SIZE);
+	return fsi_slave_write(hub->upstream->slave, addr, val, size);
+}
+
+static int hub_master_break(struct fsi_master *master, int link)
+{
+	uint32_t addr, cmd;
+
+	addr = 0x4;
+	cmd = cpu_to_be32(0xc0de0000);
+
+	return hub_master_write(master, link, 0, addr, &cmd, sizeof(cmd));
+}
+
+static int hub_master_link_enable(struct fsi_master *master, int link)
+{
+	struct fsi_master_hub *hub = to_fsi_master_hub(master);
+	int idx, bit;
+	__be32 reg;
+	int rc;
+
+	idx = link / 32;
+	bit = link % 32;
+
+	reg = cpu_to_be32(0x80000000 >> bit);
+
+	rc = fsi_device_write(hub->upstream, FSI_MSENP0 + (4 * idx), &reg, 4);
+
+	mdelay(FSI_LINK_ENABLE_SETUP_TIME);
+
+	fsi_device_read(hub->upstream, FSI_MENP0 + (4 * idx), &reg, 4);
+
+	return rc;
+}
+
+static void hub_master_release(struct device *dev)
+{
+	struct fsi_master_hub *hub = to_fsi_master_hub(dev_to_fsi_master(dev));
+
+	kfree(hub);
+}
+
+/* mmode encoders */
+static inline u32 fsi_mmode_crs0(u32 x)
+{
+	return (x & FSI_MMODE_CRS0MASK) << FSI_MMODE_CRS0SHFT;
+}
+
+static inline u32 fsi_mmode_crs1(u32 x)
+{
+	return (x & FSI_MMODE_CRS1MASK) << FSI_MMODE_CRS1SHFT;
+}
+
+static int hub_master_init(struct fsi_master_hub *hub)
+{
+	struct fsi_device *dev = hub->upstream;
+	__be32 reg;
+	int rc;
+
+	reg = cpu_to_be32(FSI_MRESP_RST_ALL_MASTER | FSI_MRESP_RST_ALL_LINK
+			| FSI_MRESP_RST_MCR | FSI_MRESP_RST_PYE);
+	rc = fsi_device_write(dev, FSI_MRESP0, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	/* Initialize the MFSI (hub master) engine */
+	reg = cpu_to_be32(FSI_MRESP_RST_ALL_MASTER | FSI_MRESP_RST_ALL_LINK
+			| FSI_MRESP_RST_MCR | FSI_MRESP_RST_PYE);
+	rc = fsi_device_write(dev, FSI_MRESP0, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	reg = cpu_to_be32(FSI_MECTRL_EOAE | FSI_MECTRL_P8_AUTO_TERM);
+	rc = fsi_device_write(dev, FSI_MECTRL, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	reg = cpu_to_be32(FSI_MMODE_EIP | FSI_MMODE_ECRC | FSI_MMODE_EPC
+			| fsi_mmode_crs0(1) | fsi_mmode_crs1(1)
+			| FSI_MMODE_P8_TO_LSB);
+	rc = fsi_device_write(dev, FSI_MMODE, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	reg = cpu_to_be32(0xffff0000);
+	rc = fsi_device_write(dev, FSI_MDLYR, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	reg = ~0;
+	rc = fsi_device_write(dev, FSI_MSENP0, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	/* Leave enabled long enough for master logic to set up */
+	mdelay(FSI_LINK_ENABLE_SETUP_TIME);
+
+	rc = fsi_device_write(dev, FSI_MCENP0, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	rc = fsi_device_read(dev, FSI_MAEB, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	reg = cpu_to_be32(FSI_MRESP_RST_ALL_MASTER | FSI_MRESP_RST_ALL_LINK);
+	rc = fsi_device_write(dev, FSI_MRESP0, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	rc = fsi_device_read(dev, FSI_MLEVP0, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	/* Reset the master bridge */
+	reg = cpu_to_be32(FSI_MRESB_RST_GEN);
+	rc = fsi_device_write(dev, FSI_MRESB0, &reg, sizeof(reg));
+	if (rc)
+		return rc;
+
+	reg = cpu_to_be32(FSI_MRESB_RST_ERR);
+	return fsi_device_write(dev, FSI_MRESB0, &reg, sizeof(reg));
+}
+
+static int hub_master_probe(struct device *dev)
+{
+	struct fsi_device *fsi_dev = to_fsi_dev(dev);
+	struct fsi_master_hub *hub;
+	uint32_t reg, links;
+	__be32 __reg;
+	int rc;
+
+	rc = fsi_device_read(fsi_dev, FSI_MVER, &__reg, sizeof(__reg));
+	if (rc)
+		return rc;
+
+	reg = be32_to_cpu(__reg);
+	links = (reg >> 8) & 0xff;
+	dev_info(dev, "hub version %08x (%d links)\n", reg, links);
+
+	rc = fsi_slave_claim_range(fsi_dev->slave, FSI_HUB_LINK_OFFSET,
+			FSI_HUB_LINK_SIZE * links);
+	if (rc) {
+		dev_err(dev, "can't claim slave address range for links\n");
+		return rc;
+	}
+
+	hub = kzalloc(sizeof(*hub), GFP_KERNEL);
+	if (!hub) {
+		rc = -ENOMEM;
+		goto err_release;
+	}
+
+	hub->addr = FSI_HUB_LINK_OFFSET;
+	hub->size = FSI_HUB_LINK_SIZE * links;
+	hub->upstream = fsi_dev;
+
+	hub->master.dev.parent = dev;
+	hub->master.dev.release = hub_master_release;
+
+	hub->master.n_links = links;
+	hub->master.read = hub_master_read;
+	hub->master.write = hub_master_write;
+	hub->master.send_break = hub_master_break;
+	hub->master.link_enable = hub_master_link_enable;
+
+	dev_set_drvdata(dev, hub);
+
+	hub_master_init(hub);
+
+	rc = fsi_master_register(&hub->master);
+	if (!rc)
+		return 0;
+
+	kfree(hub);
+err_release:
+	fsi_slave_release_range(fsi_dev->slave, FSI_HUB_LINK_OFFSET,
+			FSI_HUB_LINK_SIZE * links);
+	return rc;
+}
+
+static int hub_master_remove(struct device *dev)
+{
+	struct fsi_master_hub *hub = dev_get_drvdata(dev);
+
+	fsi_master_unregister(&hub->master);
+	fsi_slave_release_range(hub->upstream->slave, hub->addr, hub->size);
+	return 0;
+}
+
+static struct fsi_device_id hub_master_ids[] = {
+	{
+		.engine_type = FSI_ENGID_HUB_MASTER,
+		.version = FSI_VERSION_ANY,
+	},
+	{ 0 }
+};
+
+static struct fsi_driver hub_master_driver = {
+	.id_table = hub_master_ids,
+	.drv = {
+		.name = "fsi-master-hub",
+		.bus = &fsi_bus_type,
+		.probe = hub_master_probe,
+		.remove = hub_master_remove,
+	}
+};
+
+module_fsi_driver(hub_master_driver);
+MODULE_LICENSE("GPL");
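
The address arithmetic in hub_master_read()/hub_master_write() is the heart of the scheme: each downstream link occupies a fixed 512 KiB window inside the upstream slave's address space. As a rough standalone illustration (a hypothetical sketch, not part of the patch):

#include <stdint.h>
#include <stdio.h>

/* Values mirrored from the driver above */
#define FSI_HUB_LINK_OFFSET	0x80000
#define FSI_HUB_LINK_SIZE	0x80000

/* Slave-relative address for (link, addr), as hub_master_read() computes */
static uint32_t hub_link_addr(int link, uint32_t addr)
{
	return FSI_HUB_LINK_OFFSET + (uint32_t)link * FSI_HUB_LINK_SIZE + addr;
}

int main(void)
{
	/* Register 0x4 of the slave behind hub link 2 */
	printf("0x%x\n", hub_link_addr(2, 0x4));	/* prints 0x180004 */
	return 0;
}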

+ 43 - 0
drivers/fsi/fsi-master.h

@@ -0,0 +1,43 @@
+/*
+ * FSI master definitions. These comprise the core <--> master interface,
+ * to allow the core to interact with the (hardware-specific) masters.
+ *
+ * Copyright (C) IBM Corporation 2016
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DRIVERS_FSI_MASTER_H
+#define DRIVERS_FSI_MASTER_H
+
+#include <linux/device.h>
+
+#define FSI_MASTER_FLAG_SWCLOCK		0x1
+
+struct fsi_master {
+	struct device	dev;
+	int		idx;
+	int		n_links;
+	int		flags;
+	int		(*read)(struct fsi_master *, int link, uint8_t id,
+				uint32_t addr, void *val, size_t size);
+	int		(*write)(struct fsi_master *, int link, uint8_t id,
+				uint32_t addr, const void *val, size_t size);
+	int		(*term)(struct fsi_master *, int link, uint8_t id);
+	int		(*send_break)(struct fsi_master *, int link);
+	int		(*link_enable)(struct fsi_master *, int link);
+};
+
+#define dev_to_fsi_master(d) container_of(d, struct fsi_master, dev)
+
+extern int fsi_master_register(struct fsi_master *master);
+extern void fsi_master_unregister(struct fsi_master *master);
+
+#endif /* DRIVERS_FSI_MASTER_H */
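
For orientation, a bare-bones master built on this interface might look like the sketch below. The my_* names are invented for illustration; the GPIO and hub masters elsewhere in this series are the real implementations.

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/slab.h>

#include "fsi-master.h"

static int my_master_read(struct fsi_master *master, int link, uint8_t id,
			  uint32_t addr, void *val, size_t size)
{
	/* drive a hardware read cycle for (link, id, addr) here */
	return -ENODEV;
}

static int my_master_write(struct fsi_master *master, int link, uint8_t id,
			   uint32_t addr, const void *val, size_t size)
{
	/* drive a hardware write cycle here */
	return -ENODEV;
}

static void my_master_release(struct device *dev)
{
	kfree(dev_to_fsi_master(dev));
}

static int my_master_setup(struct device *parent)
{
	struct fsi_master *master = kzalloc(sizeof(*master), GFP_KERNEL);

	if (!master)
		return -ENOMEM;

	master->dev.parent = parent;
	master->dev.release = my_master_release;
	master->n_links = 1;
	master->read = my_master_read;
	master->write = my_master_write;

	/* the core scans each link and creates slave/engine devices */
	return fsi_master_register(master);
}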

+ 263 - 0
drivers/fsi/fsi-scom.c

@@ -0,0 +1,263 @@
+/*
+ * SCOM FSI Client device driver
+ *
+ * Copyright (C) IBM Corporation 2016
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/fsi.h>
+#include <linux/module.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/miscdevice.h>
+#include <linux/list.h>
+#include <linux/idr.h>
+
+#define FSI_ENGID_SCOM		0x5
+
+#define SCOM_FSI2PIB_DELAY	50
+
+/* SCOM engine register set */
+#define SCOM_DATA0_REG		0x00
+#define SCOM_DATA1_REG		0x04
+#define SCOM_CMD_REG		0x08
+#define SCOM_RESET_REG		0x1C
+
+#define SCOM_RESET_CMD		0x80000000
+#define SCOM_WRITE_CMD		0x80000000
+
+struct scom_device {
+	struct list_head link;
+	struct fsi_device *fsi_dev;
+	struct miscdevice mdev;
+	char	name[32];
+	int idx;
+};
+
+#define to_scom_dev(x)		container_of((x), struct scom_device, mdev)
+
+static struct list_head scom_devices;
+
+static DEFINE_IDA(scom_ida);
+
+static int put_scom(struct scom_device *scom_dev, uint64_t value,
+			uint32_t addr)
+{
+	int rc;
+	uint32_t data;
+
+	data = cpu_to_be32(SCOM_RESET_CMD);
+	rc = fsi_device_write(scom_dev->fsi_dev, SCOM_RESET_REG, &data,
+				sizeof(uint32_t));
+	if (rc)
+		return rc;
+
+	data = cpu_to_be32((value >> 32) & 0xffffffff);
+	rc = fsi_device_write(scom_dev->fsi_dev, SCOM_DATA0_REG, &data,
+				sizeof(uint32_t));
+	if (rc)
+		return rc;
+
+	data = cpu_to_be32(value & 0xffffffff);
+	rc = fsi_device_write(scom_dev->fsi_dev, SCOM_DATA1_REG, &data,
+				sizeof(uint32_t));
+	if (rc)
+		return rc;
+
+	data = cpu_to_be32(SCOM_WRITE_CMD | addr);
+	return fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data,
+				sizeof(uint32_t));
+}
+
+static int get_scom(struct scom_device *scom_dev, uint64_t *value,
+			uint32_t addr)
+{
+	uint32_t result, data;
+	int rc;
+
+	*value = 0ULL;
+	data = cpu_to_be32(addr);
+	rc = fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data,
+				sizeof(uint32_t));
+	if (rc)
+		return rc;
+
+	rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA0_REG, &result,
+				sizeof(uint32_t));
+	if (rc)
+		return rc;
+
+	*value |= (uint64_t)cpu_to_be32(result) << 32;
+	rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA1_REG, &result,
+				sizeof(uint32_t));
+	if (rc)
+		return rc;
+
+	*value |= cpu_to_be32(result);
+
+	return 0;
+}
+
+static ssize_t scom_read(struct file *filep, char __user *buf, size_t len,
+			loff_t *offset)
+{
+	int rc;
+	struct miscdevice *mdev =
+				(struct miscdevice *)filep->private_data;
+	struct scom_device *scom = to_scom_dev(mdev);
+	struct device *dev = &scom->fsi_dev->dev;
+	uint64_t val;
+
+	if (len != sizeof(uint64_t))
+		return -EINVAL;
+
+	rc = get_scom(scom, &val, *offset);
+	if (rc) {
+		dev_dbg(dev, "get_scom fail:%d\n", rc);
+		return rc;
+	}
+
+	rc = copy_to_user(buf, &val, len);
+	if (rc) {
+		dev_dbg(dev, "copy to user failed:%d\n", rc);
+		return -EFAULT;
+	}
+
+	return len;
+}
+
+static ssize_t scom_write(struct file *filep, const char __user *buf,
+			size_t len, loff_t *offset)
+{
+	int rc;
+	struct miscdevice *mdev = filep->private_data;
+	struct scom_device *scom = to_scom_dev(mdev);
+	struct device *dev = &scom->fsi_dev->dev;
+	uint64_t val;
+
+	if (len != sizeof(uint64_t))
+		return -EINVAL;
+
+	rc = copy_from_user(&val, buf, len);
+	if (rc) {
+		dev_dbg(dev, "copy from user failed:%d\n", rc);
+		return -EFAULT;
+	}
+
+	rc = put_scom(scom, val, *offset);
+	if (rc) {
+		dev_dbg(dev, "put_scom failed with:%d\n", rc);
+		return rc;
+	}
+
+	return len;
+}
+
+static loff_t scom_llseek(struct file *file, loff_t offset, int whence)
+{
+	switch (whence) {
+	case SEEK_CUR:
+		break;
+	case SEEK_SET:
+		file->f_pos = offset;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return offset;
+}
+
+static const struct file_operations scom_fops = {
+	.owner	= THIS_MODULE,
+	.llseek	= scom_llseek,
+	.read	= scom_read,
+	.write	= scom_write,
+};
+
+static int scom_probe(struct device *dev)
+{
+	struct fsi_device *fsi_dev = to_fsi_dev(dev);
+	struct scom_device *scom;
+
+	scom = devm_kzalloc(dev, sizeof(*scom), GFP_KERNEL);
+	if (!scom)
+		return -ENOMEM;
+
+	scom->idx = ida_simple_get(&scom_ida, 1, INT_MAX, GFP_KERNEL);
+	snprintf(scom->name, sizeof(scom->name), "scom%d", scom->idx);
+	scom->fsi_dev = fsi_dev;
+	scom->mdev.minor = MISC_DYNAMIC_MINOR;
+	scom->mdev.fops = &scom_fops;
+	scom->mdev.name = scom->name;
+	scom->mdev.parent = dev;
+	list_add(&scom->link, &scom_devices);
+
+	return misc_register(&scom->mdev);
+}
+
+static int scom_remove(struct device *dev)
+{
+	struct scom_device *scom, *scom_tmp;
+	struct fsi_device *fsi_dev = to_fsi_dev(dev);
+
+	list_for_each_entry_safe(scom, scom_tmp, &scom_devices, link) {
+		if (scom->fsi_dev == fsi_dev) {
+			list_del(&scom->link);
+			ida_simple_remove(&scom_ida, scom->idx);
+			misc_deregister(&scom->mdev);
+		}
+	}
+
+	return 0;
+}
+
+static struct fsi_device_id scom_ids[] = {
+	{
+		.engine_type = FSI_ENGID_SCOM,
+		.version = FSI_VERSION_ANY,
+	},
+	{ 0 }
+};
+
+static struct fsi_driver scom_drv = {
+	.id_table = scom_ids,
+	.drv = {
+		.name = "scom",
+		.bus = &fsi_bus_type,
+		.probe = scom_probe,
+		.remove = scom_remove,
+	}
+};
+
+static int scom_init(void)
+{
+	INIT_LIST_HEAD(&scom_devices);
+	return fsi_driver_register(&scom_drv);
+}
+
+static void scom_exit(void)
+{
+	struct list_head *pos;
+	struct scom_device *scom;
+
+	list_for_each(pos, &scom_devices) {
+		scom = list_entry(pos, struct scom_device, link);
+		misc_deregister(&scom->mdev);
+		devm_kfree(&scom->fsi_dev->dev, scom);
+	}
+	fsi_driver_unregister(&scom_drv);
+}
+
+module_init(scom_init);
+module_exit(scom_exit);
+MODULE_LICENSE("GPL");
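
Because each SCOM engine is exposed as a miscdevice, the file offset doubles as the SCOM address and every transfer is exactly 64 bits. A hypothetical userspace reader (the SCOM address here is a made-up example; the device node name follows the scom%d scheme above):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t val;
	int fd = open("/dev/scom1", O_RDONLY);

	if (fd < 0)
		return 1;

	/* pread()'s offset selects the SCOM address; length must be 8 */
	if (pread(fd, &val, sizeof(val), 0xf000f) == sizeof(val))
		printf("0x%016llx\n", (unsigned long long)val);

	close(fd);
	return 0;
}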

+ 6 - 2
drivers/hv/channel.c

@@ -630,9 +630,13 @@ void vmbus_close(struct vmbus_channel *channel)
 	 */
 	list_for_each_safe(cur, tmp, &channel->sc_list) {
 		cur_channel = list_entry(cur, struct vmbus_channel, sc_list);
-		if (cur_channel->state != CHANNEL_OPENED_STATE)
-			continue;
 		vmbus_close_internal(cur_channel);
+		if (cur_channel->rescind) {
+			mutex_lock(&vmbus_connection.channel_mutex);
+			hv_process_channel_removal(cur_channel,
+					   cur_channel->offermsg.child_relid);
+			mutex_unlock(&vmbus_connection.channel_mutex);
+		}
 	}
 	/*
 	 * Now close the primary.

+ 53 - 16
drivers/hv/channel_mgmt.c

@@ -428,7 +428,6 @@ void vmbus_free_channels(void)
 {
 	struct vmbus_channel *channel, *tmp;
 
-	mutex_lock(&vmbus_connection.channel_mutex);
 	list_for_each_entry_safe(channel, tmp, &vmbus_connection.chn_list,
 		listentry) {
 		/* hv_process_channel_removal() needs this */
@@ -436,7 +435,6 @@ void vmbus_free_channels(void)
 
 		vmbus_device_unregister(channel->device_obj);
 	}
-	mutex_unlock(&vmbus_connection.channel_mutex);
 }
 
 /*
@@ -483,8 +481,10 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
 			list_add_tail(&newchannel->sc_list, &channel->sc_list);
 			channel->num_sc++;
 			spin_unlock_irqrestore(&channel->lock, flags);
-		} else
+		} else {
+			atomic_dec(&vmbus_connection.offer_in_progress);
 			goto err_free_chan;
+		}
 	}
 
 	dev_type = hv_get_dev_type(newchannel);
@@ -511,6 +511,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
 	if (!fnew) {
 		if (channel->sc_creation_callback != NULL)
 			channel->sc_creation_callback(newchannel);
+		atomic_dec(&vmbus_connection.offer_in_progress);
 		return;
 	}
 
@@ -532,9 +533,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
 	 * binding which eventually invokes the device driver's AddDevice()
 	 * method.
 	 */
-	mutex_lock(&vmbus_connection.channel_mutex);
 	ret = vmbus_device_register(newchannel->device_obj);
-	mutex_unlock(&vmbus_connection.channel_mutex);
 
 	if (ret != 0) {
 		pr_err("unable to add child device object (relid %d)\n",
@@ -542,6 +541,8 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
 		kfree(newchannel->device_obj);
 		goto err_deq_chan;
 	}
+
+	atomic_dec(&vmbus_connection.offer_in_progress);
 	return;
 
 err_deq_chan:
@@ -797,6 +798,7 @@ static void vmbus_onoffer(struct vmbus_channel_message_header *hdr)
 	newchannel = alloc_channel();
 	if (!newchannel) {
 		vmbus_release_relid(offer->child_relid);
+		atomic_dec(&vmbus_connection.offer_in_progress);
 		pr_err("Unable to allocate channel object\n");
 		return;
 	}
@@ -843,16 +845,38 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
 
 	rescind = (struct vmbus_channel_rescind_offer *)hdr;
 
+	/*
+	 * The offer msg and the corresponding rescind msg
+	 * from the host are guaranteed to be ordered -
+	 * offer comes in first and then the rescind.
+	 * Since we process these events in work elements,
+	 * and with preemption, we may end up processing
+	 * the events out of order. Given that we handle these
+	 * work elements on the same CPU, this is possible only
+	 * in the case of preemption. In any case wait here
+	 * until the offer processing has moved beyond the
+	 * point where the channel is discoverable.
+	 */
+
+	while (atomic_read(&vmbus_connection.offer_in_progress) != 0) {
+		/*
+		 * Wait here while any channel offer is still
+		 * being processed.
+		 */
+		msleep(1);
+	}
+
 	mutex_lock(&vmbus_connection.channel_mutex);
 	channel = relid2channel(rescind->child_relid);
+	mutex_unlock(&vmbus_connection.channel_mutex);
 
 	if (channel == NULL) {
 		/*
-		 * This is very impossible, because in
-		 * vmbus_process_offer(), we have already invoked
-		 * vmbus_release_relid() on error.
+		 * We failed in processing the offer message;
+		 * we would have cleaned up the relid in that
+		 * failure path.
 		 */
-		goto out;
+		return;
 	}
 
 	spin_lock_irqsave(&channel->lock, flags);
@@ -864,7 +888,7 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
 	if (channel->device_obj) {
 		if (channel->chn_rescind_callback) {
 			channel->chn_rescind_callback(channel);
-			goto out;
+			return;
 		}
 		/*
 		 * We will have to unregister this device from the
@@ -875,13 +899,26 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
 			vmbus_device_unregister(channel->device_obj);
 			put_device(dev);
 		}
-	} else {
-		hv_process_channel_removal(channel,
-			channel->offermsg.child_relid);
 	}
-
-out:
-	mutex_unlock(&vmbus_connection.channel_mutex);
+	if (channel->primary_channel != NULL) {
+		/*
+		 * A sub-channel is being rescinded. The following is the
+		 * channel close sequence when initiated from the driver
+		 * (refer to vmbus_close() for details):
+		 * 1. Close all sub-channels first
+		 * 2. Then close the primary channel.
+		 */
+		if (channel->state == CHANNEL_OPEN_STATE) {
+			/*
+			 * The channel has not been opened;
+			 * it is safe for us to clean up the channel.
+			 */
+			mutex_lock(&vmbus_connection.channel_mutex);
+			hv_process_channel_removal(channel,
+						channel->offermsg.child_relid);
+			mutex_unlock(&vmbus_connection.channel_mutex);
+		}
+	}
 }
 
 void vmbus_hvsock_device_unregister(struct vmbus_channel *channel)
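
The offer_in_progress counter introduced above is what makes the rescind path safe: the rescind handler must not attempt the relid lookup until every in-flight offer has become discoverable. Reduced to its essentials (a hypothetical sketch, not the driver code):

#include <linux/atomic.h>
#include <linux/delay.h>

static atomic_t offer_in_progress = ATOMIC_INIT(0);

/* the dispatcher does atomic_inc(&offer_in_progress) before queuing offers */

static void offer_work(void)
{
	/* ... allocate the channel and make it discoverable ... */
	atomic_dec(&offer_in_progress);
}

static void rescind_work(void)
{
	/* wait until all pending offers have landed before the lookup */
	while (atomic_read(&offer_in_progress) != 0)
		msleep(1);
	/* ... relid2channel() is now reliable ... */
}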

+ 7 - 4
drivers/hv/connection.c

@@ -93,10 +93,13 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
 	 * all the CPUs. This is needed for kexec to work correctly where
 	 * the CPU attempting to connect may not be CPU 0.
 	 */
-	if (version >= VERSION_WIN8_1)
+	if (version >= VERSION_WIN8_1) {
 		msg->target_vcpu = hv_context.vp_index[smp_processor_id()];
-	else
+		vmbus_connection.connect_cpu = smp_processor_id();
+	} else {
 		msg->target_vcpu = 0;
+		vmbus_connection.connect_cpu = 0;
+	}
 
 	/*
 	 * Add to list before we send the request since we may
@@ -370,7 +373,7 @@ int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep)
 			break;
 		case HV_STATUS_INSUFFICIENT_MEMORY:
 		case HV_STATUS_INSUFFICIENT_BUFFERS:
-			ret = -ENOMEM;
+			ret = -ENOBUFS;
 			break;
 		case HV_STATUS_SUCCESS:
 			return ret;
@@ -387,7 +390,7 @@ int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep)
 		else
 			mdelay(usec / 1000);
 
-		if (usec < 256000)
+		if (retries < 22)
 			usec *= 2;
 	}
 	return ret;

+ 7 - 2
drivers/hv/hv.c

@@ -82,10 +82,15 @@ int hv_post_message(union hv_connection_id connection_id,
 	aligned_msg->message_type = message_type;
 	aligned_msg->payload_size = payload_size;
 	memcpy((void *)aligned_msg->payload, payload, payload_size);
-	put_cpu_ptr(hv_cpu);
 
 	status = hv_do_hypercall(HVCALL_POST_MESSAGE, aligned_msg, NULL);
 
+	/* Preemption must remain disabled until after the hypercall
+	 * so some other thread can't get scheduled onto this cpu and
+	 * corrupt the per-cpu post_msg_page
+	 */
+	put_cpu_ptr(hv_cpu);
+
 	return status & 0xFFFF;
 }
 
@@ -96,7 +101,7 @@ static int hv_ce_set_next_event(unsigned long delta,
 
 	WARN_ON(!clockevent_state_oneshot(evt));
 
-	hv_get_current_tick(current_tick);
+	current_tick = hyperv_cs->read(NULL);
 	current_tick += delta;
 	hv_init_timer(HV_X64_MSR_STIMER0_COUNT, current_tick);
 	return 0;
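
The hv_post_message() change is purely an ordering fix: the per-cpu message page may only be released, which re-enables preemption, after the hypercall has consumed it. In isolation (a hypothetical sketch with invented names):

#include <linux/percpu.h>

struct post_page { char buf[64]; };
static DEFINE_PER_CPU(struct post_page, post_msg_page);

static void post_example(void)
{
	/* get_cpu_ptr() disables preemption while we own the buffer */
	struct post_page *page = get_cpu_ptr(&post_msg_page);

	/* ... fill page->buf and issue the hypercall here ... */

	/* only now may another thread be scheduled onto this CPU */
	put_cpu_ptr(&post_msg_page);
}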

+ 8 - 6
drivers/hv/hv_kvp.c

@@ -112,7 +112,7 @@ static void kvp_poll_wrapper(void *channel)
 {
 	/* Transaction is finished, reset the state here to avoid races. */
 	kvp_transaction.state = HVUTIL_READY;
-	hv_kvp_onchannelcallback(channel);
+	tasklet_schedule(&((struct vmbus_channel *)channel)->callback_event);
 }
 
 static void kvp_register_done(void)
@@ -159,7 +159,7 @@ static void kvp_timeout_func(struct work_struct *dummy)
 
 static void kvp_host_handshake_func(struct work_struct *dummy)
 {
-	hv_poll_channel(kvp_transaction.recv_channel, hv_kvp_onchannelcallback);
+	tasklet_schedule(&kvp_transaction.recv_channel->callback_event);
 }
 
 static int kvp_handle_handshake(struct hv_kvp_msg *msg)
@@ -625,16 +625,17 @@ void hv_kvp_onchannelcallback(void *context)
 		     NEGO_IN_PROGRESS,
 		     NEGO_FINISHED} host_negotiatied = NEGO_NOT_STARTED;
 
-	if (host_negotiatied == NEGO_NOT_STARTED &&
-	    kvp_transaction.state < HVUTIL_READY) {
+	if (kvp_transaction.state < HVUTIL_READY) {
 		/*
 		 * If userspace daemon is not connected and host is asking
 		 * us to negotiate we need to delay to not lose messages.
 		 * This is important for Failover IP setting.
 		 */
-		host_negotiatied = NEGO_IN_PROGRESS;
-		schedule_delayed_work(&kvp_host_handshake_work,
+		if (host_negotiatied == NEGO_NOT_STARTED) {
+			host_negotiatied = NEGO_IN_PROGRESS;
+			schedule_delayed_work(&kvp_host_handshake_work,
 				      HV_UTIL_NEGO_TIMEOUT * HZ);
+		}
 		return;
 	}
 	if (kvp_transaction.state > HVUTIL_READY)
@@ -702,6 +703,7 @@ void hv_kvp_onchannelcallback(void *context)
 				       VM_PKT_DATA_INBAND, 0);
 
 		host_negotiatied = NEGO_FINISHED;
+		hv_poll_channel(kvp_transaction.recv_channel, kvp_poll_wrapper);
 	}
 
 }

+ 54 - 110
drivers/hv/hv_util.c

@@ -202,27 +202,39 @@ static void shutdown_onchannelcallback(void *context)
 /*
  * Set the host time in a process context.
  */
+static struct work_struct adj_time_work;
 
-struct adj_time_work {
-	struct work_struct work;
-	u64	host_time;
-	u64	ref_time;
-	u8	flags;
-};
+/*
+ * The last time sample, received from the host. PTP device responds to
+ * requests by using this data and the current partition-wide time reference
+ * count.
+ */
+static struct {
+	u64				host_time;
+	u64				ref_time;
+	spinlock_t			lock;
+} host_ts;
 
-static void hv_set_host_time(struct work_struct *work)
+static struct timespec64 hv_get_adj_host_time(void)
 {
-	struct adj_time_work *wrk;
-	struct timespec64 host_ts;
-	u64 reftime, newtime;
-
-	wrk = container_of(work, struct adj_time_work, work);
+	struct timespec64 ts;
+	u64 newtime, reftime;
+	unsigned long flags;
 
+	spin_lock_irqsave(&host_ts.lock, flags);
 	reftime = hyperv_cs->read(hyperv_cs);
-	newtime = wrk->host_time + (reftime - wrk->ref_time);
-	host_ts = ns_to_timespec64((newtime - WLTIMEDELTA) * 100);
+	newtime = host_ts.host_time + (reftime - host_ts.ref_time);
+	ts = ns_to_timespec64((newtime - WLTIMEDELTA) * 100);
+	spin_unlock_irqrestore(&host_ts.lock, flags);
 
-	do_settimeofday64(&host_ts);
+	return ts;
+}
+
+static void hv_set_host_time(struct work_struct *work)
+{
+	struct timespec64 ts = hv_get_adj_host_time();
+
+	do_settimeofday64(&ts);
 }
 
 /*
@@ -238,62 +250,35 @@ static void hv_set_host_time(struct work_struct *work)
  * typically used as a hint to the guest. The guest is under no obligation
 * to discipline the clock.
 */
-static struct adj_time_work  wrk;
-
-/*
- * The last time sample, received from the host. PTP device responds to
- * requests by using this data and the current partition-wide time reference
- * count.
- */
-static struct {
-	u64				host_time;
-	u64				ref_time;
-	struct system_time_snapshot	snap;
-	spinlock_t			lock;
-} host_ts;
-
 static inline void adj_guesttime(u64 hosttime, u64 reftime, u8 adj_flags)
 {
 	unsigned long flags;
 	u64 cur_reftime;
 
 	/*
-	 * This check is safe since we are executing in the
-	 * interrupt context and time synch messages are always
-	 * delivered on the same CPU.
+	 * Save the adjusted time sample from the host and the snapshot
+	 * of the current system time.
 	 */
-	if (adj_flags & ICTIMESYNCFLAG_SYNC) {
-		/* Queue a job to do do_settimeofday64() */
-		if (work_pending(&wrk.work))
-			return;
-
-		wrk.host_time = hosttime;
-		wrk.ref_time = reftime;
-		wrk.flags = adj_flags;
-		schedule_work(&wrk.work);
-	} else {
-		/*
-		 * Save the adjusted time sample from the host and the snapshot
-		 * of the current system time for PTP device.
-		 */
-		spin_lock_irqsave(&host_ts.lock, flags);
-
-		cur_reftime = hyperv_cs->read(hyperv_cs);
-		host_ts.host_time = hosttime;
-		host_ts.ref_time = cur_reftime;
-		ktime_get_snapshot(&host_ts.snap);
-
-		/*
-		 * TimeSync v4 messages contain reference time (guest's Hyper-V
-		 * clocksource read when the time sample was generated), we can
-		 * improve the precision by adding the delta between now and the
-		 * time of generation.
-		 */
-		if (ts_srv_version > TS_VERSION_3)
-			host_ts.host_time += (cur_reftime - reftime);
-
-		spin_unlock_irqrestore(&host_ts.lock, flags);
-	}
+	spin_lock_irqsave(&host_ts.lock, flags);
+
+	cur_reftime = hyperv_cs->read(hyperv_cs);
+	host_ts.host_time = hosttime;
+	host_ts.ref_time = cur_reftime;
+
+	/*
+	 * TimeSync v4 messages contain reference time (guest's Hyper-V
+	 * clocksource read when the time sample was generated), we can
+	 * improve the precision by adding the delta between now and the
+	 * time of generation. For older protocols we set
+	 * reftime == cur_reftime on call.
+	 */
+	host_ts.host_time += (cur_reftime - reftime);
+
+	spin_unlock_irqrestore(&host_ts.lock, flags);
+
+	/* Schedule work to do do_settimeofday64() */
+	if (adj_flags & ICTIMESYNCFLAG_SYNC)
+		schedule_work(&adj_time_work);
 }
 
 /*
@@ -341,8 +326,8 @@ static void timesync_onchannelcallback(void *context)
 					sizeof(struct vmbuspipe_hdr) +
 					sizeof(struct icmsg_hdr)];
 				adj_guesttime(timedatap->parenttime,
-						0,
-						timedatap->flags);
+					      hyperv_cs->read(hyperv_cs),
+					      timedatap->flags);
 			}
 		}
 
@@ -526,58 +511,17 @@ static int hv_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
 
 static int hv_ptp_gettime(struct ptp_clock_info *info, struct timespec64 *ts)
 {
-	unsigned long flags;
-	u64 newtime, reftime;
-
-	spin_lock_irqsave(&host_ts.lock, flags);
-	reftime = hyperv_cs->read(hyperv_cs);
-	newtime = host_ts.host_time + (reftime - host_ts.ref_time);
-	*ts = ns_to_timespec64((newtime - WLTIMEDELTA) * 100);
-	spin_unlock_irqrestore(&host_ts.lock, flags);
+	*ts = hv_get_adj_host_time();
 
 	return 0;
 }
 
-static int hv_ptp_get_syncdevicetime(ktime_t *device,
-				     struct system_counterval_t *system,
-				     void *ctx)
-{
-	system->cs = hyperv_cs;
-	system->cycles = host_ts.ref_time;
-	*device = ns_to_ktime((host_ts.host_time - WLTIMEDELTA) * 100);
-
-	return 0;
-}
-
-static int hv_ptp_getcrosststamp(struct ptp_clock_info *ptp,
-				 struct system_device_crosststamp *xtstamp)
-{
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&host_ts.lock, flags);
-
-	/*
-	 * host_ts contains the last time sample from the host and the snapshot
-	 * of system time. We don't need to calculate the time delta between
-	 * the reception and now as get_device_system_crosststamp() does the
-	 * required interpolation.
-	 */
-	ret = get_device_system_crosststamp(hv_ptp_get_syncdevicetime,
-					    NULL, &host_ts.snap, xtstamp);
-
-	spin_unlock_irqrestore(&host_ts.lock, flags);
-
-	return ret;
-}
-
 static struct ptp_clock_info ptp_hyperv_info = {
 	.name		= "hyperv",
 	.enable         = hv_ptp_enable,
 	.adjtime        = hv_ptp_adjtime,
 	.adjfreq        = hv_ptp_adjfreq,
 	.gettime64      = hv_ptp_gettime,
-	.getcrosststamp = hv_ptp_getcrosststamp,
 	.settime64      = hv_ptp_settime,
 	.owner		= THIS_MODULE,
 };
@@ -592,7 +536,7 @@ static int hv_timesync_init(struct hv_util_service *srv)
 
 	spin_lock_init(&host_ts.lock);
 
-	INIT_WORK(&wrk.work, hv_set_host_time);
+	INIT_WORK(&adj_time_work, hv_set_host_time);
 
 	/*
 	 * ptp_clock_register() returns NULL when CONFIG_PTP_1588_CLOCK is
@@ -613,7 +557,7 @@ static void hv_timesync_deinit(void)
 {
 	if (hv_ptp_clock)
 		ptp_clock_unregister(hv_ptp_clock);
-	cancel_work_sync(&wrk.work);
+	cancel_work_sync(&adj_time_work);
 }
 
 static int __init init_hyperv_utils(void)
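
The arithmetic in hv_get_adj_host_time() is worth spelling out: the host sample and the reference counter both tick in 100 ns units, and WLTIMEDELTA rebases the value from the Windows epoch (1601) to the Unix epoch. A standalone model of the computation (for illustration only):

#include <stdint.h>

/* 1601-01-01 to 1970-01-01, in 100 ns units (11644473600 s * 10^7) */
#define WLTIMEDELTA 116444736000000000ULL

static uint64_t adj_host_time_ns(uint64_t host_time, uint64_t ref_time,
				 uint64_t now_ref)
{
	/* advance the host sample by the reference ticks elapsed since it */
	uint64_t newtime = host_time + (now_ref - ref_time);

	return (newtime - WLTIMEDELTA) * 100;	/* 100 ns units to ns */
}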

+ 11 - 0
drivers/hv/hyperv_vmbus.h

@@ -303,6 +303,13 @@ enum vmbus_connect_state {
 #define MAX_SIZE_CHANNEL_MESSAGE	HV_MESSAGE_PAYLOAD_BYTE_COUNT
 
 struct vmbus_connection {
+	/*
+	 * CPU on which the initial host contact was made.
+	 */
+	int connect_cpu;
+
+	atomic_t offer_in_progress;
+
 	enum vmbus_connect_state conn_state;
 
 	atomic_t next_gpadl_handle;
@@ -411,6 +418,10 @@ static inline void hv_poll_channel(struct vmbus_channel *channel,
 	if (!channel)
 		return;
 
+	if (in_interrupt() && (channel->target_cpu == smp_processor_id())) {
+		cb(channel);
+		return;
+	}
 	smp_call_function_single(channel->target_cpu, cb, channel, true);
 }
 }
 
 

+ 38 - 42
drivers/hv/vmbus_drv.c

@@ -608,40 +608,6 @@ static void vmbus_free_dynids(struct hv_driver *drv)
 	spin_unlock(&drv->dynids.lock);
 }
 
-/* Parse string of form: 1b4e28ba-2fa1-11d2-883f-b9a761bde3f */
-static int get_uuid_le(const char *str, uuid_le *uu)
-{
-	unsigned int b[16];
-	int i;
-
-	if (strlen(str) < 37)
-		return -1;
-
-	for (i = 0; i < 36; i++) {
-		switch (i) {
-		case 8: case 13: case 18: case 23:
-			if (str[i] != '-')
-				return -1;
-			break;
-		default:
-			if (!isxdigit(str[i]))
-				return -1;
-		}
-	}
-
-	/* unparse little endian output byte order */
-	if (sscanf(str,
-		   "%2x%2x%2x%2x-%2x%2x-%2x%2x-%2x%2x-%2x%2x%2x%2x%2x%2x",
-		   &b[3], &b[2], &b[1], &b[0],
-		   &b[5], &b[4], &b[7], &b[6], &b[8], &b[9],
-		   &b[10], &b[11], &b[12], &b[13], &b[14], &b[15]) != 16)
-		return -1;
-
-	for (i = 0; i < 16; i++)
-		uu->b[i] = b[i];
-	return 0;
-}
-
 /*
  * store_new_id - sysfs frontend to vmbus_add_dynid()
  *
@@ -651,11 +617,12 @@ static ssize_t new_id_store(struct device_driver *driver, const char *buf,
 			    size_t count)
 {
 	struct hv_driver *drv = drv_to_hv_drv(driver);
-	uuid_le guid = NULL_UUID_LE;
+	uuid_le guid;
 	ssize_t retval;
 
-	if (get_uuid_le(buf, &guid) != 0)
-		return -EINVAL;
+	retval = uuid_le_to_bin(buf, &guid);
+	if (retval)
+		return retval;
 
 	if (hv_vmbus_get_id(drv, &guid))
 		return -EEXIST;
@@ -677,12 +644,14 @@ static ssize_t remove_id_store(struct device_driver *driver, const char *buf,
 {
 	struct hv_driver *drv = drv_to_hv_drv(driver);
 	struct vmbus_dynid *dynid, *n;
-	uuid_le guid = NULL_UUID_LE;
-	size_t retval = -ENODEV;
+	uuid_le guid;
+	ssize_t retval;
 
-	if (get_uuid_le(buf, &guid))
-		return -EINVAL;
+	retval = uuid_le_to_bin(buf, &guid);
+	if (retval)
+		return retval;
 
+	retval = -ENODEV;
 	spin_lock(&drv->dynids.lock);
 	list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
 		struct hv_vmbus_device_id *id = &dynid->id;
@@ -798,8 +767,10 @@ static void vmbus_device_release(struct device *device)
 	struct hv_device *hv_dev = device_to_hv_device(device);
 	struct vmbus_channel *channel = hv_dev->channel;
 
+	mutex_lock(&vmbus_connection.channel_mutex);
 	hv_process_channel_removal(channel,
 				   channel->offermsg.child_relid);
+	mutex_unlock(&vmbus_connection.channel_mutex);
 	kfree(hv_dev);
 
 }
@@ -877,7 +848,32 @@ void vmbus_on_msg_dpc(unsigned long data)
 		INIT_WORK(&ctx->work, vmbus_onmessage_work);
 		memcpy(&ctx->msg, msg, sizeof(*msg));
 
-		queue_work(vmbus_connection.work_queue, &ctx->work);
+		/*
+		 * The host can generate a rescind message while we
+		 * may still be handling the original offer. We deal with
+		 * this condition by ensuring the processing is done on the
+		 * same CPU.
+		 */
+		switch (hdr->msgtype) {
+		case CHANNELMSG_RESCIND_CHANNELOFFER:
+			/*
+			 * If we are handling the rescind message;
+			 * schedule the work on the global work queue.
+			 */
+			schedule_work_on(vmbus_connection.connect_cpu,
+					 &ctx->work);
+			break;
+
+		case CHANNELMSG_OFFERCHANNEL:
+			atomic_inc(&vmbus_connection.offer_in_progress);
+			queue_work_on(vmbus_connection.connect_cpu,
+				      vmbus_connection.work_queue,
+				      &ctx->work);
+			break;
+
+		default:
+			queue_work(vmbus_connection.work_queue, &ctx->work);
+		}
 	} else
 		entry->message_handler(hdr);
 

+ 14 - 0
drivers/hwtracing/coresight/Kconfig

@@ -89,4 +89,18 @@ config CORESIGHT_STM
 	  logging useful software events or data coming from various entities
 	  in the system, possibly running different OSs
 
+config CORESIGHT_CPU_DEBUG
+	tristate "CoreSight CPU Debug driver"
+	depends on ARM || ARM64
+	depends on DEBUG_FS
+	help
+	  This driver provides support for the CoreSight CPU debug module.
+	  It is primarily used to dump sample-based profiling registers when
+	  the system triggers a panic; the driver parses the context
+	  registers so the program counter (PC), secure state, exception
+	  level, etc. can be determined quickly. Before using the debugging
+	  functionality, the platform needs to ensure that the clock domain
+	  and power domain are enabled properly; please refer to
+	  Documentation/trace/coresight-cpu-debug.txt for a detailed
+	  description and a usage example.
+
 endif
+ 1 - 0
drivers/hwtracing/coresight/Makefile

@@ -16,3 +16,4 @@ obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
 					coresight-etm4x-sysfs.o
 obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o
 obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
+obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o

+ 700 - 0
drivers/hwtracing/coresight/coresight-cpu-debug.c

@@ -0,0 +1,700 @@
+/*
+ * Copyright (c) 2017 Linaro Limited. All rights reserved.
+ *
+ * Author: Leo Yan <leo.yan@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <linux/amba/bus.h>
+#include <linux/coresight.h>
+#include <linux/cpu.h>
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/pm_qos.h>
+#include <linux/slab.h>
+#include <linux/smp.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+
+#include "coresight-priv.h"
+
+#define EDPCSR				0x0A0
+#define EDCIDSR				0x0A4
+#define EDVIDSR				0x0A8
+#define EDPCSR_HI			0x0AC
+#define EDOSLAR				0x300
+#define EDPRCR				0x310
+#define EDPRSR				0x314
+#define EDDEVID1			0xFC4
+#define EDDEVID				0xFC8
+
+#define EDPCSR_PROHIBITED		0xFFFFFFFF
+
+/* bits definition for EDPCSR */
+#define EDPCSR_THUMB			BIT(0)
+#define EDPCSR_ARM_INST_MASK		GENMASK(31, 2)
+#define EDPCSR_THUMB_INST_MASK		GENMASK(31, 1)
+
+/* bits definition for EDPRCR */
+#define EDPRCR_COREPURQ			BIT(3)
+#define EDPRCR_CORENPDRQ		BIT(0)
+
+/* bits definition for EDPRSR */
+#define EDPRSR_DLK			BIT(6)
+#define EDPRSR_PU			BIT(0)
+
+/* bits definition for EDVIDSR */
+#define EDVIDSR_NS			BIT(31)
+#define EDVIDSR_E2			BIT(30)
+#define EDVIDSR_E3			BIT(29)
+#define EDVIDSR_HV			BIT(28)
+#define EDVIDSR_VMID			GENMASK(7, 0)
+
+/*
+ * bits definition for EDDEVID1:PSCROffset
+ *
+ * NOTE: armv8 and armv7 have different definition for the register,
+ * so consolidate the bits definition as below:
+ *
+ * 0b0000 - Sample offset applies based on the instruction state, we
+ *          rely on EDDEVID to check if EDPCSR is implemented or not
+ * 0b0001 - No offset applies.
+ * 0b0010 - No offset applies, but do not use in AArch32 mode
+ *
+ */
+#define EDDEVID1_PCSR_OFFSET_MASK	GENMASK(3, 0)
+#define EDDEVID1_PCSR_OFFSET_INS_SET	(0x0)
+#define EDDEVID1_PCSR_NO_OFFSET_DIS_AARCH32	(0x2)
+
+/* bits definition for EDDEVID */
+#define EDDEVID_PCSAMPLE_MODE		GENMASK(3, 0)
+#define EDDEVID_IMPL_EDPCSR		(0x1)
+#define EDDEVID_IMPL_EDPCSR_EDCIDSR	(0x2)
+#define EDDEVID_IMPL_FULL		(0x3)
+
+#define DEBUG_WAIT_SLEEP		1000
+#define DEBUG_WAIT_TIMEOUT		32000
+
+struct debug_drvdata {
+	void __iomem	*base;
+	struct device	*dev;
+	int		cpu;
+
+	bool		edpcsr_present;
+	bool		edcidsr_present;
+	bool		edvidsr_present;
+	bool		pc_has_offset;
+
+	u32		edpcsr;
+	u32		edpcsr_hi;
+	u32		edprsr;
+	u32		edvidsr;
+	u32		edcidsr;
+};
+
+static DEFINE_MUTEX(debug_lock);
+static DEFINE_PER_CPU(struct debug_drvdata *, debug_drvdata);
+static int debug_count;
+static struct dentry *debug_debugfs_dir;
+
+static bool debug_enable;
+module_param_named(enable, debug_enable, bool, 0600);
+MODULE_PARM_DESC(enable, "Control to enable coresight CPU debug functionality");
+
+static void debug_os_unlock(struct debug_drvdata *drvdata)
+{
+	/* Unlocks the debug registers */
+	writel_relaxed(0x0, drvdata->base + EDOSLAR);
+
+	/* Make sure the registers are unlocked before accessing */
+	wmb();
+}
+
+/*
+ * According to ARM DDI 0487A.k, the access permission must be
+ * checked before touching the external debug registers; if either
+ * of the conditions below is met, the debug registers must not be
+ * accessed, to avoid a lockup:
+ *
+ * - the CPU power domain is powered off;
+ * - the OS Double Lock is locked.
+ *
+ * Reading EDPRSR tells us whether either condition applies.
+ */
+static bool debug_access_permitted(struct debug_drvdata *drvdata)
+{
+	/* CPU is powered off */
+	if (!(drvdata->edprsr & EDPRSR_PU))
+		return false;
+
+	/* The OS Double Lock is locked */
+	if (drvdata->edprsr & EDPRSR_DLK)
+		return false;
+
+	return true;
+}
+
+static void debug_force_cpu_powered_up(struct debug_drvdata *drvdata)
+{
+	u32 edprcr;
+
+try_again:
+
+	/*
+	 * Send request to power management controller and assert
+	 * DBGPWRUPREQ signal; if power management controller has
+	 * sane implementation, it should enable CPU power domain
+	 * in case CPU is in low power state.
+	 */
+	edprcr = readl_relaxed(drvdata->base + EDPRCR);
+	edprcr |= EDPRCR_COREPURQ;
+	writel_relaxed(edprcr, drvdata->base + EDPRCR);
+
+	/* Wait for CPU to be powered up (timeout~=32ms) */
+	if (readx_poll_timeout_atomic(readl_relaxed, drvdata->base + EDPRSR,
+			drvdata->edprsr, (drvdata->edprsr & EDPRSR_PU),
+			DEBUG_WAIT_SLEEP, DEBUG_WAIT_TIMEOUT)) {
+		/*
+		 * Unfortunately the CPU could not be powered up, so bail
+		 * out; later accesses to the other registers would not be
+		 * permitted anyway. In this case, CPU low power states
+		 * should be disabled to keep the CPU power domain enabled.
+		 */
+		dev_err(drvdata->dev, "%s: power up request for CPU%d failed\n",
+			__func__, drvdata->cpu);
+		return;
+	}
+
+	/*
+	 * At this point the CPU is powered up, so set the no powerdown
+	 * request bit so we don't lose power and emulate power down.
+	 */
+	edprcr = readl_relaxed(drvdata->base + EDPRCR);
+	edprcr |= EDPRCR_COREPURQ | EDPRCR_CORENPDRQ;
+	writel_relaxed(edprcr, drvdata->base + EDPRCR);
+
+	drvdata->edprsr = readl_relaxed(drvdata->base + EDPRSR);
+
+	/* The core power domain got switched off on use, try again */
+	if (unlikely(!(drvdata->edprsr & EDPRSR_PU)))
+		goto try_again;
+}
+
+static void debug_read_regs(struct debug_drvdata *drvdata)
+{
+	u32 save_edprcr;
+
+	CS_UNLOCK(drvdata->base);
+
+	/* Unlock os lock */
+	debug_os_unlock(drvdata);
+
+	/* Save EDPRCR register */
+	save_edprcr = readl_relaxed(drvdata->base + EDPRCR);
+
+	/*
+	 * Ensure the CPU power domain is enabled so that the
+	 * registers are accessible.
+	 */
+	debug_force_cpu_powered_up(drvdata);
+
+	if (!debug_access_permitted(drvdata))
+		goto out;
+
+	drvdata->edpcsr = readl_relaxed(drvdata->base + EDPCSR);
+
+	/*
+	 * As described in ARM DDI 0487A.k, if the processing
+	 * element (PE) is in debug state, or sample-based
+	 * profiling is prohibited, EDPCSR reads as 0xFFFFFFFF;
+	 * EDCIDSR, EDVIDSR and EDPCSR_HI registers also become
+	 * UNKNOWN state. So directly bail out for this case.
+	 */
+	if (drvdata->edpcsr == EDPCSR_PROHIBITED)
+		goto out;
+
+	/*
+	 * A read of the EDPCSR normally has the side-effect of
+	 * indirectly writing to EDCIDSR, EDVIDSR and EDPCSR_HI;
+	 * at this point it's safe to read value from them.
+	 */
+	if (IS_ENABLED(CONFIG_64BIT))
+		drvdata->edpcsr_hi = readl_relaxed(drvdata->base + EDPCSR_HI);
+
+	if (drvdata->edcidsr_present)
+		drvdata->edcidsr = readl_relaxed(drvdata->base + EDCIDSR);
+
+	if (drvdata->edvidsr_present)
+		drvdata->edvidsr = readl_relaxed(drvdata->base + EDVIDSR);
+
+out:
+	/* Restore EDPRCR register */
+	writel_relaxed(save_edprcr, drvdata->base + EDPRCR);
+
+	CS_LOCK(drvdata->base);
+}
+
+#ifdef CONFIG_64BIT
+static unsigned long debug_adjust_pc(struct debug_drvdata *drvdata)
+{
+	return (unsigned long)drvdata->edpcsr_hi << 32 |
+	       (unsigned long)drvdata->edpcsr;
+}
+#else
+static unsigned long debug_adjust_pc(struct debug_drvdata *drvdata)
+{
+	unsigned long arm_inst_offset = 0, thumb_inst_offset = 0;
+	unsigned long pc;
+
+	pc = (unsigned long)drvdata->edpcsr;
+
+	if (drvdata->pc_has_offset) {
+		arm_inst_offset = 8;
+		thumb_inst_offset = 4;
+	}
+
+	/* Handle thumb instruction */
+	if (pc & EDPCSR_THUMB) {
+		pc = (pc & EDPCSR_THUMB_INST_MASK) - thumb_inst_offset;
+		return pc;
+	}
+
+	/*
+	 * Handle the ARM instruction offset; if the ARM instruction
+	 * address is not 4-byte aligned, the behaviour may be
+	 * implementation defined, so keep the original value in that
+	 * case and print a notice.
+	 */
+	if (pc & BIT(1))
+		dev_emerg(drvdata->dev,
+			  "Instruction offset is implementation defined\n");
+	else
+		pc = (pc & EDPCSR_ARM_INST_MASK) - arm_inst_offset;
+
+	return pc;
+}
+#endif
+
+static void debug_dump_regs(struct debug_drvdata *drvdata)
+{
+	struct device *dev = drvdata->dev;
+	unsigned long pc;
+
+	dev_emerg(dev, " EDPRSR:  %08x (Power:%s DLK:%s)\n",
+		  drvdata->edprsr,
+		  drvdata->edprsr & EDPRSR_PU ? "On" : "Off",
+		  drvdata->edprsr & EDPRSR_DLK ? "Lock" : "Unlock");
+
+	if (!debug_access_permitted(drvdata)) {
+		dev_emerg(dev, "No permission to access debug registers!\n");
+		return;
+	}
+
+	if (drvdata->edpcsr == EDPCSR_PROHIBITED) {
+		dev_emerg(dev, "CPU is in Debug state or profiling is prohibited!\n");
+		return;
+	}
+
+	pc = debug_adjust_pc(drvdata);
+	dev_emerg(dev, " EDPCSR:  [<%p>] %pS\n", (void *)pc, (void *)pc);
+
+	if (drvdata->edcidsr_present)
+		dev_emerg(dev, " EDCIDSR: %08x\n", drvdata->edcidsr);
+
+	if (drvdata->edvidsr_present)
+		dev_emerg(dev, " EDVIDSR: %08x (State:%s Mode:%s Width:%dbits VMID:%x)\n",
+			  drvdata->edvidsr,
+			  drvdata->edvidsr & EDVIDSR_NS ?
+			  "Non-secure" : "Secure",
+			  drvdata->edvidsr & EDVIDSR_E3 ? "EL3" :
+				(drvdata->edvidsr & EDVIDSR_E2 ?
+				 "EL2" : "EL1/0"),
+			  drvdata->edvidsr & EDVIDSR_HV ? 64 : 32,
+			  drvdata->edvidsr & (u32)EDVIDSR_VMID);
+}
+
+static void debug_init_arch_data(void *info)
+{
+	struct debug_drvdata *drvdata = info;
+	u32 mode, pcsr_offset;
+	u32 eddevid, eddevid1;
+
+	CS_UNLOCK(drvdata->base);
+
+	/* Read device info */
+	eddevid  = readl_relaxed(drvdata->base + EDDEVID);
+	eddevid1 = readl_relaxed(drvdata->base + EDDEVID1);
+
+	CS_LOCK(drvdata->base);
+
+	/* Parse implementation feature */
+	mode = eddevid & EDDEVID_PCSAMPLE_MODE;
+	pcsr_offset = eddevid1 & EDDEVID1_PCSR_OFFSET_MASK;
+
+	drvdata->edpcsr_present  = false;
+	drvdata->edcidsr_present = false;
+	drvdata->edvidsr_present = false;
+	drvdata->pc_has_offset   = false;
+
+	switch (mode) {
+	case EDDEVID_IMPL_FULL:
+		drvdata->edvidsr_present = true;
+		/* Fall through */
+	case EDDEVID_IMPL_EDPCSR_EDCIDSR:
+		drvdata->edcidsr_present = true;
+		/* Fall through */
+	case EDDEVID_IMPL_EDPCSR:
+		/*
+		 * In ARM DDI 0487A.k, EDDEVID1.PCSROffset defines whether
+		 * an offset applies to the PC sampling value; if it reads
+		 * back as 0x2, the debug module does not sample the
+		 * instruction set state when an armv8 CPU is in AArch32
+		 * state.
+		 */
+		drvdata->edpcsr_present =
+			((IS_ENABLED(CONFIG_64BIT) && pcsr_offset != 0) ||
+			 (pcsr_offset != EDDEVID1_PCSR_NO_OFFSET_DIS_AARCH32));
+
+		drvdata->pc_has_offset =
+			(pcsr_offset == EDDEVID1_PCSR_OFFSET_INS_SET);
+		break;
+	default:
+		break;
+	}
+}
+
+/*
+ * Dump out information on panic.
+ */
+static int debug_notifier_call(struct notifier_block *self,
+			       unsigned long v, void *p)
+{
+	int cpu;
+	struct debug_drvdata *drvdata;
+
+	mutex_lock(&debug_lock);
+
+	/* Bail out if the functionality is disabled */
+	if (!debug_enable)
+		goto skip_dump;
+
+	pr_emerg("ARM external debug module:\n");
+
+	for_each_possible_cpu(cpu) {
+		drvdata = per_cpu(debug_drvdata, cpu);
+		if (!drvdata)
+			continue;
+
+		dev_emerg(drvdata->dev, "CPU[%d]:\n", drvdata->cpu);
+
+		debug_read_regs(drvdata);
+		debug_dump_regs(drvdata);
+	}
+
+skip_dump:
+	mutex_unlock(&debug_lock);
+	return 0;
+}
+
+static struct notifier_block debug_notifier = {
+	.notifier_call = debug_notifier_call,
+};
+
+static int debug_enable_func(void)
+{
+	struct debug_drvdata *drvdata;
+	int cpu, ret = 0;
+	cpumask_t mask;
+
+	/*
+	 * Use cpumask to track which debug power domains have
+	 * been powered on and use it to handle failure case.
+	 */
+	cpumask_clear(&mask);
+
+	for_each_possible_cpu(cpu) {
+		drvdata = per_cpu(debug_drvdata, cpu);
+		if (!drvdata)
+			continue;
+
+		ret = pm_runtime_get_sync(drvdata->dev);
+		if (ret < 0)
+			goto err;
+		else
+			cpumask_set_cpu(cpu, &mask);
+	}
+
+	return 0;
+
+err:
+	/*
+	 * If pm_runtime_get_sync() failed, roll back all the other
+	 * CPUs that were enabled before the failure.
+	 */
+	for_each_cpu(cpu, &mask) {
+		drvdata = per_cpu(debug_drvdata, cpu);
+		pm_runtime_put_noidle(drvdata->dev);
+	}
+
+	return ret;
+}
+
+static int debug_disable_func(void)
+{
+	struct debug_drvdata *drvdata;
+	int cpu, ret, err = 0;
+
+	/*
+	 * Disable the debug power domains; record any error but keep
+	 * iterating over the remaining CPUs when one is encountered.
+	 */
+	for_each_possible_cpu(cpu) {
+		drvdata = per_cpu(debug_drvdata, cpu);
+		if (!drvdata)
+			continue;
+
+		ret = pm_runtime_put(drvdata->dev);
+		if (ret < 0)
+			err = ret;
+	}
+
+	return err;
+}
+
+static ssize_t debug_func_knob_write(struct file *f,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	u8 val;
+	int ret;
+
+	ret = kstrtou8_from_user(buf, count, 2, &val);
+	if (ret)
+		return ret;
+
+	mutex_lock(&debug_lock);
+
+	if (val == debug_enable)
+		goto out;
+
+	if (val)
+		ret = debug_enable_func();
+	else
+		ret = debug_disable_func();
+
+	if (ret) {
+		pr_err("%s: unable to %s debug function: %d\n",
+		       __func__, val ? "enable" : "disable", ret);
+		goto err;
+	}
+
+	debug_enable = val;
+out:
+	ret = count;
+err:
+	mutex_unlock(&debug_lock);
+	return ret;
+}
+
+static ssize_t debug_func_knob_read(struct file *f,
+		char __user *ubuf, size_t count, loff_t *ppos)
+{
+	ssize_t ret;
+	char buf[3];
+
+	mutex_lock(&debug_lock);
+	snprintf(buf, sizeof(buf), "%d\n", debug_enable);
+	mutex_unlock(&debug_lock);
+
+	ret = simple_read_from_buffer(ubuf, count, ppos, buf, sizeof(buf));
+	return ret;
+}
+
+static const struct file_operations debug_func_knob_fops = {
+	.open	= simple_open,
+	.read	= debug_func_knob_read,
+	.write	= debug_func_knob_write,
+};
+
+static int debug_func_init(void)
+{
+	struct dentry *file;
+	int ret;
+
+	/* Create debugfs node */
+	debug_debugfs_dir = debugfs_create_dir("coresight_cpu_debug", NULL);
+	if (!debug_debugfs_dir) {
+		pr_err("%s: unable to create debugfs directory\n", __func__);
+		return -ENOMEM;
+	}
+
+	file = debugfs_create_file("enable", 0644, debug_debugfs_dir, NULL,
+				   &debug_func_knob_fops);
+	if (!file) {
+		pr_err("%s: unable to create enable knob file\n", __func__);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/* Register function to be called for panic */
+	ret = atomic_notifier_chain_register(&panic_notifier_list,
+					     &debug_notifier);
+	if (ret) {
+		pr_err("%s: unable to register notifier: %d\n",
+		       __func__, ret);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	debugfs_remove_recursive(debug_debugfs_dir);
+	return ret;
+}
+
+static void debug_func_exit(void)
+{
+	atomic_notifier_chain_unregister(&panic_notifier_list,
+					 &debug_notifier);
+	debugfs_remove_recursive(debug_debugfs_dir);
+}
+
+static int debug_probe(struct amba_device *adev, const struct amba_id *id)
+{
+	void __iomem *base;
+	struct device *dev = &adev->dev;
+	struct debug_drvdata *drvdata;
+	struct resource *res = &adev->res;
+	struct device_node *np = adev->dev.of_node;
+	int ret;
+
+	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
+	if (!drvdata)
+		return -ENOMEM;
+
+	drvdata->cpu = np ? of_coresight_get_cpu(np) : 0;
+	if (per_cpu(debug_drvdata, drvdata->cpu)) {
+		dev_err(dev, "CPU%d drvdata has already been initialized\n",
+			drvdata->cpu);
+		return -EBUSY;
+	}
+
+	drvdata->dev = &adev->dev;
+	amba_set_drvdata(adev, drvdata);
+
+	/* Validity for the resource is already checked by the AMBA core */
+	base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	drvdata->base = base;
+
+	get_online_cpus();
+	per_cpu(debug_drvdata, drvdata->cpu) = drvdata;
+	ret = smp_call_function_single(drvdata->cpu, debug_init_arch_data,
+				       drvdata, 1);
+	put_online_cpus();
+
+	if (ret) {
+		dev_err(dev, "CPU%d debug arch init failed\n", drvdata->cpu);
+		goto err;
+	}
+
+	if (!drvdata->edpcsr_present) {
+		dev_err(dev, "CPU%d sample-based profiling isn't implemented\n",
+			drvdata->cpu);
+		ret = -ENXIO;
+		goto err;
+	}
+
+	if (!debug_count++) {
+		ret = debug_func_init();
+		if (ret)
+			goto err_func_init;
+	}
+
+	mutex_lock(&debug_lock);
+	/* Turn off debug power domain if debugging is disabled */
+	if (!debug_enable)
+		pm_runtime_put(dev);
+	mutex_unlock(&debug_lock);
+
+	dev_info(dev, "Coresight debug-CPU%d initialized\n", drvdata->cpu);
+	return 0;
+
+err_func_init:
+	debug_count--;
+err:
+	per_cpu(debug_drvdata, drvdata->cpu) = NULL;
+	return ret;
+}
+
+static int debug_remove(struct amba_device *adev)
+{
+	struct device *dev = &adev->dev;
+	struct debug_drvdata *drvdata = amba_get_drvdata(adev);
+
+	per_cpu(debug_drvdata, drvdata->cpu) = NULL;
+
+	mutex_lock(&debug_lock);
+	/* Turn off debug power domain before rmmod the module */
+	if (debug_enable)
+		pm_runtime_put(dev);
+	mutex_unlock(&debug_lock);
+
+	if (!--debug_count)
+		debug_func_exit();
+
+	return 0;
+}
+
+static struct amba_id debug_ids[] = {
+	{       /* Debug for Cortex-A53 */
+		.id	= 0x000bbd03,
+		.mask	= 0x000fffff,
+	},
+	{       /* Debug for Cortex-A57 */
+		.id	= 0x000bbd07,
+		.mask	= 0x000fffff,
+	},
+	{       /* Debug for Cortex-A72 */
+		.id	= 0x000bbd08,
+		.mask	= 0x000fffff,
+	},
+	{ 0, 0 },
+};
+
+static struct amba_driver debug_driver = {
+	.drv = {
+		.name   = "coresight-cpu-debug",
+		.suppress_bind_attrs = true,
+	},
+	.probe		= debug_probe,
+	.remove		= debug_remove,
+	.id_table	= debug_ids,
+};
+
+module_amba_driver(debug_driver);
+
+MODULE_AUTHOR("Leo Yan <leo.yan@linaro.org>");
+MODULE_DESCRIPTION("ARM Coresight CPU Debug Driver");
+MODULE_LICENSE("GPL");
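
A note on the probe sequence above: the CPU debug registers are banked per
CPU, so debug_init_arch_data() must execute on the CPU that owns them. A
minimal sketch of that idiom, with a hypothetical helper name and (on arm64)
MIDR_EL1 standing in for the banked register:

	#include <linux/cpu.h>
	#include <linux/smp.h>
	#include <asm/sysreg.h>

	static void read_cpu_reg(void *info)
	{
		u64 *val = info;

		/* runs on the target CPU, so a banked register is safe */
		*val = read_sysreg(midr_el1);
	}

	static u64 sample_cpu_reg(int cpu)
	{
		u64 val = 0;

		get_online_cpus();	/* keep the CPU from going offline */
		smp_call_function_single(cpu, read_cpu_reg, &val, 1);
		put_online_cpus();

		return val;
	}

The final argument 1 makes the cross-call synchronous, which is why
debug_probe() can check drvdata->edpcsr_present immediately afterwards.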

+ 2 - 5
drivers/hwtracing/coresight/coresight-etb10.c

@@ -375,7 +375,7 @@ static void etb_update_buffer(struct coresight_device *csdev,
 
 	/*
 	 * Entries should be aligned to the frame size.  If they are not
-	 * go back to the last alignement point to give decoding tools a
+	 * go back to the last alignment point to give decoding tools a
 	 * chance to fix things.
 	 */
 	if (write_ptr % ETB_FRAME_SIZE_WORDS) {
@@ -675,11 +675,8 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
 
 	drvdata->buf = devm_kzalloc(dev,
 				    drvdata->buffer_depth * 4, GFP_KERNEL);
-	if (!drvdata->buf) {
-		dev_err(dev, "Failed to allocate %u bytes for buffer data\n",
-			drvdata->buffer_depth * 4);
+	if (!drvdata->buf)
 		return -ENOMEM;
-	}
 
 	desc.type = CORESIGHT_DEV_TYPE_SINK;
 	desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;

+ 1 - 2
drivers/hwtracing/coresight/coresight-etm-perf.c

@@ -201,6 +201,7 @@ static void *etm_setup_aux(int event_cpu, void **pages,
 	event_data = alloc_event_data(event_cpu);
 	if (!event_data)
 		return NULL;
+	INIT_WORK(&event_data->work, free_event_data);
 
 	/*
 	 * In theory nothing prevent tracers in a trace session from being
@@ -217,8 +218,6 @@ static void *etm_setup_aux(int event_cpu, void **pages,
 	if (!sink)
 		goto err;
 
-	INIT_WORK(&event_data->work, free_event_data);
-
 	mask = &event_data->mask;
 
 	/* Setup the path for each CPU in a trace session */

+ 17 - 8
drivers/hwtracing/coresight/coresight-tmc-etf.c

@@ -166,9 +166,6 @@ out:
 	if (!used)
 		kfree(buf);
 
-	if (!ret)
-		dev_info(drvdata->dev, "TMC-ETB/ETF enabled\n");
-
 	return ret;
 }
 
@@ -204,15 +201,27 @@
 
 static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
 {
+	int ret;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
 	switch (mode) {
 	case CS_MODE_SYSFS:
-		return tmc_enable_etf_sink_sysfs(csdev);
+		ret = tmc_enable_etf_sink_sysfs(csdev);
+		break;
 	case CS_MODE_PERF:
-		return tmc_enable_etf_sink_perf(csdev);
+		ret = tmc_enable_etf_sink_perf(csdev);
+		break;
+	/* We shouldn't be here */
+	default:
+		ret = -EINVAL;
+		break;
 	}
 
-	/* We shouldn't be here */
-	return -EINVAL;
+	if (ret)
+		return ret;
+
+	dev_info(drvdata->dev, "TMC-ETB/ETF enabled\n");
+	return 0;
 }
 
 static void tmc_disable_etf_sink(struct coresight_device *csdev)
@@ -273,7 +282,7 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
 	drvdata->mode = CS_MODE_DISABLED;
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
 
-	dev_info(drvdata->dev, "TMC disabled\n");
+	dev_info(drvdata->dev, "TMC-ETF disabled\n");
 }
 
 static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu,

+ 7 - 0
drivers/hwtracing/coresight/coresight-tmc.c

@@ -362,6 +362,13 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
 		desc.type = CORESIGHT_DEV_TYPE_SINK;
 		desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
 		desc.ops = &tmc_etr_cs_ops;
+		/*
+		 * ETR configuration uses a 40-bit AXI master in place of
+		 * the embedded SRAM of ETB/ETF.
+		 */
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+		if (ret)
+			goto out;
 	} else {
 		desc.type = CORESIGHT_DEV_TYPE_LINKSINK;
 		desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO;

+ 26 - 8
drivers/hwtracing/coresight/coresight.c

@@ -253,14 +253,22 @@ static int coresight_enable_source(struct coresight_device *csdev, u32 mode)
 	return 0;
 }
 
-static void coresight_disable_source(struct coresight_device *csdev)
+/**
+ *  coresight_disable_source - Drop the reference count by 1 and disable
+ *  the device if there are no users left.
+ *
+ *  @csdev - The coresight device to disable
+ *
+ *  Returns true if the device has been disabled.
+ */
+static bool coresight_disable_source(struct coresight_device *csdev)
 {
 	if (atomic_dec_return(csdev->refcnt) == 0) {
-		if (source_ops(csdev)->disable) {
+		if (source_ops(csdev)->disable)
 			source_ops(csdev)->disable(csdev, NULL);
-			csdev->enable = false;
-		}
+		csdev->enable = false;
 	}
+	return !csdev->enable;
 }
 
 void coresight_disable_path(struct list_head *path)
@@ -550,6 +558,9 @@ int coresight_enable(struct coresight_device *csdev)
 	int cpu, ret = 0;
 	struct coresight_device *sink;
 	struct list_head *path;
+	enum coresight_dev_subtype_source subtype;
+
+	subtype = csdev->subtype.source_subtype;
 
 	mutex_lock(&coresight_mutex);
 
@@ -557,8 +568,16 @@ int coresight_enable(struct coresight_device *csdev)
 	if (ret)
 		goto out;
 
-	if (csdev->enable)
+	if (csdev->enable) {
+		/*
+		 * There could be multiple applications driving the software
+		 * source. So keep the refcount for each such user when the
+		 * source is already enabled.
+		 */
+		if (subtype == CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE)
+			atomic_inc(csdev->refcnt);
 		goto out;
+	}
 
 	/*
 	 * Search for a valid sink for this session but don't reset the
@@ -585,7 +604,7 @@ int coresight_enable(struct coresight_device *csdev)
 	if (ret)
 		goto err_source;
 
-	switch (csdev->subtype.source_subtype) {
+	switch (subtype) {
 	case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC:
 		/*
 		 * When working from sysFS it is important to keep track
@@ -629,7 +648,7 @@ void coresight_disable(struct coresight_device *csdev)
 	if (ret)
 		goto out;
 
-	if (!csdev->enable)
+	if (!csdev->enable || !coresight_disable_source(csdev))
 		goto out;
 
 	switch (csdev->subtype.source_subtype) {
@@ -647,7 +666,6 @@ void coresight_disable(struct coresight_device *csdev)
 		break;
 	}
 
-	coresight_disable_source(csdev);
 	coresight_disable_path(path);
 	coresight_release_path(path);
 
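The refcounting change above is easiest to see from the consumer side. A
hedged sketch (return values elided for brevity) of two users sharing one
software source such as an STM:

	#include <linux/coresight.h>

	static void example_two_users(struct coresight_device *csdev)
	{
		coresight_enable(csdev);	/* first user: path built and enabled */
		coresight_enable(csdev);	/* second user: refcount bump only */

		coresight_disable(csdev);	/* 2 -> 1: source stays enabled */
		coresight_disable(csdev);	/* 1 -> 0: source and path torn down */
	}

Only when coresight_disable_source() reports the last reference gone does
coresight_disable() go on to disable and release the path.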

+ 32 - 15
drivers/hwtracing/coresight/of_coresight.c

@@ -52,7 +52,7 @@ of_coresight_get_endpoint_device(struct device_node *endpoint)
 			       endpoint, of_dev_node_match);
 }
 
-static void of_coresight_get_ports(struct device_node *node,
+static void of_coresight_get_ports(const struct device_node *node,
 				   int *nr_inport, int *nr_outport)
 {
 	struct device_node *ep = NULL;
@@ -101,14 +101,40 @@ static int of_coresight_alloc_memory(struct device *dev,
 	return 0;
 }
 
-struct coresight_platform_data *of_get_coresight_platform_data(
-				struct device *dev, struct device_node *node)
+int of_coresight_get_cpu(const struct device_node *node)
 {
-	int i = 0, ret = 0, cpu;
+	int cpu;
+	bool found;
+	struct device_node *dn, *np;
+
+	dn = of_parse_phandle(node, "cpu", 0);
+
+	/* Affinity defaults to CPU0 */
+	if (!dn)
+		return 0;
+
+	for_each_possible_cpu(cpu) {
+		np = of_cpu_device_node_get(cpu);
+		found = (dn == np);
+		of_node_put(np);
+		if (found)
+			break;
+	}
+	of_node_put(dn);
+
+	/* Affinity to CPU0 if no cpu nodes are found */
+	return found ? cpu : 0;
+}
+EXPORT_SYMBOL_GPL(of_coresight_get_cpu);
+
+struct coresight_platform_data *
+of_get_coresight_platform_data(struct device *dev,
+			       const struct device_node *node)
+{
+	int i = 0, ret = 0;
 	struct coresight_platform_data *pdata;
 	struct of_endpoint endpoint, rendpoint;
 	struct device *rdev;
-	struct device_node *dn;
 	struct device_node *ep = NULL;
 	struct device_node *rparent = NULL;
 	struct device_node *rport = NULL;
@@ -175,16 +201,7 @@ struct coresight_platform_data *of_get_coresight_platform_data(
 		} while (ep);
 	}
 
-	/* Affinity defaults to CPU0 */
-	pdata->cpu = 0;
-	dn = of_parse_phandle(node, "cpu", 0);
-	for (cpu = 0; dn && cpu < nr_cpu_ids; cpu++) {
-		if (dn == of_get_cpu_node(cpu, NULL)) {
-			pdata->cpu = cpu;
-			break;
-		}
-	}
-	of_node_put(dn);
+	pdata->cpu = of_coresight_get_cpu(node);
 
 	return pdata;
 }

+ 13 - 0
drivers/i2c/muxes/Kconfig

@@ -30,6 +30,19 @@ config I2C_MUX_GPIO
 	  This driver can also be built as a module.  If so, the module
 	  will be called i2c-mux-gpio.
 
+config I2C_MUX_GPMUX
+	tristate "General Purpose I2C multiplexer"
+	select MULTIPLEXER
+	depends on OF || COMPILE_TEST
+	help
+	  If you say yes to this option, support will be included for a
+	  general purpose I2C multiplexer. This driver provides access to
+	  I2C busses connected through a MUX, which in turn is controlled
+	  by a MUX-controller from the MUX subsystem.
+
+	  This driver can also be built as a module.  If so, the module
+	  will be called i2c-mux-gpmux.
+
 config I2C_MUX_LTC4306
 	tristate "LTC LTC4306/5 I2C multiplexer"
 	select GPIOLIB

+ 1 - 0
drivers/i2c/muxes/Makefile

@@ -6,6 +6,7 @@ obj-$(CONFIG_I2C_ARB_GPIO_CHALLENGE)	+= i2c-arb-gpio-challenge.o
 obj-$(CONFIG_I2C_DEMUX_PINCTRL)		+= i2c-demux-pinctrl.o
 
 obj-$(CONFIG_I2C_MUX_GPIO)	+= i2c-mux-gpio.o
+obj-$(CONFIG_I2C_MUX_GPMUX)	+= i2c-mux-gpmux.o
 obj-$(CONFIG_I2C_MUX_LTC4306)	+= i2c-mux-ltc4306.o
 obj-$(CONFIG_I2C_MUX_MLXCPLD)	+= i2c-mux-mlxcpld.o
 obj-$(CONFIG_I2C_MUX_PCA9541)	+= i2c-mux-pca9541.o

+ 173 - 0
drivers/i2c/muxes/i2c-mux-gpmux.c

@@ -0,0 +1,173 @@
+/*
+ * General Purpose I2C multiplexer
+ *
+ * Copyright (C) 2017 Axentia Technologies AB
+ *
+ * Author: Peter Rosin <peda@axentia.se>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/i2c.h>
+#include <linux/i2c-mux.h>
+#include <linux/module.h>
+#include <linux/mux/consumer.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+
+struct mux {
+	struct mux_control *control;
+
+	bool do_not_deselect;
+};
+
+static int i2c_mux_select(struct i2c_mux_core *muxc, u32 chan)
+{
+	struct mux *mux = i2c_mux_priv(muxc);
+	int ret;
+
+	ret = mux_control_select(mux->control, chan);
+	mux->do_not_deselect = ret < 0;
+
+	return ret;
+}
+
+static int i2c_mux_deselect(struct i2c_mux_core *muxc, u32 chan)
+{
+	struct mux *mux = i2c_mux_priv(muxc);
+
+	if (mux->do_not_deselect)
+		return 0;
+
+	return mux_control_deselect(mux->control);
+}
+
+static struct i2c_adapter *mux_parent_adapter(struct device *dev)
+{
+	struct device_node *np = dev->of_node;
+	struct device_node *parent_np;
+	struct i2c_adapter *parent;
+
+	parent_np = of_parse_phandle(np, "i2c-parent", 0);
+	if (!parent_np) {
+		dev_err(dev, "Cannot parse i2c-parent\n");
+		return ERR_PTR(-ENODEV);
+	}
+	parent = of_find_i2c_adapter_by_node(parent_np);
+	of_node_put(parent_np);
+	if (!parent)
+		return ERR_PTR(-EPROBE_DEFER);
+
+	return parent;
+}
+
+static const struct of_device_id i2c_mux_of_match[] = {
+	{ .compatible = "i2c-mux", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, i2c_mux_of_match);
+
+static int i2c_mux_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct device_node *child;
+	struct i2c_mux_core *muxc;
+	struct mux *mux;
+	struct i2c_adapter *parent;
+	int children;
+	int ret;
+
+	if (!np)
+		return -ENODEV;
+
+	mux = devm_kzalloc(dev, sizeof(*mux), GFP_KERNEL);
+	if (!mux)
+		return -ENOMEM;
+
+	mux->control = devm_mux_control_get(dev, NULL);
+	if (IS_ERR(mux->control)) {
+		if (PTR_ERR(mux->control) != -EPROBE_DEFER)
+			dev_err(dev, "failed to get control-mux\n");
+		return PTR_ERR(mux->control);
+	}
+
+	parent = mux_parent_adapter(dev);
+	if (IS_ERR(parent)) {
+		if (PTR_ERR(parent) != -EPROBE_DEFER)
+			dev_err(dev, "failed to get i2c-parent adapter\n");
+		return PTR_ERR(parent);
+	}
+
+	children = of_get_child_count(np);
+
+	muxc = i2c_mux_alloc(parent, dev, children, 0, 0,
+			     i2c_mux_select, i2c_mux_deselect);
+	if (!muxc) {
+		ret = -ENOMEM;
+		goto err_parent;
+	}
+	muxc->priv = mux;
+
+	platform_set_drvdata(pdev, muxc);
+
+	muxc->mux_locked = of_property_read_bool(np, "mux-locked");
+
+	for_each_child_of_node(np, child) {
+		u32 chan;
+
+		ret = of_property_read_u32(child, "reg", &chan);
+		if (ret < 0) {
+			dev_err(dev, "no reg property for node '%s'\n",
+				child->name);
+			goto err_children;
+		}
+
+		if (chan >= mux_control_states(mux->control)) {
+			dev_err(dev, "invalid reg %u\n", chan);
+			ret = -EINVAL;
+			goto err_children;
+		}
+
+		ret = i2c_mux_add_adapter(muxc, 0, chan, 0);
+		if (ret)
+			goto err_children;
+	}
+
+	dev_info(dev, "%d-port mux on %s adapter\n", children, parent->name);
+
+	return 0;
+
+err_children:
+	i2c_mux_del_adapters(muxc);
+err_parent:
+	i2c_put_adapter(parent);
+
+	return ret;
+}
+
+static int i2c_mux_remove(struct platform_device *pdev)
+{
+	struct i2c_mux_core *muxc = platform_get_drvdata(pdev);
+
+	i2c_mux_del_adapters(muxc);
+	i2c_put_adapter(muxc->parent);
+
+	return 0;
+}
+
+static struct platform_driver i2c_mux_driver = {
+	.probe	= i2c_mux_probe,
+	.remove	= i2c_mux_remove,
+	.driver	= {
+		.name	= "i2c-mux-gpmux",
+		.of_match_table = i2c_mux_of_match,
+	},
+};
+module_platform_driver(i2c_mux_driver);
+
+MODULE_DESCRIPTION("General Purpose I2C multiplexer driver");
+MODULE_AUTHOR("Peter Rosin <peda@axentia.se>");
+MODULE_LICENSE("GPL v2");
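
The select/deselect pair used by i2c_mux_select() above is the general
consumer contract of the new mux subsystem. A minimal sketch of the same
pattern in an arbitrary consumer (muxed_access() is a made-up example):

	#include <linux/mux/consumer.h>

	static int muxed_access(struct device *dev, unsigned int chan)
	{
		struct mux_control *mux;
		int ret;

		mux = devm_mux_control_get(dev, NULL);
		if (IS_ERR(mux))
			return PTR_ERR(mux);

		/* blocks until the shared mux controller is ours */
		ret = mux_control_select(mux, chan);
		if (ret < 0)
			return ret;

		/* ... access whatever sits behind channel 'chan' ... */

		/* hand the controller back to other consumers */
		return mux_control_deselect(mux);
	}

Note the do_not_deselect flag in the driver: if select fails, the controller
was never locked, so deselect must not be called.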

+ 1 - 0
drivers/iio/Kconfig

@@ -83,6 +83,7 @@ source "drivers/iio/humidity/Kconfig"
 source "drivers/iio/imu/Kconfig"
 source "drivers/iio/light/Kconfig"
 source "drivers/iio/magnetometer/Kconfig"
+source "drivers/iio/multiplexer/Kconfig"
 source "drivers/iio/orientation/Kconfig"
 if IIO_TRIGGER
    source "drivers/iio/trigger/Kconfig"

+ 1 - 0
drivers/iio/Makefile

@@ -28,6 +28,7 @@ obj-y += humidity/
 obj-y += imu/
 obj-y += light/
 obj-y += magnetometer/
+obj-y += multiplexer/
 obj-y += orientation/
 obj-y += potentiometer/
 obj-y += potentiostat/

+ 60 - 0
drivers/iio/inkern.c

@@ -867,3 +867,63 @@ err_unlock:
 	return ret;
 }
 EXPORT_SYMBOL_GPL(iio_write_channel_raw);
+
+unsigned int iio_get_channel_ext_info_count(struct iio_channel *chan)
+{
+	const struct iio_chan_spec_ext_info *ext_info;
+	unsigned int i = 0;
+
+	if (!chan->channel->ext_info)
+		return i;
+
+	for (ext_info = chan->channel->ext_info; ext_info->name; ext_info++)
+		++i;
+
+	return i;
+}
+EXPORT_SYMBOL_GPL(iio_get_channel_ext_info_count);
+
+static const struct iio_chan_spec_ext_info *iio_lookup_ext_info(
+						const struct iio_channel *chan,
+						const char *attr)
+{
+	const struct iio_chan_spec_ext_info *ext_info;
+
+	if (!chan->channel->ext_info)
+		return NULL;
+
+	for (ext_info = chan->channel->ext_info; ext_info->name; ++ext_info) {
+		if (!strcmp(attr, ext_info->name))
+			return ext_info;
+	}
+
+	return NULL;
+}
+
+ssize_t iio_read_channel_ext_info(struct iio_channel *chan,
+				  const char *attr, char *buf)
+{
+	const struct iio_chan_spec_ext_info *ext_info;
+
+	ext_info = iio_lookup_ext_info(chan, attr);
+	if (!ext_info)
+		return -EINVAL;
+
+	return ext_info->read(chan->indio_dev, ext_info->private,
+			      chan->channel, buf);
+}
+EXPORT_SYMBOL_GPL(iio_read_channel_ext_info);
+
+ssize_t iio_write_channel_ext_info(struct iio_channel *chan, const char *attr,
+				   const char *buf, size_t len)
+{
+	const struct iio_chan_spec_ext_info *ext_info;
+
+	ext_info = iio_lookup_ext_info(chan, attr);
+	if (!ext_info)
+		return -EINVAL;
+
+	return ext_info->write(chan->indio_dev, ext_info->private,
+			       chan->channel, buf, len);
+}
+EXPORT_SYMBOL_GPL(iio_write_channel_ext_info);
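
A hedged sketch of how a kernel consumer uses these new accessors; the
"powerdown" attribute name is made up, and buf must be a page-sized buffer
as with any ext_info read:

	static ssize_t forward_parent_attr(struct iio_channel *chan, char *buf)
	{
		/* -EINVAL if the parent channel lacks the attribute */
		return iio_read_channel_ext_info(chan, "powerdown", buf);
	}

The iio-mux driver added below is the first user: it caches and replays
writable ext_info attributes when switching between its children.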

+ 18 - 0
drivers/iio/multiplexer/Kconfig

@@ -0,0 +1,18 @@
+#
+# Multiplexer drivers
+#
+# When adding new entries keep the list in alphabetical order
+
+menu "Multiplexers"
+
+config IIO_MUX
+	tristate "IIO multiplexer driver"
+	select MULTIPLEXER
+	depends on OF || COMPILE_TEST
+	help
+	  Say yes here to build support for the IIO multiplexer.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called iio-mux.
+
+endmenu

+ 6 - 0
drivers/iio/multiplexer/Makefile

@@ -0,0 +1,6 @@
+#
+# Makefile for industrial I/O multiplexer drivers
+#
+
+# When adding new entries keep the list in alphabetical order
+obj-$(CONFIG_IIO_MUX) += iio-mux.o

+ 459 - 0
drivers/iio/multiplexer/iio-mux.c

@@ -0,0 +1,459 @@
+/*
+ * IIO multiplexer driver
+ *
+ * Copyright (C) 2017 Axentia Technologies AB
+ *
+ * Author: Peter Rosin <peda@axentia.se>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/err.h>
+#include <linux/iio/consumer.h>
+#include <linux/iio/iio.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/mux/consumer.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+struct mux_ext_info_cache {
+	char *data;
+	ssize_t size;
+};
+
+struct mux_child {
+	struct mux_ext_info_cache *ext_info_cache;
+};
+
+struct mux {
+	int cached_state;
+	struct mux_control *control;
+	struct iio_channel *parent;
+	struct iio_dev *indio_dev;
+	struct iio_chan_spec *chan;
+	struct iio_chan_spec_ext_info *ext_info;
+	struct mux_child *child;
+};
+
+static int iio_mux_select(struct mux *mux, int idx)
+{
+	struct mux_child *child = &mux->child[idx];
+	struct iio_chan_spec const *chan = &mux->chan[idx];
+	int ret;
+	int i;
+
+	ret = mux_control_select(mux->control, chan->channel);
+	if (ret < 0) {
+		mux->cached_state = -1;
+		return ret;
+	}
+
+	if (mux->cached_state == chan->channel)
+		return 0;
+
+	if (chan->ext_info) {
+		for (i = 0; chan->ext_info[i].name; ++i) {
+			const char *attr = chan->ext_info[i].name;
+			struct mux_ext_info_cache *cache;
+
+			cache = &child->ext_info_cache[i];
+
+			if (cache->size < 0)
+				continue;
+
+			ret = iio_write_channel_ext_info(mux->parent, attr,
+							 cache->data,
+							 cache->size);
+
+			if (ret < 0) {
+				mux_control_deselect(mux->control);
+				mux->cached_state = -1;
+				return ret;
+			}
+		}
+	}
+	mux->cached_state = chan->channel;
+
+	return 0;
+}
+
+static void iio_mux_deselect(struct mux *mux)
+{
+	mux_control_deselect(mux->control);
+}
+
+static int mux_read_raw(struct iio_dev *indio_dev,
+			struct iio_chan_spec const *chan,
+			int *val, int *val2, long mask)
+{
+	struct mux *mux = iio_priv(indio_dev);
+	int idx = chan - mux->chan;
+	int ret;
+
+	ret = iio_mux_select(mux, idx);
+	if (ret < 0)
+		return ret;
+
+	switch (mask) {
+	case IIO_CHAN_INFO_RAW:
+		ret = iio_read_channel_raw(mux->parent, val);
+		break;
+
+	case IIO_CHAN_INFO_SCALE:
+		ret = iio_read_channel_scale(mux->parent, val, val2);
+		break;
+
+	default:
+		ret = -EINVAL;
+	}
+
+	iio_mux_deselect(mux);
+
+	return ret;
+}
+
+static int mux_read_avail(struct iio_dev *indio_dev,
+			  struct iio_chan_spec const *chan,
+			  const int **vals, int *type, int *length,
+			  long mask)
+{
+	struct mux *mux = iio_priv(indio_dev);
+	int idx = chan - mux->chan;
+	int ret;
+
+	ret = iio_mux_select(mux, idx);
+	if (ret < 0)
+		return ret;
+
+	switch (mask) {
+	case IIO_CHAN_INFO_RAW:
+		*type = IIO_VAL_INT;
+		ret = iio_read_avail_channel_raw(mux->parent, vals, length);
+		break;
+
+	default:
+		ret = -EINVAL;
+	}
+
+	iio_mux_deselect(mux);
+
+	return ret;
+}
+
+static int mux_write_raw(struct iio_dev *indio_dev,
+			 struct iio_chan_spec const *chan,
+			 int val, int val2, long mask)
+{
+	struct mux *mux = iio_priv(indio_dev);
+	int idx = chan - mux->chan;
+	int ret;
+
+	ret = iio_mux_select(mux, idx);
+	if (ret < 0)
+		return ret;
+
+	switch (mask) {
+	case IIO_CHAN_INFO_RAW:
+		ret = iio_write_channel_raw(mux->parent, val);
+		break;
+
+	default:
+		ret = -EINVAL;
+	}
+
+	iio_mux_deselect(mux);
+
+	return ret;
+}
+
+static const struct iio_info mux_info = {
+	.read_raw = mux_read_raw,
+	.read_avail = mux_read_avail,
+	.write_raw = mux_write_raw,
+	.driver_module = THIS_MODULE,
+};
+
+static ssize_t mux_read_ext_info(struct iio_dev *indio_dev, uintptr_t private,
+				 struct iio_chan_spec const *chan, char *buf)
+{
+	struct mux *mux = iio_priv(indio_dev);
+	int idx = chan - mux->chan;
+	ssize_t ret;
+
+	ret = iio_mux_select(mux, idx);
+	if (ret < 0)
+		return ret;
+
+	ret = iio_read_channel_ext_info(mux->parent,
+					mux->ext_info[private].name,
+					buf);
+
+	iio_mux_deselect(mux);
+
+	return ret;
+}
+
+static ssize_t mux_write_ext_info(struct iio_dev *indio_dev, uintptr_t private,
+				  struct iio_chan_spec const *chan,
+				  const char *buf, size_t len)
+{
+	struct device *dev = indio_dev->dev.parent;
+	struct mux *mux = iio_priv(indio_dev);
+	int idx = chan - mux->chan;
+	char *new;
+	ssize_t ret;
+
+	if (len >= PAGE_SIZE)
+		return -EINVAL;
+
+	ret = iio_mux_select(mux, idx);
+	if (ret < 0)
+		return ret;
+
+	new = devm_kmemdup(dev, buf, len + 1, GFP_KERNEL);
+	if (!new) {
+		iio_mux_deselect(mux);
+		return -ENOMEM;
+	}
+
+	new[len] = 0;
+
+	ret = iio_write_channel_ext_info(mux->parent,
+					 mux->ext_info[private].name,
+					 buf, len);
+	if (ret < 0) {
+		iio_mux_deselect(mux);
+		devm_kfree(dev, new);
+		return ret;
+	}
+
+	devm_kfree(dev, mux->child[idx].ext_info_cache[private].data);
+	mux->child[idx].ext_info_cache[private].data = new;
+	mux->child[idx].ext_info_cache[private].size = len;
+
+	iio_mux_deselect(mux);
+
+	return ret;
+}
+
+static int mux_configure_channel(struct device *dev, struct mux *mux,
+				 u32 state, const char *label, int idx)
+{
+	struct mux_child *child = &mux->child[idx];
+	struct iio_chan_spec *chan = &mux->chan[idx];
+	struct iio_chan_spec const *pchan = mux->parent->channel;
+	char *page = NULL;
+	int num_ext_info;
+	int i;
+	int ret;
+
+	chan->indexed = 1;
+	chan->output = pchan->output;
+	chan->datasheet_name = label;
+	chan->ext_info = mux->ext_info;
+
+	ret = iio_get_channel_type(mux->parent, &chan->type);
+	if (ret < 0) {
+		dev_err(dev, "failed to get parent channel type\n");
+		return ret;
+	}
+
+	if (iio_channel_has_info(pchan, IIO_CHAN_INFO_RAW))
+		chan->info_mask_separate |= BIT(IIO_CHAN_INFO_RAW);
+	if (iio_channel_has_info(pchan, IIO_CHAN_INFO_SCALE))
+		chan->info_mask_separate |= BIT(IIO_CHAN_INFO_SCALE);
+
+	if (iio_channel_has_available(pchan, IIO_CHAN_INFO_RAW))
+		chan->info_mask_separate_available |= BIT(IIO_CHAN_INFO_RAW);
+
+	if (state >= mux_control_states(mux->control)) {
+		dev_err(dev, "too many channels\n");
+		return -EINVAL;
+	}
+
+	chan->channel = state;
+
+	num_ext_info = iio_get_channel_ext_info_count(mux->parent);
+	if (num_ext_info) {
+		page = devm_kzalloc(dev, PAGE_SIZE, GFP_KERNEL);
+		if (!page)
+			return -ENOMEM;
+	}
+	child->ext_info_cache = devm_kzalloc(dev,
+					     sizeof(*child->ext_info_cache) *
+					     num_ext_info, GFP_KERNEL);
+	for (i = 0; i < num_ext_info; ++i) {
+		child->ext_info_cache[i].size = -1;
+
+		if (!pchan->ext_info[i].write)
+			continue;
+		if (!pchan->ext_info[i].read)
+			continue;
+
+		ret = iio_read_channel_ext_info(mux->parent,
+						mux->ext_info[i].name,
+						page);
+		if (ret < 0) {
+			dev_err(dev, "failed to get ext_info '%s'\n",
+				pchan->ext_info[i].name);
+			return ret;
+		}
+		if (ret >= PAGE_SIZE) {
+			dev_err(dev, "too large ext_info '%s'\n",
+				pchan->ext_info[i].name);
+			return -EINVAL;
+		}
+
+		child->ext_info_cache[i].data = devm_kmemdup(dev, page, ret + 1,
+							     GFP_KERNEL);
+		child->ext_info_cache[i].data[ret] = 0;
+		child->ext_info_cache[i].size = ret;
+	}
+
+	if (page)
+		devm_kfree(dev, page);
+
+	return 0;
+}
+
+/*
+ * Same as of_property_for_each_string(), but also keeps track of the
+ * index of each string.
+ */
+#define of_property_for_each_string_index(np, propname, prop, s, i)	\
+	for (prop = of_find_property(np, propname, NULL),		\
+	     s = of_prop_next_string(prop, NULL),			\
+	     i = 0;							\
+	     s;								\
+	     s = of_prop_next_string(prop, s),				\
+	     i++)
+
+static int mux_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = pdev->dev.of_node;
+	struct iio_dev *indio_dev;
+	struct iio_channel *parent;
+	struct mux *mux;
+	struct property *prop;
+	const char *label;
+	u32 state;
+	int sizeof_ext_info;
+	int children;
+	int sizeof_priv;
+	int i;
+	int ret;
+
+	if (!np)
+		return -ENODEV;
+
+	parent = devm_iio_channel_get(dev, "parent");
+	if (IS_ERR(parent)) {
+		if (PTR_ERR(parent) != -EPROBE_DEFER)
+			dev_err(dev, "failed to get parent channel\n");
+		return PTR_ERR(parent);
+	}
+
+	sizeof_ext_info = iio_get_channel_ext_info_count(parent);
+	if (sizeof_ext_info) {
+		sizeof_ext_info += 1; /* one extra entry for the sentinel */
+		sizeof_ext_info *= sizeof(*mux->ext_info);
+	}
+
+	children = 0;
+	of_property_for_each_string(np, "channels", prop, label) {
+		if (*label)
+			children++;
+	}
+	if (children <= 0) {
+		dev_err(dev, "not even a single child\n");
+		return -EINVAL;
+	}
+
+	sizeof_priv = sizeof(*mux);
+	sizeof_priv += sizeof(*mux->child) * children;
+	sizeof_priv += sizeof(*mux->chan) * children;
+	sizeof_priv += sizeof_ext_info;
+
+	indio_dev = devm_iio_device_alloc(dev, sizeof_priv);
+	if (!indio_dev)
+		return -ENOMEM;
+
+	mux = iio_priv(indio_dev);
+	mux->child = (struct mux_child *)(mux + 1);
+	mux->chan = (struct iio_chan_spec *)(mux->child + children);
+
+	platform_set_drvdata(pdev, indio_dev);
+
+	mux->parent = parent;
+	mux->cached_state = -1;
+
+	indio_dev->name = dev_name(dev);
+	indio_dev->dev.parent = dev;
+	indio_dev->info = &mux_info;
+	indio_dev->modes = INDIO_DIRECT_MODE;
+	indio_dev->channels = mux->chan;
+	indio_dev->num_channels = children;
+	if (sizeof_ext_info) {
+		mux->ext_info = devm_kmemdup(dev,
+					     parent->channel->ext_info,
+					     sizeof_ext_info, GFP_KERNEL);
+		if (!mux->ext_info)
+			return -ENOMEM;
+
+		for (i = 0; mux->ext_info[i].name; ++i) {
+			if (parent->channel->ext_info[i].read)
+				mux->ext_info[i].read = mux_read_ext_info;
+			if (parent->channel->ext_info[i].write)
+				mux->ext_info[i].write = mux_write_ext_info;
+			mux->ext_info[i].private = i;
+		}
+	}
+
+	mux->control = devm_mux_control_get(dev, NULL);
+	if (IS_ERR(mux->control)) {
+		if (PTR_ERR(mux->control) != -EPROBE_DEFER)
+			dev_err(dev, "failed to get control-mux\n");
+		return PTR_ERR(mux->control);
+	}
+
+	i = 0;
+	of_property_for_each_string_index(np, "channels", prop, label, state) {
+		if (!*label)
+			continue;
+
+		ret = mux_configure_channel(dev, mux, state, label, i++);
+		if (ret < 0)
+			return ret;
+	}
+
+	ret = devm_iio_device_register(dev, indio_dev);
+	if (ret) {
+		dev_err(dev, "failed to register iio device\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static const struct of_device_id mux_match[] = {
+	{ .compatible = "io-channel-mux" },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, mux_match);
+
+static struct platform_driver mux_driver = {
+	.probe = mux_probe,
+	.driver = {
+		.name = "iio-mux",
+		.of_match_table = mux_match,
+	},
+};
+module_platform_driver(mux_driver);
+
+MODULE_DESCRIPTION("IIO multiplexer driver");
+MODULE_AUTHOR("Peter Rosin <peda@axentia.se>");
+MODULE_LICENSE("GPL v2");

+ 1 - 2
drivers/ipack/ipack.c

@@ -212,7 +212,7 @@ struct ipack_bus_device *ipack_bus_register(struct device *parent, int slots,
 	int bus_nr;
 	struct ipack_bus_device *bus;
 
-	bus = kzalloc(sizeof(struct ipack_bus_device), GFP_KERNEL);
+	bus = kzalloc(sizeof(*bus), GFP_KERNEL);
 	if (!bus)
 		return NULL;
 
@@ -402,7 +402,6 @@ static int ipack_device_read_id(struct ipack_device *dev)
 	 * ID ROM contents */
 	dev->id = kmalloc(dev->id_avail, GFP_KERNEL);
 	if (!dev->id) {
-		dev_err(&dev->dev, "dev->id alloc failed.\n");
 		ret = -ENOMEM;
 		goto out;
 	}

+ 4 - 1
drivers/memory/ti-aemif.c

@@ -357,7 +357,10 @@ static int aemif_probe(struct platform_device *pdev)
 		return PTR_ERR(aemif->clk);
 	}
 
-	clk_prepare_enable(aemif->clk);
+	ret = clk_prepare_enable(aemif->clk);
+	if (ret)
+		return ret;
+
 	aemif->clk_rate = clk_get_rate(aemif->clk) / MSEC_PER_SEC;
 
 	if (of_device_is_compatible(np, "ti,da850-aemif"))

+ 8 - 0
drivers/misc/Kconfig

@@ -490,6 +490,14 @@ config ASPEED_LPC_CTRL
 	  ioctl()s, the driver also provides a read/write interface to a BMC ram
 	  region where the host LPC read/write region can be buffered.
 
+config ASPEED_LPC_SNOOP
+	tristate "Aspeed ast2500 HOST LPC snoop support"
+	depends on (ARCH_ASPEED || COMPILE_TEST) && REGMAP && MFD_SYSCON
+	help
+	  Provides a driver to control the LPC snoop interface which
+	  allows the BMC to listen on and save the data written by
+	  the host to an arbitrary LPC I/O port.
+
 config PCI_ENDPOINT_TEST
 	depends on PCI
 	select CRC32

+ 1 - 0
drivers/misc/Makefile

@@ -53,6 +53,7 @@ obj-$(CONFIG_ECHO)		+= echo/
 obj-$(CONFIG_VEXPRESS_SYSCFG)	+= vexpress-syscfg.o
 obj-$(CONFIG_CXL_BASE)		+= cxl/
 obj-$(CONFIG_ASPEED_LPC_CTRL)	+= aspeed-lpc-ctrl.o
+obj-$(CONFIG_ASPEED_LPC_SNOOP)	+= aspeed-lpc-snoop.o
 obj-$(CONFIG_PCI_ENDPOINT_TEST)	+= pci_endpoint_test.o
 
 lkdtm-$(CONFIG_LKDTM)		+= lkdtm_core.o

+ 8 - 8
drivers/misc/apds990x.c

@@ -32,7 +32,7 @@
 #include <linux/delay.h>
 #include <linux/wait.h>
 #include <linux/slab.h>
-#include <linux/i2c/apds990x.h>
+#include <linux/platform_data/apds990x.h>
 
 /* Register map */
 #define APDS990X_ENABLE	 0x00 /* Enable of states and interrupts */
@@ -841,7 +841,7 @@ static ssize_t apds990x_prox_enable_store(struct device *dev,
 static DEVICE_ATTR(prox0_raw_en, S_IRUGO | S_IWUSR, apds990x_prox_enable_show,
 						   apds990x_prox_enable_store);
 
-static const char reporting_modes[][9] = {"trigger", "periodic"};
+static const char *reporting_modes[] = {"trigger", "periodic"};
 
 static ssize_t apds990x_prox_reporting_mode_show(struct device *dev,
 				   struct device_attribute *attr, char *buf)
@@ -856,13 +856,13 @@ static ssize_t apds990x_prox_reporting_mode_store(struct device *dev,
 				  const char *buf, size_t len)
 {
 	struct apds990x_chip *chip =  dev_get_drvdata(dev);
+	int ret;
 
-	if (sysfs_streq(buf, reporting_modes[0]))
-		chip->prox_continuous_mode = 0;
-	else if (sysfs_streq(buf, reporting_modes[1]))
-		chip->prox_continuous_mode = 1;
-	else
-		return -EINVAL;
+	ret = sysfs_match_string(reporting_modes, buf);
+	if (ret < 0)
+		return ret;
+
+	chip->prox_continuous_mode = ret;
 	return len;
 }
 

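The sysfs_match_string() helper matches a sysfs input (tolerating the
trailing newline) against a string array and returns the matching index or
-EINVAL, which is why the store callback above collapses to a single lookup.
A minimal sketch:

	static const char *modes[] = { "trigger", "periodic" };

	/* idx is 0 for "trigger\n", 1 for "periodic\n", -EINVAL otherwise */
	int idx = sysfs_match_string(modes, buf);
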
+ 261 - 0
drivers/misc/aspeed-lpc-snoop.c

@@ -0,0 +1,261 @@
+/*
+ * Copyright 2017 Google Inc
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Provides a simple driver to control the ASPEED LPC snoop interface which
+ * allows the BMC to listen on and save the data written by
+ * the host to an arbitrary LPC I/O port.
+ *
+ * Typically used by the BMC to "watch" host boot progress via port
+ * 0x80 writes made by the BIOS during the boot process.
+ */
+
+#include <linux/bitops.h>
+#include <linux/interrupt.h>
+#include <linux/kfifo.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#define DEVICE_NAME	"aspeed-lpc-snoop"
+
+#define NUM_SNOOP_CHANNELS 2
+#define SNOOP_FIFO_SIZE 2048
+
+#define HICR5	0x0
+#define HICR5_EN_SNP0W		BIT(0)
+#define HICR5_ENINT_SNP0W	BIT(1)
+#define HICR5_EN_SNP1W		BIT(2)
+#define HICR5_ENINT_SNP1W	BIT(3)
+
+#define HICR6	0x4
+#define HICR6_STR_SNP0W		BIT(0)
+#define HICR6_STR_SNP1W		BIT(1)
+#define SNPWADR	0x10
+#define SNPWADR_CH0_MASK	GENMASK(15, 0)
+#define SNPWADR_CH0_SHIFT	0
+#define SNPWADR_CH1_MASK	GENMASK(31, 16)
+#define SNPWADR_CH1_SHIFT	16
+#define SNPWDR	0x14
+#define SNPWDR_CH0_MASK		GENMASK(7, 0)
+#define SNPWDR_CH0_SHIFT	0
+#define SNPWDR_CH1_MASK		GENMASK(15, 8)
+#define SNPWDR_CH1_SHIFT	8
+#define HICRB	0x80
+#define HICRB_ENSNP0D		BIT(14)
+#define HICRB_ENSNP1D		BIT(15)
+
+struct aspeed_lpc_snoop {
+	struct regmap		*regmap;
+	int			irq;
+	struct kfifo		snoop_fifo[NUM_SNOOP_CHANNELS];
+};
+
+/* Save a byte to a FIFO and discard the oldest byte if FIFO is full */
+static void put_fifo_with_discard(struct kfifo *fifo, u8 val)
+{
+	if (!kfifo_initialized(fifo))
+		return;
+	if (kfifo_is_full(fifo))
+		kfifo_skip(fifo);
+	kfifo_put(fifo, val);
+}
+
+static irqreturn_t aspeed_lpc_snoop_irq(int irq, void *arg)
+{
+	struct aspeed_lpc_snoop *lpc_snoop = arg;
+	u32 reg, data;
+
+	if (regmap_read(lpc_snoop->regmap, HICR6, &reg))
+		return IRQ_NONE;
+
+	/* Check if one of the snoop channels is interrupting */
+	reg &= (HICR6_STR_SNP0W | HICR6_STR_SNP1W);
+	if (!reg)
+		return IRQ_NONE;
+
+	/* Ack pending IRQs */
+	regmap_write(lpc_snoop->regmap, HICR6, reg);
+
+	/* Read and save most recent snoop'ed data byte to FIFO */
+	regmap_read(lpc_snoop->regmap, SNPWDR, &data);
+
+	if (reg & HICR6_STR_SNP0W) {
+		u8 val = (data & SNPWDR_CH0_MASK) >> SNPWDR_CH0_SHIFT;
+
+		put_fifo_with_discard(&lpc_snoop->snoop_fifo[0], val);
+	}
+	if (reg & HICR6_STR_SNP1W) {
+		u8 val = (data & SNPWDR_CH1_MASK) >> SNPWDR_CH1_SHIFT;
+
+		put_fifo_with_discard(&lpc_snoop->snoop_fifo[1], val);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int aspeed_lpc_snoop_config_irq(struct aspeed_lpc_snoop *lpc_snoop,
+				       struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	int rc;
+
+	lpc_snoop->irq = platform_get_irq(pdev, 0);
+	if (!lpc_snoop->irq)
+		return -ENODEV;
+
+	rc = devm_request_irq(dev, lpc_snoop->irq,
+			      aspeed_lpc_snoop_irq, IRQF_SHARED,
+			      DEVICE_NAME, lpc_snoop);
+	if (rc < 0) {
+		dev_warn(dev, "Unable to request IRQ %d\n", lpc_snoop->irq);
+		lpc_snoop->irq = 0;
+		return rc;
+	}
+
+	return 0;
+}
+
+static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+				  int channel, u16 lpc_port)
+{
+	int rc = 0;
+	u32 hicr5_en, snpwadr_mask, snpwadr_shift, hicrb_en;
+
+	/* Create FIFO datastructure */
+	rc = kfifo_alloc(&lpc_snoop->snoop_fifo[channel],
+			 SNOOP_FIFO_SIZE, GFP_KERNEL);
+	if (rc)
+		return rc;
+
+	/* Enable LPC snoop channel at requested port */
+	switch (channel) {
+	case 0:
+		hicr5_en = HICR5_EN_SNP0W | HICR5_ENINT_SNP0W;
+		snpwadr_mask = SNPWADR_CH0_MASK;
+		snpwadr_shift = SNPWADR_CH0_SHIFT;
+		hicrb_en = HICRB_ENSNP0D;
+		break;
+	case 1:
+		hicr5_en = HICR5_EN_SNP1W | HICR5_ENINT_SNP1W;
+		snpwadr_mask = SNPWADR_CH1_MASK;
+		snpwadr_shift = SNPWADR_CH1_SHIFT;
+		hicrb_en = HICRB_ENSNP1D;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	regmap_update_bits(lpc_snoop->regmap, HICR5, hicr5_en, hicr5_en);
+	regmap_update_bits(lpc_snoop->regmap, SNPWADR, snpwadr_mask,
+			   lpc_port << snpwadr_shift);
+	regmap_update_bits(lpc_snoop->regmap, HICRB, hicrb_en, hicrb_en);
+
+	return rc;
+}
+
+static void aspeed_lpc_disable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+				     int channel)
+{
+	switch (channel) {
+	case 0:
+		regmap_update_bits(lpc_snoop->regmap, HICR5,
+				   HICR5_EN_SNP0W | HICR5_ENINT_SNP0W,
+				   0);
+		break;
+	case 1:
+		regmap_update_bits(lpc_snoop->regmap, HICR5,
+				   HICR5_EN_SNP1W | HICR5_ENINT_SNP1W,
+				   0);
+		break;
+	default:
+		return;
+	}
+
+	kfifo_free(&lpc_snoop->snoop_fifo[channel]);
+}
+
+static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
+{
+	struct aspeed_lpc_snoop *lpc_snoop;
+	struct device *dev;
+	u32 port;
+	int rc;
+
+	dev = &pdev->dev;
+
+	lpc_snoop = devm_kzalloc(dev, sizeof(*lpc_snoop), GFP_KERNEL);
+	if (!lpc_snoop)
+		return -ENOMEM;
+
+	lpc_snoop->regmap = syscon_node_to_regmap(
+			pdev->dev.parent->of_node);
+	if (IS_ERR(lpc_snoop->regmap)) {
+		dev_err(dev, "Couldn't get regmap\n");
+		return -ENODEV;
+	}
+
+	dev_set_drvdata(&pdev->dev, lpc_snoop);
+
+	rc = of_property_read_u32_index(dev->of_node, "snoop-ports", 0, &port);
+	if (rc) {
+		dev_err(dev, "no snoop ports configured\n");
+		return -ENODEV;
+	}
+
+	rc = aspeed_lpc_snoop_config_irq(lpc_snoop, pdev);
+	if (rc)
+		return rc;
+
+	rc = aspeed_lpc_enable_snoop(lpc_snoop, 0, port);
+	if (rc)
+		return rc;
+
+	/* Configuration of 2nd snoop channel port is optional */
+	if (of_property_read_u32_index(dev->of_node, "snoop-ports",
+				       1, &port) == 0) {
+		rc = aspeed_lpc_enable_snoop(lpc_snoop, 1, port);
+		if (rc)
+			aspeed_lpc_disable_snoop(lpc_snoop, 0);
+	}
+
+	return rc;
+}
+
+static int aspeed_lpc_snoop_remove(struct platform_device *pdev)
+{
+	struct aspeed_lpc_snoop *lpc_snoop = dev_get_drvdata(&pdev->dev);
+
+	/* Disable both snoop channels */
+	aspeed_lpc_disable_snoop(lpc_snoop, 0);
+	aspeed_lpc_disable_snoop(lpc_snoop, 1);
+
+	return 0;
+}
+
+static const struct of_device_id aspeed_lpc_snoop_match[] = {
+	{ .compatible = "aspeed,ast2500-lpc-snoop" },
+	{ },
+};
+
+static struct platform_driver aspeed_lpc_snoop_driver = {
+	.driver = {
+		.name		= DEVICE_NAME,
+		.of_match_table = aspeed_lpc_snoop_match,
+	},
+	.probe = aspeed_lpc_snoop_probe,
+	.remove = aspeed_lpc_snoop_remove,
+};
+
+module_platform_driver(aspeed_lpc_snoop_driver);
+
+MODULE_DEVICE_TABLE(of, aspeed_lpc_snoop_match);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Robert Lippert <rlippert@google.com>");
+MODULE_DESCRIPTION("Linux driver to control Aspeed LPC snoop functionality");
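
A worked example of the SNPWADR packing above: programming the classic BIOS
POST-code port 0x80 on snoop channel 0 only touches bits [15:0], leaving
channel 1's port in bits [31:16] untouched:

	/* SNPWADR_CH0_MASK = GENMASK(15, 0), SNPWADR_CH0_SHIFT = 0 */
	regmap_update_bits(lpc_snoop->regmap, SNPWADR,
			   SNPWADR_CH0_MASK, 0x80 << SNPWADR_CH0_SHIFT);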

+ 1 - 1
drivers/misc/bh1770glc.c

@@ -27,7 +27,7 @@
 #include <linux/i2c.h>
 #include <linux/interrupt.h>
 #include <linux/mutex.h>
-#include <linux/i2c/bh1770glc.h>
+#include <linux/platform_data/bh1770glc.h>
 #include <linux/regulator/consumer.h>
 #include <linux/pm_runtime.h>
 #include <linux/workqueue.h>

+ 1 - 1
drivers/misc/mei/bus.c

@@ -1040,7 +1040,7 @@ static void mei_cl_bus_dev_init(struct mei_device *bus,
  *
  * @bus: mei device
  */
-void mei_cl_bus_rescan(struct mei_device *bus)
+static void mei_cl_bus_rescan(struct mei_device *bus)
 {
 	struct mei_cl_device *cldev, *n;
 	struct mei_me_client *me_cl;

+ 1 - 1
drivers/misc/mei/hw.h

@@ -65,7 +65,7 @@
 #define HBM_MAJOR_VERSION_DOT              2
 
 /*
- * MEI version with notifcation support
+ * MEI version with notification support
  */
 #define HBM_MINOR_VERSION_EV               0
 #define HBM_MAJOR_VERSION_EV               2

+ 0 - 6
drivers/misc/mei/init.c

@@ -215,12 +215,6 @@ int mei_start(struct mei_device *dev)
 		}
 	} while (ret);
 
-	/* we cannot start the device w/o hbm start message completed */
-	if (dev->dev_state == MEI_DEV_DISABLED) {
-		dev_err(dev->dev, "reset failed");
-		goto err;
-	}
-
 	if (mei_hbm_start_wait(dev)) {
 		dev_err(dev->dev, "HBM haven't started");
 		goto err;

+ 19 - 7
drivers/misc/mei/interrupt.c

@@ -235,6 +235,17 @@ static inline bool hdr_is_fixed(struct mei_msg_hdr *mei_hdr)
 	return mei_hdr->host_addr == 0 && mei_hdr->me_addr != 0;
 }
 
+static inline int hdr_is_valid(u32 msg_hdr)
+{
+	struct mei_msg_hdr *mei_hdr;
+
+	mei_hdr = (struct mei_msg_hdr *)&msg_hdr;
+	if (!msg_hdr || mei_hdr->reserved)
+		return -EBADMSG;
+
+	return 0;
+}
+
 /**
  * mei_irq_read_handler - bottom half read routine after ISR to
  * handle the read processing.
@@ -256,17 +267,18 @@ int mei_irq_read_handler(struct mei_device *dev,
 		dev->rd_msg_hdr = mei_read_hdr(dev);
 		(*slots)--;
 		dev_dbg(dev->dev, "slots =%08x.\n", *slots);
-	}
-	mei_hdr = (struct mei_msg_hdr *) &dev->rd_msg_hdr;
-	dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(mei_hdr));
 
-	if (mei_hdr->reserved || !dev->rd_msg_hdr) {
-		dev_err(dev->dev, "corrupted message header 0x%08X\n",
+		ret = hdr_is_valid(dev->rd_msg_hdr);
+		if (ret) {
+			dev_err(dev->dev, "corrupted message header 0x%08X\n",
 				dev->rd_msg_hdr);
-		ret = -EBADMSG;
-		goto end;
+			goto end;
+		}
 	}
 
+	mei_hdr = (struct mei_msg_hdr *)&dev->rd_msg_hdr;
+	dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(mei_hdr));
+
 	if (mei_slots2data(*slots) < mei_hdr->length) {
 		dev_err(dev->dev, "less data available than length=%08x.\n",
 				*slots);

+ 0 - 1
drivers/misc/mei/mei_dev.h

@@ -306,7 +306,6 @@ struct mei_hw_ops {
 };
 
 /* MEI bus API*/
-void mei_cl_bus_rescan(struct mei_device *bus);
 void mei_cl_bus_rescan_work(struct work_struct *work);
 void mei_cl_bus_dev_fixup(struct mei_cl_device *dev);
 ssize_t __mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,

+ 20 - 7
drivers/misc/sram-exec.c

@@ -19,6 +19,7 @@
 #include <linux/mm.h>
 #include <linux/sram.h>
 
+#include <asm/fncpy.h>
 #include <asm/set_memory.h>
 
 #include "sram.h"
@@ -58,20 +59,32 @@ int sram_add_protect_exec(struct sram_partition *part)
  * @src: Source address for the data to copy
  * @size: Size of copy to perform, which starting from dst, must reside in pool
  *
+ * Return: Address for copied data that can safely be called through function
+ *	   pointer, or NULL if problem.
+ *
  * This helper function allows sram driver to act as central control location
  * of 'protect-exec' pools which are normal sram pools but are always set
  * read-only and executable except when copying data to them, at which point
  * they are set to read-write non-executable, to make sure no memory is
  * writeable and executable at the same time. This region must be page-aligned
 * and is checked during probe, otherwise page attribute manipulation would
- * not be possible.
+ * not be possible. Care must be taken to only call the returned address as
+ * dst address is not guaranteed to be safely callable.
+ *
+ * NOTE: This function uses the fncpy macro to move code to the executable
+ * region. Some architectures have strict requirements for relocating
+ * executable code, so fncpy is a macro that must be defined by any arch
+ * making use of this functionality that guarantees a safe copy of exec
+ * data and returns a safe address that can be called as a C function
+ * pointer.
  */
-int sram_exec_copy(struct gen_pool *pool, void *dst, void *src,
-		   size_t size)
+void *sram_exec_copy(struct gen_pool *pool, void *dst, void *src,
+		     size_t size)
 {
 	struct sram_partition *part = NULL, *p;
 	unsigned long base;
 	int pages;
+	void *dst_cpy;
 
 	mutex_lock(&exec_pool_list_mutex);
 	list_for_each_entry(p, &exec_pool_list, list) {
@@ -81,10 +94,10 @@ int sram_exec_copy(struct gen_pool *pool, void *dst, void *src,
 	mutex_unlock(&exec_pool_list_mutex);
 
 	if (!part)
-		return -EINVAL;
+		return NULL;
 
 	if (!addr_in_gen_pool(pool, (unsigned long)dst, size))
-		return -EINVAL;
+		return NULL;
 
 	base = (unsigned long)part->base;
 	pages = PAGE_ALIGN(size) / PAGE_SIZE;
@@ -94,13 +107,13 @@ int sram_exec_copy(struct gen_pool *pool, void *dst, void *src,
 	set_memory_nx((unsigned long)base, pages);
 	set_memory_rw((unsigned long)base, pages);
 
-	memcpy(dst, src, size);
+	dst_cpy = fncpy(dst, src, size);
 
 	set_memory_ro((unsigned long)base, pages);
 	set_memory_x((unsigned long)base, pages);
 
 	mutex_unlock(&part->lock);
 
-	return 0;
+	return dst_cpy;
 }
 EXPORT_SYMBOL_GPL(sram_exec_copy);

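With the new signature, the address returned by sram_exec_copy(), not dst,
is the only safe thing to call, since fncpy() may adjust it (for example to
preserve the Thumb bit on ARM). A hedged sketch, where my_sram_fn and its
size are hypothetical:

	typedef int (*sram_fn_t)(int arg);

	static int run_from_sram(struct gen_pool *pool, void *dst)
	{
		sram_fn_t fn;

		fn = sram_exec_copy(pool, dst, my_sram_fn, my_sram_fn_sz);
		if (!fn)
			return -EINVAL;

		return fn(42);	/* call through the returned address only */
	}
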
+ 59 - 0
drivers/mux/Kconfig

@@ -0,0 +1,59 @@
+#
+# Multiplexer devices
+#
+
+menuconfig MULTIPLEXER
+	tristate "Multiplexer subsystem"
+	help
+	  Multiplexer controller subsystem. Multiplexers are used in a
+	  variety of settings, and this subsystem abstracts their use
+	  so that the rest of the kernel sees a common interface. When
+	  multiple parallel multiplexers are controlled by one single
+	  multiplexer controller, this subsystem also coordinates the
+	  multiplexer accesses.
+
+	  To compile the subsystem as a module, choose M here: the module will
+	  be called mux-core.
+
+if MULTIPLEXER
+
+config MUX_ADG792A
+	tristate "Analog Devices ADG792A/ADG792G Multiplexers"
+	depends on I2C
+	help
+	  ADG792A and ADG792G Wide Bandwidth Triple 4:1 Multiplexers
+
+	  The driver supports both operating the three multiplexers in
+	  parallel and operating them independently.
+
+	  To compile the driver as a module, choose M here: the module will
+	  be called mux-adg792a.
+
+config MUX_GPIO
+	tristate "GPIO-controlled Multiplexer"
+	depends on GPIOLIB || COMPILE_TEST
+	help
+	  GPIO-controlled Multiplexer controller.
+
+	  The driver builds a single multiplexer controller using a number
+	  of gpio pins. For N pins, there will be 2^N possible multiplexer
+	  states. The GPIO pins can be connected (by the hardware) to several
+	  multiplexers, which in that case will be operated in parallel.
+
+	  To compile the driver as a module, choose M here: the module will
+	  be called mux-gpio.
+
+config MUX_MMIO
+	tristate "MMIO register bitfield-controlled Multiplexer"
+	depends on (OF && MFD_SYSCON) || COMPILE_TEST
+	help
+	  MMIO register bitfield-controlled Multiplexer controller.
+
+	  The driver builds multiplexer controllers for bitfields in a syscon
+	  register. For N bit wide bitfields, there will be 2^N possible
+	  multiplexer states.
+
+	  To compile the driver as a module, choose M here: the module will
+	  be called mux-mmio.
+
+endif

+ 8 - 0
drivers/mux/Makefile

@@ -0,0 +1,8 @@
+#
+# Makefile for multiplexer devices.
+#
+
+obj-$(CONFIG_MULTIPLEXER)	+= mux-core.o
+obj-$(CONFIG_MUX_ADG792A)	+= mux-adg792a.o
+obj-$(CONFIG_MUX_GPIO)		+= mux-gpio.o
+obj-$(CONFIG_MUX_MMIO)		+= mux-mmio.o

+ 157 - 0
drivers/mux/mux-adg792a.c

@@ -0,0 +1,157 @@
+/*
+ * Multiplexer driver for Analog Devices ADG792A/G Triple 4:1 mux
+ *
+ * Copyright (C) 2017 Axentia Technologies AB
+ *
+ * Author: Peter Rosin <peda@axentia.se>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/err.h>
+#include <linux/i2c.h>
+#include <linux/module.h>
+#include <linux/mux/driver.h>
+#include <linux/property.h>
+
+#define ADG792A_LDSW		BIT(0)
+#define ADG792A_RESETB		BIT(1)
+#define ADG792A_DISABLE(mux)	(0x50 | (mux))
+#define ADG792A_DISABLE_ALL	(0x5f)
+#define ADG792A_MUX(mux, state)	(0xc0 | (((mux) + 1) << 2) | (state))
+#define ADG792A_MUX_ALL(state)	(0xc0 | (state))
+
+static int adg792a_write_cmd(struct i2c_client *i2c, u8 cmd, int reset)
+{
+	u8 data = ADG792A_RESETB | ADG792A_LDSW;
+
+	/* ADG792A_RESETB is active low, the chip resets when it is zero. */
+	if (reset)
+		data &= ~ADG792A_RESETB;
+
+	return i2c_smbus_write_byte_data(i2c, cmd, data);
+}
+
+static int adg792a_set(struct mux_control *mux, int state)
+{
+	struct i2c_client *i2c = to_i2c_client(mux->chip->dev.parent);
+	u8 cmd;
+
+	if (mux->chip->controllers == 1) {
+		/* parallel mux controller operation */
+		if (state == MUX_IDLE_DISCONNECT)
+			cmd = ADG792A_DISABLE_ALL;
+		else
+			cmd = ADG792A_MUX_ALL(state);
+	} else {
+		unsigned int controller = mux_control_get_index(mux);
+
+		if (state == MUX_IDLE_DISCONNECT)
+			cmd = ADG792A_DISABLE(controller);
+		else
+			cmd = ADG792A_MUX(controller, state);
+	}
+
+	return adg792a_write_cmd(i2c, cmd, 0);
+}
+
+static const struct mux_control_ops adg792a_ops = {
+	.set = adg792a_set,
+};
+
+static int adg792a_probe(struct i2c_client *i2c,
+			 const struct i2c_device_id *id)
+{
+	struct device *dev = &i2c->dev;
+	struct mux_chip *mux_chip;
+	s32 idle_state[3];
+	u32 cells;
+	int ret;
+	int i;
+
+	if (!i2c_check_functionality(i2c->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+		return -ENODEV;
+
+	ret = device_property_read_u32(dev, "#mux-control-cells", &cells);
+	if (ret < 0)
+		return ret;
+	if (cells >= 2)
+		return -EINVAL;
+
+	mux_chip = devm_mux_chip_alloc(dev, cells ? 3 : 1, 0);
+	if (IS_ERR(mux_chip))
+		return PTR_ERR(mux_chip);
+
+	mux_chip->ops = &adg792a_ops;
+
+	ret = adg792a_write_cmd(i2c, ADG792A_DISABLE_ALL, 1);
+	if (ret < 0)
+		return ret;
+
+	ret = device_property_read_u32_array(dev, "idle-state",
+					     (u32 *)idle_state,
+					     mux_chip->controllers);
+	if (ret < 0) {
+		idle_state[0] = MUX_IDLE_AS_IS;
+		idle_state[1] = MUX_IDLE_AS_IS;
+		idle_state[2] = MUX_IDLE_AS_IS;
+	}
+
+	for (i = 0; i < mux_chip->controllers; ++i) {
+		struct mux_control *mux = &mux_chip->mux[i];
+
+		mux->states = 4;
+
+		switch (idle_state[i]) {
+		case MUX_IDLE_DISCONNECT:
+		case MUX_IDLE_AS_IS:
+		case 0 ... 4:
+			mux->idle_state = idle_state[i];
+			break;
+		default:
+			dev_err(dev, "invalid idle-state %d\n", idle_state[i]);
+			return -EINVAL;
+		}
+	}
+
+	ret = devm_mux_chip_register(dev, mux_chip);
+	if (ret < 0)
+		return ret;
+
+	if (cells)
+		dev_info(dev, "3x single pole quadruple throw muxes registered\n");
+	else
+		dev_info(dev, "triple pole quadruple throw mux registered\n");
+
+	return 0;
+}
+
+static const struct i2c_device_id adg792a_id[] = {
+	{ .name = "adg792a", },
+	{ .name = "adg792g", },
+	{ }
+};
+MODULE_DEVICE_TABLE(i2c, adg792a_id);
+
+static const struct of_device_id adg792a_of_match[] = {
+	{ .compatible = "adi,adg792a", },
+	{ .compatible = "adi,adg792g", },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, adg792a_of_match);
+
+static struct i2c_driver adg792a_driver = {
+	.driver		= {
+		.name		= "adg792a",
+		.of_match_table = of_match_ptr(adg792a_of_match),
+	},
+	.probe		= adg792a_probe,
+	.id_table	= adg792a_id,
+};
+module_i2c_driver(adg792a_driver);
+
+MODULE_DESCRIPTION("Analog Devices ADG792A/G Triple 4:1 mux driver");
+MODULE_AUTHOR("Peter Rosin <peda@axentia.se>");
+MODULE_LICENSE("GPL v2");
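
A worked example of the command-byte encoding above: selecting state 2 on
the second mux controller (index 1) gives

	ADG792A_MUX(1, 2) = 0xc0 | ((1 + 1) << 2) | 2
	                  = 0xc0 | 0x08 | 0x02 = 0xca

while ADG792A_MUX_ALL(2) = 0xc2 drives all three 4:1 muxes to state 2 in
the parallel (single controller) configuration.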

+ 547 - 0
drivers/mux/mux-core.c

@@ -0,0 +1,547 @@
+/*
+ * Multiplexer subsystem
+ *
+ * Copyright (C) 2017 Axentia Technologies AB
+ *
+ * Author: Peter Rosin <peda@axentia.se>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) "mux-core: " fmt
+
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/idr.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mux/consumer.h>
+#include <linux/mux/driver.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/slab.h>
+
+/*
+ * The idle-as-is "state" is not an actual state that may be selected, it
+ * only implies that the state should not be changed. So, use that state
+ * as indication that the cached state of the multiplexer is unknown.
+ */
+#define MUX_CACHE_UNKNOWN MUX_IDLE_AS_IS
+
+static struct class mux_class = {
+	.name = "mux",
+	.owner = THIS_MODULE,
+};
+
+static DEFINE_IDA(mux_ida);
+
+static int __init mux_init(void)
+{
+	ida_init(&mux_ida);
+	return class_register(&mux_class);
+}
+
+static void __exit mux_exit(void)
+{
+	class_register(&mux_class);
+	ida_destroy(&mux_ida);
+}
+
+static void mux_chip_release(struct device *dev)
+{
+	struct mux_chip *mux_chip = to_mux_chip(dev);
+
+	ida_simple_remove(&mux_ida, mux_chip->id);
+	kfree(mux_chip);
+}
+
+static struct device_type mux_type = {
+	.name = "mux-chip",
+	.release = mux_chip_release,
+};
+
+/**
+ * mux_chip_alloc() - Allocate a mux-chip.
+ * @dev: The parent device implementing the mux interface.
+ * @controllers: The number of mux controllers to allocate for this chip.
+ * @sizeof_priv: Size of extra memory area for private use by the caller.
+ *
+ * After allocating the mux-chip with the desired number of mux controllers
+ * but before registering the chip, the mux driver is required to configure
+ * the number of valid mux states in the mux_chip->mux[N].states members and
+ * the desired idle state in the returned mux_chip->mux[N].idle_state members.
+ * The default idle state is MUX_IDLE_AS_IS. The mux driver also needs to
+ * provide a pointer to the operations struct in the mux_chip->ops member
+ * before registering the mux-chip with mux_chip_register.
+ *
+ * Return: A pointer to the new mux-chip, or an ERR_PTR with a negative errno.
+ */
+struct mux_chip *mux_chip_alloc(struct device *dev,
+				unsigned int controllers, size_t sizeof_priv)
+{
+	struct mux_chip *mux_chip;
+	int i;
+
+	if (WARN_ON(!dev || !controllers))
+		return ERR_PTR(-EINVAL);
+
+	mux_chip = kzalloc(sizeof(*mux_chip) +
+			   controllers * sizeof(*mux_chip->mux) +
+			   sizeof_priv, GFP_KERNEL);
+	if (!mux_chip)
+		return ERR_PTR(-ENOMEM);
+
+	mux_chip->mux = (struct mux_control *)(mux_chip + 1);
+	mux_chip->dev.class = &mux_class;
+	mux_chip->dev.type = &mux_type;
+	mux_chip->dev.parent = dev;
+	mux_chip->dev.of_node = dev->of_node;
+	dev_set_drvdata(&mux_chip->dev, mux_chip);
+
+	mux_chip->id = ida_simple_get(&mux_ida, 0, 0, GFP_KERNEL);
+	if (mux_chip->id < 0) {
+		int err = mux_chip->id;
+
+		pr_err("muxchipX failed to get a device id\n");
+		kfree(mux_chip);
+		return ERR_PTR(err);
+	}
+	dev_set_name(&mux_chip->dev, "muxchip%d", mux_chip->id);
+
+	mux_chip->controllers = controllers;
+	for (i = 0; i < controllers; ++i) {
+		struct mux_control *mux = &mux_chip->mux[i];
+
+		mux->chip = mux_chip;
+		sema_init(&mux->lock, 1);
+		mux->cached_state = MUX_CACHE_UNKNOWN;
+		mux->idle_state = MUX_IDLE_AS_IS;
+	}
+
+	device_initialize(&mux_chip->dev);
+
+	return mux_chip;
+}
+EXPORT_SYMBOL_GPL(mux_chip_alloc);
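
Editor's note: the kernel-doc above prescribes the driver-side sequence; the following is a minimal hypothetical sketch (the foo_* names are invented), not code from this patchset:

static int foo_mux_probe(struct platform_device *pdev)
{
	struct mux_chip *mux_chip;

	/* one controller, no private data */
	mux_chip = devm_mux_chip_alloc(&pdev->dev, 1, 0);
	if (IS_ERR(mux_chip))
		return PTR_ERR(mux_chip);

	mux_chip->ops = &foo_mux_ops;		/* assumed mux_control_ops */
	mux_chip->mux[0].states = 8;		/* an 8-way mux */
	mux_chip->mux[0].idle_state = 0;	/* park on state 0 when idle */

	return devm_mux_chip_register(&pdev->dev, mux_chip);
}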
+
+static int mux_control_set(struct mux_control *mux, int state)
+{
+	int ret = mux->chip->ops->set(mux, state);
+
+	mux->cached_state = ret < 0 ? MUX_CACHE_UNKNOWN : state;
+
+	return ret;
+}
+
+/**
+ * mux_chip_register() - Register a mux-chip, thus readying the controllers
+ *			 for use.
+ * @mux_chip: The mux-chip to register.
+ *
+ * Do not retry registration of the same mux-chip on failure. You should
+ * instead put it away with mux_chip_free() and allocate a new one, if you
+ * for some reason would like to retry registration.
+ *
+ * Return: Zero on success or a negative errno on error.
+ */
+int mux_chip_register(struct mux_chip *mux_chip)
+{
+	int i;
+	int ret;
+
+	for (i = 0; i < mux_chip->controllers; ++i) {
+		struct mux_control *mux = &mux_chip->mux[i];
+
+		if (mux->idle_state == mux->cached_state)
+			continue;
+
+		ret = mux_control_set(mux, mux->idle_state);
+		if (ret < 0) {
+			dev_err(&mux_chip->dev, "unable to set idle state\n");
+			return ret;
+		}
+	}
+
+	ret = device_add(&mux_chip->dev);
+	if (ret < 0)
+		dev_err(&mux_chip->dev,
+			"device_add failed in %s: %d\n", __func__, ret);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mux_chip_register);
+
+/**
+ * mux_chip_unregister() - Take the mux-chip off-line.
+ * @mux_chip: The mux-chip to unregister.
+ *
+ * mux_chip_unregister() reverses the effects of mux_chip_register().
+ * But not completely, you should not try to call mux_chip_register()
+ * on a mux-chip that has been registered before.
+ */
+void mux_chip_unregister(struct mux_chip *mux_chip)
+{
+	device_del(&mux_chip->dev);
+}
+EXPORT_SYMBOL_GPL(mux_chip_unregister);
+
+/**
+ * mux_chip_free() - Free the mux-chip for good.
+ * @mux_chip: The mux-chip to free.
+ *
+ * mux_chip_free() reverses the effects of mux_chip_alloc().
+ */
+void mux_chip_free(struct mux_chip *mux_chip)
+{
+	if (!mux_chip)
+		return;
+
+	put_device(&mux_chip->dev);
+}
+EXPORT_SYMBOL_GPL(mux_chip_free);
+
+static void devm_mux_chip_release(struct device *dev, void *res)
+{
+	struct mux_chip *mux_chip = *(struct mux_chip **)res;
+
+	mux_chip_free(mux_chip);
+}
+
+/**
+ * devm_mux_chip_alloc() - Resource-managed version of mux_chip_alloc().
+ * @dev: The parent device implementing the mux interface.
+ * @controllers: The number of mux controllers to allocate for this chip.
+ * @sizeof_priv: Size of extra memory area for private use by the caller.
+ *
+ * See mux_chip_alloc() for more details.
+ *
+ * Return: A pointer to the new mux-chip, or an ERR_PTR with a negative errno.
+ */
+struct mux_chip *devm_mux_chip_alloc(struct device *dev,
+				     unsigned int controllers,
+				     size_t sizeof_priv)
+{
+	struct mux_chip **ptr, *mux_chip;
+
+	ptr = devres_alloc(devm_mux_chip_release, sizeof(*ptr), GFP_KERNEL);
+	if (!ptr)
+		return ERR_PTR(-ENOMEM);
+
+	mux_chip = mux_chip_alloc(dev, controllers, sizeof_priv);
+	if (IS_ERR(mux_chip)) {
+		devres_free(ptr);
+		return mux_chip;
+	}
+
+	*ptr = mux_chip;
+	devres_add(dev, ptr);
+
+	return mux_chip;
+}
+EXPORT_SYMBOL_GPL(devm_mux_chip_alloc);
+
+static void devm_mux_chip_reg_release(struct device *dev, void *res)
+{
+	struct mux_chip *mux_chip = *(struct mux_chip **)res;
+
+	mux_chip_unregister(mux_chip);
+}
+
+/**
+ * devm_mux_chip_register() - Resource-managed version of mux_chip_register().
+ * @dev: The parent device implementing the mux interface.
+ * @mux_chip: The mux-chip to register.
+ *
+ * See mux_chip_register() for more details.
+ *
+ * Return: Zero on success or a negative errno on error.
+ */
+int devm_mux_chip_register(struct device *dev,
+			   struct mux_chip *mux_chip)
+{
+	struct mux_chip **ptr;
+	int res;
+
+	ptr = devres_alloc(devm_mux_chip_reg_release, sizeof(*ptr), GFP_KERNEL);
+	if (!ptr)
+		return -ENOMEM;
+
+	res = mux_chip_register(mux_chip);
+	if (res) {
+		devres_free(ptr);
+		return res;
+	}
+
+	*ptr = mux_chip;
+	devres_add(dev, ptr);
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(devm_mux_chip_register);
+
+/**
+ * mux_control_states() - Query the number of multiplexer states.
+ * @mux: The mux-control to query.
+ *
+ * Return: The number of multiplexer states.
+ */
+unsigned int mux_control_states(struct mux_control *mux)
+{
+	return mux->states;
+}
+EXPORT_SYMBOL_GPL(mux_control_states);
+
+/*
+ * The mux->lock must be down when calling this function.
+ */
+static int __mux_control_select(struct mux_control *mux, int state)
+{
+	int ret;
+
+	if (WARN_ON(state < 0 || state >= mux->states))
+		return -EINVAL;
+
+	if (mux->cached_state == state)
+		return 0;
+
+	ret = mux_control_set(mux, state);
+	if (ret >= 0)
+		return 0;
+
+	/* The mux update failed, try to revert if appropriate... */
+	if (mux->idle_state != MUX_IDLE_AS_IS)
+		mux_control_set(mux, mux->idle_state);
+
+	return ret;
+}
+
+/**
+ * mux_control_select() - Select the given multiplexer state.
+ * @mux: The mux-control to request a change of state from.
+ * @state: The new requested state.
+ *
+ * On successfully selecting the mux-control state, it will be locked until
+ * there is a call to mux_control_deselect(). If the mux-control is already
+ * selected when mux_control_select() is called, the caller will be blocked
+ * until mux_control_deselect() is called (by someone else).
+ *
+ * Therefore, make sure to call mux_control_deselect() when the operation is
+ * complete and the mux-control is free for others to use, but do not call
+ * mux_control_deselect() if mux_control_select() fails.
+ *
+ * Return: 0 when the mux-control has been set to the requested state, or a
+ * negative errno on error.
+ */
+int mux_control_select(struct mux_control *mux, unsigned int state)
+{
+	int ret;
+
+	ret = down_killable(&mux->lock);
+	if (ret < 0)
+		return ret;
+
+	ret = __mux_control_select(mux, state);
+
+	if (ret < 0)
+		up(&mux->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mux_control_select);
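
Editor's note: a hedged consumer-side sketch of the locking discipline documented above (do_io_through_mux() is a placeholder, not a real API):

	ret = mux_control_select(mux, state);
	if (ret < 0)
		return ret;		/* select failed: do NOT deselect */

	ret = do_io_through_mux();	/* the mux stays locked for us here */

	mux_control_deselect(mux);	/* unlock and restore the idle state */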
+
+/**
+ * mux_control_try_select() - Try to select the given multiplexer state.
+ * @mux: The mux-control to request a change of state from.
+ * @state: The new requested state.
+ *
+ * On successfully selecting the mux-control state, it will be locked until
+ * mux_control_deselect() is called.
+ *
+ * Therefore, make sure to call mux_control_deselect() when the operation is
+ * complete and the mux-control is free for others to use, but do not call
+ * mux_control_deselect() if mux_control_try_select() fails.
+ *
+ * Return: 0 when the mux-control has been set to the requested state, or a
+ * negative errno on error. Specifically -EBUSY if the mux-control is contended.
+ */
+int mux_control_try_select(struct mux_control *mux, unsigned int state)
+{
+	int ret;
+
+	if (down_trylock(&mux->lock))
+		return -EBUSY;
+
+	ret = __mux_control_select(mux, state);
+
+	if (ret < 0)
+		up(&mux->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mux_control_try_select);
+
+/**
+ * mux_control_deselect() - Deselect the previously selected multiplexer state.
+ * @mux: The mux-control to deselect.
+ *
+ * It is required that a single call is made to mux_control_deselect() for
+ * each and every successful call made to either of mux_control_select() or
+ * mux_control_try_select().
+ *
+ * Return: 0 on success and a negative errno on error. An error can only
+ * occur if the mux has an idle state. Note that even if an error occurs, the
+ * mux-control is unlocked and is thus free for the next access.
+ */
+int mux_control_deselect(struct mux_control *mux)
+{
+	int ret = 0;
+
+	if (mux->idle_state != MUX_IDLE_AS_IS &&
+	    mux->idle_state != mux->cached_state)
+		ret = mux_control_set(mux, mux->idle_state);
+
+	up(&mux->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mux_control_deselect);
+
+static int of_dev_node_match(struct device *dev, const void *data)
+{
+	return dev->of_node == data;
+}
+
+static struct mux_chip *of_find_mux_chip_by_node(struct device_node *np)
+{
+	struct device *dev;
+
+	dev = class_find_device(&mux_class, NULL, np, of_dev_node_match);
+
+	return dev ? to_mux_chip(dev) : NULL;
+}
+
+/**
+ * mux_control_get() - Get the mux-control for a device.
+ * @dev: The device that needs a mux-control.
+ * @mux_name: The name identifying the mux-control.
+ *
+ * Return: A pointer to the mux-control, or an ERR_PTR with a negative errno.
+ */
+struct mux_control *mux_control_get(struct device *dev, const char *mux_name)
+{
+	struct device_node *np = dev->of_node;
+	struct of_phandle_args args;
+	struct mux_chip *mux_chip;
+	unsigned int controller;
+	int index = 0;
+	int ret;
+
+	if (mux_name) {
+		index = of_property_match_string(np, "mux-control-names",
+						 mux_name);
+		if (index < 0) {
+			dev_err(dev, "mux controller '%s' not found\n",
+				mux_name);
+			return ERR_PTR(index);
+		}
+	}
+
+	ret = of_parse_phandle_with_args(np,
+					 "mux-controls", "#mux-control-cells",
+					 index, &args);
+	if (ret) {
+		dev_err(dev, "%s: failed to get mux-control %s(%i)\n",
+			np->full_name, mux_name ?: "", index);
+		return ERR_PTR(ret);
+	}
+
+	mux_chip = of_find_mux_chip_by_node(args.np);
+	of_node_put(args.np);
+	if (!mux_chip)
+		return ERR_PTR(-EPROBE_DEFER);
+
+	if (args.args_count > 1 ||
+	    (!args.args_count && (mux_chip->controllers > 1))) {
+		dev_err(dev, "%s: wrong #mux-control-cells for %s\n",
+			np->full_name, args.np->full_name);
+		return ERR_PTR(-EINVAL);
+	}
+
+	controller = 0;
+	if (args.args_count)
+		controller = args.args[0];
+
+	if (controller >= mux_chip->controllers) {
+		dev_err(dev, "%s: bad mux controller %u specified in %s\n",
+			np->full_name, controller, args.np->full_name);
+		return ERR_PTR(-EINVAL);
+	}
+
+	get_device(&mux_chip->dev);
+	return &mux_chip->mux[controller];
+}
+EXPORT_SYMBOL_GPL(mux_control_get);
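
Editor's note: a hypothetical consumer fragment for the lookup above, assuming the consumer's DT node carries a "mux-controls" phandle list and a matching mux-control-names entry "adc":

	struct mux_control *mux;

	mux = devm_mux_control_get(dev, "adc");
	if (IS_ERR(mux))
		return PTR_ERR(mux);	/* -EPROBE_DEFER until the mux chip probes */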
+
+/**
+ * mux_control_put() - Put away the mux-control for good.
+ * @mux: The mux-control to put away.
+ *
+ * mux_control_put() reverses the effects of mux_control_get().
+ */
+void mux_control_put(struct mux_control *mux)
+{
+	put_device(&mux->chip->dev);
+}
+EXPORT_SYMBOL_GPL(mux_control_put);
+
+static void devm_mux_control_release(struct device *dev, void *res)
+{
+	struct mux_control *mux = *(struct mux_control **)res;
+
+	mux_control_put(mux);
+}
+
+/**
+ * devm_mux_control_get() - Get the mux-control for a device, with resource
+ *			    management.
+ * @dev: The device that needs a mux-control.
+ * @mux_name: The name identifying the mux-control.
+ *
+ * Return: Pointer to the mux-control, or an ERR_PTR with a negative errno.
+ */
+struct mux_control *devm_mux_control_get(struct device *dev,
+					 const char *mux_name)
+{
+	struct mux_control **ptr, *mux;
+
+	ptr = devres_alloc(devm_mux_control_release, sizeof(*ptr), GFP_KERNEL);
+	if (!ptr)
+		return ERR_PTR(-ENOMEM);
+
+	mux = mux_control_get(dev, mux_name);
+	if (IS_ERR(mux)) {
+		devres_free(ptr);
+		return mux;
+	}
+
+	*ptr = mux;
+	devres_add(dev, ptr);
+
+	return mux;
+}
+EXPORT_SYMBOL_GPL(devm_mux_control_get);
+
+/*
+ * Using subsys_initcall instead of module_init here to try to ensure - for
+ * the non-modular case - that the subsystem is initialized when mux consumers
+ * and mux controllers start to use it.
+ * For the modular case, the ordering is ensured with module dependencies.
+ */
+subsys_initcall(mux_init);
+module_exit(mux_exit);
+
+MODULE_DESCRIPTION("Multiplexer subsystem");
+MODULE_AUTHOR("Peter Rosin <peda@axentia.se>");
+MODULE_LICENSE("GPL v2");

+ 114 - 0
drivers/mux/mux-gpio.c

@@ -0,0 +1,114 @@
+/*
+ * GPIO-controlled multiplexer driver
+ *
+ * Copyright (C) 2017 Axentia Technologies AB
+ *
+ * Author: Peter Rosin <peda@axentia.se>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/err.h>
+#include <linux/gpio/consumer.h>
+#include <linux/module.h>
+#include <linux/mux/driver.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+
+struct mux_gpio {
+	struct gpio_descs *gpios;
+	int *val;
+};
+
+static int mux_gpio_set(struct mux_control *mux, int state)
+{
+	struct mux_gpio *mux_gpio = mux_chip_priv(mux->chip);
+	int i;
+
+	for (i = 0; i < mux_gpio->gpios->ndescs; i++)
+		mux_gpio->val[i] = (state >> i) & 1;
+
+	gpiod_set_array_value_cansleep(mux_gpio->gpios->ndescs,
+				       mux_gpio->gpios->desc,
+				       mux_gpio->val);
+
+	return 0;
+}
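
Editor's note: a worked example (values invented) of the bit fan-out in mux_gpio_set() above; with three mux GPIOs (ndescs = 3) and state = 5 (binary 101):

	/*
	 * val[0] = (5 >> 0) & 1 = 1	-> mux GPIO 0 driven high
	 * val[1] = (5 >> 1) & 1 = 0	-> mux GPIO 1 driven low
	 * val[2] = (5 >> 2) & 1 = 1	-> mux GPIO 2 driven high
	 */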
+
+static const struct mux_control_ops mux_gpio_ops = {
+	.set = mux_gpio_set,
+};
+
+static const struct of_device_id mux_gpio_dt_ids[] = {
+	{ .compatible = "gpio-mux", },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, mux_gpio_dt_ids);
+
+static int mux_gpio_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct mux_chip *mux_chip;
+	struct mux_gpio *mux_gpio;
+	int pins;
+	s32 idle_state;
+	int ret;
+
+	pins = gpiod_count(dev, "mux");
+	if (pins < 0)
+		return pins;
+
+	mux_chip = devm_mux_chip_alloc(dev, 1, sizeof(*mux_gpio) +
+				       pins * sizeof(*mux_gpio->val));
+	if (IS_ERR(mux_chip))
+		return PTR_ERR(mux_chip);
+
+	mux_gpio = mux_chip_priv(mux_chip);
+	mux_gpio->val = (int *)(mux_gpio + 1);
+	mux_chip->ops = &mux_gpio_ops;
+
+	mux_gpio->gpios = devm_gpiod_get_array(dev, "mux", GPIOD_OUT_LOW);
+	if (IS_ERR(mux_gpio->gpios)) {
+		ret = PTR_ERR(mux_gpio->gpios);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to get gpios\n");
+		return ret;
+	}
+	WARN_ON(pins != mux_gpio->gpios->ndescs);
+	mux_chip->mux->states = 1 << pins;
+
+	ret = device_property_read_u32(dev, "idle-state", (u32 *)&idle_state);
+	if (ret >= 0 && idle_state != MUX_IDLE_AS_IS) {
+		if (idle_state < 0 || idle_state >= mux_chip->mux->states) {
+			dev_err(dev, "invalid idle-state %u\n", idle_state);
+			return -EINVAL;
+		}
+
+		mux_chip->mux->idle_state = idle_state;
+	}
+
+	ret = devm_mux_chip_register(dev, mux_chip);
+	if (ret < 0)
+		return ret;
+
+	dev_info(dev, "%u-way mux-controller registered\n",
+		 mux_chip->mux->states);
+
+	return 0;
+}
+
+static struct platform_driver mux_gpio_driver = {
+	.driver = {
+		.name = "gpio-mux",
+		.of_match_table	= of_match_ptr(mux_gpio_dt_ids),
+	},
+	.probe = mux_gpio_probe,
+};
+module_platform_driver(mux_gpio_driver);
+
+MODULE_DESCRIPTION("GPIO-controlled multiplexer driver");
+MODULE_AUTHOR("Peter Rosin <peda@axentia.se>");
+MODULE_LICENSE("GPL v2");

+ 141 - 0
drivers/mux/mux-mmio.c

@@ -0,0 +1,141 @@
+/*
+ * MMIO register bitfield-controlled multiplexer driver
+ *
+ * Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/bitops.h>
+#include <linux/err.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/mux/driver.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+#include <linux/regmap.h>
+
+static int mux_mmio_set(struct mux_control *mux, int state)
+{
+	struct regmap_field **fields = mux_chip_priv(mux->chip);
+
+	return regmap_field_write(fields[mux_control_get_index(mux)], state);
+}
+
+static const struct mux_control_ops mux_mmio_ops = {
+	.set = mux_mmio_set,
+};
+
+static const struct of_device_id mux_mmio_dt_ids[] = {
+	{ .compatible = "mmio-mux", },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, mux_mmio_dt_ids);
+
+static int mux_mmio_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct regmap_field **fields;
+	struct mux_chip *mux_chip;
+	struct regmap *regmap;
+	int num_fields;
+	int ret;
+	int i;
+
+	regmap = syscon_node_to_regmap(np->parent);
+	if (IS_ERR(regmap)) {
+		ret = PTR_ERR(regmap);
+		dev_err(dev, "failed to get regmap: %d\n", ret);
+		return ret;
+	}
+
+	ret = of_property_count_u32_elems(np, "mux-reg-masks");
+	if (ret == 0 || ret % 2)
+		ret = -EINVAL;
+	if (ret < 0) {
+		dev_err(dev, "mux-reg-masks property missing or invalid: %d\n",
+			ret);
+		return ret;
+	}
+	num_fields = ret / 2;
+
+	mux_chip = devm_mux_chip_alloc(dev, num_fields, num_fields *
+				       sizeof(*fields));
+	if (IS_ERR(mux_chip))
+		return PTR_ERR(mux_chip);
+
+	fields = mux_chip_priv(mux_chip);
+
+	for (i = 0; i < num_fields; i++) {
+		struct mux_control *mux = &mux_chip->mux[i];
+		struct reg_field field;
+		s32 idle_state = MUX_IDLE_AS_IS;
+		u32 reg, mask;
+		int bits;
+
+		ret = of_property_read_u32_index(np, "mux-reg-masks",
+						 2 * i, &reg);
+		if (!ret)
+			ret = of_property_read_u32_index(np, "mux-reg-masks",
+							 2 * i + 1, &mask);
+		if (ret < 0) {
+			dev_err(dev, "bitfield %d: failed to read mux-reg-masks property: %d\n",
+				i, ret);
+			return ret;
+		}
+
+		field.reg = reg;
+		field.msb = fls(mask) - 1;
+		field.lsb = ffs(mask) - 1;
+
+		if (mask != GENMASK(field.msb, field.lsb)) {
+			dev_err(dev, "bitfield %d: invalid mask 0x%x\n",
+				i, mask);
+			return -EINVAL;
+		}
+
+		fields[i] = devm_regmap_field_alloc(dev, regmap, field);
+		if (IS_ERR(fields[i])) {
+			ret = PTR_ERR(fields[i]);
+			dev_err(dev, "bitfield %d: failed allocate: %d\n",
+				i, ret);
+			return ret;
+		}
+
+		bits = 1 + field.msb - field.lsb;
+		mux->states = 1 << bits;
+
+		of_property_read_u32_index(np, "idle-states", i,
+					   (u32 *)&idle_state);
+		if (idle_state != MUX_IDLE_AS_IS) {
+			if (idle_state < 0 || idle_state >= mux->states) {
+				dev_err(dev, "bitfield: %d: out of range idle state %d\n",
+					i, idle_state);
+				return -EINVAL;
+			}
+
+			mux->idle_state = idle_state;
+		}
+	}
+
+	mux_chip->ops = &mux_mmio_ops;
+
+	return devm_mux_chip_register(dev, mux_chip);
+}
+
+static struct platform_driver mux_mmio_driver = {
+	.driver = {
+		.name = "mmio-mux",
+		.of_match_table	= of_match_ptr(mux_mmio_dt_ids),
+	},
+	.probe = mux_mmio_probe,
+};
+module_platform_driver(mux_mmio_driver);
+
+MODULE_DESCRIPTION("MMIO register bitfield-controlled multiplexer driver");
+MODULE_AUTHOR("Philipp Zabel <p.zabel@pengutronix.de>");
+MODULE_LICENSE("GPL v2");

+ 2 - 2
drivers/nvmem/bcm-ocotp.c

@@ -34,7 +34,7 @@
 #define OTPC_CMD_READ                0x0
 #define OTPC_CMD_OTP_PROG_ENABLE     0x2
 #define OTPC_CMD_OTP_PROG_DISABLE    0x3
-#define OTPC_CMD_PROGRAM             0xA
+#define OTPC_CMD_PROGRAM             0x8
 
 /* OTPC Status Bits */
 #define OTPC_STAT_CMD_DONE           BIT(1)
@@ -209,7 +209,7 @@ static int bcm_otpc_write(void *context, unsigned int offset, void *val,
 		set_command(priv->base, OTPC_CMD_PROGRAM);
 		set_cpu_address(priv->base, address++);
 		for (i = 0; i < priv->map->otpc_row_size; i++) {
-			writel(*buf, priv->base + priv->map->data_r_offset[i]);
+			writel(*buf, priv->base + priv->map->data_w_offset[i]);
 			buf++;
 			bytes_written += sizeof(*buf);
 		}

+ 16 - 6
drivers/nvmem/core.c

@@ -287,9 +287,15 @@ static struct nvmem_cell *nvmem_find_cell(const char *cell_id)
 {
 	struct nvmem_cell *p;
 
+	mutex_lock(&nvmem_cells_mutex);
+
 	list_for_each_entry(p, &nvmem_cells, node)
-		if (p && !strcmp(p->name, cell_id))
+		if (p && !strcmp(p->name, cell_id)) {
+			mutex_unlock(&nvmem_cells_mutex);
 			return p;
+		}
+
+	mutex_unlock(&nvmem_cells_mutex);
 
 	return NULL;
 }
@@ -489,21 +495,24 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
 
 	rval = device_add(&nvmem->dev);
 	if (rval)
-		goto out;
+		goto err_put_device;
 
 	if (config->compat) {
 		rval = nvmem_setup_compat(nvmem, config);
 		if (rval)
-			goto out;
+			goto err_device_del;
 	}
 
 	if (config->cells)
 		nvmem_add_cells(nvmem, config);
 
 	return nvmem;
-out:
-	ida_simple_remove(&nvmem_ida, nvmem->id);
-	kfree(nvmem);
+
+err_device_del:
+	device_del(&nvmem->dev);
+err_put_device:
+	put_device(&nvmem->dev);
+
 	return ERR_PTR(rval);
 }
 EXPORT_SYMBOL_GPL(nvmem_register);
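
Editor's note on the reworked error path: once device_add() has been called, nvmem->dev is a live, refcounted device, so teardown must go through the driver core (the release callback lives in the nvmem core, not in this hunk):

	/*
	 * err_device_del: device_del(&nvmem->dev)  - undoes device_add()
	 * err_put_device: put_device(&nvmem->dev)  - drops the reference so
	 *                 the device release callback frees nvmem and its id
	 *
	 * The old "out:" label called ida_simple_remove() + kfree() directly,
	 * bypassing the refcount and leaking the device reference.
	 */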
@@ -529,6 +538,7 @@ int nvmem_unregister(struct nvmem_device *nvmem)
 
 	nvmem_device_remove_all_cells(nvmem);
 	device_del(&nvmem->dev);
+	put_device(&nvmem->dev);
 
 	return 0;
 }

+ 4 - 0
drivers/nvmem/rockchip-efuse.c

@@ -169,6 +169,10 @@ static const struct of_device_id rockchip_efuse_match[] = {
 		.compatible = "rockchip,rk3188-efuse",
 		.compatible = "rockchip,rk3188-efuse",
 		.data = (void *)&rockchip_rk3288_efuse_read,
 		.data = (void *)&rockchip_rk3288_efuse_read,
 	},
 	},
+	{
+		.compatible = "rockchip,rk322x-efuse",
+		.data = (void *)&rockchip_rk3288_efuse_read,
+	},
 	{
 	{
 		.compatible = "rockchip,rk3288-efuse",
 		.compatible = "rockchip,rk3288-efuse",
 		.data = (void *)&rockchip_rk3288_efuse_read,
 		.data = (void *)&rockchip_rk3288_efuse_read,

+ 1 - 1
drivers/platform/goldfish/goldfish_pipe.c

@@ -266,7 +266,7 @@ struct goldfish_pipe_dev {
 	unsigned char __iomem *base;
 };
 
-struct goldfish_pipe_dev pipe_dev[1] = {};
+static struct goldfish_pipe_dev pipe_dev[1] = {};
 
 static int goldfish_cmd_locked(struct goldfish_pipe *pipe, enum PipeCmdCode cmd)
 {

+ 1 - 1
drivers/power/supply/ds2760_battery.c

@@ -28,7 +28,7 @@
 #include <linux/platform_device.h>
 #include <linux/power_supply.h>
 
-#include "../../w1/w1.h"
+#include <linux/w1.h>
 #include "../../w1/slaves/w1_ds2760.h"
 
 struct ds2760_device_info {

+ 1 - 1
drivers/power/supply/ds2780_battery.c

@@ -21,7 +21,7 @@
 #include <linux/power_supply.h>
 #include <linux/idr.h>
 
-#include "../../w1/w1.h"
+#include <linux/w1.h>
 #include "../../w1/slaves/w1_ds2780.h"
 
 /* Current unit measurement in uA for a 1 milli-ohm sense resistor */

+ 1 - 1
drivers/power/supply/ds2781_battery.c

@@ -19,7 +19,7 @@
 #include <linux/power_supply.h>
 #include <linux/idr.h>
 
-#include "../../w1/w1.h"
+#include <linux/w1.h>
 #include "../../w1/slaves/w1_ds2781.h"
 
 /* Current unit measurement in uA for a 1 milli-ohm sense resistor */

+ 3 - 9
drivers/pps/Kconfig

@@ -2,9 +2,7 @@
 # PPS support configuration
 #
 
-menu "PPS support"
-
-config PPS
+menuconfig PPS
 	tristate "PPS support"
 	---help---
 	  PPS (Pulse Per Second) is a special pulse provided by some GPS
@@ -20,10 +18,10 @@ config PPS
 
 	  To compile this driver as a module, choose M here: the module
 	  will be called pps_core.ko.
-if PPS
 
 config PPS_DEBUG
 	bool "PPS debugging messages"
+	depends on PPS
 	help
 	  Say Y here if you want the PPS support to produce a bunch of debug
 	  messages to the system log.  Select this if you are having a
@@ -31,17 +29,13 @@ config PPS_DEBUG
 
 config NTP_PPS
 	bool "PPS kernel consumer support"
-	depends on !NO_HZ_COMMON
+	depends on PPS && !NO_HZ_COMMON
 	help
 	  This option adds support for direct in-kernel time
 	  synchronization using an external PPS signal.
 
 	  It doesn't work on tickless systems at the moment.
 
-endif
-
 source drivers/pps/clients/Kconfig
 
 source drivers/pps/generators/Kconfig
-
-endmenu

+ 2 - 4
drivers/pps/clients/Kconfig

@@ -2,12 +2,12 @@
 # PPS clients configuration
 #
 
-if PPS
-
 comment "PPS clients support"
+	depends on PPS
 
 config PPS_CLIENT_KTIMER
 	tristate "Kernel timer client (Testing client, use for debug)"
+	depends on PPS
 	help
 	  If you say yes here you get support for a PPS debugging client
 	  which uses a kernel timer to generate the PPS signal.
@@ -37,5 +37,3 @@ config PPS_CLIENT_GPIO
 	  GPIO. To be useful you must also register a platform device
 	  specifying the GPIO pin and other options, usually in your board
 	  setup.
-
-endif

+ 2 - 1
drivers/pps/generators/Kconfig

@@ -3,10 +3,11 @@
 #
 
 comment "PPS generators support"
+	depends on PPS
 
 config PPS_GENERATOR_PARPORT
 	tristate "Parallel port PPS signal generator"
-	depends on PARPORT && BROKEN
+	depends on PPS && PARPORT && BROKEN
 	help
 	  If you say yes here you get support for a PPS signal generator which
 	  utilizes STROBE pin of a parallel port to send PPS signals. It uses

+ 369 - 239
drivers/spmi/spmi-pmic-arb.c

@@ -28,6 +28,7 @@
 /* PMIC Arbiter configuration registers */
 #define PMIC_ARB_VERSION		0x0000
 #define PMIC_ARB_VERSION_V2_MIN		0x20010000
+#define PMIC_ARB_VERSION_V3_MIN		0x30000000
 #define PMIC_ARB_INT_EN			0x0004
 
 /* PMIC Arbiter channel registers offsets */
@@ -58,10 +59,10 @@
 
 /* Channel Status fields */
 enum pmic_arb_chnl_status {
-	PMIC_ARB_STATUS_DONE	= (1 << 0),
-	PMIC_ARB_STATUS_FAILURE	= (1 << 1),
-	PMIC_ARB_STATUS_DENIED	= (1 << 2),
-	PMIC_ARB_STATUS_DROPPED	= (1 << 3),
+	PMIC_ARB_STATUS_DONE	= BIT(0),
+	PMIC_ARB_STATUS_FAILURE	= BIT(1),
+	PMIC_ARB_STATUS_DENIED	= BIT(2),
+	PMIC_ARB_STATUS_DROPPED	= BIT(3),
 };
 
 /* Command register fields */
@@ -96,10 +97,26 @@ enum pmic_arb_cmd_op_code {
 /* interrupt enable bit */
 #define SPMI_PIC_ACC_ENABLE_BIT		BIT(0)
 
+#define HWIRQ(slave_id, periph_id, irq_id, apid) \
+	((((slave_id) & 0xF)   << 28) | \
+	(((periph_id) & 0xFF)  << 20) | \
+	(((irq_id)    & 0x7)   << 16) | \
+	(((apid)      & 0x1FF) << 0))
+
+#define HWIRQ_SID(hwirq)  (((hwirq) >> 28) & 0xF)
+#define HWIRQ_PER(hwirq)  (((hwirq) >> 20) & 0xFF)
+#define HWIRQ_IRQ(hwirq)  (((hwirq) >> 16) & 0x7)
+#define HWIRQ_APID(hwirq) (((hwirq) >> 0)  & 0x1FF)
+
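
Editor's note: a worked example (numbers invented) of the hwirq packing introduced above:

	/*
	 * HWIRQ(sid = 1, per = 0x28, irq = 2, apid = 0x42)
	 *   = (0x1 << 28) | (0x28 << 20) | (0x2 << 16) | (0x42 << 0)
	 *   = 0x12820042
	 *
	 * HWIRQ_SID(0x12820042)  = 1
	 * HWIRQ_PER(0x12820042)  = 0x28
	 * HWIRQ_IRQ(0x12820042)  = 2
	 * HWIRQ_APID(0x12820042) = 0x42
	 */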
 struct pmic_arb_ver_ops;
 
+struct apid_data {
+	u16		ppid;
+	u8		owner;
+};
+
 /**
- * spmi_pmic_arb_dev - SPMI PMIC Arbiter object
+ * spmi_pmic_arb - SPMI PMIC Arbiter object
  *
  * @rd_base:		on v1 "core", on v2 "observer" register base off DT.
  * @wr_base:		on v1 "core", on v2 "chnls"    register base off DT.
@@ -111,15 +128,15 @@ struct pmic_arb_ver_ops;
  * @ee:			the current Execution Environment
  * @min_apid:		minimum APID (used for bounding IRQ search)
  * @max_apid:		maximum APID
+ * @max_periph:		maximum number of PMIC peripherals supported by HW.
  * @mapping_table:	in-memory copy of PPID -> APID mapping table.
  * @domain:		irq domain object for PMIC IRQ domain
  * @spmic:		SPMI controller object
- * @apid_to_ppid:	in-memory copy of APID -> PPID mapping table.
  * @ver_ops:		version dependent operations.
- * @ppid_to_chan	in-memory copy of PPID -> channel (APID) mapping table.
+ * @ppid_to_apid:	in-memory copy of PPID -> channel (APID) mapping table.
  *			v2 only.
  */
-struct spmi_pmic_arb_dev {
+struct spmi_pmic_arb {
 	void __iomem		*rd_base;
 	void __iomem		*wr_base;
 	void __iomem		*intr;
@@ -132,19 +149,23 @@ struct spmi_pmic_arb_dev {
 	u8			ee;
 	u16			min_apid;
 	u16			max_apid;
+	u16			max_periph;
 	u32			*mapping_table;
 	DECLARE_BITMAP(mapping_table_valid, PMIC_ARB_MAX_PERIPHS);
 	struct irq_domain	*domain;
 	struct spmi_controller	*spmic;
-	u16			*apid_to_ppid;
 	const struct pmic_arb_ver_ops *ver_ops;
-	u16			*ppid_to_chan;
-	u16			last_channel;
+	u16			*ppid_to_apid;
+	u16			last_apid;
+	struct apid_data	apid_data[PMIC_ARB_MAX_PERIPHS];
 };
 
 /**
  * pmic_arb_ver: version dependent functionality.
  *
+ * @ver_str:		version string.
+ * @ppid_to_apid:	finds the apid for a given ppid.
+ * @mode:		access rights to specified pmic peripheral.
  * @non_data_cmd:	on v1 issues an spmi non-data command.
  *			on v2 no HW support, returns -EOPNOTSUPP.
  * @offset:		on v1 offset of per-ee channel.
@@ -160,28 +181,33 @@ struct spmi_pmic_arb_dev {
  *			on v2 offset of SPMI_PIC_IRQ_CLEARn.
  */
 struct pmic_arb_ver_ops {
+	const char *ver_str;
+	int (*ppid_to_apid)(struct spmi_pmic_arb *pa, u8 sid, u16 addr,
+			u16 *apid);
+	int (*mode)(struct spmi_pmic_arb *dev, u8 sid, u16 addr,
+			mode_t *mode);
 	/* spmi commands (read_cmd, write_cmd, cmd) functionality */
-	int (*offset)(struct spmi_pmic_arb_dev *dev, u8 sid, u16 addr,
+	int (*offset)(struct spmi_pmic_arb *dev, u8 sid, u16 addr,
 		      u32 *offset);
 	u32 (*fmt_cmd)(u8 opc, u8 sid, u16 addr, u8 bc);
 	int (*non_data_cmd)(struct spmi_controller *ctrl, u8 opc, u8 sid);
 	/* Interrupts controller functionality (offset of PIC registers) */
-	u32 (*owner_acc_status)(u8 m, u8 n);
-	u32 (*acc_enable)(u8 n);
-	u32 (*irq_status)(u8 n);
-	u32 (*irq_clear)(u8 n);
+	u32 (*owner_acc_status)(u8 m, u16 n);
+	u32 (*acc_enable)(u16 n);
+	u32 (*irq_status)(u16 n);
+	u32 (*irq_clear)(u16 n);
 };
 
-static inline void pmic_arb_base_write(struct spmi_pmic_arb_dev *dev,
+static inline void pmic_arb_base_write(struct spmi_pmic_arb *pa,
 				       u32 offset, u32 val)
 {
-	writel_relaxed(val, dev->wr_base + offset);
+	writel_relaxed(val, pa->wr_base + offset);
 }
 
-static inline void pmic_arb_set_rd_cmd(struct spmi_pmic_arb_dev *dev,
+static inline void pmic_arb_set_rd_cmd(struct spmi_pmic_arb *pa,
 				       u32 offset, u32 val)
 {
-	writel_relaxed(val, dev->rd_base + offset);
+	writel_relaxed(val, pa->rd_base + offset);
 }
 
 /**
@@ -190,9 +216,10 @@ static inline void pmic_arb_set_rd_cmd(struct spmi_pmic_arb_dev *dev,
  * @reg:	register's address
  * @buf:	output parameter, length must be bc + 1
  */
-static void pa_read_data(struct spmi_pmic_arb_dev *dev, u8 *buf, u32 reg, u8 bc)
+static void pa_read_data(struct spmi_pmic_arb *pa, u8 *buf, u32 reg, u8 bc)
 {
-	u32 data = __raw_readl(dev->rd_base + reg);
+	u32 data = __raw_readl(pa->rd_base + reg);
+
 	memcpy(buf, &data, (bc & 3) + 1);
 }
 
@@ -203,23 +230,24 @@ static void pa_read_data(struct spmi_pmic_arb_dev *dev, u8 *buf, u32 reg, u8 bc)
  * @buf:	buffer to write. length must be bc + 1.
  */
 static void
-pa_write_data(struct spmi_pmic_arb_dev *dev, const u8 *buf, u32 reg, u8 bc)
+pa_write_data(struct spmi_pmic_arb *pa, const u8 *buf, u32 reg, u8 bc)
 {
 	u32 data = 0;
+
 	memcpy(&data, buf, (bc & 3) + 1);
-	__raw_writel(data, dev->wr_base + reg);
+	pmic_arb_base_write(pa, reg, data);
 }
 
 static int pmic_arb_wait_for_done(struct spmi_controller *ctrl,
 				  void __iomem *base, u8 sid, u16 addr)
 {
-	struct spmi_pmic_arb_dev *dev = spmi_controller_get_drvdata(ctrl);
+	struct spmi_pmic_arb *pa = spmi_controller_get_drvdata(ctrl);
 	u32 status = 0;
 	u32 timeout = PMIC_ARB_TIMEOUT_US;
 	u32 offset;
 	int rc;
 
-	rc = dev->ver_ops->offset(dev, sid, addr, &offset);
+	rc = pa->ver_ops->offset(pa, sid, addr, &offset);
 	if (rc)
 		return rc;
 
@@ -264,22 +292,22 @@ static int pmic_arb_wait_for_done(struct spmi_controller *ctrl,
 static int
 pmic_arb_non_data_cmd_v1(struct spmi_controller *ctrl, u8 opc, u8 sid)
 {
-	struct spmi_pmic_arb_dev *pmic_arb = spmi_controller_get_drvdata(ctrl);
+	struct spmi_pmic_arb *pa = spmi_controller_get_drvdata(ctrl);
 	unsigned long flags;
 	u32 cmd;
 	int rc;
 	u32 offset;
 
-	rc = pmic_arb->ver_ops->offset(pmic_arb, sid, 0, &offset);
+	rc = pa->ver_ops->offset(pa, sid, 0, &offset);
 	if (rc)
 		return rc;
 
 	cmd = ((opc | 0x40) << 27) | ((sid & 0xf) << 20);
 
-	raw_spin_lock_irqsave(&pmic_arb->lock, flags);
-	pmic_arb_base_write(pmic_arb, offset + PMIC_ARB_CMD, cmd);
-	rc = pmic_arb_wait_for_done(ctrl, pmic_arb->wr_base, sid, 0);
-	raw_spin_unlock_irqrestore(&pmic_arb->lock, flags);
+	raw_spin_lock_irqsave(&pa->lock, flags);
+	pmic_arb_base_write(pa, offset + PMIC_ARB_CMD, cmd);
+	rc = pmic_arb_wait_for_done(ctrl, pa->wr_base, sid, 0);
+	raw_spin_unlock_irqrestore(&pa->lock, flags);
 
 	return rc;
 }
@@ -293,7 +321,7 @@ pmic_arb_non_data_cmd_v2(struct spmi_controller *ctrl, u8 opc, u8 sid)
 /* Non-data command */
 static int pmic_arb_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid)
 {
-	struct spmi_pmic_arb_dev *pmic_arb = spmi_controller_get_drvdata(ctrl);
+	struct spmi_pmic_arb *pa = spmi_controller_get_drvdata(ctrl);
 
 	dev_dbg(&ctrl->dev, "cmd op:0x%x sid:%d\n", opc, sid);
 
@@ -301,23 +329,35 @@ static int pmic_arb_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid)
 	if (opc < SPMI_CMD_RESET || opc > SPMI_CMD_WAKEUP)
 		return -EINVAL;
 
-	return pmic_arb->ver_ops->non_data_cmd(ctrl, opc, sid);
+	return pa->ver_ops->non_data_cmd(ctrl, opc, sid);
 }
 
 static int pmic_arb_read_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
 			     u16 addr, u8 *buf, size_t len)
 {
-	struct spmi_pmic_arb_dev *pmic_arb = spmi_controller_get_drvdata(ctrl);
+	struct spmi_pmic_arb *pa = spmi_controller_get_drvdata(ctrl);
 	unsigned long flags;
 	u8 bc = len - 1;
 	u32 cmd;
 	int rc;
 	u32 offset;
+	mode_t mode;
+
+	rc = pa->ver_ops->offset(pa, sid, addr, &offset);
+	if (rc)
+		return rc;
 
-	rc = pmic_arb->ver_ops->offset(pmic_arb, sid, addr, &offset);
+	rc = pa->ver_ops->mode(pa, sid, addr, &mode);
 	if (rc)
 		return rc;
 
+	if (!(mode & S_IRUSR)) {
+		dev_err(&pa->spmic->dev,
+			"error: impermissible read from peripheral sid:%d addr:0x%x\n",
+			sid, addr);
+		return -EPERM;
+	}
+
 	if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
 		dev_err(&ctrl->dev,
 			"pmic-arb supports 1..%d bytes per trans, but:%zu requested",
@@ -335,40 +375,51 @@ static int pmic_arb_read_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
 	else
 		return -EINVAL;
 
-	cmd = pmic_arb->ver_ops->fmt_cmd(opc, sid, addr, bc);
+	cmd = pa->ver_ops->fmt_cmd(opc, sid, addr, bc);
 
-	raw_spin_lock_irqsave(&pmic_arb->lock, flags);
-	pmic_arb_set_rd_cmd(pmic_arb, offset + PMIC_ARB_CMD, cmd);
-	rc = pmic_arb_wait_for_done(ctrl, pmic_arb->rd_base, sid, addr);
+	raw_spin_lock_irqsave(&pa->lock, flags);
+	pmic_arb_set_rd_cmd(pa, offset + PMIC_ARB_CMD, cmd);
+	rc = pmic_arb_wait_for_done(ctrl, pa->rd_base, sid, addr);
 	if (rc)
 		goto done;
 
-	pa_read_data(pmic_arb, buf, offset + PMIC_ARB_RDATA0,
+	pa_read_data(pa, buf, offset + PMIC_ARB_RDATA0,
 		     min_t(u8, bc, 3));
 
 	if (bc > 3)
-		pa_read_data(pmic_arb, buf + 4,
-				offset + PMIC_ARB_RDATA1, bc - 4);
+		pa_read_data(pa, buf + 4, offset + PMIC_ARB_RDATA1, bc - 4);
 
 done:
-	raw_spin_unlock_irqrestore(&pmic_arb->lock, flags);
+	raw_spin_unlock_irqrestore(&pa->lock, flags);
 	return rc;
 }
 
 static int pmic_arb_write_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
 			      u16 addr, const u8 *buf, size_t len)
 {
-	struct spmi_pmic_arb_dev *pmic_arb = spmi_controller_get_drvdata(ctrl);
+	struct spmi_pmic_arb *pa = spmi_controller_get_drvdata(ctrl);
 	unsigned long flags;
 	u8 bc = len - 1;
 	u32 cmd;
 	int rc;
 	u32 offset;
+	mode_t mode;
+
+	rc = pa->ver_ops->offset(pa, sid, addr, &offset);
+	if (rc)
+		return rc;
 
-	rc = pmic_arb->ver_ops->offset(pmic_arb, sid, addr, &offset);
+	rc = pa->ver_ops->mode(pa, sid, addr, &mode);
 	if (rc)
 		return rc;
 
+	if (!(mode & S_IWUSR)) {
+		dev_err(&pa->spmic->dev,
+			"error: impermissible write to peripheral sid:%d addr:0x%x\n",
+			sid, addr);
+		return -EPERM;
+	}
+
 	if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
 		dev_err(&ctrl->dev,
 			"pmic-arb supports 1..%d bytes per trans, but:%zu requested",
@@ -388,20 +439,18 @@ static int pmic_arb_write_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
 	else
 		return -EINVAL;
 
-	cmd = pmic_arb->ver_ops->fmt_cmd(opc, sid, addr, bc);
+	cmd = pa->ver_ops->fmt_cmd(opc, sid, addr, bc);
 
 	/* Write data to FIFOs */
-	raw_spin_lock_irqsave(&pmic_arb->lock, flags);
-	pa_write_data(pmic_arb, buf, offset + PMIC_ARB_WDATA0,
-		      min_t(u8, bc, 3));
+	raw_spin_lock_irqsave(&pa->lock, flags);
+	pa_write_data(pa, buf, offset + PMIC_ARB_WDATA0, min_t(u8, bc, 3));
 	if (bc > 3)
-		pa_write_data(pmic_arb, buf + 4,
-				offset + PMIC_ARB_WDATA1, bc - 4);
+		pa_write_data(pa, buf + 4, offset + PMIC_ARB_WDATA1, bc - 4);
 
 	/* Start the transaction */
-	pmic_arb_base_write(pmic_arb, offset + PMIC_ARB_CMD, cmd);
-	rc = pmic_arb_wait_for_done(ctrl, pmic_arb->wr_base, sid, addr);
-	raw_spin_unlock_irqrestore(&pmic_arb->lock, flags);
+	pmic_arb_base_write(pa, offset + PMIC_ARB_CMD, cmd);
+	rc = pmic_arb_wait_for_done(ctrl, pa->wr_base, sid, addr);
+	raw_spin_unlock_irqrestore(&pa->lock, flags);
 
 	return rc;
 }
@@ -427,9 +476,9 @@ struct spmi_pmic_arb_qpnpint_type {
 static void qpnpint_spmi_write(struct irq_data *d, u8 reg, void *buf,
 			       size_t len)
 {
-	struct spmi_pmic_arb_dev *pa = irq_data_get_irq_chip_data(d);
-	u8 sid = d->hwirq >> 24;
-	u8 per = d->hwirq >> 16;
+	struct spmi_pmic_arb *pa = irq_data_get_irq_chip_data(d);
+	u8 sid = HWIRQ_SID(d->hwirq);
+	u8 per = HWIRQ_PER(d->hwirq);
 
 	if (pmic_arb_write_cmd(pa->spmic, SPMI_CMD_EXT_WRITEL, sid,
 			       (per << 8) + reg, buf, len))
@@ -440,9 +489,9 @@ static void qpnpint_spmi_write(struct irq_data *d, u8 reg, void *buf,
 
 static void qpnpint_spmi_read(struct irq_data *d, u8 reg, void *buf, size_t len)
 {
-	struct spmi_pmic_arb_dev *pa = irq_data_get_irq_chip_data(d);
-	u8 sid = d->hwirq >> 24;
-	u8 per = d->hwirq >> 16;
+	struct spmi_pmic_arb *pa = irq_data_get_irq_chip_data(d);
+	u8 sid = HWIRQ_SID(d->hwirq);
+	u8 per = HWIRQ_PER(d->hwirq);
 
 	if (pmic_arb_read_cmd(pa->spmic, SPMI_CMD_EXT_READL, sid,
 			      (per << 8) + reg, buf, len))
@@ -451,33 +500,58 @@ static void qpnpint_spmi_read(struct irq_data *d, u8 reg, void *buf, size_t len)
 				    d->irq);
 }
 
-static void periph_interrupt(struct spmi_pmic_arb_dev *pa, u8 apid)
+static void cleanup_irq(struct spmi_pmic_arb *pa, u16 apid, int id)
+{
+	u16 ppid = pa->apid_data[apid].ppid;
+	u8 sid = ppid >> 8;
+	u8 per = ppid & 0xFF;
+	u8 irq_mask = BIT(id);
+
+	writel_relaxed(irq_mask, pa->intr + pa->ver_ops->irq_clear(apid));
+
+	if (pmic_arb_write_cmd(pa->spmic, SPMI_CMD_EXT_WRITEL, sid,
+			(per << 8) + QPNPINT_REG_LATCHED_CLR, &irq_mask, 1))
+		dev_err_ratelimited(&pa->spmic->dev,
+				"failed to ack irq_mask = 0x%x for ppid = %x\n",
+				irq_mask, ppid);
+
+	if (pmic_arb_write_cmd(pa->spmic, SPMI_CMD_EXT_WRITEL, sid,
+			       (per << 8) + QPNPINT_REG_EN_CLR, &irq_mask, 1))
+		dev_err_ratelimited(&pa->spmic->dev,
+				"failed to ack irq_mask = 0x%x for ppid = %x\n",
+				irq_mask, ppid);
+}
+
+static void periph_interrupt(struct spmi_pmic_arb *pa, u16 apid)
 {
 	unsigned int irq;
 	u32 status;
 	int id;
+	u8 sid = (pa->apid_data[apid].ppid >> 8) & 0xF;
+	u8 per = pa->apid_data[apid].ppid & 0xFF;
 
 	status = readl_relaxed(pa->intr + pa->ver_ops->irq_status(apid));
 	while (status) {
 		id = ffs(status) - 1;
-		status &= ~(1 << id);
-		irq = irq_find_mapping(pa->domain,
-				       pa->apid_to_ppid[apid] << 16
-				     | id << 8
-				     | apid);
+		status &= ~BIT(id);
+		irq = irq_find_mapping(pa->domain, HWIRQ(sid, per, id, apid));
+		if (irq == 0) {
+			cleanup_irq(pa, apid, id);
+			continue;
+		}
 		generic_handle_irq(irq);
 	}
 }
 
 static void pmic_arb_chained_irq(struct irq_desc *desc)
 {
-	struct spmi_pmic_arb_dev *pa = irq_desc_get_handler_data(desc);
+	struct spmi_pmic_arb *pa = irq_desc_get_handler_data(desc);
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	void __iomem *intr = pa->intr;
 	int first = pa->min_apid >> 5;
 	int last = pa->max_apid >> 5;
-	u32 status;
-	int i, id;
+	u32 status, enable;
+	int i, id, apid;
 
 	chained_irq_enter(chip, desc);
 
@@ -486,8 +560,12 @@ static void pmic_arb_chained_irq(struct irq_desc *desc)
 				      pa->ver_ops->owner_acc_status(pa->ee, i));
 		while (status) {
 			id = ffs(status) - 1;
-			status &= ~(1 << id);
-			periph_interrupt(pa, id + i * 32);
+			status &= ~BIT(id);
+			apid = id + i * 32;
+			enable = readl_relaxed(intr +
+					pa->ver_ops->acc_enable(apid));
+			if (enable & SPMI_PIC_ACC_ENABLE_BIT)
+				periph_interrupt(pa, apid);
 		}
 	}
 
@@ -496,100 +574,81 @@ static void pmic_arb_chained_irq(struct irq_desc *desc)
 
 static void qpnpint_irq_ack(struct irq_data *d)
 {
-	struct spmi_pmic_arb_dev *pa = irq_data_get_irq_chip_data(d);
-	u8 irq  = d->hwirq >> 8;
-	u8 apid = d->hwirq;
-	unsigned long flags;
+	struct spmi_pmic_arb *pa = irq_data_get_irq_chip_data(d);
+	u8 irq = HWIRQ_IRQ(d->hwirq);
+	u16 apid = HWIRQ_APID(d->hwirq);
 	u8 data;
 
-	raw_spin_lock_irqsave(&pa->lock, flags);
-	writel_relaxed(1 << irq, pa->intr + pa->ver_ops->irq_clear(apid));
-	raw_spin_unlock_irqrestore(&pa->lock, flags);
+	writel_relaxed(BIT(irq), pa->intr + pa->ver_ops->irq_clear(apid));
 
-	data = 1 << irq;
+	data = BIT(irq);
 	qpnpint_spmi_write(d, QPNPINT_REG_LATCHED_CLR, &data, 1);
 }
 
 static void qpnpint_irq_mask(struct irq_data *d)
 {
-	struct spmi_pmic_arb_dev *pa = irq_data_get_irq_chip_data(d);
-	u8 irq  = d->hwirq >> 8;
-	u8 apid = d->hwirq;
-	unsigned long flags;
-	u32 status;
-	u8 data;
+	u8 irq = HWIRQ_IRQ(d->hwirq);
+	u8 data = BIT(irq);
 
-	raw_spin_lock_irqsave(&pa->lock, flags);
-	status = readl_relaxed(pa->intr + pa->ver_ops->acc_enable(apid));
-	if (status & SPMI_PIC_ACC_ENABLE_BIT) {
-		status = status & ~SPMI_PIC_ACC_ENABLE_BIT;
-		writel_relaxed(status, pa->intr +
-			       pa->ver_ops->acc_enable(apid));
-	}
-	raw_spin_unlock_irqrestore(&pa->lock, flags);
-
-	data = 1 << irq;
 	qpnpint_spmi_write(d, QPNPINT_REG_EN_CLR, &data, 1);
 }
 
 static void qpnpint_irq_unmask(struct irq_data *d)
 {
-	struct spmi_pmic_arb_dev *pa = irq_data_get_irq_chip_data(d);
-	u8 irq  = d->hwirq >> 8;
-	u8 apid = d->hwirq;
-	unsigned long flags;
-	u32 status;
-	u8 data;
-
-	raw_spin_lock_irqsave(&pa->lock, flags);
-	status = readl_relaxed(pa->intr + pa->ver_ops->acc_enable(apid));
-	if (!(status & SPMI_PIC_ACC_ENABLE_BIT)) {
-		writel_relaxed(status | SPMI_PIC_ACC_ENABLE_BIT,
-				pa->intr + pa->ver_ops->acc_enable(apid));
+	struct spmi_pmic_arb *pa = irq_data_get_irq_chip_data(d);
+	u8 irq = HWIRQ_IRQ(d->hwirq);
+	u16 apid = HWIRQ_APID(d->hwirq);
+	u8 buf[2];
+
+	writel_relaxed(SPMI_PIC_ACC_ENABLE_BIT,
+		pa->intr + pa->ver_ops->acc_enable(apid));
+
+	qpnpint_spmi_read(d, QPNPINT_REG_EN_SET, &buf[0], 1);
+	if (!(buf[0] & BIT(irq))) {
+		/*
+		 * Since the interrupt is currently disabled, write to both the
+		 * LATCHED_CLR and EN_SET registers so that a spurious interrupt
+		 * cannot be triggered when the interrupt is enabled
+		 */
+		buf[0] = BIT(irq);
+		buf[1] = BIT(irq);
+		qpnpint_spmi_write(d, QPNPINT_REG_LATCHED_CLR, &buf, 2);
 	}
-	raw_spin_unlock_irqrestore(&pa->lock, flags);
-
-	data = 1 << irq;
-	qpnpint_spmi_write(d, QPNPINT_REG_EN_SET, &data, 1);
-}
-
-static void qpnpint_irq_enable(struct irq_data *d)
-{
-	u8 irq  = d->hwirq >> 8;
-	u8 data;
-
-	qpnpint_irq_unmask(d);
-
-	data = 1 << irq;
-	qpnpint_spmi_write(d, QPNPINT_REG_LATCHED_CLR, &data, 1);
 }
 
 static int qpnpint_irq_set_type(struct irq_data *d, unsigned int flow_type)
 {
 	struct spmi_pmic_arb_qpnpint_type type;
-	u8 irq = d->hwirq >> 8;
+	u8 irq = HWIRQ_IRQ(d->hwirq);
+	u8 bit_mask_irq = BIT(irq);
 
 	qpnpint_spmi_read(d, QPNPINT_REG_SET_TYPE, &type, sizeof(type));
 
 	if (flow_type & (IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING)) {
-		type.type |= 1 << irq;
+		type.type |= bit_mask_irq;
 		if (flow_type & IRQF_TRIGGER_RISING)
-			type.polarity_high |= 1 << irq;
+			type.polarity_high |= bit_mask_irq;
 		if (flow_type & IRQF_TRIGGER_FALLING)
-			type.polarity_low  |= 1 << irq;
+			type.polarity_low  |= bit_mask_irq;
 	} else {
 		if ((flow_type & (IRQF_TRIGGER_HIGH)) &&
 		    (flow_type & (IRQF_TRIGGER_LOW)))
 			return -EINVAL;
 
-		type.type &= ~(1 << irq); /* level trig */
+		type.type &= ~bit_mask_irq; /* level trig */
 		if (flow_type & IRQF_TRIGGER_HIGH)
-			type.polarity_high |= 1 << irq;
+			type.polarity_high |= bit_mask_irq;
 		else
-			type.polarity_low  |= 1 << irq;
+			type.polarity_low  |= bit_mask_irq;
 	}
 
 	qpnpint_spmi_write(d, QPNPINT_REG_SET_TYPE, &type, sizeof(type));
+
+	if (flow_type & IRQ_TYPE_EDGE_BOTH)
+		irq_set_handler_locked(d, handle_edge_irq);
+	else
+		irq_set_handler_locked(d, handle_level_irq);
+
 	return 0;
 }
 
@@ -597,7 +656,7 @@ static int qpnpint_get_irqchip_state(struct irq_data *d,
 				     enum irqchip_irq_state which,
 				     bool *state)
 {
-	u8 irq = d->hwirq >> 8;
+	u8 irq = HWIRQ_IRQ(d->hwirq);
 	u8 status = 0;
 
 	if (which != IRQCHIP_STATE_LINE_LEVEL)
@@ -611,7 +670,6 @@ static int qpnpint_get_irqchip_state(struct irq_data *d,
 
 static struct irq_chip pmic_arb_irqchip = {
 	.name		= "pmic_arb",
-	.irq_enable	= qpnpint_irq_enable,
 	.irq_ack	= qpnpint_irq_ack,
 	.irq_mask	= qpnpint_irq_mask,
 	.irq_unmask	= qpnpint_irq_unmask,
@@ -621,48 +679,6 @@ static struct irq_chip pmic_arb_irqchip = {
 			| IRQCHIP_SKIP_SET_WAKE,
 };
 
-struct spmi_pmic_arb_irq_spec {
-	unsigned slave:4;
-	unsigned per:8;
-	unsigned irq:3;
-};
-
-static int search_mapping_table(struct spmi_pmic_arb_dev *pa,
-				struct spmi_pmic_arb_irq_spec *spec,
-				u8 *apid)
-{
-	u16 ppid = spec->slave << 8 | spec->per;
-	u32 *mapping_table = pa->mapping_table;
-	int index = 0, i;
-	u32 data;
-
-	for (i = 0; i < SPMI_MAPPING_TABLE_TREE_DEPTH; ++i) {
-		if (!test_and_set_bit(index, pa->mapping_table_valid))
-			mapping_table[index] = readl_relaxed(pa->cnfg +
-						SPMI_MAPPING_TABLE_REG(index));
-
-		data = mapping_table[index];
-
-		if (ppid & (1 << SPMI_MAPPING_BIT_INDEX(data))) {
-			if (SPMI_MAPPING_BIT_IS_1_FLAG(data)) {
-				index = SPMI_MAPPING_BIT_IS_1_RESULT(data);
-			} else {
-				*apid = SPMI_MAPPING_BIT_IS_1_RESULT(data);
-				return 0;
-			}
-		} else {
-			if (SPMI_MAPPING_BIT_IS_0_FLAG(data)) {
-				index = SPMI_MAPPING_BIT_IS_0_RESULT(data);
-			} else {
-				*apid = SPMI_MAPPING_BIT_IS_0_RESULT(data);
-				return 0;
-			}
-		}
-	}
-
-	return -ENODEV;
-}
-
 static int qpnpint_irq_domain_dt_translate(struct irq_domain *d,
 					   struct device_node *controller,
 					   const u32 *intspec,
@@ -670,10 +686,9 @@ static int qpnpint_irq_domain_dt_translate(struct irq_domain *d,
 					   unsigned long *out_hwirq,
 					   unsigned int *out_type)
 {
-	struct spmi_pmic_arb_dev *pa = d->host_data;
-	struct spmi_pmic_arb_irq_spec spec;
-	int err;
-	u8 apid;
+	struct spmi_pmic_arb *pa = d->host_data;
+	int rc;
+	u16 apid;
 
 	dev_dbg(&pa->spmic->dev,
 		"intspec[0] 0x%1x intspec[1] 0x%02x intspec[2] 0x%02x\n",
@@ -686,15 +701,14 @@ static int qpnpint_irq_domain_dt_translate(struct irq_domain *d,
 	if (intspec[0] > 0xF || intspec[1] > 0xFF || intspec[2] > 0x7)
 		return -EINVAL;
 
-	spec.slave = intspec[0];
-	spec.per   = intspec[1];
-	spec.irq   = intspec[2];
-
-	err = search_mapping_table(pa, &spec, &apid);
-	if (err)
-		return err;
-
-	pa->apid_to_ppid[apid] = spec.slave << 8 | spec.per;
+	rc = pa->ver_ops->ppid_to_apid(pa, intspec[0],
+			(intspec[1] << 8), &apid);
+	if (rc < 0) {
+		dev_err(&pa->spmic->dev,
+		"failed to xlate sid = 0x%x, periph = 0x%x, irq = %x rc = %d\n",
+		intspec[0], intspec[1], intspec[2], rc);
+		return rc;
+	}
 
 	/* Keep track of {max,min}_apid for bounding search during interrupt */
 	if (apid > pa->max_apid)
@@ -702,10 +716,7 @@ static int qpnpint_irq_domain_dt_translate(struct irq_domain *d,
 	if (apid < pa->min_apid)
 		pa->min_apid = apid;
 
-	*out_hwirq = spec.slave << 24
-		   | spec.per   << 16
-		   | spec.irq   << 8
-		   | apid;
+	*out_hwirq = HWIRQ(intspec[0], intspec[1], intspec[2], apid);
 	*out_type  = intspec[3] & IRQ_TYPE_SENSE_MASK;
 
 	dev_dbg(&pa->spmic->dev, "out_hwirq = %lu\n", *out_hwirq);
@@ -717,7 +728,7 @@ static int qpnpint_irq_domain_map(struct irq_domain *d,
 				  unsigned int virq,
 				  irq_hw_number_t hwirq)
 {
-	struct spmi_pmic_arb_dev *pa = d->host_data;
+	struct spmi_pmic_arb *pa = d->host_data;
 
 	dev_dbg(&pa->spmic->dev, "virq = %u, hwirq = %lu\n", virq, hwirq);
 
@@ -727,26 +738,85 @@ static int qpnpint_irq_domain_map(struct irq_domain *d,
 	return 0;
 }
 
+static int
+pmic_arb_ppid_to_apid_v1(struct spmi_pmic_arb *pa, u8 sid, u16 addr, u16 *apid)
+{
+	u16 ppid = sid << 8 | ((addr >> 8) & 0xFF);
+	u32 *mapping_table = pa->mapping_table;
+	int index = 0, i;
+	u16 apid_valid;
+	u32 data;
+
+	apid_valid = pa->ppid_to_apid[ppid];
+	if (apid_valid & PMIC_ARB_CHAN_VALID) {
+		*apid = (apid_valid & ~PMIC_ARB_CHAN_VALID);
+		return 0;
+	}
+
+	for (i = 0; i < SPMI_MAPPING_TABLE_TREE_DEPTH; ++i) {
+		if (!test_and_set_bit(index, pa->mapping_table_valid))
+			mapping_table[index] = readl_relaxed(pa->cnfg +
+						SPMI_MAPPING_TABLE_REG(index));
+
+		data = mapping_table[index];
+
+		if (ppid & BIT(SPMI_MAPPING_BIT_INDEX(data))) {
+			if (SPMI_MAPPING_BIT_IS_1_FLAG(data)) {
+				index = SPMI_MAPPING_BIT_IS_1_RESULT(data);
+			} else {
+				*apid = SPMI_MAPPING_BIT_IS_1_RESULT(data);
+				pa->ppid_to_apid[ppid]
+					= *apid | PMIC_ARB_CHAN_VALID;
+				pa->apid_data[*apid].ppid = ppid;
+				return 0;
+			}
+		} else {
+			if (SPMI_MAPPING_BIT_IS_0_FLAG(data)) {
+				index = SPMI_MAPPING_BIT_IS_0_RESULT(data);
+			} else {
+				*apid = SPMI_MAPPING_BIT_IS_0_RESULT(data);
+				pa->ppid_to_apid[ppid]
+					= *apid | PMIC_ARB_CHAN_VALID;
+				pa->apid_data[*apid].ppid = ppid;
+				return 0;
+			}
+		}
+	}
+
+	return -ENODEV;
+}
+
+static int
+pmic_arb_mode_v1_v3(struct spmi_pmic_arb *pa, u8 sid, u16 addr, mode_t *mode)
+{
+	*mode = S_IRUSR | S_IWUSR;
+	return 0;
+}
+
 /* v1 offset per ee */
 static int
-pmic_arb_offset_v1(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr, u32 *offset)
+pmic_arb_offset_v1(struct spmi_pmic_arb *pa, u8 sid, u16 addr, u32 *offset)
 {
 	*offset = 0x800 + 0x80 * pa->channel;
 	return 0;
 }
 
-static u16 pmic_arb_find_chan(struct spmi_pmic_arb_dev *pa, u16 ppid)
+static u16 pmic_arb_find_apid(struct spmi_pmic_arb *pa, u16 ppid)
 {
 	u32 regval, offset;
-	u16 chan;
+	u16 apid;
 	u16 id;
 
 	/*
 	 * PMIC_ARB_REG_CHNL is a table in HW mapping channel to ppid.
-	 * ppid_to_chan is an in-memory invert of that table.
+	 * ppid_to_apid is an in-memory invert of that table.
 	 */
-	for (chan = pa->last_channel; ; chan++) {
-		offset = PMIC_ARB_REG_CHNL(chan);
+	for (apid = pa->last_apid; apid < pa->max_periph; apid++) {
+		regval = readl_relaxed(pa->cnfg +
+				      SPMI_OWNERSHIP_TABLE_REG(apid));
+		pa->apid_data[apid].owner = SPMI_OWNERSHIP_PERIPH2OWNER(regval);
+
+		offset = PMIC_ARB_REG_CHNL(apid);
 		if (offset >= pa->core_size)
 			break;
 
@@ -755,33 +825,65 @@ static u16 pmic_arb_find_chan(struct spmi_pmic_arb_dev *pa, u16 ppid)
 			continue;
 
 		id = (regval >> 8) & PMIC_ARB_PPID_MASK;
-		pa->ppid_to_chan[id] = chan | PMIC_ARB_CHAN_VALID;
+		pa->ppid_to_apid[id] = apid | PMIC_ARB_CHAN_VALID;
+		pa->apid_data[apid].ppid = id;
 		if (id == ppid) {
-			chan |= PMIC_ARB_CHAN_VALID;
+			apid |= PMIC_ARB_CHAN_VALID;
 			break;
 		}
 	}
-	pa->last_channel = chan & ~PMIC_ARB_CHAN_VALID;
+	pa->last_apid = apid & ~PMIC_ARB_CHAN_VALID;
 
-	return chan;
+	return apid;
 }
 
 
-/* v2 offset per ppid (chan) and per ee */
 static int
-pmic_arb_offset_v2(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr, u32 *offset)
+pmic_arb_ppid_to_apid_v2(struct spmi_pmic_arb *pa, u8 sid, u16 addr, u16 *apid)
 {
 	u16 ppid = (sid << 8) | (addr >> 8);
-	u16 chan;
+	u16 apid_valid;
 
-	chan = pa->ppid_to_chan[ppid];
-	if (!(chan & PMIC_ARB_CHAN_VALID))
-		chan = pmic_arb_find_chan(pa, ppid);
-	if (!(chan & PMIC_ARB_CHAN_VALID))
+	apid_valid = pa->ppid_to_apid[ppid];
+	if (!(apid_valid & PMIC_ARB_CHAN_VALID))
+		apid_valid = pmic_arb_find_apid(pa, ppid);
+	if (!(apid_valid & PMIC_ARB_CHAN_VALID))
 		return -ENODEV;
-	chan &= ~PMIC_ARB_CHAN_VALID;
 
-	*offset = 0x1000 * pa->ee + 0x8000 * chan;
+	*apid = (apid_valid & ~PMIC_ARB_CHAN_VALID);
+	return 0;
+}
+
+static int
+pmic_arb_mode_v2(struct spmi_pmic_arb *pa, u8 sid, u16 addr, mode_t *mode)
+{
+	u16 apid;
+	int rc;
+
+	rc = pmic_arb_ppid_to_apid_v2(pa, sid, addr, &apid);
+	if (rc < 0)
+		return rc;
+
+	*mode = 0;
+	*mode |= S_IRUSR;
+
+	if (pa->ee == pa->apid_data[apid].owner)
+		*mode |= S_IWUSR;
+	return 0;
+}
+
+/* v2 offset per ppid and per ee */
+static int
+pmic_arb_offset_v2(struct spmi_pmic_arb *pa, u8 sid, u16 addr, u32 *offset)
+{
+	u16 apid;
+	int rc;
+
+	rc = pmic_arb_ppid_to_apid_v2(pa, sid, addr, &apid);
+	if (rc < 0)
+		return rc;
+
+	*offset = 0x1000 * pa->ee + 0x8000 * apid;
 	return 0;
 }
 
@@ -795,47 +897,55 @@ static u32 pmic_arb_fmt_cmd_v2(u8 opc, u8 sid, u16 addr, u8 bc)
 	return (opc << 27) | ((addr & 0xff) << 4) | (bc & 0x7);
 }
 
-static u32 pmic_arb_owner_acc_status_v1(u8 m, u8 n)
+static u32 pmic_arb_owner_acc_status_v1(u8 m, u16 n)
 {
 	return 0x20 * m + 0x4 * n;
 }
 
-static u32 pmic_arb_owner_acc_status_v2(u8 m, u8 n)
+static u32 pmic_arb_owner_acc_status_v2(u8 m, u16 n)
 {
 	return 0x100000 + 0x1000 * m + 0x4 * n;
 }
 
-static u32 pmic_arb_acc_enable_v1(u8 n)
+static u32 pmic_arb_owner_acc_status_v3(u8 m, u16 n)
+{
+	return 0x200000 + 0x1000 * m + 0x4 * n;
+}
+
+static u32 pmic_arb_acc_enable_v1(u16 n)
 {
 	return 0x200 + 0x4 * n;
 }
 
-static u32 pmic_arb_acc_enable_v2(u8 n)
+static u32 pmic_arb_acc_enable_v2(u16 n)
 {
 	return 0x1000 * n;
 }
 
-static u32 pmic_arb_irq_status_v1(u8 n)
+static u32 pmic_arb_irq_status_v1(u16 n)
 {
 	return 0x600 + 0x4 * n;
 }
 
-static u32 pmic_arb_irq_status_v2(u8 n)
+static u32 pmic_arb_irq_status_v2(u16 n)
 {
 	return 0x4 + 0x1000 * n;
 }
 
-static u32 pmic_arb_irq_clear_v1(u8 n)
+static u32 pmic_arb_irq_clear_v1(u16 n)
 {
 	return 0xA00 + 0x4 * n;
 }
 
-static u32 pmic_arb_irq_clear_v2(u8 n)
+static u32 pmic_arb_irq_clear_v2(u16 n)
 {
 	return 0x8 + 0x1000 * n;
 }
 
 static const struct pmic_arb_ver_ops pmic_arb_v1 = {
+	.ver_str		= "v1",
+	.ppid_to_apid		= pmic_arb_ppid_to_apid_v1,
+	.mode			= pmic_arb_mode_v1_v3,
 	.non_data_cmd		= pmic_arb_non_data_cmd_v1,
 	.offset			= pmic_arb_offset_v1,
 	.fmt_cmd		= pmic_arb_fmt_cmd_v1,
@@ -846,6 +956,9 @@ static const struct pmic_arb_ver_ops pmic_arb_v1 = {
 };
 
 static const struct pmic_arb_ver_ops pmic_arb_v2 = {
+	.ver_str		= "v2",
+	.ppid_to_apid		= pmic_arb_ppid_to_apid_v2,
+	.mode			= pmic_arb_mode_v2,
 	.non_data_cmd		= pmic_arb_non_data_cmd_v2,
 	.offset			= pmic_arb_offset_v2,
 	.fmt_cmd		= pmic_arb_fmt_cmd_v2,
@@ -855,6 +968,19 @@ static const struct pmic_arb_ver_ops pmic_arb_v2 = {
 	.irq_clear		= pmic_arb_irq_clear_v2,
 };
 
+static const struct pmic_arb_ver_ops pmic_arb_v3 = {
+	.ver_str		= "v3",
+	.ppid_to_apid		= pmic_arb_ppid_to_apid_v2,
+	.mode			= pmic_arb_mode_v1_v3,
+	.non_data_cmd		= pmic_arb_non_data_cmd_v2,
+	.offset			= pmic_arb_offset_v2,
+	.fmt_cmd		= pmic_arb_fmt_cmd_v2,
+	.owner_acc_status	= pmic_arb_owner_acc_status_v3,
+	.acc_enable		= pmic_arb_acc_enable_v2,
+	.irq_status		= pmic_arb_irq_status_v2,
+	.irq_clear		= pmic_arb_irq_clear_v2,
+};
+
 static const struct irq_domain_ops pmic_arb_irq_domain_ops = {
 	.map	= qpnpint_irq_domain_map,
 	.xlate	= qpnpint_irq_domain_dt_translate,
@@ -862,13 +988,12 @@ static const struct irq_domain_ops pmic_arb_irq_domain_ops = {
 
 static int spmi_pmic_arb_probe(struct platform_device *pdev)
 {
-	struct spmi_pmic_arb_dev *pa;
+	struct spmi_pmic_arb *pa;
 	struct spmi_controller *ctrl;
 	struct resource *res;
 	void __iomem *core;
 	u32 channel, ee, hw_ver;
 	int err;
-	bool is_v1;
 
 	ctrl = spmi_controller_alloc(&pdev->dev, sizeof(*pa));
 	if (!ctrl)
@@ -879,6 +1004,12 @@ static int spmi_pmic_arb_probe(struct platform_device *pdev)
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "core");
 	pa->core_size = resource_size(res);
+	if (pa->core_size <= 0x800) {
+		dev_err(&pdev->dev, "core_size is smaller than 0x800. Failing Probe\n");
+		err = -EINVAL;
+		goto err_put_ctrl;
+	}
+
 	core = devm_ioremap_resource(&ctrl->dev, res);
 	if (IS_ERR(core)) {
 		err = PTR_ERR(core);
@@ -886,18 +1017,21 @@ static int spmi_pmic_arb_probe(struct platform_device *pdev)
 	}
 
 	hw_ver = readl_relaxed(core + PMIC_ARB_VERSION);
-	is_v1  = (hw_ver < PMIC_ARB_VERSION_V2_MIN);
-
-	dev_info(&ctrl->dev, "PMIC Arb Version-%d (0x%x)\n", (is_v1 ? 1 : 2),
-		hw_ver);
 
-	if (is_v1) {
+	if (hw_ver < PMIC_ARB_VERSION_V2_MIN) {
 		pa->ver_ops = &pmic_arb_v1;
 		pa->wr_base = core;
 		pa->rd_base = core;
 	} else {
 		pa->core = core;
-		pa->ver_ops = &pmic_arb_v2;
+
+		if (hw_ver < PMIC_ARB_VERSION_V3_MIN)
+			pa->ver_ops = &pmic_arb_v2;
+		else
+			pa->ver_ops = &pmic_arb_v3;
+
+		/* the apid to ppid table starts at PMIC_ARB_REG_CHNL(0) */
+		pa->max_periph = (pa->core_size - PMIC_ARB_REG_CHNL(0)) / 4;
 
 		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
 						   "obsrvr");
@@ -915,16 +1049,19 @@ static int spmi_pmic_arb_probe(struct platform_device *pdev)
 			goto err_put_ctrl;
 		}
 
-		pa->ppid_to_chan = devm_kcalloc(&ctrl->dev,
+		pa->ppid_to_apid = devm_kcalloc(&ctrl->dev,
 						PMIC_ARB_MAX_PPID,
-						sizeof(*pa->ppid_to_chan),
+						sizeof(*pa->ppid_to_apid),
 						GFP_KERNEL);
-		if (!pa->ppid_to_chan) {
+		if (!pa->ppid_to_apid) {
 			err = -ENOMEM;
 			goto err_put_ctrl;
 		}
 	}
 
+	dev_info(&ctrl->dev, "PMIC arbiter version %s (0x%x)\n",
+		 pa->ver_ops->ver_str, hw_ver);
+
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "intr");
 	pa->intr = devm_ioremap_resource(&ctrl->dev, res);
 	if (IS_ERR(pa->intr)) {
@@ -974,14 +1111,6 @@ static int spmi_pmic_arb_probe(struct platform_device *pdev)
 
 	pa->ee = ee;
 
-	pa->apid_to_ppid = devm_kcalloc(&ctrl->dev, PMIC_ARB_MAX_PERIPHS,
-					    sizeof(*pa->apid_to_ppid),
-					    GFP_KERNEL);
-	if (!pa->apid_to_ppid) {
-		err = -ENOMEM;
-		goto err_put_ctrl;
-	}
-
 	pa->mapping_table = devm_kcalloc(&ctrl->dev, PMIC_ARB_MAX_PERIPHS - 1,
 					sizeof(*pa->mapping_table), GFP_KERNEL);
 	if (!pa->mapping_table) {
@@ -1011,6 +1140,7 @@ static int spmi_pmic_arb_probe(struct platform_device *pdev)
 	}
 
 	irq_set_chained_handler_and_data(pa->irq, pmic_arb_chained_irq, pa);
+	enable_irq_wake(pa->irq);
 
 	err = spmi_controller_add(ctrl);
 	if (err)
@@ -1029,7 +1159,7 @@ err_put_ctrl:
 static int spmi_pmic_arb_remove(struct platform_device *pdev)
 {
 	struct spmi_controller *ctrl = platform_get_drvdata(pdev);
-	struct spmi_pmic_arb_dev *pa = spmi_controller_get_drvdata(ctrl);
+	struct spmi_pmic_arb *pa = spmi_controller_get_drvdata(ctrl);
 	spmi_controller_remove(ctrl);
 	irq_set_chained_handler_and_data(pa->irq, NULL, NULL);
 	irq_domain_remove(pa->domain);

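The hunks above call HWIRQ() and HWIRQ_IRQ() without showing their definitions, which live earlier in spmi-pmic-arb.c and fall outside this excerpt. As a rough sketch of the bit-packing they imply — this simply mirrors the removed open-coded *out_hwirq computation and is an assumption, not the driver's actual macro block (the real macros likely widen the apid field now that apid is a u16):

/*
 * Hypothetical reconstruction, for illustration only: the removed code
 * packed the translated interrupt as sid << 24 | per << 16 | irq << 8 | apid.
 */
#define HWIRQ(slave_id, periph_id, irq_id, apid)	\
	((unsigned long)(slave_id) << 24 |		\
	 (unsigned long)(periph_id) << 16 |		\
	 (unsigned long)(irq_id) << 8 |			\
	 (unsigned long)(apid))

#define HWIRQ_SID(hwirq)	(((hwirq) >> 24) & 0xf)		/* 4-bit slave id */
#define HWIRQ_PER(hwirq)	(((hwirq) >> 16) & 0xff)	/* 8-bit peripheral */
#define HWIRQ_IRQ(hwirq)	(((hwirq) >> 8) & 0x7)		/* 3-bit irq number */
#define HWIRQ_APID(hwirq)	((hwirq) & 0xff)		/* apid field */

Under this layout, HWIRQ_IRQ(d->hwirq) in qpnpint_get_irqchip_state() extracts the same 3-bit per-peripheral interrupt number that the old code obtained with d->hwirq >> 8.
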
+ 7 - 6
drivers/thunderbolt/Kconfig

@@ -1,15 +1,16 @@
 menuconfig THUNDERBOLT
-	tristate "Thunderbolt support for Apple devices"
+	tristate "Thunderbolt support"
 	depends on PCI
 	depends on X86 || COMPILE_TEST
 	select APPLE_PROPERTIES if EFI_STUB && X86
 	select CRC32
+	select CRYPTO
+	select CRYPTO_HASH
+	select NVMEM
 	help
-	  Cactus Ridge Thunderbolt Controller driver
-	  This driver is required if you want to hotplug Thunderbolt devices on
-	  Apple hardware.
-
-	  Device chaining is currently not supported.
+	  Thunderbolt Controller driver. This driver is required if you
+	  want to hotplug Thunderbolt devices on Apple hardware or on PCs
+	  with Intel Falcon Ridge or newer.
 
 	  To compile this driver as a module, choose M here. The module will be
 	  called thunderbolt.

+ 1 - 1
drivers/thunderbolt/Makefile

@@ -1,3 +1,3 @@
 obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
 thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
-
+thunderbolt-objs += domain.o dma_port.o icm.o

+ 91 - 78
drivers/thunderbolt/cap.c

@@ -9,6 +9,8 @@
 
 #include "tb.h"
 
+#define CAP_OFFSET_MAX		0xff
+#define VSE_CAP_OFFSET_MAX	0xffff
 
 struct tb_cap_any {
 	union {
@@ -18,99 +20,110 @@ struct tb_cap_any {
 	};
 } __packed;
 
-static bool tb_cap_is_basic(struct tb_cap_any *cap)
-{
-	/* basic.cap is u8. This checks only the lower 8 bit of cap. */
-	return cap->basic.cap != 5;
-}
-
-static bool tb_cap_is_long(struct tb_cap_any *cap)
+/**
+ * tb_port_find_cap() - Find port capability
+ * @port: Port to find the capability for
+ * @cap: Capability to look for
+ *
+ * Returns offset to start of capability or %-ENOENT if no such
+ * capability was found. Negative errno is returned if there was an
+ * error.
+ */
+int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
 {
-	return !tb_cap_is_basic(cap)
-	       && cap->extended_short.next == 0
-	       && cap->extended_short.length == 0;
-}
+	u32 offset;
 
-static enum tb_cap tb_cap(struct tb_cap_any *cap)
-{
-	if (tb_cap_is_basic(cap))
-		return cap->basic.cap;
+	/*
+	 * DP out adapters claim to implement TMU capability but in
+	 * reality they do not so we hard code the adapter specific
+	 * capability offset here.
+	 */
+	if (port->config.type == TB_TYPE_DP_HDMI_OUT)
+		offset = 0x39;
 	else
-		/* extended_short/long have cap at the same offset. */
-		return cap->extended_short.cap;
+		offset = 0x1;
+
+	do {
+		struct tb_cap_any header;
+		int ret;
+
+		ret = tb_port_read(port, &header, TB_CFG_PORT, offset, 1);
+		if (ret)
+			return ret;
+
+		if (header.basic.cap == cap)
+			return offset;
+
+		offset = header.basic.next;
+	} while (offset);
+
+	return -ENOENT;
 }
 
-static u32 tb_cap_next(struct tb_cap_any *cap, u32 offset)
+static int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap)
 {
-	int next;
-	if (offset == 1) {
-		/*
-		 * The first pointer is part of the switch header and always
-		 * a simple pointer.
-		 */
-		next = cap->basic.next;
-	} else {
-		/*
-		 * Somehow Intel decided to use 3 different types of capability
-		 * headers. It is not like anyone could have predicted that
-		 * single byte offsets are not enough...
-		 */
-		if (tb_cap_is_basic(cap))
-			next = cap->basic.next;
-		else if (!tb_cap_is_long(cap))
-			next = cap->extended_short.next;
-		else
-			next = cap->extended_long.next;
+	int offset = sw->config.first_cap_offset;
+
+	while (offset > 0 && offset < CAP_OFFSET_MAX) {
+		struct tb_cap_any header;
+		int ret;
+
+		ret = tb_sw_read(sw, &header, TB_CFG_SWITCH, offset, 1);
+		if (ret)
+			return ret;
+
+		if (header.basic.cap == cap)
+			return offset;
+
+		offset = header.basic.next;
 	}
-	/*
-	 * "Hey, we could terminate some capability lists with a null offset
-	 *  and others with a pointer to the last element." - "Great idea!"
-	 */
-	if (next == offset)
-		return 0;
-	return next;
+
+	return -ENOENT;
 }
 
 /**
- * tb_find_cap() - find a capability
+ * tb_switch_find_vse_cap() - Find switch vendor specific capability
+ * @sw: Switch to find the capability for
+ * @vsec: Vendor specific capability to look for
  *
- * Return: Returns a positive offset if the capability was found and 0 if not.
- * Returns an error code on failure.
+ * Function enumerates vendor specific capabilities (VSEC) of a switch
+ * and returns offset when capability matching @vsec is found. If no
+ * such capability is found returns %-ENOENT. In case of error returns
+ * negative errno.
  */
-int tb_find_cap(struct tb_port *port, enum tb_cfg_space space, enum tb_cap cap)
+int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec)
 {
-	u32 offset = 1;
 	struct tb_cap_any header;
-	int res;
-	int retries = 10;
-	while (retries--) {
-		res = tb_port_read(port, &header, space, offset, 1);
-		if (res) {
-			/* Intel needs some help with linked lists. */
-			if (space == TB_CFG_PORT && offset == 0xa
-			    && port->config.type == TB_TYPE_DP_HDMI_OUT) {
-				offset = 0x39;
-				continue;
-			}
-			return res;
-		}
-		if (offset != 1) {
-			if (tb_cap(&header) == cap)
+	int offset;
+
+	offset = tb_switch_find_cap(sw, TB_SWITCH_CAP_VSE);
+	if (offset < 0)
+		return offset;
+
+	while (offset > 0 && offset < VSE_CAP_OFFSET_MAX) {
+		int ret;
+
+		ret = tb_sw_read(sw, &header, TB_CFG_SWITCH, offset, 2);
+		if (ret)
+			return ret;
+
+		/*
+		 * Extended vendor specific capabilities come in two
+		 * flavors: short and long. The latter is used when
+		 * offset is over 0xff.
+		 */
+		if (offset >= CAP_OFFSET_MAX) {
+			if (header.extended_long.vsec_id == vsec)
 				return offset;
-			if (tb_cap_is_long(&header)) {
-				/* tb_cap_extended_long is 2 dwords */
-				res = tb_port_read(port, &header, space,
-						   offset, 2);
-				if (res)
-					return res;
-			}
+			offset = header.extended_long.next;
+		} else {
+			if (header.extended_short.vsec_id == vsec)
+				return offset;
+			if (!header.extended_short.length)
+				return -ENOENT;
+			offset = header.extended_short.next;
 		}
 		}
-		if (!offset)
-			return 0;
 	}
-	tb_port_WARN(port,
-		     "run out of retries while looking for cap %#x in config space %d, last offset: %#x\n",
-		     cap, space, offset);
-	return -EIO;
+
+	return -ENOENT;
 }

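The rewritten helpers return the capability offset directly and use negative errno for failure, so callers can branch on the sign alone. A minimal usage sketch — illustrative, not part of the patch; TB_PORT_CAP_PHY stands in for one of the enum tb_port_cap values defined elsewhere in the driver:

/* Illustrative caller of the new lookup helper. */
static int example_locate_phy_cap(struct tb_port *port)
{
	int cap;

	cap = tb_port_find_cap(port, TB_PORT_CAP_PHY);
	if (cap < 0)
		return cap;	/* -ENOENT if absent, other errno on read error */

	/*
	 * 'cap' is a dword offset into the port config space and can be
	 * passed straight to tb_port_read()/tb_port_write().
	 */
	return cap;
}
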
+ 475 - 190
drivers/thunderbolt/ctl.c

@@ -5,22 +5,17 @@
  */
 
 #include <linux/crc32.h>
+#include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/pci.h>
 #include <linux/dmapool.h>
 #include <linux/workqueue.h>
-#include <linux/kfifo.h>
 
 #include "ctl.h"
 
 
-struct ctl_pkg {
-	struct tb_ctl *ctl;
-	void *buffer;
-	struct ring_frame frame;
-};
-
-#define TB_CTL_RX_PKG_COUNT 10
+#define TB_CTL_RX_PKG_COUNT	10
+#define TB_CTL_RETRIES		4
 
 /**
  * struct tb_ctl - thunderbolt control channel
@@ -32,10 +27,11 @@ struct tb_ctl {
 
 	struct dma_pool *frame_pool;
 	struct ctl_pkg *rx_packets[TB_CTL_RX_PKG_COUNT];
-	DECLARE_KFIFO(response_fifo, struct ctl_pkg*, 16);
-	struct completion response_ready;
+	struct mutex request_queue_lock;
+	struct list_head request_queue;
+	bool running;
 
-	hotplug_cb callback;
+	event_cb callback;
 	void *callback_data;
 };
 
@@ -52,102 +48,124 @@ struct tb_ctl {
 #define tb_ctl_info(ctl, format, arg...) \
 	dev_info(&(ctl)->nhi->pdev->dev, format, ## arg)
 
+#define tb_ctl_dbg(ctl, format, arg...) \
+	dev_dbg(&(ctl)->nhi->pdev->dev, format, ## arg)
 
-/* configuration packets definitions */
+static DECLARE_WAIT_QUEUE_HEAD(tb_cfg_request_cancel_queue);
+/* Serializes access to request kref_get/put */
+static DEFINE_MUTEX(tb_cfg_request_lock);
 
-enum tb_cfg_pkg_type {
-	TB_CFG_PKG_READ = 1,
-	TB_CFG_PKG_WRITE = 2,
-	TB_CFG_PKG_ERROR = 3,
-	TB_CFG_PKG_NOTIFY_ACK = 4,
-	TB_CFG_PKG_EVENT = 5,
-	TB_CFG_PKG_XDOMAIN_REQ = 6,
-	TB_CFG_PKG_XDOMAIN_RESP = 7,
-	TB_CFG_PKG_OVERRIDE = 8,
-	TB_CFG_PKG_RESET = 9,
-	TB_CFG_PKG_PREPARE_TO_SLEEP = 0xd,
-};
+/**
+ * tb_cfg_request_alloc() - Allocates a new config request
+ *
+ * This is a refcounted object, so when you are done with it, call
+ * tb_cfg_request_put() to release it.
+ */
+struct tb_cfg_request *tb_cfg_request_alloc(void)
+{
+	struct tb_cfg_request *req;
 
-/* common header */
-struct tb_cfg_header {
-	u32 route_hi:22;
-	u32 unknown:10; /* highest order bit is set on replies */
-	u32 route_lo;
-} __packed;
-
-/* additional header for read/write packets */
-struct tb_cfg_address {
-	u32 offset:13; /* in dwords */
-	u32 length:6; /* in dwords */
-	u32 port:6;
-	enum tb_cfg_space space:2;
-	u32 seq:2; /* sequence number  */
-	u32 zero:3;
-} __packed;
-
-/* TB_CFG_PKG_READ, response for TB_CFG_PKG_WRITE */
-struct cfg_read_pkg {
-	struct tb_cfg_header header;
-	struct tb_cfg_address addr;
-} __packed;
-
-/* TB_CFG_PKG_WRITE, response for TB_CFG_PKG_READ */
-struct cfg_write_pkg {
-	struct tb_cfg_header header;
-	struct tb_cfg_address addr;
-	u32 data[64]; /* maximum size, tb_cfg_address.length has 6 bits */
-} __packed;
-
-/* TB_CFG_PKG_ERROR */
-struct cfg_error_pkg {
-	struct tb_cfg_header header;
-	enum tb_cfg_error error:4;
-	u32 zero1:4;
-	u32 port:6;
-	u32 zero2:2; /* Both should be zero, still they are different fields. */
-	u32 zero3:16;
-} __packed;
-
-/* TB_CFG_PKG_EVENT */
-struct cfg_event_pkg {
-	struct tb_cfg_header header;
-	u32 port:6;
-	u32 zero:25;
-	bool unplug:1;
-} __packed;
-
-/* TB_CFG_PKG_RESET */
-struct cfg_reset_pkg {
-	struct tb_cfg_header header;
-} __packed;
-
-/* TB_CFG_PKG_PREPARE_TO_SLEEP */
-struct cfg_pts_pkg {
-	struct tb_cfg_header header;
-	u32 data;
-} __packed;
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return NULL;
 
+	kref_init(&req->kref);
 
-/* utility functions */
+	return req;
+}
 
-static u64 get_route(struct tb_cfg_header header)
+/**
+ * tb_cfg_request_get() - Increase refcount of a request
+ * @req: Request whose refcount is increased
+ */
+void tb_cfg_request_get(struct tb_cfg_request *req)
 {
-	return (u64) header.route_hi << 32 | header.route_lo;
+	mutex_lock(&tb_cfg_request_lock);
+	kref_get(&req->kref);
+	mutex_unlock(&tb_cfg_request_lock);
 }
 
-static struct tb_cfg_header make_header(u64 route)
+static void tb_cfg_request_destroy(struct kref *kref)
 {
-	struct tb_cfg_header header = {
-		.route_hi = route >> 32,
-		.route_lo = route,
-	};
-	/* check for overflow, route_hi is not 32 bits! */
-	WARN_ON(get_route(header) != route);
-	return header;
+	struct tb_cfg_request *req = container_of(kref, typeof(*req), kref);
+
+	kfree(req);
+}
+
+/**
+ * tb_cfg_request_put() - Decrease refcount and possibly release the request
+ * @req: Request whose refcount is decreased
+ *
+ * Call this function when you are done with the request. When refcount
+ * goes to %0 the object is released.
+ */
+void tb_cfg_request_put(struct tb_cfg_request *req)
+{
+	mutex_lock(&tb_cfg_request_lock);
+	kref_put(&req->kref, tb_cfg_request_destroy);
+	mutex_unlock(&tb_cfg_request_lock);
 }
 
-static int check_header(struct ctl_pkg *pkg, u32 len, enum tb_cfg_pkg_type type,
-			u64 route)
+static int tb_cfg_request_enqueue(struct tb_ctl *ctl,
+				  struct tb_cfg_request *req)
+{
+	WARN_ON(test_bit(TB_CFG_REQUEST_ACTIVE, &req->flags));
+	WARN_ON(req->ctl);
+
+	mutex_lock(&ctl->request_queue_lock);
+	if (!ctl->running) {
+		mutex_unlock(&ctl->request_queue_lock);
+		return -ENOTCONN;
+	}
+	req->ctl = ctl;
+	list_add_tail(&req->list, &ctl->request_queue);
+	set_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+	mutex_unlock(&ctl->request_queue_lock);
+	return 0;
+}
+
+static void tb_cfg_request_dequeue(struct tb_cfg_request *req)
+{
+	struct tb_ctl *ctl = req->ctl;
+
+	mutex_lock(&ctl->request_queue_lock);
+	list_del(&req->list);
+	clear_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+	if (test_bit(TB_CFG_REQUEST_CANCELED, &req->flags))
+		wake_up(&tb_cfg_request_cancel_queue);
+	mutex_unlock(&ctl->request_queue_lock);
+}
+
+static bool tb_cfg_request_is_active(struct tb_cfg_request *req)
+{
+	return test_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+}
+
+static struct tb_cfg_request *
+tb_cfg_request_find(struct tb_ctl *ctl, struct ctl_pkg *pkg)
+{
+	struct tb_cfg_request *req;
+	bool found = false;
+
+	mutex_lock(&pkg->ctl->request_queue_lock);
+	list_for_each_entry(req, &pkg->ctl->request_queue, list) {
+		tb_cfg_request_get(req);
+		if (req->match(req, pkg)) {
+			found = true;
+			break;
+		}
+		tb_cfg_request_put(req);
+	}
+	mutex_unlock(&pkg->ctl->request_queue_lock);
+
+	return found ? req : NULL;
+}
+
+/* utility functions */
+
+
+static int check_header(const struct ctl_pkg *pkg, u32 len,
+			enum tb_cfg_pkg_type type, u64 route)
 {
 	struct tb_cfg_header *header = pkg->buffer;
 
@@ -167,9 +185,9 @@ static int check_header(struct ctl_pkg *pkg, u32 len, enum tb_cfg_pkg_type type,
 	if (WARN(header->unknown != 1 << 9,
 			"header->unknown is %#x\n", header->unknown))
 		return -EIO;
-	if (WARN(route != get_route(*header),
+	if (WARN(route != tb_cfg_get_route(header),
 			"wrong route (expected %llx, got %llx)",
 			"wrong route (expected %llx, got %llx)",
-			route, get_route(*header)))
+			route, tb_cfg_get_route(header)))
 		return -EIO;
 	return 0;
 }
@@ -189,8 +207,6 @@ static int check_config_address(struct tb_cfg_address addr,
 	if (WARN(length != addr.length, "wrong length (expected %x, got %x)\n",
 			length, addr.length))
 		return -EIO;
-	if (WARN(addr.seq, "addr.seq is %#x\n", addr.seq))
-		return -EIO;
 	/*
 	 * We cannot check addr->port as it is set to the upstream port of the
 	 * sender.
@@ -198,14 +214,14 @@ static int check_config_address(struct tb_cfg_address addr,
 	return 0;
 }
 
-static struct tb_cfg_result decode_error(struct ctl_pkg *response)
+static struct tb_cfg_result decode_error(const struct ctl_pkg *response)
 {
 	struct cfg_error_pkg *pkg = response->buffer;
 	struct tb_cfg_result res = { 0 };
-	res.response_route = get_route(pkg->header);
+	res.response_route = tb_cfg_get_route(&pkg->header);
 	res.response_port = 0;
 	res.err = check_header(response, sizeof(*pkg), TB_CFG_PKG_ERROR,
-			       get_route(pkg->header));
+			       tb_cfg_get_route(&pkg->header));
 	if (res.err)
 		return res;
 
@@ -219,7 +235,7 @@ static struct tb_cfg_result decode_error(struct ctl_pkg *response)
 
 }
 
-static struct tb_cfg_result parse_header(struct ctl_pkg *pkg, u32 len,
+static struct tb_cfg_result parse_header(const struct ctl_pkg *pkg, u32 len,
 					 enum tb_cfg_pkg_type type, u64 route)
 {
 	struct tb_cfg_header *header = pkg->buffer;
@@ -229,7 +245,7 @@ static struct tb_cfg_result parse_header(struct ctl_pkg *pkg, u32 len,
 		return decode_error(pkg);
 
 	res.response_port = 0; /* will be updated later for cfg_read/write */
-	res.response_route = get_route(*header);
+	res.response_route = tb_cfg_get_route(header);
 	res.err = check_header(pkg, len, type, route);
 	return res;
 }
@@ -273,7 +289,7 @@ static void tb_cfg_print_error(struct tb_ctl *ctl,
 	}
 }
 
-static void cpu_to_be32_array(__be32 *dst, u32 *src, size_t len)
+static void cpu_to_be32_array(__be32 *dst, const u32 *src, size_t len)
 {
 	int i;
 	for (i = 0; i < len; i++)
@@ -287,7 +303,7 @@ static void be32_to_cpu_array(u32 *dst, __be32 *src, size_t len)
 		dst[i] = be32_to_cpu(src[i]);
 }
 
-static __be32 tb_crc(void *data, size_t len)
+static __be32 tb_crc(const void *data, size_t len)
 {
 	return cpu_to_be32(~__crc32c_le(~0, data, len));
 }
@@ -333,7 +349,7 @@ static void tb_ctl_tx_callback(struct tb_ring *ring, struct ring_frame *frame,
  *
  * Return: Returns 0 on success or an error code on failure.
  */
-static int tb_ctl_tx(struct tb_ctl *ctl, void *data, size_t len,
+static int tb_ctl_tx(struct tb_ctl *ctl, const void *data, size_t len,
 		     enum tb_cfg_pkg_type type)
 {
 	int res;
@@ -364,24 +380,12 @@ static int tb_ctl_tx(struct tb_ctl *ctl, void *data, size_t len,
 }
 
 /**
- * tb_ctl_handle_plug_event() - acknowledge a plug event, invoke ctl->callback
+ * tb_ctl_handle_event() - acknowledge a plug event, invoke ctl->callback
  */
-static void tb_ctl_handle_plug_event(struct tb_ctl *ctl,
-				     struct ctl_pkg *response)
+static void tb_ctl_handle_event(struct tb_ctl *ctl, enum tb_cfg_pkg_type type,
+				struct ctl_pkg *pkg, size_t size)
 {
-	struct cfg_event_pkg *pkg = response->buffer;
-	u64 route = get_route(pkg->header);
-
-	if (check_header(response, sizeof(*pkg), TB_CFG_PKG_EVENT, route)) {
-		tb_ctl_warn(ctl, "malformed TB_CFG_PKG_EVENT\n");
-		return;
-	}
-
-	if (tb_cfg_error(ctl, route, pkg->port, TB_CFG_ERROR_ACK_PLUG_EVENT))
-		tb_ctl_warn(ctl, "could not ack plug event on %llx:%x\n",
-			    route, pkg->port);
-	WARN(pkg->zero, "pkg->zero is %#x\n", pkg->zero);
-	ctl->callback(ctl->callback_data, route, pkg->port, pkg->unplug);
+	ctl->callback(ctl->callback_data, type, pkg->buffer, size);
 }
 
 static void tb_ctl_rx_submit(struct ctl_pkg *pkg)
@@ -394,10 +398,30 @@ static void tb_ctl_rx_submit(struct ctl_pkg *pkg)
 					     */
 }
 
+static int tb_async_error(const struct ctl_pkg *pkg)
+{
+	const struct cfg_error_pkg *error = pkg->buffer;
+
+	if (pkg->frame.eof != TB_CFG_PKG_ERROR)
+		return false;
+
+	switch (error->error) {
+	case TB_CFG_ERROR_LINK_ERROR:
+	case TB_CFG_ERROR_HEC_ERROR_DETECTED:
+	case TB_CFG_ERROR_FLOW_CONTROL_ERROR:
+		return true;
+
+	default:
+		return false;
+	}
+}
+
 static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
 			       bool canceled)
 {
 	struct ctl_pkg *pkg = container_of(frame, typeof(*pkg), frame);
+	struct tb_cfg_request *req;
+	__be32 crc32;
 
 	if (canceled)
 		return; /*
@@ -412,55 +436,168 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
 	}
 
 	frame->size -= 4; /* remove checksum */
-	if (*(__be32 *) (pkg->buffer + frame->size)
-			!= tb_crc(pkg->buffer, frame->size)) {
-		tb_ctl_err(pkg->ctl,
-			   "RX: checksum mismatch, dropping packet\n");
-		goto rx;
-	}
+	crc32 = tb_crc(pkg->buffer, frame->size);
 	be32_to_cpu_array(pkg->buffer, pkg->buffer, frame->size / 4);
 
-	if (frame->eof == TB_CFG_PKG_EVENT) {
-		tb_ctl_handle_plug_event(pkg->ctl, pkg);
+	switch (frame->eof) {
+	case TB_CFG_PKG_READ:
+	case TB_CFG_PKG_WRITE:
+	case TB_CFG_PKG_ERROR:
+	case TB_CFG_PKG_OVERRIDE:
+	case TB_CFG_PKG_RESET:
+		if (*(__be32 *)(pkg->buffer + frame->size) != crc32) {
+			tb_ctl_err(pkg->ctl,
+				   "RX: checksum mismatch, dropping packet\n");
+			goto rx;
+		}
+		if (tb_async_error(pkg)) {
+			tb_ctl_handle_event(pkg->ctl, frame->eof,
+					    pkg, frame->size);
+			goto rx;
+		}
+		break;
+
+	case TB_CFG_PKG_EVENT:
+		if (*(__be32 *)(pkg->buffer + frame->size) != crc32) {
+			tb_ctl_err(pkg->ctl,
+				   "RX: checksum mismatch, dropping packet\n");
+			goto rx;
+		}
+		/* Fall through */
+	case TB_CFG_PKG_ICM_EVENT:
+		tb_ctl_handle_event(pkg->ctl, frame->eof, pkg, frame->size);
 		goto rx;
+
+	default:
+		break;
 	}
-	if (!kfifo_put(&pkg->ctl->response_fifo, pkg)) {
-		tb_ctl_err(pkg->ctl, "RX: fifo is full\n");
-		goto rx;
+
+	/*
+	 * The received packet will be processed only if there is an
+	 * active request and the packet is what is expected. This
+	 * prevents packets such as replies coming after timeout has
+	 * triggered from messing with the active requests.
+	 */
+	req = tb_cfg_request_find(pkg->ctl, pkg);
+	if (req) {
+		if (req->copy(req, pkg))
+			schedule_work(&req->work);
+		tb_cfg_request_put(req);
 	}
-	complete(&pkg->ctl->response_ready);
-	return;
+
 rx:
 	tb_ctl_rx_submit(pkg);
 }
 
+static void tb_cfg_request_work(struct work_struct *work)
+{
+	struct tb_cfg_request *req = container_of(work, typeof(*req), work);
+
+	if (!test_bit(TB_CFG_REQUEST_CANCELED, &req->flags))
+		req->callback(req->callback_data);
+
+	tb_cfg_request_dequeue(req);
+	tb_cfg_request_put(req);
+}
+
 /**
- * tb_ctl_rx() - receive a packet from the control channel
+ * tb_cfg_request() - Start control request without waiting for it to complete
+ * @ctl: Control channel to use
+ * @req: Request to start
+ * @callback: Callback called when the request is completed
+ * @callback_data: Data to be passed to @callback
+ *
+ * This queues @req on the given control channel without waiting for it
+ * to complete. When the request completes @callback is called.
  */
-static struct tb_cfg_result tb_ctl_rx(struct tb_ctl *ctl, void *buffer,
-				      size_t length, int timeout_msec,
-				      u64 route, enum tb_cfg_pkg_type type)
+int tb_cfg_request(struct tb_ctl *ctl, struct tb_cfg_request *req,
+		   void (*callback)(void *), void *callback_data)
 {
-	struct tb_cfg_result res;
-	struct ctl_pkg *pkg;
+	int ret;
 
-	if (!wait_for_completion_timeout(&ctl->response_ready,
-					 msecs_to_jiffies(timeout_msec))) {
-		tb_ctl_WARN(ctl, "RX: timeout\n");
-		return (struct tb_cfg_result) { .err = -ETIMEDOUT };
-	}
-	if (!kfifo_get(&ctl->response_fifo, &pkg)) {
-		tb_ctl_WARN(ctl, "empty kfifo\n");
-		return (struct tb_cfg_result) { .err = -EIO };
-	}
+	req->flags = 0;
+	req->callback = callback;
+	req->callback_data = callback_data;
+	INIT_WORK(&req->work, tb_cfg_request_work);
+	INIT_LIST_HEAD(&req->list);
 
-	res = parse_header(pkg, length, type, route);
-	if (!res.err)
-		memcpy(buffer, pkg->buffer, length);
-	tb_ctl_rx_submit(pkg);
-	return res;
+	tb_cfg_request_get(req);
+	ret = tb_cfg_request_enqueue(ctl, req);
+	if (ret)
+		goto err_put;
+
+	ret = tb_ctl_tx(ctl, req->request, req->request_size,
+			req->request_type);
+	if (ret)
+		goto err_dequeue;
+
+	if (!req->response)
+		schedule_work(&req->work);
+
+	return 0;
+
+err_dequeue:
+	tb_cfg_request_dequeue(req);
+err_put:
+	tb_cfg_request_put(req);
+
+	return ret;
+}
+
+/**
+ * tb_cfg_request_cancel() - Cancel a control request
+ * @req: Request to cancel
+ * @err: Error to assign to the request
+ *
+ * This function can be used to cancel an ongoing request. It will wait
+ * until the request is not active anymore.
+ */
+void tb_cfg_request_cancel(struct tb_cfg_request *req, int err)
+{
+	set_bit(TB_CFG_REQUEST_CANCELED, &req->flags);
+	schedule_work(&req->work);
+	wait_event(tb_cfg_request_cancel_queue, !tb_cfg_request_is_active(req));
+	req->result.err = err;
+}
+
+static void tb_cfg_request_complete(void *data)
+{
+	complete(data);
 }
 
+/**
+ * tb_cfg_request_sync() - Start control request and wait until it completes
+ * @ctl: Control channel to use
+ * @req: Request to start
+ * @timeout_msec: Timeout in milliseconds to wait for @req to complete
+ *
+ * Starts a control request and waits until it completes. If the timeout
+ * triggers, the request is canceled before the function returns. Note the
+ * caller needs to make sure only one message for a given switch is active
+ * at a time.
+ */
+struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
+					 struct tb_cfg_request *req,
+					 int timeout_msec)
+{
+	unsigned long timeout = msecs_to_jiffies(timeout_msec);
+	struct tb_cfg_result res = { 0 };
+	DECLARE_COMPLETION_ONSTACK(done);
+	int ret;
+
+	ret = tb_cfg_request(ctl, req, tb_cfg_request_complete, &done);
+	if (ret) {
+		res.err = ret;
+		return res;
+	}
+
+	if (!wait_for_completion_timeout(&done, timeout))
+		tb_cfg_request_cancel(req, -ETIMEDOUT);
+
+	flush_work(&req->work);
+
+	return req->result;
+}
 
 /* public interface, alloc/start/stop/free */
 
@@ -471,7 +608,7 @@ static struct tb_cfg_result tb_ctl_rx(struct tb_ctl *ctl, void *buffer,
  *
  * Return: Returns a pointer on success or NULL on failure.
  */
-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, hotplug_cb cb, void *cb_data)
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
 {
 	int i;
 	struct tb_ctl *ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
@@ -481,18 +618,18 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, hotplug_cb cb, void *cb_data)
 	ctl->callback = cb;
 	ctl->callback_data = cb_data;
 
-	init_completion(&ctl->response_ready);
-	INIT_KFIFO(ctl->response_fifo);
+	mutex_init(&ctl->request_queue_lock);
+	INIT_LIST_HEAD(&ctl->request_queue);
 	ctl->frame_pool = dma_pool_create("thunderbolt_ctl", &nhi->pdev->dev,
 					 TB_FRAME_SIZE, 4, 0);
 	if (!ctl->frame_pool)
 		goto err;
 
-	ctl->tx = ring_alloc_tx(nhi, 0, 10);
+	ctl->tx = ring_alloc_tx(nhi, 0, 10, RING_FLAG_NO_SUSPEND);
 	if (!ctl->tx)
 		goto err;
 
-	ctl->rx = ring_alloc_rx(nhi, 0, 10);
+	ctl->rx = ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND);
 	if (!ctl->rx)
 		goto err;
 
@@ -520,6 +657,10 @@ err:
 void tb_ctl_free(struct tb_ctl *ctl)
 {
 	int i;
+
+	if (!ctl)
+		return;
+
 	if (ctl->rx)
 		ring_free(ctl->rx);
 	if (ctl->tx)
@@ -546,6 +687,8 @@ void tb_ctl_start(struct tb_ctl *ctl)
 	ring_start(ctl->rx);
 	for (i = 0; i < TB_CTL_RX_PKG_COUNT; i++)
 		tb_ctl_rx_submit(ctl->rx_packets[i]);
+
+	ctl->running = true;
 }
 
 /**
@@ -558,12 +701,16 @@ void tb_ctl_start(struct tb_ctl *ctl)
  */
 void tb_ctl_stop(struct tb_ctl *ctl)
 {
+	mutex_lock(&ctl->request_queue_lock);
+	ctl->running = false;
+	mutex_unlock(&ctl->request_queue_lock);
+
 	ring_stop(ctl->rx);
 	ring_stop(ctl->tx);
 
-	if (!kfifo_is_empty(&ctl->response_fifo))
-		tb_ctl_WARN(ctl, "dangling response in response_fifo\n");
-	kfifo_reset(&ctl->response_fifo);
+	if (!list_empty(&ctl->request_queue))
+		tb_ctl_WARN(ctl, "dangling request in request_queue\n");
+	INIT_LIST_HEAD(&ctl->request_queue);
 	tb_ctl_info(ctl, "control channel stopped\n");
 }
 
@@ -578,7 +725,7 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
 		 enum tb_cfg_error error)
 {
 	struct cfg_error_pkg pkg = {
-		.header = make_header(route),
+		.header = tb_cfg_make_header(route),
 		.port = port,
 		.error = error,
 	};
@@ -586,6 +733,49 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
 	return tb_ctl_tx(ctl, &pkg, sizeof(pkg), TB_CFG_PKG_ERROR);
 }
 
+static bool tb_cfg_match(const struct tb_cfg_request *req,
+			 const struct ctl_pkg *pkg)
+{
+	u64 route = tb_cfg_get_route(pkg->buffer) & ~BIT_ULL(63);
+
+	if (pkg->frame.eof == TB_CFG_PKG_ERROR)
+		return true;
+
+	if (pkg->frame.eof != req->response_type)
+		return false;
+	if (route != tb_cfg_get_route(req->request))
+		return false;
+	if (pkg->frame.size != req->response_size)
+		return false;
+
+	if (pkg->frame.eof == TB_CFG_PKG_READ ||
+	    pkg->frame.eof == TB_CFG_PKG_WRITE) {
+		const struct cfg_read_pkg *req_hdr = req->request;
+		const struct cfg_read_pkg *res_hdr = pkg->buffer;
+
+		if (req_hdr->addr.seq != res_hdr->addr.seq)
+			return false;
+	}
+
+	return true;
+}
+
+static bool tb_cfg_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
+{
+	struct tb_cfg_result res;
+
+	/* Now make sure it is in expected format */
+	res = parse_header(pkg, req->response_size, req->response_type,
+			   tb_cfg_get_route(req->request));
+	if (!res.err)
+		memcpy(req->response, pkg->buffer, req->response_size);
+
+	req->result = res;
+
+	/* Always complete when first response is received */
+	return true;
+}
+
 /**
  * tb_cfg_reset() - send a reset packet and wait for a response
  *
@@ -596,16 +786,31 @@ int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
 struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
 				  int timeout_msec)
 {
-	int err;
-	struct cfg_reset_pkg request = { .header = make_header(route) };
+	struct cfg_reset_pkg request = { .header = tb_cfg_make_header(route) };
+	struct tb_cfg_result res = { 0 };
 	struct tb_cfg_header reply;
+	struct tb_cfg_request *req;
+
+	req = tb_cfg_request_alloc();
+	if (!req) {
+		res.err = -ENOMEM;
+		return res;
+	}
+
+	req->match = tb_cfg_match;
+	req->copy = tb_cfg_copy;
+	req->request = &request;
+	req->request_size = sizeof(request);
+	req->request_type = TB_CFG_PKG_RESET;
+	req->response = &reply;
+	req->response_size = sizeof(reply);
+	req->response_type = TB_CFG_PKG_RESET;
+
+	res = tb_cfg_request_sync(ctl, req, timeout_msec);
 
-	err = tb_ctl_tx(ctl, &request, sizeof(request), TB_CFG_PKG_RESET);
-	if (err)
-		return (struct tb_cfg_result) { .err = err };
+	tb_cfg_request_put(req);
 
-	return tb_ctl_rx(ctl, &reply, sizeof(reply), timeout_msec, route,
-			 TB_CFG_PKG_RESET);
+	return res;
 }
 
 /**
@@ -619,7 +824,7 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
 {
 	struct tb_cfg_result res = { 0 };
 	struct cfg_read_pkg request = {
-		.header = make_header(route),
+		.header = tb_cfg_make_header(route),
 		.addr = {
 			.port = port,
 			.space = space,
@@ -628,13 +833,39 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
 		},
 	};
 	struct cfg_write_pkg reply;
+	int retries = 0;
 
-	res.err = tb_ctl_tx(ctl, &request, sizeof(request), TB_CFG_PKG_READ);
-	if (res.err)
-		return res;
+	while (retries < TB_CTL_RETRIES) {
+		struct tb_cfg_request *req;
+
+		req = tb_cfg_request_alloc();
+		if (!req) {
+			res.err = -ENOMEM;
+			return res;
+		}
+
+		request.addr.seq = retries++;
+
+		req->match = tb_cfg_match;
+		req->copy = tb_cfg_copy;
+		req->request = &request;
+		req->request_size = sizeof(request);
+		req->request_type = TB_CFG_PKG_READ;
+		req->response = &reply;
+		req->response_size = 12 + 4 * length;
+		req->response_type = TB_CFG_PKG_READ;
+
+		res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+		tb_cfg_request_put(req);
+
+		if (res.err != -ETIMEDOUT)
+			break;
+
+		/* Wait a bit (arbitrary time) until we send a retry */
+		usleep_range(10, 100);
+	}
 
-	res = tb_ctl_rx(ctl, &reply, 12 + 4 * length, timeout_msec, route,
-			TB_CFG_PKG_READ);
 	if (res.err)
 		return res;
 
@@ -650,13 +881,13 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
  *
  * Offset and length are in dwords.
  */
-struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, void *buffer,
+struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, const void *buffer,
 		u64 route, u32 port, enum tb_cfg_space space,
 		u32 offset, u32 length, int timeout_msec)
 {
 	struct tb_cfg_result res = { 0 };
 	struct cfg_write_pkg request = {
-		.header = make_header(route),
+		.header = tb_cfg_make_header(route),
 		.addr = {
 			.port = port,
 			.space = space,
@@ -665,15 +896,41 @@ struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, void *buffer,
 		},
 	};
 	struct cfg_read_pkg reply;
+	int retries = 0;
 
 	memcpy(&request.data, buffer, length * 4);
 
-	res.err = tb_ctl_tx(ctl, &request, 12 + 4 * length, TB_CFG_PKG_WRITE);
-	if (res.err)
-		return res;
+	while (retries < TB_CTL_RETRIES) {
+		struct tb_cfg_request *req;
+
+		req = tb_cfg_request_alloc();
+		if (!req) {
+			res.err = -ENOMEM;
+			return res;
+		}
+
+		request.addr.seq = retries++;
+
+		req->match = tb_cfg_match;
+		req->copy = tb_cfg_copy;
+		req->request = &request;
+		req->request_size = 12 + 4 * length;
+		req->request_type = TB_CFG_PKG_WRITE;
+		req->response = &reply;
+		req->response_size = sizeof(reply);
+		req->response_type = TB_CFG_PKG_WRITE;
+
+		res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+		tb_cfg_request_put(req);
+
+		if (res.err != -ETIMEDOUT)
+			break;
+
+		/* Wait a bit (arbitrary time) until we send a retry */
+		usleep_range(10, 100);
+	}
 
-	res = tb_ctl_rx(ctl, &reply, sizeof(reply), timeout_msec, route,
-			TB_CFG_PKG_WRITE);
 	if (res.err)
 		return res;
 
@@ -687,24 +944,52 @@ int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
 {
 	struct tb_cfg_result res = tb_cfg_read_raw(ctl, buffer, route, port,
 			space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
-	if (res.err == 1) {
+	switch (res.err) {
+	case 0:
+		/* Success */
+		break;
+
+	case 1:
+		/* Thunderbolt error, tb_error holds the actual number */
 		tb_cfg_print_error(ctl, &res);
 		return -EIO;
+
+	case -ETIMEDOUT:
+		tb_ctl_warn(ctl, "timeout reading config space %u from %#x\n",
+			    space, offset);
+		break;
+
+	default:
+		WARN(1, "tb_cfg_read: %d\n", res.err);
+		break;
 	}
-	WARN(res.err, "tb_cfg_read: %d\n", res.err);
 	return res.err;
 }
 
-int tb_cfg_write(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
+int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
 		 enum tb_cfg_space space, u32 offset, u32 length)
 {
 	struct tb_cfg_result res = tb_cfg_write_raw(ctl, buffer, route, port,
 			space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
-	if (res.err == 1) {
+	switch (res.err) {
+	case 0:
+		/* Success */
+		break;
+
+	case 1:
+		/* Thunderbolt error, tb_error holds the actual number */
 		tb_cfg_print_error(ctl, &res);
 		return -EIO;
+
+	case -ETIMEDOUT:
+		tb_ctl_warn(ctl, "timeout writing config space %u to %#x\n",
+			    space, offset);
+		break;
+
+	default:
+		WARN(1, "tb_cfg_write: %d\n", res.err);
+		break;
 	}
-	WARN(res.err, "tb_cfg_write: %d\n", res.err);
 	return res.err;
 }
 

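Both raw paths above retry up to TB_CTL_RETRIES times on -ETIMEDOUT, tagging each attempt with request.addr.seq = retries++. Combined with the seq comparison in tb_cfg_match(), a reply that surfaces only after its attempt has timed out can no longer be credited to a newer attempt. A standalone restatement of that invariant — illustrative only, kernel types assumed, not code from this file:

/* The pairing rule tb_cfg_match() applies to read/write replies. */
static bool reply_matches_attempt(u8 outstanding_seq, u8 reply_seq)
{
	/*
	 * addr.seq is a 2-bit field (see struct tb_cfg_address) and
	 * TB_CTL_RETRIES is 4, so every retry carries a distinct tag and
	 * a stale reply from attempt N cannot match attempt N+1.
	 */
	return (outstanding_seq & 0x3) == (reply_seq & 0x3);
}
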
+ 86 - 19
drivers/thunderbolt/ctl.h

@@ -7,14 +7,18 @@
 #ifndef _TB_CFG
 #define _TB_CFG
 
+#include <linux/kref.h>
+
 #include "nhi.h"
 #include "nhi.h"
+#include "tb_msgs.h"
 
 /* control channel */
 struct tb_ctl;
 
-typedef void (*hotplug_cb)(void *data, u64 route, u8 port, bool unplug);
+typedef void (*event_cb)(void *data, enum tb_cfg_pkg_type type,
+			 const void *buf, size_t size);
 
-struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, hotplug_cb cb, void *cb_data);
+struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data);
 void tb_ctl_start(struct tb_ctl *ctl);
 void tb_ctl_stop(struct tb_ctl *ctl);
 void tb_ctl_free(struct tb_ctl *ctl);
@@ -23,21 +27,6 @@ void tb_ctl_free(struct tb_ctl *ctl);
 
 #define TB_CFG_DEFAULT_TIMEOUT 5000 /* msec */
 
-enum tb_cfg_space {
-	TB_CFG_HOPS = 0,
-	TB_CFG_PORT = 1,
-	TB_CFG_SWITCH = 2,
-	TB_CFG_COUNTERS = 3,
-};
-
-enum tb_cfg_error {
-	TB_CFG_ERROR_PORT_NOT_CONNECTED = 0,
-	TB_CFG_ERROR_INVALID_CONFIG_SPACE = 2,
-	TB_CFG_ERROR_NO_SUCH_PORT = 4,
-	TB_CFG_ERROR_ACK_PLUG_EVENT = 7, /* send as reply to TB_CFG_PKG_EVENT */
-	TB_CFG_ERROR_LOOP = 8,
-};
-
 struct tb_cfg_result {
 	u64 response_route;
 	u32 response_port; /*
@@ -52,6 +41,84 @@ struct tb_cfg_result {
 	enum tb_cfg_error tb_error; /* valid if err == 1 */
 };
 
+struct ctl_pkg {
+	struct tb_ctl *ctl;
+	void *buffer;
+	struct ring_frame frame;
+};
+
+/**
+ * struct tb_cfg_request - Control channel request
+ * @kref: Reference count
+ * @ctl: Pointer to the control channel structure. Only set when the
+ *	 request is queued.
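+ * @request: Request is stored here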
+ * @request_size: Size of the request packet (in bytes)
+ * @request_type: Type of the request packet
+ * @response: Response is stored here
+ * @response_size: Maximum size of one response packet
+ * @response_type: Expected type of the response packet
+ * @npackets: Number of packets expected to be returned with this request
+ * @match: Function used to match the incoming packet
+ * @copy: Function used to copy the incoming packet to @response
+ * @callback: Callback called when the request is finished successfully
+ * @callback_data: Data to be passed to @callback
+ * @flags: Flags for the request
+ * @work: Work item used to complete the request
+ * @result: Result after the request has been completed
+ * @list: Requests are queued using this field
+ *
+ * An arbitrary request over Thunderbolt control channel. For standard
+ * control channel messages, one should use tb_cfg_read/write() and
+ * friends if possible.
+ */
+struct tb_cfg_request {
+	struct kref kref;
+	struct tb_ctl *ctl;
+	const void *request;
+	size_t request_size;
+	enum tb_cfg_pkg_type request_type;
+	void *response;
+	size_t response_size;
+	enum tb_cfg_pkg_type response_type;
+	size_t npackets;
+	bool (*match)(const struct tb_cfg_request *req,
+		      const struct ctl_pkg *pkg);
+	bool (*copy)(struct tb_cfg_request *req, const struct ctl_pkg *pkg);
+	void (*callback)(void *callback_data);
+	void *callback_data;
+	unsigned long flags;
+	struct work_struct work;
+	struct tb_cfg_result result;
+	struct list_head list;
+};
+
+#define TB_CFG_REQUEST_ACTIVE		0
+#define TB_CFG_REQUEST_CANCELED		1
+
+struct tb_cfg_request *tb_cfg_request_alloc(void);
+void tb_cfg_request_get(struct tb_cfg_request *req);
+void tb_cfg_request_put(struct tb_cfg_request *req);
+int tb_cfg_request(struct tb_ctl *ctl, struct tb_cfg_request *req,
+		   void (*callback)(void *), void *callback_data);
+void tb_cfg_request_cancel(struct tb_cfg_request *req, int err);
+struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
+			struct tb_cfg_request *req, int timeout_msec);
+
+static inline u64 tb_cfg_get_route(const struct tb_cfg_header *header)
+{
+	return (u64) header->route_hi << 32 | header->route_lo;
+}
+
+static inline struct tb_cfg_header tb_cfg_make_header(u64 route)
+{
+	struct tb_cfg_header header = {
+		.route_hi = route >> 32,
+		.route_lo = route,
+	};
+	/* check for overflow, route_hi is not 32 bits! */
+	WARN_ON(tb_cfg_get_route(&header) != route);
+	return header;
+}
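+
+/*
+ * Example: route 0x100000005ULL packs into route_hi = 0x1 and
+ * route_lo = 0x00000005; tb_cfg_get_route() reassembles the same
+ * 64-bit value. The WARN_ON() fires for routes that cannot
+ * round-trip because route_hi is a bitfield narrower than 32 bits.
+ */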
 
 int tb_cfg_error(struct tb_ctl *ctl, u64 route, u32 port,
 		 enum tb_cfg_error error);
@@ -61,13 +128,13 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
 				     u64 route, u32 port,
 				     enum tb_cfg_space space, u32 offset,
 				     u32 length, int timeout_msec);
-struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, void *buffer,
+struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, const void *buffer,
 				      u64 route, u32 port,
 				      enum tb_cfg_space space, u32 offset,
 				      u32 length, int timeout_msec);
 int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
 		enum tb_cfg_space space, u32 offset, u32 length);
-int tb_cfg_write(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
+int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
 		 enum tb_cfg_space space, u32 offset, u32 length);
 int tb_cfg_get_upstream_port(struct tb_ctl *ctl, u64 route);
 
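The request API above generalizes the control channel beyond hotplug events: a request owns its raw packet, a @match callback to claim incoming packets, and a @copy callback to store them. As a rough illustration, here is a minimal sketch of a synchronous one-dword read; it is not part of the patch, and example_cfg_read(), example_match() and example_copy() are hypothetical names that merely mirror the dma_port_match()/dma_port_copy() pair in dma_port.c further below:

	/*
	 * Claim replies of the expected type and size. A real matcher
	 * (see dma_port_match() below) also compares the route.
	 */
	static bool example_match(const struct tb_cfg_request *req,
				  const struct ctl_pkg *pkg)
	{
		return pkg->frame.eof == req->response_type &&
		       pkg->frame.size == req->response_size;
	}

	static bool example_copy(struct tb_cfg_request *req,
				 const struct ctl_pkg *pkg)
	{
		memcpy(req->response, pkg->buffer, req->response_size);
		return true; /* a single packet completes the request */
	}

	static int example_cfg_read(struct tb_ctl *ctl, u64 route, u32 *value)
	{
		struct cfg_read_pkg request = {
			.header = tb_cfg_make_header(route),
			.addr = {
				.space = TB_CFG_SWITCH,
				.offset = 0,
				.length = 1,
			},
		};
		struct cfg_write_pkg reply; /* read replies carry data */
		struct tb_cfg_request *req;
		struct tb_cfg_result res;

		req = tb_cfg_request_alloc();
		if (!req)
			return -ENOMEM;

		req->match = example_match;
		req->copy = example_copy;
		req->request = &request;
		req->request_size = sizeof(request);
		req->request_type = TB_CFG_PKG_READ;
		req->response = &reply;
		req->response_size = 12 + 4; /* header + addr + one dword */
		req->response_type = TB_CFG_PKG_READ;

		res = tb_cfg_request_sync(ctl, req, TB_CFG_DEFAULT_TIMEOUT);
		tb_cfg_request_put(req); /* drop the allocation reference */
		if (res.err)
			return res.err;

		*value = reply.data[0];
		return 0;
	}

tb_cfg_request_sync() is itself built on the asynchronous tb_cfg_request()/tb_cfg_request_cancel() pair, which callers expecting multi-packet responses can drive directly by setting @npackets and supplying a @copy callback that only signals completion once all packets have arrived.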

+ 524 - 0
drivers/thunderbolt/dma_port.c

@@ -0,0 +1,524 @@
+/*
+ * Thunderbolt DMA configuration based mailbox support
+ *
+ * Copyright (C) 2017, Intel Corporation
+ * Authors: Michael Jamet <michael.jamet@intel.com>
+ *          Mika Westerberg <mika.westerberg@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/delay.h>
+#include <linux/slab.h>
+
+#include "dma_port.h"
+#include "tb_regs.h"
+
+#define DMA_PORT_CAP			0x3e
+
+#define MAIL_DATA			1
+#define MAIL_DATA_DWORDS		16
+
+#define MAIL_IN				17
+#define MAIL_IN_CMD_SHIFT		28
+#define MAIL_IN_CMD_MASK		GENMASK(31, 28)
+#define MAIL_IN_CMD_FLASH_WRITE		0x0
+#define MAIL_IN_CMD_FLASH_UPDATE_AUTH	0x1
+#define MAIL_IN_CMD_FLASH_READ		0x2
+#define MAIL_IN_CMD_POWER_CYCLE		0x4
+#define MAIL_IN_DWORDS_SHIFT		24
+#define MAIL_IN_DWORDS_MASK		GENMASK(27, 24)
+#define MAIL_IN_ADDRESS_SHIFT		2
+#define MAIL_IN_ADDRESS_MASK		GENMASK(23, 2)
+#define MAIL_IN_CSS			BIT(1)
+#define MAIL_IN_OP_REQUEST		BIT(0)
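+
+/*
+ * Example MAIL_IN word: a flash read of a full 16-dword block from
+ * dword address 0x100 composes to (0x2 << 28) | (0x100 << 2) | BIT(0)
+ * = 0x20000401; the dword-count field stays zero when all of
+ * MAIL_DATA_DWORDS are requested (see dma_port_flash_read_block()).
+ */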
+
+#define MAIL_OUT			18
+#define MAIL_OUT_STATUS_RESPONSE	BIT(29)
+#define MAIL_OUT_STATUS_CMD_SHIFT	4
+#define MAIL_OUT_STATUS_CMD_MASK	GENMASK(7, 4)
+#define MAIL_OUT_STATUS_MASK		GENMASK(3, 0)
+#define MAIL_OUT_STATUS_COMPLETED	0
+#define MAIL_OUT_STATUS_ERR_AUTH	1
+#define MAIL_OUT_STATUS_ERR_ACCESS	2
+
+#define DMA_PORT_TIMEOUT		5000 /* ms */
+#define DMA_PORT_RETRIES		3
+
+/**
+ * struct tb_dma_port - DMA control port
+ * @sw: Switch the DMA port belongs to
+ * @port: Switch port number where DMA capability is found
+ * @base: Start offset of the mailbox registers
+ * @buf: Temporary buffer to store a single block
+ */
+struct tb_dma_port {
+	struct tb_switch *sw;
+	u8 port;
+	u32 base;
+	u8 *buf;
+};
+
+/*
+ * When the switch is in safe mode it supports very little functionality
+ * so we don't validate that much here.
+ */
+static bool dma_port_match(const struct tb_cfg_request *req,
+			   const struct ctl_pkg *pkg)
+{
+	u64 route = tb_cfg_get_route(pkg->buffer) & ~BIT_ULL(63);
+
+	if (pkg->frame.eof == TB_CFG_PKG_ERROR)
+		return true;
+	if (pkg->frame.eof != req->response_type)
+		return false;
+	if (route != tb_cfg_get_route(req->request))
+		return false;
+	if (pkg->frame.size != req->response_size)
+		return false;
+
+	return true;
+}
+
+static bool dma_port_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
+{
+	memcpy(req->response, pkg->buffer, req->response_size);
+	return true;
+}
+
+static int dma_port_read(struct tb_ctl *ctl, void *buffer, u64 route,
+			 u32 port, u32 offset, u32 length, int timeout_msec)
+{
+	struct cfg_read_pkg request = {
+		.header = tb_cfg_make_header(route),
+		.addr = {
+			.seq = 1,
+			.port = port,
+			.space = TB_CFG_PORT,
+			.offset = offset,
+			.length = length,
+		},
+	};
+	struct tb_cfg_request *req;
+	struct cfg_write_pkg reply;
+	struct tb_cfg_result res;
+
+	req = tb_cfg_request_alloc();
+	if (!req)
+		return -ENOMEM;
+
+	req->match = dma_port_match;
+	req->copy = dma_port_copy;
+	req->request = &request;
+	req->request_size = sizeof(request);
+	req->request_type = TB_CFG_PKG_READ;
+	req->response = &reply;
+	req->response_size = 12 + 4 * length; /* header + addr + data */
+	req->response_type = TB_CFG_PKG_READ;
+
+	res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+	tb_cfg_request_put(req);
+
+	if (res.err)
+		return res.err;
+
+	memcpy(buffer, &reply.data, 4 * length);
+	return 0;
+}
+
+static int dma_port_write(struct tb_ctl *ctl, const void *buffer, u64 route,
+			  u32 port, u32 offset, u32 length, int timeout_msec)
+{
+	struct cfg_write_pkg request = {
+		.header = tb_cfg_make_header(route),
+		.addr = {
+			.seq = 1,
+			.port = port,
+			.space = TB_CFG_PORT,
+			.offset = offset,
+			.length = length,
+		},
+	};
+	struct tb_cfg_request *req;
+	struct cfg_read_pkg reply;
+	struct tb_cfg_result res;
+
+	memcpy(&request.data, buffer, length * 4);
+
+	req = tb_cfg_request_alloc();
+	if (!req)
+		return -ENOMEM;
+
+	req->match = dma_port_match;
+	req->copy = dma_port_copy;
+	req->request = &request;
+	req->request_size = 12 + 4 * length; /* header + addr + data */
+	req->request_type = TB_CFG_PKG_WRITE;
+	req->response = &reply;
+	req->response_size = sizeof(reply);
+	req->response_type = TB_CFG_PKG_WRITE;
+
+	res = tb_cfg_request_sync(ctl, req, timeout_msec);
+
+	tb_cfg_request_put(req);
+
+	return res.err;
+}
+
+static int dma_find_port(struct tb_switch *sw)
+{
+	int port, ret;
+	u32 type;
+
+	/*
+	 * The DMA (NHI) port is either 3 or 5 depending on the
+	 * controller. Try both starting from 5 which is more common.
+	 */
+	port = 5;
+	ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1,
+			    DMA_PORT_TIMEOUT);
+	if (!ret && (type & 0xffffff) == TB_TYPE_NHI)
+		return port;
+
+	port = 3;
+	ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1,
+			    DMA_PORT_TIMEOUT);
+	if (!ret && (type & 0xffffff) == TB_TYPE_NHI)
+		return port;
+
+	return -ENODEV;
+}
+
+/**
+ * dma_port_alloc() - Finds the DMA control port of a switch
+ * @sw: Switch from which to find the DMA port
+ *
+ * Checks whether the switch NHI port supports the DMA configuration
+ * based mailbox capability and, if it does, allocates and initializes
+ * the DMA port structure. Returns %NULL if the capability was not
+ * found.
+ *
+ * The DMA control port remains functional also when the switch is in
+ * safe mode.
+ */
+struct tb_dma_port *dma_port_alloc(struct tb_switch *sw)
+{
+	struct tb_dma_port *dma;
+	int port;
+
+	port = dma_find_port(sw);
+	if (port < 0)
+		return NULL;
+
+	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
+	if (!dma)
+		return NULL;
+
+	dma->buf = kmalloc_array(MAIL_DATA_DWORDS, sizeof(u32), GFP_KERNEL);
+	if (!dma->buf) {
+		kfree(dma);
+		return NULL;
+	}
+
+	dma->sw = sw;
+	dma->port = port;
+	dma->base = DMA_PORT_CAP;
+
+	return dma;
+}
+
+/**
+ * dma_port_free() - Release DMA control port structure
+ * @dma: DMA control port
+ */
+void dma_port_free(struct tb_dma_port *dma)
+{
+	if (dma) {
+		kfree(dma->buf);
+		kfree(dma);
+	}
+}
+
+static int dma_port_wait_for_completion(struct tb_dma_port *dma,
+					unsigned int timeout)
+{
+	unsigned long end = jiffies + msecs_to_jiffies(timeout);
+	struct tb_switch *sw = dma->sw;
+
+	do {
+		int ret;
+		u32 in;
+
+		ret = dma_port_read(sw->tb->ctl, &in, tb_route(sw), dma->port,
+				    dma->base + MAIL_IN, 1, 50);
+		if (ret) {
+			if (ret != -ETIMEDOUT)
+				return ret;
+		} else if (!(in & MAIL_IN_OP_REQUEST)) {
+			return 0;
+		}
+
+		usleep_range(50, 100);
+	} while (time_before(jiffies, end));
+
+	return -ETIMEDOUT;
+}
+
+static int status_to_errno(u32 status)
+{
+	switch (status & MAIL_OUT_STATUS_MASK) {
+	case MAIL_OUT_STATUS_COMPLETED:
+		return 0;
+	case MAIL_OUT_STATUS_ERR_AUTH:
+		return -EINVAL;
+	case MAIL_OUT_STATUS_ERR_ACCESS:
+		return -EACCES;
+	}
+
+	return -EIO;
+}
+
+static int dma_port_request(struct tb_dma_port *dma, u32 in,
+			    unsigned int timeout)
+{
+	struct tb_switch *sw = dma->sw;
+	u32 out;
+	int ret;
+
+	ret = dma_port_write(sw->tb->ctl, &in, tb_route(sw), dma->port,
+			     dma->base + MAIL_IN, 1, DMA_PORT_TIMEOUT);
+	if (ret)
+		return ret;
+
+	ret = dma_port_wait_for_completion(dma, timeout);
+	if (ret)
+		return ret;
+
+	ret = dma_port_read(sw->tb->ctl, &out, tb_route(sw), dma->port,
+			    dma->base + MAIL_OUT, 1, DMA_PORT_TIMEOUT);
+	if (ret)
+		return ret;
+
+	return status_to_errno(out);
+}
+
+static int dma_port_flash_read_block(struct tb_dma_port *dma, u32 address,
+				     void *buf, u32 size)
+{
+	struct tb_switch *sw = dma->sw;
+	u32 in, dwaddress, dwords;
+	int ret;
+
+	dwaddress = address / 4;
+	dwords = size / 4;
+
+	in = MAIL_IN_CMD_FLASH_READ << MAIL_IN_CMD_SHIFT;
+	if (dwords < MAIL_DATA_DWORDS)
+		in |= (dwords << MAIL_IN_DWORDS_SHIFT) & MAIL_IN_DWORDS_MASK;
+	in |= (dwaddress << MAIL_IN_ADDRESS_SHIFT) & MAIL_IN_ADDRESS_MASK;
+	in |= MAIL_IN_OP_REQUEST;
+
+	ret = dma_port_request(dma, in, DMA_PORT_TIMEOUT);
+	if (ret)
+		return ret;
+
+	return dma_port_read(sw->tb->ctl, buf, tb_route(sw), dma->port,
+			     dma->base + MAIL_DATA, dwords, DMA_PORT_TIMEOUT);
+}
+
+static int dma_port_flash_write_block(struct tb_dma_port *dma, u32 address,
+				      const void *buf, u32 size)
+{
+	struct tb_switch *sw = dma->sw;
+	u32 in, dwaddress, dwords;
+	int ret;
+
+	dwords = size / 4;
+
+	/* Write the block to MAIL_DATA registers */
+	ret = dma_port_write(sw->tb->ctl, buf, tb_route(sw), dma->port,
+			     dma->base + MAIL_DATA, dwords, DMA_PORT_TIMEOUT);
+	if (ret)
+		return ret;
+
+	in = MAIL_IN_CMD_FLASH_WRITE << MAIL_IN_CMD_SHIFT;
+
+	/* CSS header write is always done to the same magic address */
+	if (address >= DMA_PORT_CSS_ADDRESS) {
+		dwaddress = DMA_PORT_CSS_ADDRESS;
+		in |= MAIL_IN_CSS;
+	} else {
+		dwaddress = address / 4;
+	}
+
+	in |= ((dwords - 1) << MAIL_IN_DWORDS_SHIFT) & MAIL_IN_DWORDS_MASK;
+	in |= (dwaddress << MAIL_IN_ADDRESS_SHIFT) & MAIL_IN_ADDRESS_MASK;
+	in |= MAIL_IN_OP_REQUEST;
+
+	return dma_port_request(dma, in, DMA_PORT_TIMEOUT);
+}
+
+/**
+ * dma_port_flash_read() - Read from active flash region
+ * @dma: DMA control port
+ * @address: Address relative to the start of active region
+ * @buf: Buffer where the data is read
+ * @size: Size of the buffer
+ */
+int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
+			void *buf, size_t size)
+{
+	unsigned int retries = DMA_PORT_RETRIES;
+	unsigned int offset;
+
+	offset = address & 3;
+	address = address & ~3;
+
+	do {
+		u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
+		int ret;
+
+		ret = dma_port_flash_read_block(dma, address, dma->buf,
+						ALIGN(nbytes, 4));
+		if (ret) {
+			if (ret == -ETIMEDOUT) {
+				if (retries--)
+					continue;
+				ret = -EIO;
+			}
+			return ret;
+		}
+
+		memcpy(buf, dma->buf + offset, nbytes);
+
+		size -= nbytes;
+		address += nbytes;
+		buf += nbytes;
+	} while (size > 0);
+
+	return 0;
+}
+
+/**
+ * dma_port_flash_write() - Write to non-active flash region
+ * @dma: DMA control port
+ * @address: Address relative to the start of non-active region
+ * @buf: Data to write
+ * @size: Size of the buffer
+ *
+ * Writes a block of data to the non-active flash region of the switch.
+ * If the address is given as %DMA_PORT_CSS_ADDRESS the block is written
+ * using the CSS command.
+ */
+int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address,
+			 const void *buf, size_t size)
+{
+	unsigned int retries = DMA_PORT_RETRIES;
+	unsigned int offset;
+
+	if (address >= DMA_PORT_CSS_ADDRESS) {
+		offset = 0;
+		if (size > DMA_PORT_CSS_MAX_SIZE)
+			return -E2BIG;
+	} else {
+		offset = address & 3;
+		address = address & ~3;
+	}
+
+	do {
+		u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
+		int ret;
+
+		memcpy(dma->buf + offset, buf, nbytes);
+
+		ret = dma_port_flash_write_block(dma, address, buf, nbytes);
+		if (ret) {
+			if (ret == -ETIMEDOUT) {
+				if (retries--)
+					continue;
+				ret = -EIO;
+			}
+			return ret;
+		}
+
+		size -= nbytes;
+		address += nbytes;
+		buf += nbytes;
+	} while (size > 0);
+
+	return 0;
+}
+
+/**
+ * dma_port_flash_update_auth() - Starts flash authenticate cycle
+ * @dma: DMA control port
+ *
+ * Starts the flash update authentication cycle. If the image in the
+ * non-active area was valid, the switch starts the upgrade process in
+ * which the active and non-active areas are swapped at the end. The
+ * caller should use dma_port_flash_update_auth_status() to retrieve
+ * the status of this command separately, because if the switch in
+ * question is the root switch the Thunderbolt host controller gets
+ * reset as well. (A sketch of the full upgrade sequence follows this
+ * file's diff below.)
+ */
+int dma_port_flash_update_auth(struct tb_dma_port *dma)
+{
+	u32 in;
+
+	in = MAIL_IN_CMD_FLASH_UPDATE_AUTH << MAIL_IN_CMD_SHIFT;
+	in |= MAIL_IN_OP_REQUEST;
+
+	return dma_port_request(dma, in, 150);
+}
+
+/**
+ * dma_port_flash_update_auth_status() - Reads status of update auth command
+ * @dma: DMA control port
+ * @status: Status code of the operation
+ *
+ * The function checks if there is status available from the last update
+ * auth command. Returns %0 if there is no status and no further
+ * action is required. If there is status, %1 is returned instead and
+ * @status holds the failure code.
+ *
+ * Negative return means there was an error reading status from the
+ * switch.
+ */
+int dma_port_flash_update_auth_status(struct tb_dma_port *dma, u32 *status)
+{
+	struct tb_switch *sw = dma->sw;
+	u32 out, cmd;
+	int ret;
+
+	ret = dma_port_read(sw->tb->ctl, &out, tb_route(sw), dma->port,
+			    dma->base + MAIL_OUT, 1, DMA_PORT_TIMEOUT);
+	if (ret)
+		return ret;
+
+	/* Check if the status relates to flash update auth */
+	cmd = (out & MAIL_OUT_STATUS_CMD_MASK) >> MAIL_OUT_STATUS_CMD_SHIFT;
+	if (cmd == MAIL_IN_CMD_FLASH_UPDATE_AUTH) {
+		if (status)
+			*status = out & MAIL_OUT_STATUS_MASK;
+
+		/* Reset is needed in any case */
+		return 1;
+	}
+
+	return 0;
+}
+
+/**
+ * dma_port_power_cycle() - Power cycles the switch
+ * @dma: DMA control port
+ *
+ * Triggers power cycle to the switch.
+ */
+int dma_port_power_cycle(struct tb_dma_port *dma)
+{
+	u32 in;
+
+	in = MAIL_IN_CMD_POWER_CYCLE << MAIL_IN_CMD_SHIFT;
+	in |= MAIL_IN_OP_REQUEST;
+
+	return dma_port_request(dma, in, 150);
+}
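
Put together, these mailbox primitives are enough to drive a complete NVM firmware upgrade of a switch. The following is a hedged sketch of the overall sequence; example_nvm_upgrade() is hypothetical, and the in-tree policy around resets and status handling lives in the switch code and differs for host and device switches:

	/* Sketch: stage an image, authenticate, then check the outcome. */
	static int example_nvm_upgrade(struct tb_switch *sw,
				       const void *image, size_t size)
	{
		struct tb_dma_port *dma;
		u32 status;
		int ret;

		dma = dma_port_alloc(sw); /* probes NHI port 5, then 3 */
		if (!dma)
			return -ENODEV;

		/* Stage the image into the non-active flash region. */
		ret = dma_port_flash_write(dma, 0, image, size);
		if (ret)
			goto out;

		/* Ask the switch to authenticate and swap regions. */
		ret = dma_port_flash_update_auth(dma);
		if (ret)
			goto out;

		/*
		 * The switch (and, for the root switch, the host
		 * controller too) resets during the upgrade, so the
		 * outcome is read back separately afterwards.
		 */
		ret = dma_port_flash_update_auth_status(dma, &status);
		if (ret == 1) {
			/* One recovery option: power cycle on failure. */
			if (status)
				dma_port_power_cycle(dma);
			ret = status ? -EIO : 0;
		}
	out:
		dma_port_free(dma);
		return ret;
	}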

Some files were not shown because too many files changed in this diff.